
Sentient electronic devices; why not?

I think people may be missing something here - namely that if you can trust a person to do a job, why shouldn't you trust an AI - built to exactly replicate human behaviour - to do the same?

A human soldier could turn around and kill you if mistreated just as easily as an AI soldier could. A human president could start wars and destroy the world just as easily as an AI one could.

If we trust humans, then why shouldn't we trust AI? Why do we immediately assume that they could try to destroy us or turn against us, when we trust humans who could do that anyway?
 
Why are there pilots on civilian aircraft today?

Most of the time all they have to do is punch the autopilot on/off toggle. ;)


Seriously though, in a couple of years automated takeoff/landing systems will be more reliable than the human crew.

Would you fly in that plane???
 
If it really is more reliable than a human crew, then sure I would.

I mean, what do you think an AI pilot would do, open the doors and kill everyone out of spite or an inferiority complex?! ;) There are bound to be at least as many safeguards in place when AI is implemented in such situations as there are with humans - if not more, until it's a proven technology. That's just common sense; you wouldn't let anyone - man or machine - run or operate something without some kind of backups or safeguards.
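
As a minimal sketch of that "safeguards until proven" point - purely hypothetical, with every name invented for illustration - an unproven operator, man or machine, might need an independent sign-off before any irreversible action:

```python
# Hypothetical sketch: extra safeguards for an unproven operator (human or AI).
# Irreversible actions require a second, independent sign-off until the
# operator has a track record. All names are invented for illustration.

IRREVERSIBLE_ACTIONS = {"takeoff", "landing", "reactor_shutdown"}

def authorize(action: str, operator_proven: bool, supervisor_approves: bool) -> bool:
    """Allow routine actions freely; gate irreversible ones behind backup approval."""
    if action not in IRREVERSIBLE_ACTIONS:
        return True
    # A proven operator may act alone; an unproven one needs a supervisor.
    return operator_proven or supervisor_approves

# Example: a newly installed AI pilot asking to land on its own authority is
# refused until a supervisor signs off.
assert authorize("landing", operator_proven=False, supervisor_approves=False) is False
assert authorize("landing", operator_proven=False, supervisor_approves=True) is True
```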
 
Originally posted by Sigg Oddra:
Why are there pilots on civilian aircraft today?

Most of the time all they have to do is punch the autopilot on/off toggle. ;)


Seriously though, in a couple of years automated takeoff/landing systems will be more reliable than the human crew.

Would you fly in that plane???
Not too long ago at an airshow, a passenger airplane with test pilots and a few observers aboard overflew the runway in a demonstration pass.

This aircraft was considered quite advanced.

What no one on the ground knew was that the aircraft had decided that, because it was overflying a runway, it should land. The pilot kept trying to abort the landing.

As the plane flew out over the end of the runway, and disappeared over some trees, a great fireball leapt into the sky.

The computer won its battle versus the pilot and forced the aircraft down.
 
The computer won its battle versus the pilot and forced the aircraft down.
That's just bad design though. Plus, the computer is dumb - all it knows is that everything seems OK and that it should land. It's the designers' fault for not making a simple manual override effective when first used. It's not like there was a "battle against the pilot"; there just weren't effective failsafes built in.
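
To make the "effective on first use" idea concrete, here is a purely illustrative sketch - not any real avionics system, and every name here is my own invention - in which the first manual input latches an override, after which the automation can advise but never act against the pilot:

```python
# Illustrative only - not a real avionics design. The failsafe argued for
# above: the first manual input latches an override, and from then on the
# automation may suggest actions but cannot act against the pilot.

from typing import Optional

class FlightController:
    def __init__(self) -> None:
        self.pilot_override = False
        self.last_pilot_action: Optional[str] = None

    def resolve(self, autopilot_action: str, pilot_action: Optional[str]) -> str:
        if pilot_action is not None:
            # First use of manual control is immediately effective and latches.
            self.pilot_override = True
            self.last_pilot_action = pilot_action
            return pilot_action
        if self.pilot_override and self.last_pilot_action is not None:
            # No fresh pilot input, but the override stays latched: hold the
            # pilot's last command rather than reverting to the autopilot.
            return self.last_pilot_action
        return autopilot_action

# Example: the autopilot wants to land, the pilot commands a go-around, and
# the go-around sticks even when the autopilot requests the landing again.
fc = FlightController()
assert fc.resolve("land", "go_around") == "go_around"
assert fc.resolve("land", None) == "go_around"
```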

I mean, a smart expert AI built and trained for the job (much like a smart human trained for the job) would realise that something was wrong and not force a landing.

Films like Terminator have got a lot to answer for when it comes to giving AI a bad rep. Fact is, intelligence is intelligence - whether it's silicon or organic, if it reacts the same way then why should one be trusted more than the other?
 
You do pose an interesting question, Mal; why do people just not trust computers? Quite possibly Hollywood has little to do with it, and Redmond has everything to do with it!

But aside from that, assuming we did make a more reliable computer, there is still going to be some resistance to the idea. I believe the primary reason is that computers are machines, and machines do certain specific jobs for which they are designed. When circumstances exceed a machine's design parameters, something bad usually happens to it, and that generally translates to the passengers too. However, a human has much more flexible "programming" and can respond to a far greater range of things going wrong.

Nowadays, it's not a bad idea to have BOTH a good AI and a human, to be able to detect and rectify an even wider range of bad things happening.
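
As a toy illustration of that "both" arrangement (my own assumption of how it might be wired, not a description of any real system), treat the AI and the human as two independent channels: act only on agreement, and escalate any disagreement rather than letting either channel act alone:

```python
# Toy two-channel monitor: the AI and the human each judge the situation and
# the system acts only when they agree. Names are invented for illustration.

def arbitrate(ai_says_safe: bool, human_says_safe: bool) -> str:
    if ai_says_safe and human_says_safe:
        return "proceed"
    if not ai_says_safe and not human_says_safe:
        return "abort"
    # Disagreement is where the combined setup earns its keep: neither channel
    # is trusted alone, so take the conservative path and have a human review.
    return "escalate"

# Example: the AI flags a fault the human missed - the action is escalated,
# not silently carried out.
assert arbitrate(ai_says_safe=False, human_says_safe=True) == "escalate"
```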

Can a human go nuts? Sure, but those we entrust our safety to have given us reasonable assurance that things like that won't happen, and they are backed up by the 99.99999 or so percent of humans who have not had their brains crash, or who at least schedule their crashes for times when my safety is not in their hands.

I do not 100% trust a human either, btw. All through my one and only helicopter ride, and on my few airplane rides, I wondered if something bad was going to happen. Though I suppose my worry was more that there would be a mechanical problem.

Anyway, I trust more in a human than in a computer (aside from the Redmond issue) because if he screws up, a human can be punished. What can you do to a computer to punish it? (I'd really like to know, so I can threaten MY computers with it to get them to behave!) A bad enough offense can result in death; computers do not fear death. They cannot be killed; you can copy them and run them elsewhere. What's to fear? At the worst, you can tell it it will have to waste some time catching up on some news, or that it will lose a few memories.

I suppose it goes back to something I mentioned earlier in the discussion, about my character not being able to figure out how to prove to a computer that he was sentient, except to spitefully destroy himself in the process of killing it.

I don't know how much of the above makes any sense, but it's what occurred to me upon reading your question.
 
Well, you assume that a digital mind is different to an organic one. Why should an AI not fear the same things that a human fears? Why should an AI not feel the same pride in the tasks they do as a human does?

AIs could have plenty to fear. Sure, our PCs don't fear being turned off, but that's because they're not sentient. And just because they may have 'ROM' memory that isn't lost when they're switched off doesn't mean they wouldn't fear it. Maybe they might just see 'downtime' as sleep. But then, we humans usually assume we're going to wake up in the morning - unless the AI is set to a fixed, regular 'sleep cycle', it won't have that assurance. Imagine going to sleep and not knowing when you'll wake up... or even if you'll wake up at all. And then wonder what that would do to your mind. This could of course be a strong reason to allow any AI a regular sleep cycle - or no downtime at all. But then, if it didn't 'sleep', what effect would that have on its psyche? Do all sentient beings need time to dream? Dr Chandra seemed to think so in the 2010 movie ;) .

But I digress (this is why I like talking about AI - it really makes one think about things one might otherwise completely take for granted). Could an AI be punished? Sure it could. You could 'turn it off' as described above. Or you could put it in isolation - a box with enough memory and power to run the AI, but no input or output. Imagine how that must feel. Or of course you could punish it like you do a human - lock it up. Put it in a standard humanoid body (equivalent in strength and ability to a normal human) and throw it in prison with no option to download or upload information beyond the normal senses. If an AI is sufficiently 'human', then it too can feel guilt and remorse for its actions, or ponder the consequences.

Again, it's a case of imagining an AI that is basically equivalent in every way to human - the only difference (beyond its obvious numbercrunchy abilities) is that it's artificial. So you'd treat it in exactly the same way as a human and get exactly the same sort of responses. If you are comfortable around a human - as unpredictable as we are - then you should be comfortable around such an AI.
 
I do not believe that, no matter how hard we try to design it that way, a computer AI will be much like a human intelligence. Your environment and how you are made shape how you think at least as much as (if not much more than) how you are raised.

AI is going to be ALIEN like nothing else we have experienced. They will make Ithklur look completely sane by comparison.

The punishments you suggest COULD be enough to turn an AI insane, effectively killing it, and thus rendering them useless as punishments. But you're talking about AIs being very humanlike, and treating them as humans MIGHT work. Still, they AREN'T humans, so I think the practice will not match the theory, even if they are very humanlike.

I don't think I would want a totally humanlike AI anyway. I view machines as machines; I do not want them overriding my orders except when it is important that they do so (like to prevent loss, injury, or death). I do not want this taken to an extreme, such that Colossus might decide to take over in order to end all wars and prevent all human destruction, at the cost of human freedom. (Colossus is an old movie with a very nearly identical theme to the recently released 'I, Robot' movie.)

I want a machine that knows what I mean and follows my orders, or otherwise does the job it is designed to do. I don't really need a full-blown AI that can learn to smart off and say 'No'. I get enough of that from the computers I already use.
 
Originally posted by TheDS:
It is a simple matter to "hardwire" something. Don't want it to kill? Don't give it any way to. No arms, no legs, no mobility <snip>
Whoa. A flashback to Boxing Helena just went through my mind.


Originally posted by TheDS:
You might limit who it comes into contact with too: keep it away from raving lunatics who might convince it to do something. No connection to environmental controls (I don't feel like stepping outside for a nice breath of fresh vacuum today), and we don't want it copying itself to the internet.
I don’t think anything limited in this manner could achieve sentience easily. It would have to be given a virtual environment to live in and interact with.


Originally posted by TheDS:
but BECAUSE of Contact, we will not assume they are all unfriendly. And so on.
I don’t know; the aliens from Contact may not have been the enemies of Earthlings per se, but they definitely did some highly annoying things which would normally merit at least a spanking if they’d been small children playing that sort of “ha ha” prank (though their relative technological position would make such a thing impossible).
 
Originally posted by Malenfant:
The computer won its battle versus the pilot and forced the aircraft down.
That's just bad design though. Plus, the computer is dumb - all it knows is that everything seems OK and that it should land. It's the designers' fault for not making a simple manual override effective when first used. It's not like there was a "battle against the pilot"; there just weren't effective failsafes built in.
Oh, I completely agree, a very bad design. My point was that the designers thought it was a good design; they'd done everything they could to assure it. And they were still wrong.


Originally posted by Malenfant:
I mean, a smart expert AI built and trained for the job (much like a smart human trained for the job) would realise that something was wrong and not force a landing.
Yes, I'd agree with that, too.

The issue was one of trust and doubt.

When a child is born and grows, that person earns trust by doing things that demonstrate trustworthiness. AIs will be obliged to do the same.

The general fear is that AIs will be so much smarter than us that, regardless of what restraints we put on them, they'll completely outthink us and then go their own way.

It's my personal belief (though it may be a naive one) that the way to avoid this will be to treat AIs as our children, and while not all children look kindly upon their parents, most do.

It's hard to tell what we'll really get until we get a real AI and have some real experience to draw on.


Originally posted by Malenfant:
Films like Terminator have got a lot to answer for when it comes to giving AI a bad rep. Fact is, intelligence is intelligence - whether it's silicon or organic, if it reacts the same way then why should one be trusted more than the other?
There were two AIs in T2, one good, one bad.

T2 Movie Quote:
"If a machine can learn the value of a human life, maybe we can too."
 
Originally posted by TheDS:
Anyway, I trust more in a human than in a computer (aside from the Redmond issue) because if he screws up, a human can be punished. What can you do to a computer to punish it? (I'd really like to know, so I can threaten MY computers with it to get them to behave!) A bad enough offense can result in death; computers do not fear death. They cannot be killed; you can copy them and run them elsewhere. What's to fear? At the worst, you can tell it it will have to waste some time catching up on some news, or that it will lose a few memories.
These sorts of questions have been dealt with in good SF for a long time.

As highlights that come to mind first, I personally recommend:
  • John C. Wright’s The Golden Age, The Phoenix Exultant, and The Golden Transcendence
  • John Varley’s The Ophiuchi Hotline
 
Originally posted by TheDS:
You do pose an interesting question, Mal; why do people just not trust computers? Quite possibly Hollywood has little to do with it, and Redmond has everything to do with it!
That's it for me. I heard a lot of talk about convergence in electronic devices, especially in the home. If you want, you can chuck your stereo in favor of a centralized, computer-controlled setup. And there are plenty of people whose PC is their primary music player. But not me. I like the fact that my stereo works. If my stereo or CD player crashed as often as my current or previous PC did, I'd kick it to the curb. I'll stay retro and keep my appliances dumb. :D

And for those of you who might ask the implied question: my next PC (or maybe this one) will run Linux. I think I've got the blue screen of death burned into my retinas.
 