
sentient electronic devices; why not?

Originally posted by flykiller:
isn't non-determinism, including any quantum randomness, here nothing more than ignorance of input? isn't the machine's behavior, given a certain set of inputs, still fixed?
That depends on whether the program's input is fixed.

A program equipped to monitor its surroundings (in hundreds or thousands of locations or more) with visual sensors, microphones, a GPS locator, an internet connection (with spiders running and direct simultaneous access to all existing search engines), a radio scanner, a TV tuner/receiver, biometric security scanners, etc. . . . getting all that data fed into itself constantly, well, there's really no fixed input at all. And since there is no fixed input, there can be no fixed output.
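A minimal Python sketch of that point (os.urandom and the system clock here are just stand-ins for the cameras, microphones, and net feeds described above):

```python
import os
import time

def respond(stimulus: bytes) -> int:
    """A perfectly deterministic 'mind': same stimulus, same response."""
    return sum(stimulus) % 256

# Fixed input: the output is fixed too, on every run, forever.
print(respond(b"hello"))

# Unfixed input: the function is still deterministic, but its input
# never repeats, so in practice neither does its behavior.
for _ in range(5):
    stimulus = os.urandom(16) + str(time.time()).encode()
    print(respond(stimulus))
    time.sleep(0.1)
```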


Originally posted by flykiller:
can any machine be constructed, or any program code be written, that is not strictly deterministic?
So, in this context, I'm not to take "not strictly deterministic" as synonymous with "non-deterministic"?

Well then, as Mal asked above, what does "not strictly deterministic" mean?
 
Originally posted by flykiller:
isn't non-determinism, including any quantum randomness, here nothing more than ignorance of input?
That depends as well. In standard QM, the randomness is 100% random; there is no "ignorance" of input. Non-determinism is where QM rests its heart and soul.

Now, QM still has nagging little problems, and doesn't explain everything.

On the other hand, the splinter theory of Bohmian mechanics (espoused by David Joseph Bohm and some followers) states that the universe is 100% deterministic, and tries to provide explanations as to why.

Although interesting, Bohmian mechanics is not generally accepted.
 
Hi!

Originally posted by Malenfant:
Originally posted by TheEngineer:
So, I might ask: where is the place where an AI of minor "intellect" could be of value, anyway?
"AI is something that nobody needs"?! I think you just critically fumbled your "use your imagination" roll ;) .

You'd be better off asking where such an AI WOULDN'T be of value.
...
Another option is to get them to do the menial mechanical tasks that humans are wasted on ...
Factories could become entirely automatic...

There's LOTS of things an AI could be useful for.

Perhaps I should have stressed the words "minor intelligence" a bit more.
Seriously, would you like to have a personal servant who has a kind of intelligence, but is still dumb compared to an average human being?
Actually, both - mechanical tasks and factories - are already fully automated, perhaps not controlled by anything like an AI, but by classic hardware and software of considerable complexity.

Flykiller wrote:
can any machine be constructed, or any program code be written, that is not strictly deterministic?

can a quantity of machinery together with a quantity of code be increased to the point where they are no longer strictly deterministic?
My answer to both: no.
Simply because everything we can use to construct something is ultimately based on an action/reaction principle, and as such is deterministic.
Again, this does not prevent a complex system from behaving non-deterministically. But that's just because complexity blurs the basic deterministic elements of the decision.
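A quick Python illustration of that "ignorance of input" idea: a pseudo-random generator looks non-deterministic from outside, but fix its hidden input (the seed) and its behavior is fixed as well:

```python
import random

# From outside, a pseudo-random generator appears non-deterministic...
rng = random.Random()
print([rng.randint(0, 9) for _ in range(5)])  # differs from run to run

# ...but the "randomness" is pure ignorance of input: give two machines
# the same seed and they behave identically, forever.
rng_a = random.Random(42)
rng_b = random.Random(42)
assert [rng_a.random() for _ in range(100)] == [rng_b.random() for _ in range(100)]
print("identical sequences from identical seeds")
```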

The funny thing about QM is that it's just the other way around.
If one electron goes through a narrow opening, nobody is able to tell in which direction that electron will move on.
But taking the big picture, we can predict the distribution of large numbers of electrons with great precision.
All QM effects work in a similar way: the single event is non-deterministic, but the event bundle, as a macroscopic effect, appears to be deterministic.
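The single-event/event-bundle pattern is easy to reproduce numerically. This is only an analogy - ordinary pseudo-randomness standing in for the quantum kind - but the statistics behave the same way:

```python
import random

# Single event: no way to tell which way any one "electron" will go.
print(random.choice(["left", "right"]))

# Event bundle: the distribution over many events is highly predictable.
n = 1_000_000
lefts = sum(random.choice(["left", "right"]) == "left" for _ in range(n))
print(lefts / n)  # converges on 0.5 as n grows (law of large numbers)
```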

And a comment on what Rain wrote:
... Since there is no fixed input, there can be no fixed output.
In a machine/program, processing steps of narrowing, comparing, and classifying serve to provide a fixed output from an unfixed set of inputs.
In process and software test management, you might use methods like equivalence class analysis to break complex input down into less complex input.
Actually, that's what the brain does by making wide use of stimulation thresholds and synaptic redundancy.
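A minimal sketch of that narrowing/classifying step (the classes and thresholds below are invented purely for illustration):

```python
def classify_temperature(reading_celsius: float) -> str:
    """Collapse a continuous, unfixed input into a few equivalence classes.

    Every reading inside a class yields the same fixed output - roughly
    what a firing threshold does for a neuron.
    """
    if reading_celsius < 5.0:
        return "cold"
    if reading_celsius < 25.0:
        return "comfortable"
    return "hot"

# Infinitely many distinct inputs, exactly three possible outputs.
for reading in (3.2, 4.999, 18.0, 24.7, 31.1):
    print(reading, "->", classify_temperature(reading))
```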

And a last one regarding the fear of an AI:

quote:
If human/AI relationships are to get off to a good start, then we can't begin with making our own creations second class citizens.
To do that we'd somehow have to get over our own paranoia that AI will rise up and destroy us. That's going to take a lot of re-education (no thanks to Hollywood scifi like the Terminator or the Matrix).

It all depends on the very basic motivation implanted into the AI. If this motivation were "stay alive and reproduce", I would be careful. :)
What I would never be sure of is whether such a complex system could change...
And perhaps it depends on the way the AI is created.
I have problems with the belief that an AI can be created just like any other program.
IMHO we might be able to provide a kind of "boot level AI", which incorporates the ability to evolve and which needs to move through a cycle of training/education in order to reach a certain level of functioning.
OK, it may then be copied at a later state of development.

Regards,

Mert
 
Originally posted by TheEngineer:
Perhaps I should have stressed the words "minor intelligence" a bit more.
Seriously, would you like to have a personal servant who has a kind of intelligence, but is still dumb compared to an average human being?
I know it's fictional, but just how smart do you think HAL 9000 was? ;) Seriously. It could run an entire spacecraft, play a great game of chess, interact meaningfully with its crew, and be highly inventive in coming up with ways to kill them when its existence was threatened. But if you think about it, HAL actually had very limited intelligence. It was still obviously an AI - it didn't sound human, it couldn't master the nuances of normal human conversation, and we discovered in 2010 that it couldn't even understand or resolve its conflicting priorities, and thus went mad.

So I'm figuring that even the great HAL 9000 isn't all that smart. Now, something like David - from the movie A.I. - THAT'S a real AI IMO.
 
Hi Mal!

I generally agree with you about HAL.
But then, we only ever saw his interface, and perhaps his intelligence wasn't shaped correctly to fulfill his job.
And to fall into an "unsolvable" conflict sadly is something that happens to a large number of people - intelligent ones too - every day.

During this conversation I often remember one of those SF short stories about a person awaking and suddenly realizing that there is no feedback from the senses - no sound, no vision, no feeling of heat or cold, just nothing except the "thought" itself, accompanied by an ever increasing fear and rising madness...
The last picture presented in the story was a bowl with a bit of wiring and a brain floating in some liquid.

OK, what I want to express is that it might be quite hard to find a fitting environment for a kind of intelligence, in order to prevent it from becoming mad sooner or later.

Unfortunately I have not seen A.I. yet.
If it's worth it, I may try to do so...
 
Originally posted by Malenfant:
I know it's fictional, but just how smart do you think HAL 9000 was? ;) Seriously. It could run an entire spacecraft, play a great game of chess, interact meaningfully with its crew, and be highly inventive in coming up with ways to kill them when its existence was threatened.
Threatened by what? What reference anywhere inside the movie footage of 2001 tells us that HAL was threatened?

(I've read and seen 2010, and know of its patch-on explanation for HAL's actions, and that still didn't involve HAL being threatened.)
 
In a way, HAL's existence was threatened. HAL was hardwired (see earlier in this thread about hardwiring AIs and why it might be bad) to see the successful completion of the mission as his first priority. The fact that HAL didn't want to kill the crew was immaterial in the face of this hardwiring. HAL wasn't threatened directly; his reason for existence was threatened. Remember that HAL played chess against his creator - a fairly intimate connection with his god - which may have swayed things somewhat.
 
Originally posted by RainOfSteel:
Threatened by what? What reference anywhere inside the movie footage of 2001 tells us that HAL was threatened?
When Frank and Dave were in the Pod, and HAL was lipreading that they were thinking of turning him off?
 
Originally posted by Malenfant:
Originally posted by RainOfSteel:
Threatened by what? What reference anywhere inside the movie footage of 2001 tells us that HAL was threatened?
When Frank and Dave were in the Pod, and HAL was lipreading that they were thinking of turning him off?

D'oh! Got me. You are quite correct. I'd forgotten that. Yes, they did threaten HAL . . . but then, hadn't HAL already had some fairly serious problems by that point?
 
Originally posted by veltyen:
In a way, HAL's existence was threatened. HAL was hardwired (see earlier in this thread about hardwiring AIs and why it might be bad) to see the successful completion of the mission as his first priority.
How is the viewer of the movie supposed to figure that one out?

(Reading/watching 2010 is not an answer.)
 
I am pretty sure that it is implied in the film, but not directly confirmed. The deliberate ambiguity - HAL being programmed that way versus HAL truly going insane - wasn't addressed, as far as I am aware. It has been a while (about 10 years) since I've read or watched 2001, so my recollection isn't perfect.
 
Don't forget though, 2001 (the book) was rather different. For starters, the ship was going to Saturn*, not Jupiter (the monolith was actually on Iapetus, to be exact). And the stargate trip actually meant something in the book, rather than just being loads of trippy psychedelic lights in the movie. ;)

*: Apparently you can see the Saturn footage that they were going to use in 2001 (the film) in the movie "Silent Running". I forget why they changed planets... I thought it was because the Saturn effects were proving too expensive or something?
 
Originally posted by RainOfSteel:
So, would you agree that the movie is lacking without the contents provided by the book?
What movie version of a book doesn't differ from the source material? Segments are added, deleted, motivations change, sometimes the only thing the same is the title and general situation.

If I see a movie based on a book, I like to read the book to see what got redone. Sometimes the two are very close, sometimes they're very different animals.
 
In this case, 2001 the book was written at the same time as the movie, with the initial idea for the movie coming from a short story by Clarke, "The Sentinel". Clarke had a great deal of influence during the film, especially for a Kubrick film.


As for why the Saturn effects appear in Silent Running, I believe it's because the same person, Douglas Trumbull, did FX work on both. According to IMDB.com's entry for 2001 (link), they weren't able to make convincing enough rings, so Saturn was scrapped. I remember hearing something similar on the 2001 DVD.

Interesting tidbit of trivia: "Marvin Minsky, one of the pioneers of neural networks who was also an adviser to the filmmakers, almost got killed by a falling wrench on the set." Very glad that he wasn't! :eek:

Casey
 
Geez, I can't leave you guys alone for a minute or you add 4 pages to a discussion while I'm not looking! ;)

It is a simple matter to "hardwire" something. Don't want it to kill? Don't give it any way to. No arms, no legs, no mobility, and definitely no nuclear-powered-slingshot attachments! You might limit who it comes into contact with too: keep it away from raving lunatics who might be convinced to do something. No connection to environmental controls (I don't feel like stepping outside for a nice breath of fresh vacuum today), and we don't want it copying itself to the internet.
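In software terms, that kind of hardwiring is just a capability whitelist: the AI can only act through effectors it was explicitly wired to. A toy sketch, with all the device names invented for illustration:

```python
ALLOWED_EFFECTORS = {"speaker", "screen"}  # no network, no airlocks, no life support

def act(effector: str, command: str) -> None:
    """Refuse any action on an effector the AI was never wired to."""
    if effector not in ALLOWED_EFFECTORS:
        raise PermissionError(f"{effector}: not wired in, request denied")
    print(f"[{effector}] {command}")

act("screen", "display chess board")  # fine
try:
    act("pod bay doors", "open")      # denied at the wiring level
except PermissionError as err:
    print("I'm sorry, Dave:", err)
```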

Another thing we can do is be sure that the stuff it's running on doesn't exist anywhere else in the world; that makes it a lot harder for it to transfer itself and start taking over, like in T3. Of course, Skynet had proven itself as reliable while still sub-sentient, and so was entrusted with a lot of things, but they didn't put in enough failsafes.

One must remember that the majority of sci-fi isn't truly meant to give a glimpse into the future; it is actually meant to warn us of potential problems (albeit in an entertaining way). BECAUSE of the Terminator movies, and those of their ilk, we are very unlikely to make such things; we will be much more careful. BECAUSE of ID4 or WotW, we will not instantly assume all aliens are friendly, but BECAUSE of Contact, we will not assume they are all unfriendly. And so on.

Crap. I had another minor epiphany while reading all your comments, and now I've forgotten what it was. You see what the last one did; this one would have been at least as big.

While I'm thinking of it, I recall the Engineer asking what the use of a limited AI would be. Well, I for one would not want to HAVE TO be nice to every bloody piece of equipment I own. I am nice to the expensive stuff, and to the dangerous stuff, and to the stuff I like, but there have been times when something pissed me off and I felt no compunction about breaking it or trashing it, aside from the monetary loss. I would not throw my favorite hand grenade against a wall if I got pissed at it, because I don't want it to blow up and kill me! By the same token, it's going to take a while to train an AI secretary to tell the difference between meaningful communications and spam and crap (spam being worse than crap), and that may get frustrating when it *accidentally* posts my contact information to every friggin spammer on the planet. :mad: That is most certainly an offense worthy of destruction, but if you give the bloody thing power over you, well, it makes it harder to give it its just deserts.

And I wouldn't necessarily feel the same about killing a real person, or at least someone I'd grown to like a little bit. :D

Plus, it's cheaper and more effective to have several devices that do one or two things really well than to have one device that supposedly does it all. In THEORY, the multi-purpose tool is better, but in practice, it's not. That's why computers have yet to replace all other things. It's just BETTER to have a TV, DVD player, blender, light switch, garage door opener, MP3 player, dog walker, and God knows what else, all in separate packages, since apparently humans cannot make reliable multi-function tools even when paid to do it, but they seem pretty good at making single-purpose tools. (God help you if you entrust your spacecraft OS to MS!!! You'll get what you deserve, I know that much.)

Ah, frig-noodles, I still can't think of it, and I skimmed back over the posts that supposedly inspired me. Oh well, I guess the world just won't be a better place now. Not like that's the first time.
 
Originally posted by TheDS:
Plus, it's cheaper and more effective to have several devices that do one or two things really well than to have one device that supposedly does it all.
While not cheaper, I would argue that it is more effective to have both.

Take a house. There are a certain number of items that will be available: a communicator, an entertainment system, a kitchen. Each breaks down into further subcomponents, and each component and subcomponent can be interacted with directly. At that point a house intelligence is more like a voice-activated universal remote, with very little function unto itself.
 
Originally posted by TheDS:
It is a simple matter to "hardwire" something. Don't want it to kill? Don't give it any way to. No arms, no legs, no mobility, and definitely no nuclear-powered-slingshot attachments! You might limit who it comes into contact with too: keep it away from raving lunatics who might be convinced to do something. No connection to environmental controls (I don't feel like stepping outside for a nice breath of fresh vacuum today), and we don't want it copying itself to the internet.
You also have to make sure it isn't able to broadcast commands. By that I mean, no built-in wireless functionality. If a potentially harmful device is capable of being controlled remotely, then a malevolent AI could just "sit" there while machines around it are running amok.

While I'm thinking of it, I recall the Engineer asking what the use of a limited AI would be. Well, I for one would not want to HAVE TO be nice to every bloody piece of equipment I own. I am nice to the expensive stuff, and to the dangerous stuff, and to the stuff I like, but there have been times when something pissed me off and I felt no compunction about breaking it or trashing it, aside from the monetary loss. I would not throw my favorite hand grenade against a wall if I got pissed at it, because I don't want it to blow up and kill me! By the same token, it's going to take a while to train an AI secretary to tell the difference between meaningful communications and spam and crap (spam being worse than crap), and that may get frustrating when it *accidentally* posts my contact information to every friggin spammer on the planet. :mad: That is most certainly an offense worthy of destruction, but if you give the bloody thing power over you, well, it makes it harder to give it its just deserts.

And I wouldn't necessarily feel the same about killing a real person, or at least someone I'd grown to like a little bit. :D
Another movie reference: The Animatrix. In one of the short films therein, we learn that AI slave robots didn't like being treated as property that could be smashed to bits at will. And that's how that whole mess got started.

Plus, it's cheaper and more effective to have several devices that do one or two things really well than to have one device that supposedly does it all. In THEORY, the multi-purpose tool is better, but in practice, it's not. That's why computers have yet to replace all other things. It's just BETTER to have a TV, DVD player, blender, light switch, garage door opener, MP3 player, dog walker, and God knows what else, all in separate packages, since apparently humans cannot make reliable multi-function tools even when paid to do it, but they seem pretty good at making single-purpose tools. (God help you if you entrust your spacecraft OS to MS!!! You'll get what you deserve, I know that much.)
I couldn't agree more.
 
Re the Animatrix: That was because people made the mistake of allowing them to have feelings. Does a car key care if it snaps in your car door in the winter? Does the car? Does the cold? Nope, just you, and whoever was counting on you to get that car started and warmed up in the middle of that stupid mini-blizzard on Thanksgiving that I specifically ordered NOT to happen.

At the moment, my computers don't really care how loudly or profanely I yell at them when they screw up. They have no ears, among all the other reasons. :D

Many people assume that there is a simple threshold level of computing power that we must cross (and probably that we are close to crossing) at which point computers can start to have feelings and feel threatened.

Once you've crossed it with your experimental computers, DON'T DO IT with your production models!

Considering the amount of computing power that is out there, I think I would be safe in saying that, if properly organized or programmed, it could EASILY outperform a human in several tasks that we have always assumed would always be our province. At worst, this ultra computer would be like an idiot savant, and at best, be unimaginably smart; Singularity-level smart. *

In current supercomputers, we should be seeing the beginnings of stuff like this. Do we? Some of you are a lot closer to this than I am and can give better answers, so here is the real question: has anyone reported seeing sentient behaviors in computers?

* The "Singularity" is the point in history beyond which things cannot be predicted. The one of current interest is the computer-related one. In it, people have noticed that computer power doubles every 18 months or so. Assuming that continues, and we start using computers to make computers (which we do), then eventually we will reach a point at which computing power is so great that at that point, no one can predict what will happen. There have been other singularities in the past, things leading up to an event, beyond which the future could not reliably be predicted, The Renaissance being a prime example. No one knew what things we could discover, and few could have imagined the world today, 500 years ago. The next singularity is one of the main reasons we have so much trouble building believable futuristic games and movies.

I have not done the theory complete justice here; I recommend you look it up and find out what I did such a poor job of conveying.
 