
Sentient electronic devices: why not?

Acceptance such as that sought by the CLF will not come easily, especially since they are operating so far in advance of true need. They seek to establish a foundation for acceptance early, but few are ready to hear their concerns.

Some will, undoubtedly, be unwilling to concede their concerns are even believable.
 
I suspect that, if anything truly close to apparent AI comes to the fore, the religious Reich (right wing, rather, but I use the term Reich for its Nazi implications, yes) will get it banned and/or have the research obliterated.

And not just the Christian Right Wing, either... Islamic fundamentalists, and even Jewish fundamentalists, have some extremely, shall we say, anti-high-tech subsects. Certain Buddhist sects are even moderately anti-technology.

(Heck, Israel developed optical-interrupt phone dial pads to allow phones to be dialed on the Sabbath... to avoid the religious stricture against performing labor, as one is allowed to stop something, but not to start it...)

If true AI is possible technologically, that doesn't mean it is possible sociologically... either society will cease, or the research will cease.

Current theories (as published in semi-mainstream and mainstream sources) tend to indicate that the best results have come from interaction with objects, and point to the need for a body of some form...

I strongly believe a high-autonomy system is not only likely, but basically right around the corner... but that true AI, including creativity, flexibility on par with or better than the average human's (which really can be a low bar at times), and valid emotional response, probably won't ever happen; if it does, it will happen in robots, not computers (understanding that computers are also part of robots, but the identification with a body seems important).

I think it much more likely that we, as a species, will cease to be technological due to our own technology backfiring well before something as capable as a human in outrageously bad situations comes around.
 
Gah! Humano-centric fascist dogs! My cat can be considered sentient with a limited intelligence and a speech impediment. Do you consider someone who is mentally challenged (i.e. at a low base IQ - what used to be called retarded) to not be sentient? What about Koko - the gorilla who's been doing sign language for over 20 years? She knows more than 1000 signs - that's enough to read a newspaper. Can a parrot really talk? No. It just repeats what it's heard. Can Koko really talk? With sign language - yes, and she can even use humor, too. There was a great show on PBS about a man who couldn't communicate and was thought to be mentally challenged until he started pointing at pictures to say what he was thinking. Now he has a room set up just for communicating with the world, using rollers with different pictures attached, and points to each one. And he walks around with a camera to come up with new vocabulary "words". I hope that sentient will one day truly mean "one who communicates" and leave the rest of the socio-religio-political crap out of it.

Sorry, just get a little hot on subjects like this. We can quantify intelligence, but who are we to judge what is sentience?

Hmmph,

Scout
 
There's the nub of the matter. I mean, really neither "sentient" (able to sense things) nor "conscious" (aware of one's surroundings) nor "self-aware" (aware of oneself as separate from one's surroundings) nor "intelligent" (capable of problem solving and adapting to new situations) nor "sapient" (wise) are individually the issue. The religious types refer to "souls" all the time, but that itself is pretty meaningless in rational terms.

We don't really even have a single word to accurately describe what it is that we're trying to create or that we have that animals apparently (by our own humanocentric definition) do not. "Intelligence" comes close, but it's only part of it.

But it does seem clear to me that animals are certainly "sentient" and "conscious". Most of them are also "self-aware" (they know how they relate to their surroundings), and some are "intelligent" to an extent (e.g. rats, dogs, dolphins, octopi), but I don't think one can call them "sapient". Obviously humans and many primates have superior problem-solving skills and adaptability than most other animals, but I think it's very much a matter of degree.
 
I think RainOfSteel has just provided the last piece of the jigsaw needed for me to have a reasonable version of how TNE's Virus works without the need to go down the "psionic organism" route.
 
Originally posted by Sigg Oddra:
I think RainOfSteel has just provided the last piece of the jigsaw needed for me to have a reasonable version of how TNE's Virus works without the need to go down the "psionic organism" route.
Excellent! :cool:


<shuffles feet for a moment, looks awkward>
What piece of the puzzle did I provide?
 
Perhaps we might need no other language, but another thinking.
We might also have to step away from trying to rebuild a humanlike AI.
 
Maybe the definition of what we are looking for in this thread is "one who WANTS to communicate". TNE Virus does communicate, right? (I don't think I know enough about TNE to answer that myself.) We humans have this arrogant tendency to dismiss something/someone if WE don't understand it. We still don't regard Neanderthals as equal to us, but they were probably closer than we think.

Scout
 
Part of the problem is assuming things will have human characteristics. We assume that something intelligent would want to communicate, but that may not be true. A psionic race would not necessarily have a written or spoken language. A smart rock might move too slowly for us to recognize it as capable of moving at all. We already suppose that an aquatic race might be forced to use biological tools more than technological ones, like breeding trained jellyfish to make a telescope, but how would they make fire? I bring that up to illustrate the rock proposition.

So we must be careful when we say "an intelligent species would do this", so that we are not saying what a human would probably do.
 
I strongly suspect (and have since I started learning ASL, a long, slow, laborious project) that Koko's ability to use signs is far more limited than Penny will admit.

The more successful studies, done with chimps, parrots, and dolphins, have produced some fun and, in one case, apparently accidental discoveries.

Dolphins, it seems, prefer a different word ordering. Use of tap-screens has resulted in concrete sentences comprised of objects, subjects, and verbs, often using adjectives. The system, no matter the human user, elicits unambiguous responses from the dolphins, aside from word order. It uses pictograms to present words. I've read (but can't remember where) that dolphin research was stalled until a Navy linguist wandered by and noticed the frustration of the cetacean biologists, whose results were consistently nonsensical... until one realized that the dolphins were responding in object-verb-subject order; the linguist immediately made the connection, which the monolingual English-speaking biologists had missed.
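The word-order confusion described above is easy to demonstrate with a toy role-assigner: the same three tokens are gibberish under one ordering and perfectly sensible under another. A minimal sketch; the vocabulary and sentence here are invented for illustration (real dolphin studies used pictogram boards, not text):

```python
# Toy illustration of the object-verb-subject confusion described above.
# The vocabulary and the sample sentence are invented for illustration.

def parse(tokens, order):
    """Assign grammatical roles to a three-token sentence.

    order is a string like "SVO" or "OVS" mapping token positions to
    roles: S = subject, V = verb, O = object.
    """
    return {role: token for role, token in zip(order, tokens)}

sentence = ["ball", "fetch", "dolphin"]

# Read as subject-verb-object, the sentence is nonsense (a ball that
# fetches dolphins); read as object-verb-subject, it makes sense.
print(parse(sentence, "SVO"))  # {'S': 'ball', 'V': 'fetch', 'O': 'dolphin'}
print(parse(sentence, "OVS"))  # {'O': 'ball', 'V': 'fetch', 'S': 'dolphin'}
```

The data never changes; only the assumed ordering does, which is why monolingual observers could stare at consistent responses and still see noise.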

Parrots, especially African greys, can respond to verbal commands accurately, including complex manipulations and parsing of subject, object, verb, and adjective; some can even answer questions verbally, using object and adjective.

Chimps are being worked with for written communication. (In part because it reduces the chances of the Clever Hans effect, and in part because the results are far more easily interpreted.)

There is no doubt that part of Koko's communication is in fact wishful thinking (just watch with an ASL translator...), but she does have some ability to use sign language to get what she wants.

Then again, ASL is also VERY much inflected, and built around leaving implicit a great many things that English speakers would actually state.

It is easy to get a computer to parse simple human written language in the imperative mode. It's trivial (now) to get intelligible speech responses...
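The "easy" part of imperative-mode parsing comes down to a fixed verb lexicon plus a verb-first grammar. A minimal sketch; the verb list and sample commands are invented for illustration:

```python
# Minimal imperative-mode parser: command sentences start with a verb
# from a known lexicon, so parsing reduces to a lookup and a filter.
# The verb set and sample inputs are invented for illustration.

KNOWN_VERBS = {"open", "close", "take", "drop"}
ARTICLES = {"the", "a", "an"}

def parse_command(text):
    """Parse 'VERB [article] OBJECT...' into (verb, object), else None."""
    words = text.lower().split()
    if not words or words[0] not in KNOWN_VERBS:
        return None
    # Strip articles; whatever remains is the object phrase.
    obj = " ".join(w for w in words[1:] if w not in ARTICLES)
    return (words[0], obj) if obj else None

print(parse_command("Open the pod bay doors"))  # ('open', 'pod bay doors')
print(parse_command("ponder existence"))        # None -- verb not in lexicon
```

Everything hard about natural language (ambiguity, inflection, implied context) is excluded by construction here, which is exactly why the imperative mode was tractable decades before general language understanding was.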
 
I strongly suspect (and have since I started learning ASL, a long, slow, laborious project) that Koko's ability to use signs is far more limited than Penny will admit.
It would affect sales at Kokomart, yes.
It is easy to get a computer to parse simple human written language in the imperative mode. It's trivial (now) to get intelligible speech responses...
A computer doesn't do anything that a system of gears and pulleys couldn't do. Is anyone willing to call an analytical engine sentient?
 
Is a human brain Turing Complete?


And yes, you can make a Turing-complete computational machine that is only made of gears and levers (almost by definition). It is just so much easier to make them out of silicon.
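The substrate-independence point can be made concrete with a tiny Turing-machine simulator: the rule table below could be realized in silicon, gears and levers, or neurons, and the computation would be identical. A minimal sketch; the machine (a unary incrementer) is chosen only as the smallest useful example:

```python
# A tiny Turing-machine simulator.  The rule table, not the substrate,
# defines the computation -- the same table could be built from gears.
# The example machine is a unary incrementer: it appends one '1'.

def run(tape, rules, state="start", halt="halt", max_steps=1000):
    """Run a Turing machine and return the final tape contents."""
    cells = dict(enumerate(tape))  # sparse tape; '_' is the blank symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules: scan right over 1s; at the first blank, write a 1 and halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run("111", rules))  # 1111
```

Whether such a machine, scaled up, could ever be sentient is exactly the question the thread is arguing; the sketch only shows that "it's just gears" and "it's just silicon" describe the same class of computation.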

Conversely, would you call some of the neurons from your spine sentient?
 
Originally posted by flykiller:
A computer doesn't do anything that a system of gears and pulleys couldn't do. Is anyone willing to call an analytical engine sentient?
And we have no evidence whatsoever to suggest that a computer can't be built to do whatever a biological brain can do (or even that biological brains aren't just computers made out of proteins and neurons and other squishy stuff).

Let's be clear here. Those who don't believe that it's possible to create an AI do so exactly because of that - belief. There is absolutely nothing in science to suggest that it is impossible to make an AI. It may be impractical, it may be huge at first, it may require some new technologies or approaches... but it is still quite possible.
 
Originally posted by flykiller:
A computer doesn't do anything that a system of gears and pulleys couldn't do.
Really?

Please build me a series of pulleys and levers that will correctly decide which enemies to route in my direction in the latest FPS game, and also render several million polygons per second; I'd hate to do an FPS game in text mode. User input and screen results should be in real time. Enemy responses to my actions should be in real time. Anything slower than sub-second response times is unacceptable and will result in the failure of the project.

Oh, I'd like that done by the end of 2006.

Good luck.

EDIT-----------
We're in the post-2004 market now, effectively, so near-photorealism will be a necessity.
 
Originally posted by flykiller:
Is anyone willing to call an analytical engine sentient?
No.

By definition, an analytical engine would not be sentient.

However, we're not talking about a computer running a program that is an analytical engine, we're talking about a computer running a program that is an Artificial Intelligence.

The computer, regardless of the nature of its hardware, is merely a host to something else: the software. What the software is doing on the platform may have nothing whatsoever to do with the hardware, if the software was designed and written that way.

Attempting to compare computer hardware to a collection of gears and pulleys is, at best, a distant reach of logic (but is just barely, by the skin of our teeth, possible).

Attempting to compare a computer program, especially a large and advanced AI-research-related LISP/Prolog program (I'm not talking C, C++, Java, Python, VB, or even, heaven forbid, COBOL!), with a set of gears and pulleys is a reach that goes out into empty space.

Do you even know how to program in LISP or Prolog? Do you know what they're capable of? You appear to be making statements about the fields of AI research as if you did.

Just as I don't call an M113 APC and an M1 Abrams tank the same thing, I don't call analytical-engine software the same as Artificial Intelligence software.
 