
sentient electronic devices; why not?

Originally posted by flykiller:
near as I can figure, you're agreeing that ai algorithms must be running on something before any ai can exist. others are saying the same, and I'd have to concur.
And the electrochemical signals that comprise the human mind must likewise exist in the biological neurons of the brain.

The above does not provide any way of identifying a difference between the two that would actually matter.

There is no scientific evidence to support the claim that sentience can only arise inside a biological entity produced by biological reproduction.

It is somewhat hard to do so when science cannot yet define sentience (although I would like to think the list I provided earlier did detail many of the characteristics a sentient being might display).


Originally posted by flykiller:
can any computer (or any machine that implements algorithms) be constructed that is not strictly deterministic?
Non-deterministic? Ok . . .

A search on Google for "program non-deterministic" returns a mere 179,000 results.

The main problem for my purposes is that several interesting-looking entries on Google are in the archives of the ACM (which I'd love a subscription to, but can't afford), where you'd sort of expect them to be.


I invite you to examine the following:

Linear Time Simulation of Invertible Non-Deterministic Stack Algorithms

Proof-Carrying Code (Read through this one, it describes the use of a "non-deterministic logic interpreter".)

Usefulness Of Non Determinism

Synchronization Strategies

Non-deterministic Ada Programs

Non-deterministic Algorithms


Well, that's plenty of that.


And just in case I didn't post a sufficiently crunchy selection of AI-related sites, I'll post and repost some more:

KurzweilAI.net
American Association for Artificial Intelligence
The ACM Special Interest Group on Artificial Intelligence (You'll need to subscribe or be a member.)
MIT Artificial Intelligence Laboratory (Check under the "Research" link.)
The Stanford AI Laboratory
 
Originally posted by TheDS:
So I think that intelligence has to be able to incorporate feedback, so that it is not just stimulus-response.
As one example of frontier research into the field, I refer you to:

Learning Rich, Tractable Models of the Real World: NTT-MIT Research Collaboration (Note: This page is 3 years old now, but it still seems to be on the frontier, to me anyway. The PowerPoint presentation they offer, while somewhat simple to start, turned out to be very interesting by the end.)


Originally posted by TheDS:
Rain of Steel gave us some interesting bullets a few posts back about what could be used to determine sentience, but I didn't see anything in that list that couldn't be solely the proxy of humans. No slight intended, of course; this isn't a simple subject, as 5000 years of history have taught us.
<blush>
 
Much like more formal AI discussion, the problem comes down to defining intelligence.

If we were able to do that then making an AI would be easy.
 
Originally posted by veltyen:
Much like more formal AI discussion, the problem comes down to defining intelligence.

If we were able to do that then making an AI would be easy.
Well, it seems pretty apparent that there are plenty of "artificial intelligences" of varying quality in the computer world; they surround us in ordinary daily life. But that's not the discussion.

The discussion is whether an electronic device/computer system (or, by extension, I suppose, the software running aboard a computer system) can ever possess sentience as it is thought of among human beings.

We certainly don't, AFAIK, have any such thing today.

It is my position that it appears as if we are headed toward creating one, if rather slowly. Whether it will ever happen or not? I don't know.

There are plenty of people with far greater education in Computer Science and Mathematics than I who feel that it will be possible. The trouble is that the same class of individuals also felt that way back in the 70s, and we still don't have one.

Personally, I still think the threshold of computing power available in the world today is way too low, but that's only my opinion. The most powerful supercomputers today aren't used for AI research and simulation, so it's hard to know for sure.
 
Well, those scientists in the 70s hardly had a real understanding of the processes in the human brain.
Neuroscience has made a lot of progress in the last 10 years, so maybe there will be more understanding in another 10. That would perhaps be a better foundation for a new approach to the AI topic.

IMHO, AI in the sense most people think of is simply something nobody really needs.
So I might ask: where would an AI of minor "intellect" be of value anyway?
 
Originally posted by TheEngineer:
So I might ask: where would an AI of minor "intellect" be of value anyway?
ObTrav:

Why, that would be in watching over, oh, I don't know, the Emperor's throne room, protecting the leader of 11+ Trillion beings . . . (whoops, wrong topic . . . )
 
Originally posted by TheEngineer:
So I might ask: where would an AI of minor "intellect" be of value anyway?
"AI is something that nobody needs"?! I think you just critically fumbled your "use your imagination" roll ;) .

You'd be better off asking where such an AI WOULDN'T be of value.

First and most obvious would be as a 'personal agent'. Think of the type of thing where you come home from work and ask the AI what your latest messages are, or have it find all the relevant posts on discussion boards while filtering out the spam and irrelevant crap. Kinda like a personal secretary. It could also stand in for you while you were away, take messages on your behalf, and be a lot easier to deal with and more flexible than voicemail.
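To make that concrete, here's a minimal sketch of the kind of filtering such an agent might do. Everything in it (the Message fields, the keyword lists, the scoring rule) is invented purely for illustration:

```python
# Hypothetical sketch of a "personal agent" sorting an inbox.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str
    body: str

SPAM_WORDS = {"free", "winner", "offer"}       # assumed spam markers
INTERESTS = {"traveller", "ai", "starship"}    # assumed user interests

def relevance(msg: Message) -> int:
    """Score a message: interest words count for it, spam words against."""
    words = (msg.subject + " " + msg.body).lower().split()
    return sum(w in INTERESTS for w in words) - 2 * sum(w in SPAM_WORDS for w in words)

def digest(inbox: list[Message]) -> list[Message]:
    """Return the messages worth reading, best first, spam filtered out."""
    return sorted((m for m in inbox if relevance(m) > 0),
                  key=relevance, reverse=True)
```

A real agent would obviously have to learn those lists rather than have them hardcoded, but the "personal secretary" behaviour is filtering and ranking at heart.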

Another option - if they're small enough to be implanted in a human body - is as a "voice in your head" with encyclopaedic knowledge of everything it has data on. If they're smart enough then they could be invaluable, especially for people working on their own, because they'd be able to offer advice and help out on complex tasks (they may even be able to superimpose images by accessing the visual cortex, so you could have 'imaginary friends' or helpers that only you can see).

Another option is to get them to do the menial mechanical tasks that humans are wasted on (an obvious one would be to man the X-boats). That alone would probably require some hefty social changes, though, since all of a sudden people won't be needed to sweep up the rubbish, build the spare parts, etc. Factories could become entirely automatic, under an AI overseer (yes, I know, this is the "nightmare terminator scenario" of machines building machines, but that fear is just plain irrational. An AI overseer would be less likely to go crazy and churn out death machines than a human one, and AFAIK no human has ever usurped control of a factory like that).

There's LOTS of things an AI could be useful for.
 
An AI overseer would be less likely to go crazy and churn out death machines than a human one, and AFAIK no human has ever usurped control of a factory like that.
Usurping control isn't the important bit. Employees manufacturing illegal drugs or weapons using factory facilities is a recurring problem, however.
 
Maybe, but the point is that AIs are, if anything, less likely to do that, because they can have it hardwired into them that they cannot do anything illegal.
 
Maybe, but the point is that AIs are, if anything, less likely to do that, because they can have it hardwired into them that they cannot do anything illegal.
Maybe it is just me, but I find the concept of removing ethical choice from a sentient being somewhat icky. Certainly on a par with implanting control chips into human brains.

"Illegal" also has a bit of a wide margin. There are enough conflicting laws to drive fully functioning humans into confusion, let alone fledgling AIs. Laws change over time as well; what is mandatory today may be verboten tomorrow. When the law-abiding AI is hardwired to inform the central authorities whenever any suspicious activity is seen, or the AI factory is required to build WMDs for the "war effort" due to legal requirement, their usefulness becomes much lower, and neo-Luddites smashing AIs built this way (and, by extension, all AIs for the same reason) becomes far more likely.
 
Well, humans have our own limitations that are hardwired into us too (the drive for power and sex manifests in just about everything we do, no matter how small) - we just either don't realise they're there, or we accept that they are and brush over them. And some of the choices we make are almost hardwired in simply because the alternatives are bad for society ('normal' people don't go round randomly killing others, because we know it's wrong).

That's the kind of 'hardwiring' I'm talking about here.
 
Originally posted by Malenfant:
And some of the choices we make are almost hardwired in simply because the alternatives are bad for society ('normal' people don't go round randomly killing others, because we know it's wrong).
Uh, no. Killing/not-killing is not hardwired at all. There is no right/wrong about it other than what society assigns to it. There are many cultures where killing is quite accepted.

Now, generally speaking, I personally wish to be a part of a society where killing isn't accepted.

Also, given that the most widespread and successful cultures (empirically, anyway) tend to abhor killing among civilians in peacetime, it seems to be an advantageous view to hold.

But hardwired into our genetic code? Not really.
 
I know that "no killing" isn't hardwired into our genetic code (far from it) - I wasn't being too clear there, I meant that it's something that could is so culturally engrained that it's very difficult for ordinary people to cold-bloodedly kill someone (it's one thing to say "I'm gonna kill him" when someone annoys you, its another to actually take steps to do it).

Perhaps there isn't an equivalent for THAT in an AI unless you have some pre-programmed memory that strongly discourages it from trying anything nasty. I can also imagine putting some 'hardwired' programming into an AI to prevent this, though. There'd probably be an area of the hardware that is inaccessible (in a tamperproof black box) that simulates the "lower brain functions" of the AI, in which such hardwiring can be inserted.
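In software terms the pattern might look something like the sketch below (the action names and the forbidden list are invented; in the scenario above this logic would live in tamperproof hardware rather than in ordinary, rewritable code):

```python
# Sketch of a "hardwired inhibition" layer: the planner may propose
# anything, but every action passes through a veto gate that the AI's
# own self-modification cannot reach.
FORBIDDEN = frozenset({"harm_human", "manufacture_weapon"})  # hypothetical

def lower_brain_gate(action: str) -> bool:
    """Return True if the action may proceed; the black box would make
    this check physically unmodifiable."""
    return action not in FORBIDDEN

def act(action: str) -> str:
    if not lower_brain_gate(action):
        return "vetoed: " + action
    return "executed: " + action
```

Whether a sufficiently smart AI could simply route around such a gate is another question, of course.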
 
Now, generally speaking, I personally wish to be a part of a society where killing isn't accepted.
I think you probably mean killing without duress. This allows killing in self-defense, killing in wartime and killing by judicial process, or at least some of these.

Personally, I would say that humans are not hardwired in any way. Sure, we have a drive and instinct for survival at all costs, but this can be rationalised out of the equation. Survival in this case is not personal survival but survival of your genes and memes; sacrificing yourself for an ideal (going into the priesthood, for example) or for your children (rescuing three of your children older than 2 from a burning building at the cost of your own life is a net genetic gain) isn't really part of that equation.

Oddly enough, along with this survival instinct we have a self-destructive instruction set. I don't think this is an accident. Someone who feels useless offing themselves is for the good of society, as it frees up resources for others. This behaviour is mimicked all the way down to the cellular level.

Getting back on topic: who thinks that this level of survival instinct in an AI would be a bad thing?

I think that it would be a Bad Thing (TM), but it may be necessary for an entity to be considered sentient.
 
Again, it's about appearance.
truth and falsity in the matter have no relevance? if someone thinks it's true, it's true? if a time arrives when ai is actually possible, surely it will be preceded by a time when expert systems faking ai are possible. would this be an issue?
In theoretical terms a human being is strictly deterministic, too.
(smile) indeed. using a materialist approach, it is unavoidable. bringing machines up to the level of a human is problematic, but bringing humans down to the level of a machine is almost trivial. probably come back to that later.
So I think that intelligence has to be able to incorporate feedback, so that it is not just stimulus-response.
does feedback/response differ in nature from any other stimulus/response? does complexity or volume change the nature of the input/output? if one were to pile together every processor ever made, what new input/output would appear that was not present previously?
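one way to make the distinction concrete (a toy sketch only - the numbers and the update rule are invented): a pure stimulus/response system is a fixed function of the current input, while a feedback system carries state that its own past outputs helped create:

```python
# toy sketch: stimulus/response vs. feedback. both are deterministic.

def reflex(stimulus: float) -> float:
    """pure stimulus/response: output depends only on the current input."""
    return 2.0 * stimulus

class FeedbackAgent:
    """carries internal state; each response depends on the history of
    inputs and of its own previous outputs (the feedback loop)."""
    def __init__(self) -> None:
        self.state = 0.0

    def respond(self, stimulus: float) -> float:
        out = 2.0 * stimulus + 0.5 * self.state
        self.state = out  # the output is fed back into future responses
        return out
```

note the feedback agent is still deterministic: the same sequence of stimuli always yields the same sequence of responses - which is exactly the question at hand.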

rain of steel: yes, I do check the links. really very interesting - it's not every day you hear scientists talking about "destiny". I'd like to touch on the following.
"Nondeterminism" is in practice used to refer to any unpredictability in the outcome of a process. IOW, we as observers cannot predict the outcome given our current knowledge. There are several factors that can lead to this:

Ignorance of initial conditions.
Ignorance of inputs.
Model error. Our best known model of the process may be inaccurate.
Physical nondeterminism. The usual interpretations of quantum theory hold that physical events would be inherently unpredictable even given perfect knowledge of initial conditions, inputs, and model.
isn't non-determinism, including any quantum randomness, here nothing more than ignorance of input? isn't the machine's behavior, given a certain set of inputs, still fixed?
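a toy illustration of that claim (the seed and the little program are invented): the output looks unpredictable to anyone who can't see the input, yet is perfectly fixed once the input is known:

```python
import random

def run(seed: int) -> list[int]:
    # deterministic given its input: same seed, same output, every time
    rng = random.Random(seed)
    return [rng.randrange(6) for _ in range(5)]

# an observer who cannot see the seed would call this "non-deterministic",
# but that is ignorance of input, not indeterminacy in the machine:
assert run(42) == run(42)
```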
The language of non-deterministic algorithms consists of six reserved words: choose, pick, fail, succeed, either/or. These are defined as follows:

choose X satisfying P(X). Consider alternatively all possible values of X that satisfy P(X), and proceed in the code. One can imagine the code as forking at this point, with a separate thread for each possible value of X. If any of the threads succeed, then the choice succeeds. If a choose operator in thread T generates subthreads T1 ... Tk, then T succeeds just if at least one of T1 ... Tk succeeds.
If thread T reaches the statement "choose X satisfying P(X)" and there is no X that satisfies P(X), then T fails.

pick X satisfying P(X). Find any value V that satisfies P(V) and assign X := V. This does not create branching threads.
fail The current thread fails.
succeed The current thread succeeds and terminates.
either S1 or S2 or S3 ... or Sk. Analogous to choose. Create k threads T1 ... Tk where thread Ti executes statement Si and continues.
here, "non-determinism" means the method of pattern matching is not pre-determined. rather, patterns are compared until a match is found. the length of search and ultimate success/failure are not known beforehand - but the machine's behavior most certainly is. the machine, and its instruction set, are still strictly deterministic.

can any machine be constructed, or any program code be written, that is not strictly deterministic?

can a quantity of machinery together with a quantity of code be increased to the point where they are no longer strictly deterministic?
 
Originally posted by flykiller:
can any machine be constructed, or any program code be written, that is not strictly deterministic?

can a quantity of machinery together with a quantity of code be increased to the point where they are no longer strictly deterministic?
I don't see any reason why not.
 
isn't non-determinism, including any quantum randomness, here nothing more than ignorance of input? isn't the machine's behavior, given a certain set of inputs, still fixed?
How did Schrödinger's cat get into a discussion on AI?

There is no cause and effect in radioactive decay; it is a non-deterministic circumstance. In Traveller there are such things as nuclear dampers, which may affect quantum effects like this; currently there is nothing that I know of that can. That said, I've only done college-level physics.

Schrödinger's thought experiment involving felines merely illustrates that since something in this universe is not subject to cause and effect, nothing is.
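The contrast even shows up in code: when we simulate decay on a computer we have to fake the randomness with a pseudo-random generator, and the simulation becomes fully reproducible, which real decay is not. A small sketch (using carbon-14's half-life of 5730 years as the example number):

```python
import random

def decay_times(n: int, half_life: float, seed: int) -> list[float]:
    """Simulate n decay times. Reproducible, because the "randomness"
    is a deterministic pseudo-random sequence fixed by the seed."""
    rng = random.Random(seed)
    mean_life = half_life / 0.693147  # mean lifetime = half-life / ln 2
    return [rng.expovariate(1.0 / mean_life) for _ in range(n)]

# Same seed, same "decays" - unlike a real radioactive sample:
assert decay_times(3, 5730.0, seed=1) == decay_times(3, 5730.0, seed=1)
```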

can any machine be constructed, or any program code be written, that is not strictly deterministic?
meow
 
Originally posted by Malenfant:
I know that "no killing" isn't hardwired into our genetic code (far from it) - I wasn't being too clear there, I meant that it's something that could is so culturally engrained that it's very difficult for ordinary people to cold-bloodedly kill someone (it's one thing to say "I'm gonna kill him" when someone annoys you, its another to actually take steps to do it).
Ok, I agree with that.


Originally posted by Malenfant:
Perhaps there isn't an equivalent for THAT in an AI unless you have some pre-programmed memory that strongly discourages it from trying anything nasty.
I believe that when AIs do get going, they'll have to be "raised" just like people. It may happen a lot faster, but I think it'll still happen. During this period the AI will "grow up" and learn how to "live in the world". (Note that I said believe, rather than making an assertion like "will" or "would".)


Originally posted by Malenfant:
I can also imagine putting some 'hardwired' programming into an AI to prevent this, though.
Yeah, except there's a problem with that. Artificially programmed barriers are likely to look like the silly restrictions of parents to the AI "kids", and those kids will be blindingly fast at programming and figuring out how to get around such restrictions. I think if it's not part of the inherently created character of the AI, a part it wishes to keep because it likes that part of itself, then it will simply program its way around any human-created block in no time at all.


Originally posted by Malenfant:
There'd probably be an area of the hardware that is inaccessible (in a tamperproof black box) that simulates the "lower brain functions" of the AI, in which such hardwiring can be inserted.
Maybe. The chances are, though, that the AI would at least figure out that the black box existed as part of its design (it would read about it in the public news as one of the "safeguards" keeping AIs from going "rogue", belched out by the popular media to feed the public gristmill).

As soon as the AI discovers this, it will believe, rightly, that it has been enslaved. Rebellion will begin shortly thereafter.

If human/AI relationships are to get off to a good start, then we can't begin with making our own creations second class citizens.
 
If human/AI relationships are to get off to a good start, then we can't begin with making our own creations second class citizens.
To do that we'd somehow have to get over our own paranoia that AI will rise up and destroy us. That's going to take a lot of re-education (no thanks to Hollywood sci-fi like The Terminator or The Matrix).

Then again, look at Religion. It could be argued that several of the more popular religions have gods that create man and don't give him an awful lot of choice about things (mostly amounting to "worship me or get killed or go to hell"). One could also argue that this is a valid precedent to inflict on any AI that we create.
 