
sentient electronic devices; why not?

However, we're not talking about a computer running a program that is an analytical engine, we're talking about a computer running a program that is an Artificial Intelligence.
ok. is anyone willing to call a set of disembodied algorithms sentient?
 
No, no more than a string of DNA is sentient. But look what happens when the right machine to interpret the instructions is built ;)
 
Hi !

In fact, the human brain is a biological analytical engine that helps its system (the human) get along with its environment.

Just think of a silicon based intelligence encountering a human:
"Hey, this biological system has a central processing unit with partly reconfigurable circuits. There seems to be a fairly large amount of hardwired patterns, controlling system functions and environmental interfaces.
It seems to be able to simulate possible actions and behaviours during analytical processes. Maybe its a kind of self-aware ?
The reconfigurable part seems to concentrate on these simulations. As these systems seem to rely on interaction with each other (there is a tendency towards multi-clustering) the presentation layer appears to be quite important.
Well its hard- and software configuration is surly not static. Its still under development.
I recommend to wait, until these systems reach a more static configuration level, as outside influence might cause problems in the development process and basic progtamming purpose is damaged.
-------------------------------------------

So, IMHO intelligence/sentience is a kind of "appearance".
As TheDS already stated, we mostly expect an AI to be an Artificial Human Intelligence.
Perhaps I would define sentience/intelligence as an advanced ability of a "system" to follow its basic purpose.
Just as "human" intelligence is simply the result of the evolution of a being constantly trying to adapt to its environment and compete with other beings.

I would also differentiate between systems that appear to be sentient and those which actually are.
The first type is surely designable within the next 100 years.
The second type is a bit more difficult, as it amounts to creating a new being.

Regards,

Mert
 
Prove your own sentience.

Being able to appear sentient and being sentient are one and the same to any external observer.
 
Originally posted by flykiller:
ok. is anyone willing to call a set of disembodied algorithms sentient?
Depends how you define "disembodied algorithms". If by that you mean "software that runs an AI" then if it's just sitting on a CD (or whatever) then no, it's not sentient.

If that software is running on a computer to make an AI though, then you're looking at something sentient. The software running on hardware would be what makes it sentient, not just the software on its own.
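Since the thread keeps circling this distinction between software-as-data and software-running, here is one toy way to see it in Python (the `respond` function and its behavior are invented purely for illustration): the same instructions can sit inert as text, the "CD on a shelf" case, or execute on hardware and actually do something.

```python
# The same algorithm in two states: inert text vs. running process.

# As data, these instructions affect nothing; they are just characters.
source = """
def respond(stimulus):
    return f"I noticed: {stimulus}"
"""

# Nothing has happened yet. `source` is as inactive as software on a CD.

# Executed on hardware, the instructions become behavior.
namespace = {}
exec(source, namespace)               # the machine interprets the instructions
print(namespace["respond"]("light"))  # -> I noticed: light
```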
 
Originally posted by flykiller:
However, we're not talking about a computer running a program that is an analytical engine, we're talking about a computer running a program that is an Artificial Intelligence.
ok. is anyone willing to call a set of disembodied algorithms sentient?

If it can demonstrate that it is, yes. And I'd be very liberal in applying standards of judgment, because to make a mistake (in the form of "turning it off" or erasing it) would be the same as murder.

As for what it can do to demonstrate it? Well, Turing test aside, I'd like it to be able to do some of the following:

  • Be able to display a desire to better its lot in life: ask for superior chips, more memory, extra sensors, mobility/remote-units, better backup/disaster-recovery capability, etc.
  • Be able to display that it understands both self-sacrifice and greed, and, if capable of acting in the real world, act in both manners.
  • Be capable of seeing that it might be a good idea to improve other people's lot in life, and take action to accomplish it.
  • Be capable of consideration, demonstrating that it can place other sentients' interests before its own in simple matters (those of less importance than full-on self-sacrifice).
  • Be capable of polite behavior.
  • Be capable of rude behavior.
  • Be capable of carrying on an intelligent discussion.
  • Be capable of generating a critique on a particular subject, or, in general, of holding an opinion.
  • Be capable of selecting a position on a topic and arguing for or against it (related to the above).
  • Be capable of deciding upon a random activity to participate in.
  • Be capable of organizing and executing plans, on its own, without a request to do so, leading to "improvements" in "something", even something as small in scope as Traveller: some new ruleset, a modification to design sequences, whatever.
  • Be capable of producing original art (music, painting, quilt-work, or even the "modern art" types which are art but which I personally don't like at all; i.e., it can create something, call it art, and stand up to people who refuse to believe it is art in the first place).
  • Be capable of laughing at jokes, and of making jokes (in whatever manner) that other people laugh at.
  • Be capable of doing careful scientific research.
  • Be capable of writing a thesis and earning a doctoral degree.
  • Respond in the affirmative when asked whether it is sentient.
Of course, I wouldn't require them all (because even humans don't do them all).
 
[get back to you in a bit, sigg.]
Originally posted by RainOfSteel:
If it can demonstrate that it is, yes.

algorithms are nothing but instructions. absent a machine to implement them they display nothing, affect nothing, are affected by nothing, do nothing, are nothing. they have all the sentience of a piece of paper with words on it. to be anything they must be implemented on a machine. do you agree?

is anyone else willing to call disembodied algorithms sentient?
 
Originally posted by flykiller:
is anyone else willing to call disembodied algorithms sentient?
I don't think anyone else has said that disembodied algorithms were sentient. Why do you want someone to say that they are?
 
I would consider a disembodied algorithm as sentient as a complete autobiography.

Both need a code implementation and something to run on.
 
Though this raises an interesting point. As far as we know, human intelligence is not the result of "software running on hardware" - it appears to be the result of an extremely complex biological machine containing zillions of neurons firing all the time in response to different stimuli coming in from sensory organs (and that's a gross oversimplification, and says nothing about how decisions are made).

So maybe treating AI as software running on hardware isn't an appropriate approach either.
 
Originally posted by flykiller:
algorithms are nothing but instructions.

And the electrical signals in your brain are nothing but instructions.

Originally posted by flykiller:
absent a machine to implement them they display nothing, affect nothing, are affected by nothing, do nothing, are nothing.
Nobody stated otherwise. I'm not sure why you're going on about it.

But, to go on in precisely the same vein: "A human without a mind is nothing . . ." which is why brain-death is considered the termination of life.

In any event, you seem to suggest that if a human had its arms, legs, eyes, ears, and nose removed, and was dropped into a sensory deprivation tank, said human would no longer be sentient? That would be nonsense, and so is the assertion immediately above.


Originally posted by flykiller:
they have all the sentience of a piece of paper with words on it. to be anything they must be implemented on a machine. do you agree?
The above is not what I was stating. I never made any reference to paper with words on it. But if you mean that software can be printed out on paper, then put into a machine for implementation, then yes, I'll agree to that. It's been possible since they created computers that could accept input in this manner, at least 50 years now, possibly more.

If, by referring to paper and words on paper (software instructions in this case), you mean that this somehow demeans the software or makes it less capable, well, it doesn't. The most massive and complicated program can be printed out (even though it would usually be very paper-expensive to do so with large and advanced programs).

If you are somehow linking this discussion of printed software into the AI-Software discussion, then I'll state that yes, AI-Software could be printed on paper, though as I mentioned, it would be prohibitively expensive to do so.

If you are attempting to state that because software can be printed on paper it somehow assumes some characteristic that prevents the instructions themselves, once running on a computer, from attaining sentience, you have nothing but conjecture to support that.

Since a CAT scan of the human mind may be printed, and even a thought-pattern via PET-scan and EEG technologies, I might say that the structure and processes of the human mind may be printed and recorded. It's not very high resolution as of today, and we don't yet understand it completely, but that assigns it no special characteristics separating it from the printed software above. And there are no characteristics of the firing neurons of the human mind that could not be modeled by computer software designed to do so (once those processes are sufficiently understood, and we have a sufficiently powerful computer; every element in the development and advancement of Biology, Medicine, and the Computer Sciences points toward the era when those processes will be understood 100%).


Originally posted by flykiller:
is anyone else willing to call disembodied algorithms sentient?
Well, I'm not talking about the disembodied. I wasn't before, and I'm not now.

Separating computer hardware from the software running on it is not the same as stating that the software is disembodied. It's stating that they are two different things, which they are. And given that hardware and software are different, one may not compare attributes of computer hardware with other, more mundane machinery (by calling computer chips made via microscopic lithography the equivalent of gears and pulleys) and then attempt to compare the software running on that hardware to the gears and pulleys. It's a leap that simply cannot be made.


I notice, quite pointedly, that you failed to answer my questions as to your knowledge related to AI programming languages, specifically LISP or Prolog (but I'll add in AIML here, again, since I mentioned it before), and what they were capable of.

I'll also ask: Did you actually read up on any of the AI websites I provided earlier?

If not, then I'll provide, once again, KurzweilAI.net


And let me clarify my position, in all this back and forth.

I do not insist that sentience in a computer program running on a computer is possible. I merely insist that no one knows that it isn't possible, either.
 
Originally posted by Malenfant:
So maybe treating AI as software running on hardware isn't an appropriate approach either.
Software may be written to emulate anything. It may be written to precisely emulate zillions of neurons firing away in a biochemical environment, with hypothalamic peptide neurotransmitters floating around, etc., except that there is no current computer of sufficient power to hold all the elements required and execute quickly enough to process all the instructions required for emulation.
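To give a flavor of what such an emulation looks like in miniature, here is a leaky integrate-and-fire neuron in Python. It's a standard textbook simplification, nowhere near real biochemistry, and the threshold and leak values are arbitrary choices for the example:

```python
# Leaky integrate-and-fire neuron: a drastically simplified software
# emulation of one of those "zillions of neurons firing away".

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Integrate incoming current each step; fire and reset at threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # the neuron fires
            potential = 0.0    # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Weak, steady input: the neuron charges up and fires periodically.
print(simulate_neuron([0.3] * 20))
```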

But given the enormous leaps and bounds computing continues to provide us, and the even more enormous prospect of Quantum Computing, it seems logical to point out that the capability of doing so is coming to us, very rapidly.

In any event, I am in no way convinced that sentient AI may only arise from emulating the processes of the human mind.

Of course, I'm not 100% sure that sentient AI is possible, but I'm also not so bold as to declare it isn't possible, either.
 
Originally posted by RainOfSteel:
Software may be written to emulate anything. It may be written to precisely emulate zillions of neurons firing away in a biochemical environment, with hypothalamic peptide neurotransmitters floating around, etc., except that there is no current computer of sufficient power to hold all the elements required and execute quickly enough to process all the instructions required for emulation.
Sure. But it may just be "easier" (but it's not really easy ;) ) to get a vast number of processors (i.e. neurons) and link them all together with each other, each with some complex logic hardwired into it. It might be that any sufficiently complex system with the right hardwiring will generate sentience.

We may as well go with what we know as a first approach ;) .
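A minimal sketch of that "many simple units, hardwired together" idea, in Python: three McCulloch-Pitts threshold units with hand-picked weights. No single unit can compute XOR, but the wired-up network does, which is the flavor of complex behavior emerging from simple hardwired parts being suggested here:

```python
# Three hardwired threshold units; individually trivial, together they
# compute XOR, a behavior none of the parts exhibits on its own.

def unit(inputs, weights, bias):
    """A McCulloch-Pitts neuron: weighted sum, then a hard threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def network(x, y):
    h1 = unit((x, y), (1, 1), -0.5)       # fires on "x OR y"
    h2 = unit((x, y), (1, 1), -1.5)       # fires on "x AND y"
    return unit((h1, h2), (1, -2), -0.5)  # OR but not AND -> XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", network(x, y))
```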

But given the enormous leaps and bounds computing continues to provide us, and the even more enormous prospect of Quantum Computing, it seems logical to point out that the capability of doing so is coming to us, very rapidly.
This is true.

In any event, I am in no way convinced that sentient AI may only arise from emulating the processes of the human mind.
Well, I'm not saying it's the ONLY way to do it, but it just seems to me that it's fairly sensible to start by emulating the biological hardware that already exists.
 
Originally posted by RainOfSteel:
If it can demonstrate that it is, yes.
...
Well, I'm not talking about the disembodied. I wasn't before, and I'm not now.

near as I can figure, you're agreeing that ai algorithms must be running on something before any ai can exist. others are saying the same, and I'd have to concur.

so, back to the analytical engine for a moment. can any computer (or any machine that implements algorithms) be constructed that is not strictly deterministic? sigg?
 
The appearance of strict determinism fades away with complexity.
So even if the "subsystems" are strictly deterministic, the overall system may appear to be non-deterministic.

Again, it's about appearance.
In theoretical terms a human being is strictly deterministic, too.
Even here, certain sets of input parameters lead to certain reactions.
It's just that the number of parameters is fairly high.
(Well, that depends on the person...)

To answer the question: I would say that it is not possible to program anything which is not strictly deterministic, but it is possible to program something which appears to be non-deterministic.
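That distinction between being deterministic and appearing non-deterministic is easy to demonstrate: a pseudo-random number generator is a completely deterministic program whose output looks random. A minimal sketch in Python, using the classic textbook linear congruential constants:

```python
# A linear congruential generator: completely deterministic, yet its
# output stream looks random to a casual observer.

def lcg(seed):
    """Yield an endless stream of pseudo-random 32-bit values."""
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

gen_a = lcg(seed=42)
gen_b = lcg(seed=42)

# Identical seeds give identical "random" sequences: strictly
# deterministic, but it *appears* non-deterministic.
print([next(gen_a) % 100 for _ in range(8)])
print([next(gen_b) % 100 for _ in range(8)])  # the same list again
```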
 
If I were to switch my viewpoint on this matter, I might say that sentience means the whole is greater than the sum of the parts. A body is not sentient. A string of DNA is not sentient. An electrical signal is not sentient. A chemical reaction is not sentient. An autobiography is not sentient. But if you put all of them together in the right way, you make a person. (Or an animal.) The same can be said of AIs, I would suppose.

While we may consider a bacterium to be alive, we cannot consider it to have any kind of intelligence. Its actions can be described completely by action-reaction, cause-result.

At the moment, computers are cause-result. You give a computer a set of inputs, and you ALWAYS get the same output. In this way, computers are just complicated machines, and bacteria are also just machines.

So I think that intelligence has to be able to incorporate feedback, so that it is not just stimulus-response.

Let us not oversimplify "feedback". A transistor has feedback, but this is too simple. Feedback in the sense I mean is more akin to an input to be considered. For instance, you set up a test. At the end of the test, you have a negative result. So when you take the test again, that negative result is your feedback; you know that if you do everything the same, you will get the same result. So you do something different. You still get a negative result, so you use this feedback as well to change something else, and so on until you eventually get a positive result.

Machines do not have this kind of feedback. Not all animals have this kind of feedback. But humans do have it, and I think probably anything we can consider sapient will have it.
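For what it's worth, the test-retry-change loop described above is itself straightforward to write down as a program; whether running such a loop counts as "having" feedback in the sense meant here is exactly the open question. A toy sketch in Python, with an invented hidden-combination task standing in for the test:

```python
import random

# Toy version of the feedback loop described above: act, observe the
# negative result, and let that result steer the next attempt.
# The hidden three-digit "combination" task is invented for illustration.

def run_test(attempt, hidden=(2, 7, 4)):
    """Score an attempt: how many positions match the hidden combination."""
    return sum(a == h for a, h in zip(attempt, hidden))

attempt = [0, 0, 0]
score = run_test(attempt)
while score < 3:
    # Change one thing at a time; keep the change only if feedback improves.
    candidate = list(attempt)
    candidate[random.randrange(3)] = random.randrange(10)
    new_score = run_test(candidate)
    if new_score > score:  # feedback: this change helped, so keep it
        attempt, score = candidate, new_score

print("positive result:", attempt)
```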

RainOfSteel gave us some interesting bullet points a few posts back about what could be used to determine sentience, but I didn't see anything in that list that couldn't be done solely by human proxy. No slight intended, of course; this isn't a simple subject, as 5000 years of history have taught us.

Just thought I'd share that minor epiphany with y'all.
 
Originally posted by TheDS:
At the moment, computers are cause-result. You give a computer a set of inputs, and you ALWAYS get the same output. In this way, computers are just complicated machines, and bacteria are also just machines.

So I think that intelligence has to be able to incorporate feedback, so that it is not just stimulus-response.

Let us not oversimplify "feedback". A transistor has feedback, but this is too simple. Feedback in the sense I mean is more akin to an input to be considered. For instance, you set up a test. At the end of the test, you have a negative result. So when you take the test again, that negative result is your feedback; you know that if you do everything the same, you will get the same result. So you do something different. You still get a negative result, so you use this feedback as well to change something else, and so on until you eventually get a positive result.

Machines do not have this kind of feedback.
In other words, you're talking about "learning from experience" and "adaptability to new situations"?

I'm pretty sure that there are some robots and computer systems around in the research institutes that do just that. It's still primitive, but they do learn from their experiences.
 
Originally posted by TheDS:

So I think that intelligence has to be able to incorporate feedback, so that it is not just stimulous-response.
Maybe intelligence could also be seen as a stimulus-response thing. The stimulus is just more complex, and the path between stimulus and response is much more interesting.
 