
The Robots Are Coming ...

I dunno.

Visions of "The Two Faces of Tomorrow", where the AI is trying to make some eggs. On the first attempt, the AI gently puts the entire egg in the frying pan. When they tell the AI it's OK to break the egg, the AI proceeds to crush the egg entirely in its hand and plops the mashed mess, shells and all, into the frying pan.
 
Yep, they are learning by watching human men. :)

Saudi Arabia's first male humanoid robot inappropriately touched a female reporter last week.
 
I dunno.

Visions of "The Two Faces of Tomorrow", where the AI is trying to make some eggs. On the first attempt, the AI gently puts the entire egg in the frying pan. When they tell the AI it's OK to break the egg, the AI proceeds to crush the egg entirely in its hand and plops the mashed mess, shells and all, into the frying pan.
You're assuming that the AI doesn't learn from watching (video of) humans do the task correctly. 👨‍🍳👩‍🍳
Yep, they are learning by watching human(s).
(I added an (s) to human in Vargr Breath's post)

At the end of the video, it mentions 'the question regarding the ethics in androids'. Do robots/androids have ethics? CAN robots/androids have ethics? At the end of the day, robots/androids are built by humans and must therefore get their ethics from us. Maybe something hardwired into them like Asimov's Three Laws of Robotics that would cause them to shut down if they broke one of the laws? Or something like a combination of Self-Preservation & Coexistence with their builders that would cause them to consider the consequences for both parties involved instead of only making the logical choice?

Looking at whartung's & Spinward Flow's posts, I can't help but think that our current technology and understanding of A.I. would only open a 🗑️ of 🪱's if we aren't careful with these Artificial People and the ethics we contrive to give them.

Data from Star Trek is an example of an android that even I wouldn't mind getting to know as a friend, but even Data had his moments that caused great fear in some of the Beings around him.

Note: I hope I didn't cross any lines.
 
Maybe something hardwired into them like the Three Laws of Robotics from Asimov that would cause them to shut down if they broke one of the laws?
Interestingly enough, there's an anime on simulcast right now that delves into that very notion rather explicitly (it's one of the central worldbuilding points that the plot is currently revolving around). The anime is titled Metallic Rouge, in case you're interested. The Asimov Code comes up repeatedly ... and the results are rarely pretty ... :sick:
 
At the end of the video, it mentions 'the question regarding the ethics in androids'. Do robots/androids have ethics? CAN robots/androids have ethics?
"AI" as it's being presented today can't really have ethics, because it doesn't know what "what it does" actually means in the real world. Or even that there is a world...

To an extent, they can have rules constraining their output, so long as no one comes up with a prompt that evades them. But they won't know "why," and it will only be piecemeal.
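To illustrate how piecemeal such rules are, here's a toy sketch (a hypothetical keyword filter, not any real product's safety layer):

```python
# A naive output rule: block any reply containing a banned phrase.
BANNED = {"egg-smashing"}  # hypothetical hard-coded rule

def filtered_reply(model_output: str) -> str:
    if any(phrase in model_output.lower() for phrase in BANNED):
        return "[blocked]"
    return model_output

# The rule works on the literal string it was written for...
print(filtered_reply("Here is my egg-smashing plan"))      # -> [blocked]
# ...but a trivial rephrasing walks right past it, because the
# filter matches characters, not meaning.
print(filtered_reply("Here is my plan for smashing eggs"))
```

The filter never "knows why" the phrase was banned, so every rephrasing needs its own new rule.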
 
"AI" as it's being presented today can't really have ethics, because it doesn't know what "what it does" actually means in the real world. Or even that there is a world...

To an extent, they can have rules constraining their output, so long as no one comes up with a prompt that evades them. But they won't know "why," and it will only be piecemeal.
That darn Plato cave…
 
That darn Plato cave…
It's not even that. The folks in the cave could understand that there was a real world (that is, that they were in a cave, and that things inside the cave -- including themselves -- were real), while only inferring the outer world from sound and shadow.

AIs (as they are now) do not and cannot understand that what they create represents objects or concepts in the real world, or that there is a "real world" in the first place. The text or graphics output is an optimally weighted assembly of elements that have been tagged appropriately; it is not "an object" or "a story" or whatever, from the perspective of the machine.
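That "weighted assembly" point can be sketched with a toy bigram sampler (nothing like a real model's scale, but the same ungrounded principle):

```python
import random

# Toy "language model": record which word follows which in a tiny corpus.
corpus = "the pipe is not a pipe the pipe is painted".split()
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(word, n=5):
    """Emit n more words by sampling from observed word-to-word weights."""
    out = [word]
    for _ in range(n):
        word = random.choice(follows.get(word, [word]))
        out.append(word)
    return " ".join(out)

# The output is assembled purely from string-adjacency statistics;
# at no point does anything here refer to an actual pipe.
print(generate("the"))
```

Scale the counts up by many orders of magnitude and the output gets far more convincing, but the relationship between the machine and "pipe" stays the same: a token, not a referent.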

You've seen this, I'm sure.

Magritte's The Treachery of Images, 1929. (Image credit: Wikipedia)

We know that the apparent contradiction here isn't an actual contradiction, because while the image is not an actual pipe, it is a picture of a pipe.

A current AI wouldn't know that there are actual (smoking) pipes that the painting is a picture of, only that there are source image elements tagged as "pipe (smoking)" or some such, and that this image corresponds to them.

At best, it could come up with a definition of the word "pipe (smoking paraphernalia)" from its database, but this would be a string of text to it, not an understanding of the real-world object being defined.

Science fictional AIs can be different, of course. :)
 