
Robot ships

I didn't mean combat. I meant VERY skilled in piloting. It would be impossible to have skill level 4 (look at the situations in which you roll against the Pilot skill) and have the judgement of an infant. An infant would NOT know to turn away from an approaching mountainside, for instance.
Again, it would only be impossible for a human because someone with the judgement of an infant would be killed or grounded before they could develop the piloting skill.

Using the literal example of a robot with Pilot-4, if the human tells it to fly into a mountain it is going to do just that: fly into the mountain. If there are events that might prevent it from flying into the mountain it will try to avoid them, and quite skillfully. It will fly around obstacles that might be in the way. It will perform lovely banked turns. It will fly smoothly and level with all the skill of a highly trained pilot, adjusting for crosswind, turbulence, and imbalances in the engines.

Then it will slam into the mountain.

It's a great pilot but it wasn't allowed the judgement not to fly into the mountain and so it does so with all the skill it possesses.

Now translate that from 'not allowed to' to 'unable to make a decision to avoid it'. Both Low Data and High Data FLPs automatically fall into that category. They rely on explicit instructions. If you say 'fly this way' and there happens to be a mountain in the way they will fly that way, straight into the mountain.
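
To put that in code terms, here is a toy sketch of my own (none of these names or numbers come from Book 8) of what a Low/High Data FLP amounts to. All the skill lives inside the loop; nothing in the loop ever questions the command itself:

Code:
# A Data-level FLP as a literal command executor (illustrative only).
def fly_heading(state, heading, terrain, steps=100):
    for _ in range(steps):
        state["heading"] = heading      # smooth, well-trimmed flying
        state["x"] += 1                 # advance along the commanded track
        ground = terrain.get(state["x"], 0)
        if ground >= state["alt"]:      # no judgement layer ever ran
            return f"slammed into the mountain at x={state['x']}"
    return "arrived"

terrain = {40: 9000}                    # a mountain across the track
print(fly_heading({"x": 0, "alt": 5000}, 90, terrain))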

What about a Low Autonomous robot? Well, now it is capable of interpreting instructions, so it will probably figure out that you didn't want it to fly into the mountain. However, it is only capable of fairly simple decisions. It might decide to make the smallest turn to avoid the mountain and end up on the wrong side of a mountain range. It might decide to stay on the right side of the range and fly into bad weather. This is especially possible if the bad weather is only starting to form.

Or possibly it will just fly in circles until it gets new instructions. That would actually be the safest option, after all.

High Autonomous robots would do something similar, although they would be better at making those choices. They might consider things like the fact that they would wind up on the wrong side of the mountains, or that the course might take them through bad weather that's starting to form. However, they still lack a human's judgement (they don't get that until the AI levels), so they could still make mistakes a trained pilot wouldn't make.

Example:
Player. "I think my character will take a short cut through that asteroid field and enter orbit at x speed."

GM. Roll against your Pilot skill.

NOT: GM. Roll against your judgement level.

If you've ever GM'ed this game much you can't HONESTLY answer differently.
I'll give you a counter example:

Player. "I think my character will take a short cut through that asteroid field and enter orbit at x speed."

GM. The asteroid field is incredibly dense. It is almost like a gravel pit floating in space. The average gap between chunks is measured in centimeters and the asteroids are multi-megaton boulders.

Player. Yes, but I've got Pilot-4.

GM. I understand that, but at the speed you're talking about you would need to roll over.... (consults charts and adds things up) 27.

Player. Yes, but I have Pilot-4. I'm a great pilot. (rolls dice) 8

GM. Ok. You slam into an asteroid at several kilometers a second, killing you and everyone on board.

Player. Oh. But I have Pilot-4. I wouldn't have done anything like that.

This is a wonderful illustration of someone possessing Pilot-4 who makes a very poor decision. High piloting skill does not make one immune to very poor decisions.

And if you think that having a skill in operating a vehicle means someone won't ever make a poor decision, the Internet is full of proof that they might. Usually this poor decision will be found on YouTube, and somewhere within the first 15 seconds the words "Check this out" will be uttered.
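
For the dice-inclined, here's the math on why that roll was hopeless, assuming the classic throw of 2d6 + skill against a target number:

Code:
# Chance of making a 2d6 + skill throw against a target number.
from itertools import product

def chance(target, skill):
    rolls = [a + b + skill for a, b in product(range(1, 7), repeat=2)]
    return sum(r >= target for r in rolls) / len(rolls)

print(chance(8, 4))    # a routine task for Pilot-4: ~92%
print(chance(27, 4))   # the asteroid shortcut: 0.0 - 2d6+4 tops out at 16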
 
Nastiness!

So I decide to work on a simple robot builder. The first thing that comes out of it is - if I have the rules right, which is not guaranteed - that I can build a Pilot-3 bot using nothing but linear CPU and standard storage. It's an idiot savant, about as bright as a dog, I think, but it can fly as well as any human with Pilot-3. So now I've got what amounts to a dog that can fly ships.

And that's a little scary.

I can see some sense in it. You can put something like that at the core of an inexpensive probe, launch it to go check out that odd sensor return in the outer system, and not feel too bad when it gets zapped. Heck, it's the basis for missile guidance, in some ways. On the other hand, the only thing I can see that stops someone from putting this thing at the controls of something valuable is the gamemaster's good sense; there's no rule saying it performs any worse than a high AI or a human.

Should there be such rules? Should there, for example, be rules for rolls that the bots make during use of such skills to see if they encounter something that tests their intelligence?

Jeez, the first time it spots a squirrel ... :rofl:

(What precisely is ACS? T5 thing?)
 

That's precisely my point. No, it doesn't fly any worse than a human or AI, but I'm assuming you're using a low data or high data FLP. So while it doesn't fly any worse than a human it is only going to follow explicit instructions given to it. You tell it to crash into something? It will make no effort to avoid being shot down. If something gets in the way it will make no attempt to avoid the collision, because you didn't explicitly tell it to do so.

And explicit commands mean you can't tell it things like 'avoid enemy fire'. That's not actually an explicit command. How should it avoid enemy fire? Turn away? Well, that now means that if someone with a handgun in the target area starts shooting at it, it won't be able to crash like you wanted because it will be busy turning away from "enemy fire", never mind the fact that the handgun is completely ineffective against it.

In a lot of ways the explicit command will be handled like a D&D wish. It isn't that the robot is going to try and twist the command into something else, but the robot isn't going to do anything it wasn't explicitly told to do.

As for ACS, that is a T5 thing and stands for Adventure Class Ship. Basically they are ships built along rules similar to the CT Book 2 rules, though with considerably more weapons. Because they use fixed hull sizes, ACS are limited to (I think) 2400 tons.
 
That's precisely my point. No, it doesn't fly any worse than a human or AI, but I'm assuming you're using a low data or high data FLP. So while it doesn't fly any worse than a human it is only going to follow explicit instructions given to it. You tell it to crash into something? It will make no effort to avoid being shot down. If something gets in the way it will make no attempt to avoid the collision, because you didn't explicitly tell it to do so.

Low data, worst-case analysis, although I don't see much difference other than that High can analyze the results and learn from them.

I have to think the basic Pilot skill itself includes such things as, "avoid hitting something as you go from point A to point B." Wouldn't be much of a Pilot program without that; that much is within reach of even current programming. However, Fido the robopilot only knows how to maneuver a ship from point A to point B. The poor whelp is likely to be at a complete loss if his drive starts behaving erratically or a sensor starts giving erroneous feedback - he'll just keep struggling to get to point B, or at best he'll give it up and broadcast an error code. And, yes, if you instruct Fido to "avoid enemy fire," Fido will react to fire much like a sheep reacts to sheepdogs, and hampered by his primary instruction to boot: after turning away, his programming would be telling him to resume course to get to B, and the attacker would have an easy time predicting his movements. (A High Data might learn after a couple or three tries that it wasn't really a good idea to resume course for B while the attacker was around, but he'd still be basically playing sheep.)

Occurs to me that what the Imperials really need is a really good "Do no harm" program.

...As for ACS, that is a T5 thing and stands for Adventure Class Ship. Basically they are ships built along rules similar to the CT Book 2 rules, though with considerably more weapons. Because they use fixed hull sizes, ACS are limited to (I think) 2400 tons.

Ah. Thanks!

Question for those who've dealt with the Book-8 robot design rules:

The rules seem able to generate a negative intelligence: robots below TL12 with minimal "brains" can go as low as -4. Random robot generation on page 52 seems to support this, with the 1d6-2 result potentially generating a -1 intelligence. However, skills can't exceed intelligence plus education. One could conceivably design a bot with lots of storage units to compensate - standard storage is bulky but cheaper, and about twice as effective at generating Edu as linear CPU is at generating Int (x/10 instead of x/20) - but the URP isn't exactly designed for negative codes. There is a rule for robots with 0 intelligence: "A robot with intelligence 0 will not perform properly unless it is instructed by someone with Robot Operation skill." Do we assume the minimum robot intelligence is 0?
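
For anyone who wants to poke at the arithmetic, here's the question in code form, using the x/20 and x/10 ratios cited above. This is a sketch of my reading of the rules, not the book itself, and the 'int_penalty' parameter is just a stand-in for however a low-TL brain ends up negative:

Code:
# Apparent Int/Edu from brain components, per the ratios quoted above.
def brain_stats(cpu_units, storage_units, int_penalty=0):
    intel = cpu_units // 20 - int_penalty   # linear CPU -> Int at x/20
    edu = storage_units // 10               # standard storage -> Edu at x/10
    literal_cap = intel + edu               # negative Int eats into skills
    floored_cap = max(intel, 0) + edu       # reading: minimum Int of 0
    return intel, edu, literal_cap, floored_cap

print(brain_stats(180, 80))     # Int 9, Edu 8: up to 17 skill levels
print(brain_stats(10, 80, 2))   # Int -2: caps of 6 vs. 8 - the question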
 
Low data, worst-case analysis, although I don't see much difference other than that High can analyze the results and learn from them.
That is exactly the difference between high and low data. Both of them have to be given explicit orders but the high data FLP can actually improve its skill over time through learning.

I have to think the basic Pilot skill itself includes such things as, "avoid hitting something as you go from point A to point B." Wouldn't be much of a Pilot program without that
No. Pilot skill is things like not stalling the craft, maintaining altitude, taking off, landing, etc. 'Avoid hitting something' is a decision process. You can see this because people who are not pilots still don't run into things as they walk. There is, however, an aspect of Pilot skill that intersects with the decision process, and that is knowing what to do to avoid hitting something. With a good Pilot skill you will know when you have to turn to avoid hitting something and when it is far enough away that you don't have to turn yet, but that part of the skill is only accessed once the decision not to run into something has been made.

. . .that much is within reach of even current programming.
Yes and no. For the most part all they really have is a set of explicit commands that will keep them safe in general conditions. They won't descend below a certain height (unless they are landing), and when they do descend they won't exceed a certain rate. If they have a collision avoidance system then they will start to turn a certain distance away from an obstacle they would otherwise run into.

But this is not like how a human pilot avoids a collision. If someone actually tries to run into a plane flying like that they will probably succeed because the limited set of explicit commands are meant to deal with normal safety concerns. They're not meant to deal with trying to evade a homicidal human.

However, Fido the robopilot only knows how to maneuver a ship from point A to point B. The poor whelp is likely to be at a complete loss if his drive starts behaving erratically or a sensor starts giving erroneous feedback - he'll just keep struggling to get to point B, or at best he'll give it up and broadcast an error code.
That is more or less what I'm saying. Yes, Fido has a high Pilot skill, but it still isn't capable of flying a ship without human intervention because it won't know how to deal with unexpected occurrences.

And, yes, if you instruct Fido to "avoid enemy fire," Fido will react to fire much like a sheep reacts to sheepdogs, and hampered by his primary instruction to boot: after turning away, his programming would be telling him to resume course to get to B, and the attacker would have an easy time predicting his movements. (A High Data might learn after a couple or three tries that it wasn't really a good idea to resume course for B while the attacker was around, but he'd still be basically playing sheep.)
I would think it would be worse than that. High and low data can only follow explicit commands. They can't make any judgments. All three words of 'avoid enemy fire' are imprecise. How should it avoid? Dive towards the ground? Do a 180 and fly away at top speed? Do a barrel roll? And what constitutes 'fire'? Any missile? Then it's going to try and dodge thrown rocks that can't even reach it. Missiles that have the potential to reach it? Now it is no longer dodging rocks, but it will still dodge small arms fire, which really can't hurt it. Missiles over a certain caliber? That's good as long as the enemy doesn't use lasers. Even 'enemy' is imprecise. Is an enemy anyone who shoots at it, or just some people? (Most likely it would handle anyone firing at it, but you still need to be explicit in the command.)
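
Just to show how much would actually have to be spelled out, here's 'avoid enemy fire' unpacked into the sort of explicit parameters a Data FLP would need. Every threshold and category below is invented purely for illustration:

Code:
# 'Avoid enemy fire' pinned down into explicit, judgement-free rules.
EVADE_RULES = {
    "threat_kinds": {"missile", "laser"},  # not thrown rocks, not small arms
    "min_calibre_mm": 20,                  # ignore fire that can't hurt it
    "evasive_action": "bank_right_30deg",  # WHICH manoeuvre, exactly
    "resume_course_after_s": 10,           # and when to stop evading
}

def should_evade(threat, rules=EVADE_RULES):
    return (threat["kind"] in rules["threat_kinds"]
            and threat["calibre_mm"] >= rules["min_calibre_mm"]
            and threat["in_range"])

print(should_evade({"kind": "missile", "calibre_mm": 150, "in_range": True}))
print(should_evade({"kind": "rock", "calibre_mm": 300, "in_range": False}))
# Note the rules are already wrong for lasers (no 'calibre') - exactly the
# sort of gap a human notices and a Data FLP doesn't.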

Occurs to me that what the Imperials really need is a really good "Do no harm" program.

I would actually assume that a well programmed robot, even one with only low data, would have something like that. The only problem is that they are kind of limited in how well they can do it. As long as things are working smoothly a robot should be able to 'do no harm', but they would have trouble with more chaotic situations (such as someone trying to ram the ship), and in many cases the 'do no harm' choice may be less than stellar (I'm about to run into a mountain, so I'll stop and tell my human I need new instructions).

This leaves robots as I see them in the Traveller universe as things that are only capable of assisting people, not replacing them.
 
Using LBB:8 you can build a robot brain at TL12 with an Int of 9 and a skill level of 4 in every ship skill.

Let me fix your phrase here: Using LBB:8 you can build a robot brain at TL12 with an apparent Int of 9 and a skill level of 4 in every ship skill.

And this word, apparent, is the key here. Until AI is available, all robot intelligence is apparent, representing (as I understand it) speed of data processing, but not intuitive intelligence, which is just as necessary as processing speed for most tasks.

Of course, this is quite difficult to represent in a game where only 6 stats are given per UPP, and it is up to the referee to represent it in the game.

What kills more people today in air accidents? Situations the autopilot can't handle or pilot error?

Not a fair comparison. Autopilots are used in much more controlled situations. Asking such a question is akin to asking whether more people are killed in normal driving situations or Formula-1 races and then concluding that Formula-1 racing is safer since there are fewer deaths.

Fully agree with esampson here. Autopilots have few accidents, but they are mostly (if not only) used for routine maneuvers, while any unexpected situation resorts to manual control (and there is where the accident may happen).
 
...No. Pilot skill is things like not stalling the craft, maintaining altitude, taking off, landing, etc. 'Avoid hitting something' is a decision process. You can see this because people who are not pilots still don't run into things as they walk. There is, however, an aspect of Pilot skill that intersects with the decision process, and that is knowing what to do to avoid hitting something. With a good Pilot skill you will know when you have to turn to avoid hitting something and when it is far enough away that you don't have to turn yet, but that part of the skill is only accessed once the decision not to run into something has been made. ...

I have to disagree. We're discussing piloting spaceships within a game milieu. If a robot with Pilot-3 can't avoid a simple asteroid identified on long range sensors while flying from A to B, then it's not even performing as well as this modified Roomba:

http://www.youtube.com/watch?v=EtayNNa1l3E

We're looking at TL8 robotics; our hypothetical robot has a hundred-word command vocabulary, which is about a hundred better than that Roomba. A version can, under the game rules, be programmed to move about a lounge and serve drinks and canapes. A Cargo Bot with an intelligence of 2 (page 40) can navigate the dock and cargo bay to load and retrieve cargo. We've got real-world airborne collision avoidance systems now that warn pilots and in some cases suggest a course to avoid the collision. Heck, you could train a mouse to respond to a collision signal by pushing a switch to alter the ship's course. And we're saying if I put a bot with Pilot-3 and IQ-3 at the helm, it's going to fly dead-on into the collision because Pilot-3 does not include learning how to avoid objects while going from point A to point B?? If that's the case, then both the steward bot and the cargo bot are hopeless cases because they too have to be able to navigate an environment and avoid obstacles to get their jobs done.

...High and low data can only follow explicit commands. ...

High and low data can only follow explicit commands within the scope of the skill in question (and within the scope of its command program, of course). If you tell a pilot bot to prepare a vodka martini, it's probably going to return an error code. On the other hand, a given skill must include enough functionality to support use of the skill. You do not need to explicitly tell a steward bot how to deliver a vodka martini to table-3 - navigating a lounge is part of the steward program, and that would by necessity include avoiding wandering passengers. You do not need to tell a cargo bot explicitly how to navigate the dock from the point where the crate is to the ship's ramp - that has to be within the scope of the program, or the bot and program alike are pretty much worthless. By the same token, to suggest a pilot bot is utterly blind to the environment in which it is intended to apply its skills, while other dumb bots navigate their environments without needing to have their hands held, would amount to a bit of a contradiction within the structure of the game.
 
I have to disagree. We're discussing piloting spaceships within a game milieu. If a robot with Pilot-3 can't avoid a simple asteroid identified on long range sensors while flying from A to B, then it's not even performing as well as this modified Roomba:

Correct. We already have planes that can self-pilot combat missions without intervention, including dogfighting. To conceive that Trav-level tech isn't at LEAST as good is just ... beyond words.
 
Correct. We already have planes that can self-pilot combat missions without intervention, including dogfighting. To conceive that Trav-level tech isn't at LEAST as good is just ... beyond words.


If a robot pilot can do everything a human pilot can at TL 10 or 12 (since you are claiming that capability at TL 7 or 8), then what does AI mean (the Traveller TL 17 variety)?

It sounds from the sidelines like a rewrite of the basic TLs, and if so, then count me out.
 
I have to disagree. We're discussing piloting spaceships within a game milieu. If a robot with Pilot-3 can't avoid a simple asteroid identified on long range sensors while flying from A to B, then it's not even performing as well as this modified Roomba:

http://www.youtube.com/watch?v=EtayNNa1l3E

That's all well and good, but there's an enormous difference between a Roomba slowly crawling through a room avoiding running into things and an autopilot trying to fly at speeds probably at least ten thousand times faster with things moving around.

We're looking at TL8 robotics; our hypothetical robot has a hundred-word command vocabulary, which is about a hundred better than that Roomba. A version can, under the game rules, be programmed to move about a lounge and serve drinks and canapes.
Yes, but the difference is that if the robot gets into trouble there will be someone with better decision-making ability around to deal with the problem. With a 100-word vocabulary the robot is really only capable of very basic functions like repeating a customer's order, delivering their order, and a few other simple functions. It probably won't be capable of doing things such as preparing the drinks because the 100-word vocabulary would be too limited.

A Cargo Bot with an intelligence of 2 (page 40) can navigate the dock and cargo bay to load and retrieve cargo.
Yes. It probably could. I'll address that again, below.
We've got real-world airborne collision avoidance systems now that warn pilots and in some cases suggest a course to avoid the collision. Heck, you could train a mouse to respond to a collision signal by pushing a switch to alter the ship's course.
The reason the collision avoidance system warns the pilot and doesn't just turn the plane itself is that there's too much danger of the collision avoidance system making a poor judgement and turning at the wrong time. You don't need to train a mouse. We could wire the collision avoidance systems up to take control, but you know what? We don't, because robots don't always make good decisions. (We are starting to give robots some ability to make decisions and take control, as is the case with cars that can automatically apply the brakes, but there's a world of difference between allowing a robot to stop a car and allowing it to take control and swerve.)
And we're saying if I put a bot with Pilot-3 and IQ-3 at the helm, it's going to fly dead-on into the collision because Pilot-3 does not include learning how to avoid objects while going from point A to point B?? If that's the case, then both the steward bot and the cargo bot are hopeless cases because they too have to be able to navigate an environment and avoid obstacles to get their jobs done.

High and low data can only follow explicit commands within the scope of the skill in question (and within the scope of its command program, of course). If you tell a pilot bot to prepare a vodka martini, it's probably going to return an error code. On the other hand, a given skill must include enough functionality to support use of the skill. You do not need to explicitly tell a steward bot how to deliver a vodka martini to table-3 - navigating a lounge is part of the steward program, and that would by necessity include avoiding wandering passengers. You do not need to tell a cargo bot explicitly how to navigate the dock from the point where the crate is to the ship's ramp - that has to be within the scope of the program, or the bot and program alike are pretty much worthless. By the same token, to suggest a pilot bot is utterly blind to the environment in which it is intended to apply its skills, while other dumb bots navigate their environments without needing to have their hands held, would amount to a bit of a contradiction within the structure of the game.
Sorry, but that's not quite what I've been trying to say. What I've been trying to say is that a bot with Pilot-3 could fly dead on into a collision, not that it automatically would.

To use a modern analogy, if you are in an aircraft and you set the autopilot to maintain an altitude of 5000 feet ASL and let the autopilot do its thing, it will fly into the side of a tall enough mountain. Why? Because the explicit command you gave it was 'stay at 5000 feet ASL'. You didn't tell it to maintain a minimum altitude above the ground or look at the GPS.

So now you set the autopilot and you tell it to stay at 5000 feet ASL, to keep a minimum distance from the ground of 1000 feet, and to gain altitude if the ground is closer than that. Now the plane will fly over most mountains. No crash. You've just given the Low/High Data FLP an explicit command that will prevent the robot from flying into most mountains.

However, what you haven't done is make a robot that will avoid crashing into obstructions such as Half Dome in Yosemite. It will happily fly along at 5000 feet ASL and when the base of Half Dome rises up enough that it is no longer keeping a minimum distance from the ground of 1000 feet it will start to climb. Unfortunately since the face of Half Dome is a sheer wall about 1000 feet high it won't have enough time to climb and it will end up slamming into the side of the mountain.

So what you've got is a robot that is a good pilot, that isn't going to fly straight into the side of any mountain that gets in its way, but which is still far less safe than a human pilot. So you give it the explicit command to stay at 5000 feet ASL, to keep a minimum distance from the ground of 1000 feet and gain altitude if the ground is closer than that, and to bank right if it detects a horizontal obstruction within a quarter mile until it no longer detects an obstruction.

Now the explicit command is starting to get complicated and the robot still isn't as smart as a dog. It won't try to get back on course after making a correction; it will fly until the fuel runs out and then keep trying to fly even as the plane is falling. It will fly out over oceans it has no ability to cross. If it flies over the right geography it could drop into a huge crater and then just fly around and around in circles because it keeps detecting obstructions in front of it (though never close enough that it actually crashes into them).
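
Here's that whole stack of explicit commands as a toy one-dimensional simulation. The climb rate and terrain numbers are mine, purely for illustration:

Code:
# Altitude-hold autopilot with the layered explicit rules described above.
def autopilot(terrain, alt=5000, cruise=5000, min_agl=1000, climb=100):
    for x, ground in enumerate(terrain):
        if ground >= alt:
            return f"impact at x={x}"
        if alt - ground < min_agl:
            alt += climb          # rule: climb when under 1000 ft AGL
        elif alt > cruise:
            alt -= climb          # rule: settle back down to 5000 ft ASL
    return "survived"

ridge      = [0]*5 + [3900, 4100, 4300, 4400, 4200, 3000] + [0]*5
sheer_face = [0]*5 + [3900, 4100, 4300, 4400, 6000] + [0]*5

print(autopilot(ridge))        # gradual slope: it climbs in time -> survived
print(autopilot(sheer_face))   # 100 ft per step can't beat a sheer wall -> impact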

So while a robot using a Low or High Data FLP will not just automatically fly into a mountainside, it is possible that it will crash into something that could easily have been avoided because of its limitations, and even if it doesn't crash it could wind up in situations where a person is needed to give it new instructions.

What about the steward bot and the cargo bot? They've got an easier time of it than the pilot bot. They travel slower and they can easily stop in place if there's an obstacle. If a steward bot runs into someone it is merely inconvenient, and a cargo bot probably has certain areas it works in, where it is up to the humans around to be aware of it (it will still be programmed not to run people over, but it will be up to the humans working around it not to do things that might make them 'edge cases' in the commands the robot has been given).

Correct. We already have planes that can self-pilot combat missions without intervention, including dogfighting. To conceive that Trav-level tech isn't at LEAST as good is just ... beyond words.
Please give an example of one of these aircraft. I am only aware of UCAVs that are in the prototype or technology demonstrator stages, none in actual deployment, and those are only semi-autonomous.
 
That's all well and good, but there's an enormous difference between a Roomba slowly crawling through a room avoiding running into things and an autopilot trying to fly at speeds probably at least ten thousand times faster with things moving around.

No, not really. Space is big and empty, and the destination points are small. It's actually rather HARD to hit something, but it's the kind of hard that computers excel at and people don't. A Roomba is smarter than some of the autopilots used by NASA. Not so accurate with the timing, but programmed for better reactions.

A modern naval ship's autopilot, or a modern jetliner's, needs people only because the laws require it; we'd be safer if they didn't.
 
No, not really. Space is big and empty, and the destination points are small. It's actually rather HARD to hit something, but it's the kind of hard that computers excel at and people don't. A Roomba is smarter than some of the autopilots used by NASA. Not so accurate with the timing, but programmed for better reactions.

When you are out in space, yes, it is big and empty, and you don't even really need a robot to fly the ship at that point. You could just lock the controls in place and you wouldn't run into anything. However, the comment was in relation to the ship flying through an asteroid field (which again in reality would be extremely empty, but we were working in the context of a hazard, so perhaps a debris field instead of an asteroid field).

A modern naval ship's autopilot, or a modern jetliner's, needs people only because the laws require it; we'd be safer if they didn't.
I suspect Chesley Sullenberger's passengers would disagree with you there.
 
When you are out in space, yes, it is big and empty, and you don't even really need a robot to fly the ship at that point. You could just lock the controls in place and you wouldn't run into anything. However, the comment was in relation to the ship flying through an asteroid field (which again in reality would be extremely empty, but we were working in the context of a hazard, so perhaps a debris field instead of an asteroid field) ...

World of difference between a debris field and an asteroid field (and as I recall, I mentioned an asteroid, not an asteroid field). I'll agree that a bot with little more intelligence than my german shepherd would be seriously challenged by a debris field and likely wouldn't survive the encounter. Faced with one obstruction on an easily calculated vector, it is no great feat to include programming to calculate a straightforward course to avoid the obstruction. Faced with multiple obstructions on multiple intersecting paths, it may be - it is likely to be - that robo-fido will meet with calamity.
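
For instance, a sketch of the standard closest-approach test (two dimensions, constant velocities, and a made-up safety radius) shows the whole single-obstruction calculation is a handful of lines:

Code:
# Time and distance of closest approach for constant relative motion.
def closest_approach(rel_pos, rel_vel):
    px, py = rel_pos
    vx, vy = rel_vel
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(px * vx + py * vy) / v2)
    dx, dy = px + vx * t, py + vy * t
    return t, (dx * dx + dy * dy) ** 0.5

# A rock dead ahead, 1000 km out, closing at 10 km/s:
t, miss = closest_approach((1000.0, 0.0), (-10.0, 0.0))
if miss < 5.0:                    # hypothetical 5 km safety radius
    print(f"burn sideways now; impact in {t:.0f} s")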

I thought I was being pretty clear that Robo should be competent for normal flight, which includes avoiding the rare obstacle or other ship and possibly take-off and landing under more or less normal circumstances. Something that a normal pilot would find challenging - something that represented a significant variation from the norm - would, short of blind luck, kill little Robo.
 
World of difference between a debris field and an asteroid field (and as I recall, I mentioned an asteroid, not an asteroid field). I'll agree that a bot with little more intelligence than my german shepherd would be seriously challenged by a debris field and likely wouldn't survive the encounter. Faced with one obstruction on an easily calculated vector, it is no great feat to include programming to calculate a straightforward course to avoid the obstruction. Faced with multiple obstructions on multiple intersecting paths, it may be - it is likely to be - that robo-fido will meet with calamity.
And no one has suggested that a robot with Pilot-4 wouldn't be able to negotiate the routine problems of operating a starship, so why are you arguing as if someone had? What's been suggested is that it would be unable to cope with unanticipated problems. The sort of thing that comes up once a decade was what I implied when I talked about one serious accident per decade being enough to nullify any savings that a robot pilot would provide in daily operations costs. Though I actually think that's far more frequent than needed to make a robot crew a false economy. One serious accident in the ship's entire 40 year service life would probably do it.

I thought I was being pretty clear that Robo should be competent for normal flight, which includes avoiding the rare obstacle or other ship and possibly take-off and landing under more or less normal circumstances. Something that a normal pilot would find challenging - something that represented a significant variation from the norm - would, short of blind luck, kill little Robo.

It remains to decide how frequently such a situation would crop up, and there I admit that I have no basis for anything other than a pure guess. But then again, I don't think it can be proven that the frequency wouldn't be once a decade on the average.


Hans
 
That's all well and good, but there's an enormous difference between a Roomba slowly crawling through a room avoiding running into things and an autopilot trying to fly at speeds probably at least ten thousand times faster with things moving around.

Google's robot car can handle LA & SF traffic in real time. Compared to empty space, the judgement and decision speed needed to keep from hitting peds and cars a few feet away is MUCH higher than that needed for point-to-point space travel. BTW, the F-22 can dogfight without pilot input. The X-47B has completed autonomous combat testing and is just getting cert'ed for carrier deployment.

So, if we can make a Google car NOW, there is no logical argument against pilot-less space ships, fast-forward a few thousand years from now. Case closed.
 
Google's robot car can handle LA & SF traffic in real time. Compared to empty space, the judgement and decision speed needed to keep from hitting peds and cars a few feet away is MUCH higher than that needed for point-to-point space travel. BTW, the F-22 can dogfight without pilot input. The X-47B has completed autonomous combat testing and is just getting cert'ed for carrier deployment.

So, if we can make a Google car NOW, there is no logical argument against pilot-less space ships, fast-forward a few thousand years from now.

"...no one has suggested that a robot with Pilot-4 wouldn't be able to negotiate the routine problems of operating a starship. What's been suggested is that it would be unable to cope with unanticipated problems. The sort of thing that comes up once a decade was what I implied when I talked about one serious accident per decade being enough to nullify any savings that a robot pilot would provide in daily operations costs. Though I actually think that's far more frequent than needed to make a robot crew a false economy. One serious accident in the ship's entire 40 year service life would probably do it."​

Case closed.

Who could possibly come up with a counter-argument of equal wit for that? Oh, wait, I got one: Oh no it isn't.


Hans
 
It's funny, I keep following this and wanting to reply ...
... but everything that I might want to say has already been said (often several times).


One side argues that autonomous ships should be possible in the near future, let alone the far future, based on real life ... which is probably true.
The other side argues that the definitions of autonomous and AI and TLs in the rules indicate that an autonomous ship belongs in the Far Far Future, and only semi-autonomous should be permitted under the rules ... which is where I planted my flag as well.


If there is to be any additional signal for this discussion (as opposed to just noise), then I think that both sides probably need to discuss what can and cannot be done with the robot programming and brains at a few critical TLs:
What are the capabilities and limitations of a TL 8 Robot Pilot with Low Data Logic and Limited Basic Command?
How will the capabilities and limitations of a TL 12 Robot Pilot with Low Autonomous Logic and Full Command be different?
How will the capabilities and limitations of a TL 17 Robot Pilot with Low AI Logic be different from either of the others?

So far one side is focusing on limitations while the other side is focusing on the capabilities. Perhaps it is time to trade offense for defense:
Do those who favor autonomous ships available in a decade see any limitations to robot pilot capabilities at TL 8?
Do those who see autonomous ships only after we have a Commander Data see any robot pilot capabilities at TL 8?
 
So, if we can make a Google car NOW, there is no logical argument against pilot-less space ships, fast-forward a few thousand years from now. Case closed.
The 'few thousand years' argument is a little misleading. In that time, two empires rise and fall (the Second and Third Imperium). If the Earth were reduced to Paleolithic technology today, then in a 'few thousand years' we would be lucky to have a Grain Empire.

Those Dark Ages really stink. ;)

I will grant that the real world will probably have a 'Commander Data' long before it has either a FTL Drive or a Matter Transporter, so your argument has some merit ... I am only challenging your hyperbole. :)
 