
Robot ships

I was trying to think of an analogy for a computer with a "skill program" but little or no "Logic/Autonomy":

Would a computer with Pilot-4 and no autonomy (like the Low Data Logic) be compared to a sophisticated fly-by-wire computer system?
It is superlative at executing the mechanics of flight (far beyond the skill of a human pilot), but poor at deciding when and where to fly ... that's for the 'Autonomous Unit' that moves the joystick to decide.
 
If a robot pilot can do everything a human pilot can at TL 10 or 12 (since you are claiming that capability at TL 7 or 8), then what does AI mean (the Traveller TL 17 variety)?

It sounds from the sidelines like a rewrite of the basic TLs, and if so, then count me out.
Reading LBB:8 on page 5 we find:

reliable speech recognition TL10 - a tad high IMHO but there it is

primitive (non-creative) AI TL 11

primitive AI TL12

On page 19 we are told:

TL 11 - synaptic processors are bulky and slow but have advantages for certain artificial intelligence applications

TL12 - more reliable synaptic processors allow true self-programming (heuristic or self-teaching) AI software

TL13 - robot brains possess a crude form of AI

TL 15 - computers become "alive"

TL16 - low artificial intelligence possible

TL17 - computers are now self aware.

Now there has to be errata since the design sequence moves low AI to TL17 and high AI to TL18.

It has long been my view that TL17+ AI should more correctly be termed artificial sentience, since the term AI is introduced at the more believable TL11.
 
I was trying to think of an analogy for a computer with a "skill program" but little or no "Logic/Autonomy":

Would a computer with Pilot-4 and no autonomy (like the Low Data Logic) be compared to a sophisticated fly-by-wire computer system?
It is superlative at executing the mechanics of flight (far beyond the skill of a human pilot), but poor at deciding when and where to fly ... that's for the 'Autonomous Unit' that moves the joystick to decide.
The best comparison I can come up with is an autopilot. You give it an explicit instruction and it follows it. That explicit instruction could be very short (maintain altitude) or it could include some complexities (turn away from obstructions detected at distances of X or less) in the case of a more advanced autopilot.

The key here, IMO, is that while it is perfectly capable of taking over the plane for a while, it isn't something you would generally turn control of the plane over to for the entire trip. A modern high-end autopilot can be programmed to take off, fly to a destination, and land, but you still need a human pilot on board to take control if conditions outside its explicit instructions occur.
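Since we're comparing it to an autopilot, here's a quick Python sketch of what I mean (all the names and thresholds here are mine, not from Book 8): the program executes its explicit rules flawlessly, and escalates to the human the moment conditions fall outside them.

```python
# Toy model of a rule-bound autopilot: flawless within its explicit
# instructions, but it hands control back when conditions fall outside them.
# Function names and thresholds are illustrative only.

def autopilot_step(altitude, target_altitude, obstacle_distance,
                   min_obstacle_distance=500):
    """Return the autopilot's action for one control step."""
    if obstacle_distance < min_obstacle_distance:
        return "turn away"               # explicit obstacle rule
    if altitude < target_altitude:
        return "climb"
    if altitude > target_altitude:
        return "descend"
    return "hold"

def autopilot_or_human(conditions_in_envelope, *args, **kwargs):
    """Escalate to the human pilot whenever the situation is outside
    the autopilot's programmed envelope."""
    if not conditions_in_envelope:
        return "alert human pilot"
    return autopilot_step(*args, **kwargs)
```

Superlative at the mechanics, but the "should I even be flying right now?" question is entirely outside the function.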
 
Who tells a robot pilot when and where to fly?

So let's say it has safely landed at the right place on the starport. How long does the ship stay there? When is it time to take off? Those don't seem like "innate pilot skill" decisions that would be covered by the pilot-4 program [ ... with or without the optional 'dogfight' or 'fly through a debris field' packages. ;) ]

I think this might be where the 'Low Data', 'Low Autonomy' and 'Low AI' logic packages might prove useful.
 
I agree completely, the TL17+ entry in LBB3 should be read as artificial sentience not artificial intelligence as we understand the term today.
Artificial Intelligence is a 'squishy' term. People ask when we are going to get artificially intelligent machines.

By the definitions of the '60s and '70s, we already have them. Many of the tasks that were 'AI' tasks of the day (optically reading text, voice recognition, routing) are so commonplace that we don't even think of them, yet people still want to know when we will 'get AI'.
 
So what would be the cost of a pilot robot with low autonomy, high autonomy, and Traveller-style AI? And which one would be able to replace a human pilot under what circumstances?


Hans
 
TL 11 - synaptic processors are bulky and slow but have advantages for certain artificial intelligence applications

TL12 - more reliable synaptic processors allow true self-programming (heuristic or self-teaching) AI software

TL13 - robot brains possess a crude form of AI

And yet in the same book, the stat is defined as apparent intelligence, not as intelligence...

Artificial Intelligence is a 'squishy' term.

I'd better say: [o]Atrificial[/o] Intelligence is a 'squishy' term :devil:.
 
Who tells a robot pilot when and where to fly?

So let's say it has safely landed at the right place on the starport. How long does the ship stay there? When is it time to take off? Those don't seem like "innate pilot skill" decisions that would be covered by the pilot-4 program [ ... with or without the optional 'dogfight' or 'fly through a debris field' packages. ;) ]

I think this might be where the 'Low Data', 'Low Autonomy' and 'Low AI' logic packages might prove useful.

I was going to say you had to have at least Low AI for the robot to make those sorts of decisions, but I take it back. It would be possible for a robot with Low or High Autonomous to make similar decisions provided they were part of some larger order it had been given (fly from Regina to Jewell). It is conceivable that the ship could request clearance from a starport, land, request refueling, request clearance, and then continue on its way.

However, I think such a robot would still be in a position to make bad mistakes because of poor judgement, so you would still want human pilots on board (or somehow able to take control once the ship has exited jump).
 
Perhaps we should approach this from a slightly different angle. (And my apologies up front - I do not have Book 8 in front of me and have not looked at it in a long time).

I am going to make the presumption that skill for a robot and a skill for a sophont are essentially the same (i.e. it is both knowledge and the ability to implement that knowledge). The difference between robots and sophonts is the FLP/AI issue for robots versus Organic Intelligence (INT) for Sophonts.

That being said, if skill is the same between the two, what would I expect either a sophont or a robot with Pilot-3 (for example) to be able to do? The skill/ability should be identical for either. How would either one be different, say, from a robot or person with Pilot-1, or Pilot-5?

The differences between the robot and sophont will then be what is encompassed by the FLP or Organic Intelligence, respectively.

The basic observation I am trying to make is that it does not make much sense to rate a robot as having a skill-3 in a particular ruleset if in fact it cannot perform what is expected of a person with skill-3 in that ruleset. Otherwise, the skill level of a robot should be limited or "capped" by the sophistication of the particular FLP.

Skill should be independent of whether the user is sophont or robot. What would matter should be how FLP or Organic INT, respectively, modifies the results of a skill roll. (Or alternately, FLP or Organic INT considerations should be things that are independent of skill).

Those are my thoughts on the matter.
 
True, a robot with skill 3 may do anything a sophont with skill 3 can, for those tasks the robot is programmed to do. The problem comes when something unexpected occurs that needs a decision taken outside the program parameters; then the difference between a non-true-AI robot and a sophont (or a true AI robot) appears, as the true intelligence can improvise, something a robot with High Autonomous or lower FLP cannot.
 
The basic observation I am trying to make is that it does not make much sense to rate a robot as having a skill-3 in a particular ruleset if in fact it cannot perform what is expected of a person with skill-3 in that ruleset. Otherwise, the skill level of a robot should be limited or "capped" by the sophistication of the particular FLP.
It applies to two different questions. Skill is about how to do something. Judgement is about when to do what. A robot with Pilot-4 is just as capable of performing a pilot task as a human with Pilot-4. A robot without autonomy is unable to decide to perform the task without instructions; it has to be told specifically. A robot with low autonomy can decide what to do on its own under a selection of simple circumstances. A robot with high autonomy can decide in a wide selection of more complex circumstances. A robot with AI can decide what to do in unforeseen circumstances with some chance of coming up with the correct response.
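That skill-versus-judgement split could be sketched like this (a homebrew illustration, not rules text): the Pilot skill never changes; the logic package only gates which situations the robot may decide in on its own.

```python
# Skill vs. judgement, as a homebrew sketch: the robot's Pilot skill is
# constant; its logic package only gates WHICH situations it can decide in.
# Category names and package labels here are my own shorthand.

FLP_HANDLES = {
    "none":            set(),                          # must be told explicitly
    "low_autonomous":  {"simple"},
    "high_autonomous": {"simple", "complex"},
    "ai":              {"simple", "complex", "unforeseen"},
}

def can_decide(flp, situation):
    """Can this logic package initiate a task on its own in this situation?"""
    return situation in FLP_HANDLES[flp]

def effective_skill(flp, skill, situation):
    """Skill is unchanged whenever the robot can act at all; otherwise it
    simply does nothing until instructed."""
    return skill if can_decide(flp, situation) else 0
```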


Hans
 
So what would be the cost of a pilot robot with low autonomy, high autonomy, and Traveller-style AI? And which one would be able to replace a human pilot under what circumstances?


Hans

Cost depends on lots of variables, but like I said a brain capable of supporting High Autonomous is over 150,000 CR.

Any of them would be capable of replacing a human pilot in certain situations. The specific situations would depend on the level of FLP.

A Low/High Data FLP would function like a really good autopilot (the only difference between Low and High Data is that High Data gets better at its job). It would be capable of taking off when a human tells it that it is time to take off, flying to a destination, and landing as long as nothing goes wrong. If something goes wrong it could probably make some minor corrections and, in more severe cases, alert the pilot.

Low and High Autonomous would function much the same, except that while the Low/High Data versions use rigid preprogrammed limits, the Autonomous versions would be more flexible in interpretation (one might decide to fly over an obstacle rather than around it if the obstacle is too long, or around it rather than over it if it is too tall). The difference between Low and High Autonomous is that High Autonomous would be better at this sort of 'fuzzy logic'.

Neither one can really originate an idea, however, so in the case of an emergency they probably wouldn't be able to make a decision like 'Hey, I could use that river as a runway', and even if their programming covered that, they wouldn't be able to handle 'Hmmm... there are boats on that river, so if I land there I will cause damage, but if I don't I'm going to crash into the Cleon I Memorial Orphanage'.
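That rigid-versus-flexible contrast in miniature (Python, with thresholds and field names invented for illustration): the Data brain has one preprogrammed response, while the Autonomous brain weighs the obstacle's shape.

```python
# Miniature contrast between a rigid Data-level rule and an Autonomous
# brain's more flexible interpretation. Field names and the "longer than
# it is tall" heuristic are invented for illustration.

def data_brain_avoid(obstacle):
    """Low/High Data: one preprogrammed response, always the same."""
    return "fly around"

def autonomous_brain_avoid(obstacle):
    """Low/High Autonomous: pick around vs. over based on the obstacle's
    proportions -- over a long wall, around a tall tower."""
    if obstacle["length_m"] > obstacle["height_m"]:
        return "fly over"
    return "fly around"
```

Neither function will ever return "land on the river", which is the whole point.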
 
However, I think such a robot would still be in a position to make bad mistakes because of poor judgement, so you would still want human pilots on board (or somehow able to take control once the ship has exited jump).
Don't forget a fundamental feature of robots TL12+, they can learn.

They gain experience and can increase their Edu and application skill level by doing stuff.

A TL12 robot can start at all space skills 1 and low Edu and after years of experience doing stuff its Edu stat is much higher and its skill level could be 4 or higher in everything, including ship and fleet tactics.

Now here is a nasty thought - what is to stop you copying the data to a new machine, or a few thousand of them ;)

(This is the train of thought that led to my machine exodus variant for MTU, which was the true cause of the collapse of the ROM)
 
Cost depends on lots of variables, but like I said a brain capable of supporting High Autonomous is over 150,000 CR.

That's still going to be cheaper than a human crewmember. You're going to need some added risk of actual damage to the ship to counter that.


Hans
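Back-of-the-envelope on that (assuming the CT standing-crew pilot salary of Cr6,000 per month; check the figure in your own edition):

```python
# Back-of-the-envelope payback period for a robot pilot brain.
# The Cr6,000/month pilot salary is my recollection of the CT figure
# (verify against your edition); the brain price is the ~Cr150,000
# quoted upthread for a High Autonomous brain.

def payback_months(brain_cost_cr, monthly_salary_cr=6000):
    """Months of saved salary needed to cover the brain's purchase price
    (ignoring maintenance, life support savings, and financing)."""
    return brain_cost_cr / monthly_salary_cr
```

A Cr150,000 brain against a Cr6,000/month salary breaks even in about 25 months, before you even count the stateroom and life support the robot doesn't need.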
 
Perhaps we should approach this from a slightly different angle. (And my apologies up front - I do not have Book 8 in front of me and have not looked at it in a long time).

I am going to make the presumption that skill for a robot and a skill for a sophont are essentially the same (i.e. it is both knowledge and the ability to implement that knowledge). The difference between robots and sophonts is the FLP/AI issue for robots versus Organic Intelligence (INT) for Sophonts.

That being said, if skill is the same between the two, what would I expect either a sophont or a robot with Pilot-3 (for example) to be able to do? The skill/ability should be identical for either. How would either one be different, say, from a robot or person with Pilot-1, or Pilot-5?

The differences between the robot and sophont will then be what is encompassed by the FLP or Organic Intelligence, respectively.

The basic observation I am trying to make is that it does not make much sense to rate a robot as having a skill-3 in a particular ruleset if in fact it cannot perform what is expected of a person with skill-3 in that ruleset. Otherwise, the skill level of a robot should be limited or "capped" by the sophistication of the particular FLP.

Skill should be independent of whether the user is sophont or robot. What would matter should be how FLP or Organic INT, respectively, modifies the results of a skill roll. (Or alternately, FLP or Organic INT considerations should be things that are independent of skill).

Those are my thoughts on the matter.

It's an interesting idea, but not wholly accurate.

As long as the robot is operating under a 'good choice' it would have full access to its skills, just like any other sophont would. The problem that arises isn't so much that the robot will fail to use its skill well. It is that it will make a 'bad choice'.

As an example, you've got two pilots. One of them flies in a very controlled and cautious manner. The other is a reckless hot shot. For the sake of argument we will call them Iceman and Maverick.

When Iceman brings his plane in to land he maintains the proper glideslope, drops his gear well beforehand, and keeps in communication with the tower and responds to their directions, so he makes a normal piloting roll to land.

Maverick on the other hand has a higher pilot skill but he comes in much steeper than the recommended glideslope, flares at the last moment and drops his landing gear while he's flaring. His entire contact with the tower is 'Look out, boys! Here I come!'.

Maverick still makes the piloting roll with his full skill. Just because he makes stupid decisions doesn't mean he doesn't get full use of his skills. It just means he earns himself some penalties.

If you really want to quantify the FLPs for rules purposes, then what you might want to do is generate some kind of random encounter table. On a certain roll everyone has to make a piloting check. On a different roll only certain FLPs have to make a check (because they let themselves get into a position where they had to do things that the human pilot doesn't, since he avoided the situation). In severe cases the pilot might have to make a piloting roll to land safely in a river or empty field while the robots have to make a check based on their FLP. If they pass the check then they can make the same piloting roll, but if they fail then they automatically suffer the 'failed pilot' roll, since they didn't even think to land in the river.
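One way to run that mechanic at the table (dice mechanics, ratings, and target numbers all invented for illustration): a human rolls Pilot directly, while a robot must first pass an FLP judgement check or automatically suffer the failed-pilot result.

```python
import random

# Sketch of the proposed mechanic: in a severe emergency a human rolls
# Pilot directly, while a robot must first pass an FLP judgement check.
# Fail that, and it never even tries the clever river landing.
# The 2d6-vs-8 mechanic and the flp_rating modifier are invented here.

def roll_2d6(rng):
    return rng.randint(1, 6) + rng.randint(1, 6)

def emergency_landing(skill, flp_rating=None, target=8, rng=random):
    """Return True on a safe landing. flp_rating=None means a human pilot."""
    if flp_rating is not None:                 # robot: judgement gate first
        if roll_2d6(rng) + flp_rating < target:
            return False                       # didn't even think of the river
    return roll_2d6(rng) + skill >= target     # then the actual Pilot roll
```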
 
That's still going to be cheaper than a human crewmember. You're going to need some added risk of actual damage to the ship to counter that.


Hans
Sure, and so robot bartenders are very likely since the possibility of damage is in the neighborhood of bruised toes and spilled drinks.

Even the cheapest starships are in the tens of millions of credits, and the damage a crashing starship could inflict could be in the billions if it crashed into the center of a large city.
 
Sure, and so robot bartenders are very likely since the possibility of damage is in the neighborhood of bruised toes and spilled drinks.

Even the cheapest starships are in the tens of millions of credits, and the damage a crashing starship could inflict could be in the billions if it crashed into the center of a large city.

With your permission, I'll hop sideways just a bit. Imperial attitudes about robots have been discussed, and those attitudes are going to play a key role in what does and does not happen in space. However, a semi-related question comes to mind regarding member worlds within their own territory.

Let's take Ffudh - 900 million people on a nearly airless world 40% covered in ice, TL C, impersonal bureaucracy with law level D. Plan your next vacation there, so you can come away grateful for wherever it is that you actually live. It's a place that thought 1984 was an example of how governments were supposed to work.

Ffudh government MIGHT be as paranoid about bots as any other Imperial: here's a thing that might be programmed to undermine our authority. On the other hand, if someone gets accidentally killed by a Ffudh security robot: "Well, he must have been doing SOMETHING wrong, else the bot wouldn't have shot him!" There may be a certain appeal in repressive governments for an idiot savant who will blindly follow orders, can't be bribed, and can be loaded with facial recognition software and hardwired orders to never, ever, ever hurt the ruling authorities. Since those governments may hold life in little regard, the possibility of accidentally killing some serf or low-level functionary might be considered an acceptable level of damage.
 
Let's make some PRACTICAL ship's autopilot brains. Book 8 designs.

For comparison, a Pilot 2 Navigator 2 robot brain with various levels of capability... Minimum CPU/storage for command and skill.

Low Data, Limited Basic Command, TL8 Cr24,900 12L 4.5kg Int-1 Edu-1
High Data, Limited Basic Command, TL9 Cr78,250 15.2L 5.2kg Int-2 Edu-1
High Data, Basic Command, TL9 Cr79,500 15.9L 5.4kg Int-3 Edu-1
Low Autonomous, Basic Command, TL12 Cr177,750 20.8L 6kg Int-4 Edu-2
Low Autonomous, Full Command, TL12 Cr183,000 22.4L 6.4kg Int-5 Edu-3
High Autonomous, Full Command, TL13 Cr286,500 22.7L 6.7kg Int-6 Edu-3

The high autonomous TL13 is probably passable as a crewmember. As in, able to replace your pilot full time, under the supervision of a human who probably doesn't need to do much more than give directions. And it's good enough to function as both pilot and navigator at once.

TL8 - probably needs the targets set. May need a human to interface with Traffic Control
TL9 LBC: Smarter, and adaptive. If working the same few worlds, begins to learn certain patterns.
TL9 Basic Command: Smarter still, clueless, but adaptive. Can probably interface with Traffic Control if TC is aware it's a bot.
TL12 Basic Command: a smarter, more adaptive version. Many PCs are not as useful.
TL12 Full Command: its almost Vulcan-like interface can pass itself off on radio, provided no non-operational chatter is expected.
TL13 Full Command: can pass as an emotionless humanoid.

Adding emotion simulation adds Cr1400 0.4L 0.2kg.

Combat capability would need Gunnery (Cr1400 0.4L 0.2kg per level) and possibly ship tactics (Cr4800 1.6L 0.8kg per level). I'd require gunnery to half a level to allow it to fire a weapon...

Not true AI, but give them a directive and the TL13+ models are almost people... but not innovative, and they don't learn new levels, just new methods within the same levels of skill.
 
Let's make some PRACTICAL ship's autopilot brains. Book 8 designs.

For comparison, a Pilot 2 Navigator 2 robot brain with various levels of capability... Minimum CPU/storage for command and skill.

Low Data, Limited Basic Command, TL8 Cr24,900 12L 4.5kg Int-1 Edu-1
High Data, Limited Basic Command, TL9 Cr78,250 15.2L 5.2kg Int-2 Edu-1
High Data, Basic Command, TL9 Cr79,500 15.9L 5.4kg Int-3 Edu-1
Low Autonomous, Basic Command, TL12 Cr177,750 20.8L 6kg Int-4 Edu-2
Low Autonomous, Full Command, TL12 Cr183,000 22.4L 6.4kg Int-5 Edu-3
High Autonomous, Full Command, TL13 Cr286,500 22.7L 6.7kg Int-6 Edu-3

The high autonomous TL13 is probably passable as a crewmember. As in, able to replace your pilot full time, under the supervision of a human who probably doesn't need to do much more than give directions. And it's good enough to function as both pilot and navigator at once.

TL8 - probably needs the targets set. May need a human to interface with Traffic Control
TL9 LBC: Smarter, and adaptive. If working the same few worlds, begins to learn certain patterns.
TL9 Basic Command: Smarter still, clueless, but adaptive. Can probably interface with Traffic Control if TC is aware it's a bot.
TL12 Basic Command: a smarter, more adaptive version. Many PCs are not as useful.
TL12 Full Command: its almost Vulcan-like interface can pass itself off on radio, provided no non-operational chatter is expected.
TL13 Full Command: can pass as an emotionless humanoid.

Adding emotion simulation adds Cr1400 0.4L 0.2kg.

Combat capability would need Gunnery (Cr1400 0.4L 0.2kg per level) and possibly ship tactics (Cr4800 1.6L 0.8kg per level). I'd require gunnery to half a level to allow it to fire a weapon...

Not true AI, but give them a directive and the TL13+ models are almost people... but not innovative, and they don't learn new levels, just new methods within the same levels of skill.

Yeah. This is pretty much what I was trying to get at. At TL13 you can produce a robot that is a darn good pilot and which is probably safer than most human pilots (I'm assuming the average skill of a human pilot is less than 4). However, such a robot is not completely self-sufficient and you need to have a human on hand in the event that something happens which confuses the robot. It isn't so much to take control from the robot because the person can fly through the debris field better. It's to tell the robot not to try to fly through the debris field in the first place (an extreme example, as the robot would avoid the debris field in most cases, but possibly other conditions make the robot think it's the best choice when there is another, better choice that a human is able to spot).

The only thing I would say about the gunner is that rather than making one robot try to fly and shoot at the same time, it would probably be better to have two robots so neither is splitting its skills (I forget how Traveller handles doing multiple things at the same time, but even outside the meta aspects of the rules, a design with two separate robots might be better). The two robots could stay in high-speed communication so that the gunner knows what the pilot is doing and the pilot knows whom the gunner is targeting.

Depending upon interfaces the two 'robots' might even share a single body. They just have separate brain interfaces (or even share a brain interface) to the ship's controls.
 