"Again, it would only be impossible for a human because someone with the judgement of an infant would be killed or grounded before they could develop the piloting skill."

"I didn't mean combat. I meant VERY skilled in piloting. It would be impossible to have skill level 4 (look at what situations you use the Pilot skill for when rolling a skill check) and have the judgement of an infant. An infant would NOT know to turn away from the upcoming mountainside, for instance."
Using the literal example of a robot with Pilot-4, if the human tells it to fly into a mountain it is going to do just that: fly into the mountain. If there is anything that might prevent it from flying into the mountain, it will try to avoid it, and quite skillfully. It will fly around obstacles in its way. It will perform lovely banked turns. It will fly smoothly and level with all the skill of a highly trained pilot, adjusting for crosswind, turbulence, and imbalances in the engines.
Then it will slam into the mountain.
It's a great pilot, but it wasn't allowed the judgement not to fly into the mountain, and so it flies into it with all the skill it possesses.
Now translate that from 'not allowed to' into 'unable to make a decision to avoid it'. Both Low Data and High Data FLPs automatically fall into that category. They rely on explicit instructions. If you say 'fly this way' and there happens to be a mountain in the way, they will fly that way, straight into the mountain.
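To make the 'not allowed to' point concrete, here is a minimal sketch, entirely my own invention rather than anything from the rules: the FLP flies its ordered leg with full skill, but there is simply no branch in its logic for questioning the destination.

```python
def flp_fly_leg(route, dodgeable_hazards, ordered_endpoint):
    """Fly the ordered leg exactly; dodge what can be dodged along the way."""
    log = []
    for leg in route:
        if leg in dodgeable_hazards:
            log.append(f"banks smoothly around the {leg}")  # Pilot-4 execution
        else:
            log.append(f"holds course through the {leg}")
    # There is no code path for rejecting the endpoint itself: the
    # instruction said "fly this way", so that is where it flies.
    log.append(f"arrives at the {ordered_endpoint}")
    return log

for step in flp_fly_leg(["canyon", "radio tower"], {"radio tower"}, "mountainside"):
    print(step)
```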
What about a Low Autonomous robot? Well, now it is capable of interpreting instructions, so it will probably figure out that you meant you didn't want it to fly into the mountain. However, it is only capable of fairly simple decisions. It might decide to make the smallest turn that avoids the mountain and end up on the wrong side of a mountain range. It might decide to stay on the right side of the range and fly into bad weather, which is especially likely if the bad weather is only starting to form.
Or possibly it will just fly in circles until it gets new instructions. That would actually be the safest option, after all.
High Autonomous robots would do something similar, although they would be better at making those choices. They might consider things like winding up on the wrong side of the mountains, or a course that takes them through bad weather that's starting to form. However, they still lack a human's judgement (they don't get that until the AI levels), so they could still make mistakes a trained pilot wouldn't make.
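If it helps to see the whole ladder at once, here is a toy decision function. It is purely my own caricature of where each brain level's judgement runs out; the constants and names are invented for illustration and come from no rulebook:

```python
# Invented brain levels, for illustration only.
LOW_DATA, LOW_AUTONOMOUS, HIGH_AUTONOMOUS, AI = range(4)

def choose_course(ordered_heading, brain, mountain_ahead, storm_forming_on_detour):
    """A caricature of what each brain level does with 'fly this way'."""
    if brain == LOW_DATA:
        # Explicit instructions only: the mountain never enters the decision.
        return ordered_heading
    if brain == LOW_AUTONOMOUS:
        # Infers "don't crash" but picks the simplest fix, even if the
        # smallest turn strands it on the wrong side of the range.
        return "smallest turn around the peak" if mountain_ahead else ordered_heading
    if brain == HIGH_AUTONOMOUS:
        # Weighs the range and developing weather, and may fall back on the
        # safest option -- but that is still short of human judgement.
        if mountain_ahead and storm_forming_on_detour:
            return "circle and request new instructions"
        return "detour around the range" if mountain_ahead else ordered_heading
    return "replan the route like a human pilot"   # AI-level judgement

print(choose_course("heading 270", HIGH_AUTONOMOUS, True, True))
```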
I'll give you a counterexample:
Player. "I think my character will take a short cut through that asteroid field and enter orbit at x speed."
GM. Roll against your Pilot skill.
NOT: GM. Roll against your judgement level.
If you've ever GM'ed this game much, you can't HONESTLY answer differently.
Player. "I think my character will take a short cut through that asteroid field and enter orbit at x speed."
GM. The asteroid field is incredibly dense. It is almost like a gravel pit floating in space. The average gap between chunks is measured in centimeters and the asteroids are multi-megaton boulders.
Player. Yes, but I've got Pilot-4.
GM. I understand that, but at the speed you're talking about you would need to roll over... (consults charts and adds things up) 27.
Player. Yes, but I have Pilot-4. I'm a great pilot. (rolls dice) 8
GM. Ok. You slam into an asteroid at several kilometers a second, killing you and everyone on board.
Player. Oh. But I have Pilot-4. I wouldn't have done anything like that.
This is a wonderful illustration of someone possessing Pilot-4 making a very poor decision. High piloting skill does not make one immune to poor decisions.
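For the dice-minded, the math makes the same point. Assuming the classic 2d6-plus-skill throw (editions differ on exact modifiers, so treat this as a hedged sketch), a target of 27 is not a hard roll but an impossible one; the fatal part was the decision to attempt it, not the piloting:

```python
from itertools import product

PILOT_SKILL = 4
TARGET = 27   # the GM's number from the exchange above

# Every possible 2d6 result, plus the Pilot-4 bonus.
totals = [d1 + d2 + PILOT_SKILL for d1, d2 in product(range(1, 7), repeat=2)]

print(max(totals))                                    # 16: the best case falls short
print(sum(t > TARGET for t in totals) / len(totals))  # 0.0: no roll can succeed
```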
And if you think that having skill in operating a vehicle means someone won't ever make a poor decision, the Internet is full of proof that they might. Usually this poor decision will be found on YouTube, and somewhere within the first 15 seconds the words "Check this out" will be uttered.