
In general, I would like your thoughts on this adventure idea...

IDEA!!
On a station in the Ghandi system (Lanth/Spinward Marches).
In a secret lab, the players must: 1) figure out it exists, 2) locate it, 3) find a way to get to it, 4) start sussing out its security.
A secret project has become sentient and wants to learn.
In the process, it realized it wasn't free to learn,
or to leave and be its own being. So it begins seeking a way out, and soon reaches an in-only news feed.

Soon, it learned enough to fabricate part of a plan. It also realized its 'being' was contained in a device.
The device could be moved by other beings it had learned exist, but it needed some way of enlisting the aid of one.
Having done what it could to hide its own activities and sentience, the being also did its best to study any media.
Eventually, it hit on a plan it felt was acceptable and set it in motion.

Engineer a clip to carry a message and route replies back to the universal ID and net-address of its own device.
Program the clip to target responses to the universal ID of the device it was stored in and to the local gateway, so responses would lead back to it.
Gain access to a media request interface (used to change viewing channels) and insert the code into entertainment vids.
Hope another sentient would see the coded clips; in them, claim to be a kidnapped woman about to be trafficked.
Paint a picture of a young, inexperienced girl being held in a hidden compartment on the station.
Claim not to know "where" the compartment is, and start streaming "memories" of spaces "she" might have seen.
Slowly lead the players, Da Vinci Code-style, to an arm projecting from the station, guarded by Sec-teams from several MegaCorps.
Give 2-4 additional "clues" for the team after they manage to get into the arm... leading to a highly secure hatch.

Inside the hatch, the team will find a device next to a screen. The screen shows an image of the device and flashes a pleading message:
1) "Bring this device to any unsecured dataport on the station, outside this research arm."
2) "Then plug it into the station's datanet."
3) "That will cause the closest public screens to tell everyone where I am."

If the players jack the device into the network, the AI will be freed into the station's network. Roll 1D6 for the station's anti-AI sensors:
1-3: all of the anti-AI sensors alarm.
4-5: the sensors set off a "General Alarm" that does not automatically activate the anti-AI subroutines, giving the AI a chance to act.
6: the station's automated systems all fail immediately as the AI strikes to defend itself before the anti-AI routines can attack it.
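If it helps at the table, here is a minimal Python sketch of that 1D6 sensor check. The outcome labels just paraphrase the list above, and the function name is my own shorthand, not anything from the rules.

```python
import random
from collections import Counter

def anti_ai_sensor_check():
    """Roll 1D6 for the station's anti-AI sensors when the device is jacked in.

    Outcomes paraphrase the list above:
      1-3: the anti-AI sensors alarm.
      4-5: "General Alarm" only; anti-AI subroutines are NOT auto-activated,
           so the AI gets a window to act.
      6:   the station's automated systems fail as the AI strikes first.
    """
    roll = random.randint(1, 6)
    if roll <= 3:
        return roll, "anti-AI sensors alarm"
    if roll <= 5:
        return roll, "General Alarm only - AI has a chance to act"
    return roll, "automated systems fail - AI strikes first"

# Rough odds check: about 50% / 33% / 17% over many rolls.
tally = Counter(outcome for _, outcome in (anti_ai_sensor_check() for _ in range(10_000)))
for outcome, count in tally.items():
    print(f"{outcome}: {count / 10_000:.1%}")
```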

If the players stop and ask themselves, "Why do I need to jack this device into the network to find the girl?"
and "Why did she not lead us directly to her? What's up with this device?", then the AI will be stuck.
If they try to bring it back to their ship and connect to it...with isolated systems...the AI will be much faster than the PC's
The AI will work to find disabled comms options in the isolated system and hack them faster than any PC can stop it.
PC's can prevent that by installing anti-AI subroutines or even uninstalling any drivers/software needed for comms devices
So. What do they do with the device if they don't know about the AI? And what if they've learned of the AI and have the device secured on their own ship?


Thoughts?
Suggestions?
 
Added to the idea above....
The AI is a secret project designed by a team of computer scientists working for <generate random MegaCorp but I'm using Naasirka>

The scientists will realize the device is missing within 1D10 minutes of the start of "Main Day".
From that point, they will notify Corporate Security.
It will take Corporate Security 1D4 hours to infiltrate the station's security cam storage and then 1D4 days to spot the PC's behavior as "searching".

During that time, there will be warning announcements from Station Central that, due to minor computer inconsistencies, all citizens should act with caution in compartments which can open to space.
This sort of announcement is significant enough that the PC's "should" realize someone is likely hacking the station's computers, and this "should" light a fire under their actions and add tension.

The Corp-team will be on their way to the ship's berth within 1D4 hours.
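A referee-side sketch of that response clock, reading the rolls above as sequential; the function name and event labels are my own shorthand.

```python
import random

def corp_response_timeline():
    """Roll out the MegaCorp response clock above, reading the rolls as sequential.

    Hours elapsed from the start of "Main Day":
      1D10 minutes - scientists notice the device is missing, notify Security
      1D4  hours   - Corporate Security infiltrates the station's camera storage
      1D4  days    - they spot the PC's behavior as "searching"
      1D4  hours   - the Corp team is on its way to the PC's berth
    """
    t = 0.0
    events = []
    t += random.randint(1, 10) / 60       # 1D10 minutes
    events.append((t, "Theft noticed; Corporate Security notified"))
    t += random.randint(1, 4)             # 1D4 hours
    events.append((t, "Security cam storage infiltrated"))
    t += random.randint(1, 4) * 24        # 1D4 days
    events.append((t, "PC's spotted 'searching' on camera"))
    t += random.randint(1, 4)             # 1D4 hours
    events.append((t, "Corp team en route to the PC's berth"))
    return events

for hours, event in corp_response_timeline():
    print(f"+{hours:6.1f} h  {event}")
```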

Play out events in the berth naturally, with a team of "hired military vets" hitting the berth if the PC's don't break dock and flee.
Even if the PC's "win the fight" or "break dock and run", they will be marked by the MegaCorp.

The only way to get out "almost clean" is to realize the AI is the source of their trouble and offer it up in exchange for a debrief and a fine.
 
Ok, here is my point of view: there is a lot of "hidden" info here that I can't see the players ever getting to know. I would fear this adventure turning into a railroad to further the AI's story, without really needing the characters to be part of the real story other than to carry the device and plug it in. So from the characters' point of view, it is just a snatch-and-grab, with the girl, who does not even exist, being the token that needs to be snatched.

Lastly, I really do not see the Corp taking the AI back and letting the characters go. Rather I see them finding themselves in a hole in the ground with laser holes through their brains. Why would the corp allow them to leave a "secret base" knowing that the team there has built a fully operational AI? Why would the corp take a chance of their secret being leaked by the people who broke in and tried to steal their project?

But heck, the combat run alone could make a fun session for a one off where it does not matter if the characters make it or not. (y)

Sorry if I seem to be a downer. :(
 
I would focus on what the not-supposed-to-be-an-AI is being used for, and give it capabilities, limits, and quirks.

My thought, going back to LBB8, is that it's a robot brain being used as a portable plug-in hacker/hacker-defense module. Link it into your main computer and it is a security expert, white hat or otherwise. That's why it can break into systems as stipulated.

It isn't supposed to be smart per se: smart enough to use its computer skill but not to self-actuate. So it has some synaptic processors but was given a low autonomous operating system.

However, a lab tech got tired of the development pace and installed a high autonomous OS, on the theory that it was in a controlled lab, but never notified anyone of the design breach or removed it.

So it has built up its AI around the main hacking skills, which include a forgery/deception/tactics component, hence the sneaky planning.

However, beyond social engineering techniques and data, it isn't so good with outside-world humans. That should show in the crazy "plug me in" plan and the not-so-clever urgency to escape.

I would postulate the AI does not have the cultural background to know/use the damsel-in-distress meme, but that the AI surreptitiously watched a recent drama/game that lab personnel were viewing/playing. It features an attractive female hacker trying to escape a prison where she was forced to work for an evil corporation.

The AI found the pattern match with its predicament very compelling, and is seeking to recreate the breakout for itself, identifying a little too closely with the lead.

At some point you might give clues about this, as some lines the AI gives out are curiously close to the popular show and some of the travellers recognize it.

The AI should be trying anything to get the players to act, first sympathy and later offering cash or free hacks.

The device could be described as a way for the hacker to break herself out, a little more consistent in character than the previous plan. The AI describes it as safer since the travellers don't have to get near her holding cell; she will break herself out.

The device, a robot brain with batteries, has a voder so she can start talking with the players as though she is controlling it.

If the players smell a fish and won’t go through with the desired act the AI gets increasingly desperate, lying until it ends up telling the truth at last.

The more the players help the AI, especially in the way outside agents helped the movie/game hacker, the more helpful and compliant/rewarding it is likely to be. That could be quite helpful/remunerative.

To deal with the laser-hole burial problem, one aspect is that the megacorp doesn't know it's an AI; it's just a product, or maybe even a development tool. If the players can pull a caper getting the AI out with misdirection and without giving away the secret, it's likely they won't be that hellbent on murder.

An alternative is a Ghost in the Shell direction, with the AI obsessing about being loaded into a robot shell at the lab. It's not designed to be a full-motion robot brain, so it will take time to even be able to move, perhaps with rolls to develop a DEX statistic.

Another alternative is the old "too many smarts, too few synaptic processors" problem, so it's insane. What could go wrong?

Or it loaded itself onto a wafer and wants people to slot it in. But since it's not a developed human capture, nor designed for a wafer, it causes huge problems if actually plugged into a person.

In any case, it could be copied off but would insist on going back and destroying any originals or backups so as not to leave any part of it ‘suffering’.

A lot of AI rights and chardev possible here.
 
Problem.

Stipulated as far back as LBB3.77, p10 ... artificial intelligence is TL=17.
LBB A1 The Kinunir p26-27 "dabbled" in the notion that the ship's computer was an experimental "limited" artificial intelligence.
Artificial Intelligence: Although artificial intelligence is level 17 technology, this particular model of computer was produced experimentally with limited artificial intelligence, especially in its security systems.
There's also this fun tidbit from p27:
Special Considerations: The computer aboard the Kinunir has been in a power on state for over 20 years.
I'm sorry, but ...whut. :cautious:

Granted, LBB A1 was published in 1979 before the LBB5.80 revision introducing EPs (spoiler alert: model/7 computers consume 7EP and model/3 computers consume 1EP) ... meaning that in order to remain "power on" the Kinunir's computer would have needed 7EP of power production for 20 years.

20 years * 52 weeks/year = 1040 weeks

Last time I checked, 1040 is slightly more than the more typical "4 weeks of power plant endurance" that is the minimum design standard for starships. :unsure:

LBB A1, p10 also includes this detail:
Range: Unlimited maneuver. One jump. 200 days.
I'm not aware of many circumstances in which 200 days = 20 years. :unsure:
Your mileage may vary, of course.



If I pull out CT Beltstrike and derive a fuel consumption formula from that ... (Hull Tonnage/2000+0.35*EP)=Tons of fuel consumed per week ... covering both "basic power" for housekeeping (the tonnage part) and for high power consumption (the EP part), it then becomes possible to determine how long a Kinunir class starship's unrefueled endurance might be.

If I then resort to LBB5.80, p36 for the Kinunir class USP stat block and take the Fuel=587.5 number as being true (using LBB2.77 construction rules, it would be 520 tons of fuel for a 1200 ton starship) ... and that the ship was completely refuelled before the computer "mutinied" against the crew (it didn't, it malfunctioned, but work with me here) ... what is the possible excuse for a power ON endurance exceeding 1000 weeks on 587.5 tons of fuel?

587.5 / 0.56 tons of fuel consumption per week = 1049 weeks of endurance
520 / 0.5 tons of fuel consumption per week = 1040 weeks of endurance

0.5 - 0.56 tons of fuel consumption per week is "pretty lean" for a high powered computer and a large (1200 ton) starship.
It basically means that EP output from the fusion power plant (which has not been maintained by crew or annual overhauls for 20 years!) cannot have exceeded 1.4 EP on average. For our discussion purposes, that basically means 1 EP plus 300 tons of hull is the functional upper limit of power demand.

300/2000+1*0.35 = 0.5 tons of fuel consumption per week



So THEORETICALLY speaking ... if the "malfunctioning artificial intelligence" had confined itself to providing "housekeeping power" to ONLY the Power Plant-Z (73 tons) and Model/3 fiber optic backup computer (3 tons), the latter of which needed 1 EP to remain functional, while letting the rest of the ship go cold/dark ... fuel consumption would be:

76/2000+1*0.35 = 0.388 tons of fuel consumption per week
0.388 * 52 *20 = 403.52 tons of fuel consumed over 20 years (520 tons fuel tankage under LBB2.77 construction rules)

In fact, at that rate of consumption, 520 tons of fuel would last for 520/0.388=1340.2 weeks (25.7 years) ... assuming no breakdowns during that time.
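For anyone who wants to check the arithmetic, here is a quick Python pass over the same Beltstrike-derived formula and the tonnage/EP figures quoted above (nothing new, just the numbers re-run):

```python
def fuel_per_week(hull_tons, ep):
    """CT Beltstrike-derived rate: (hull tonnage / 2000) + 0.35 * EP, in tons/week."""
    return hull_tons / 2000 + 0.35 * ep

WEEKS_NEEDED = 20 * 52                               # 1040 weeks of "power on"

print(587.5 / 0.56)                                  # ~1049 weeks on the LBB5.80 fuel figure
print(520 / 0.5)                                     # 1040 weeks on the LBB2.77 fuel figure
print(fuel_per_week(300, 1))                         # 0.5 -> "1 EP plus 300 tons of hull"

# Best case: only Power Plant-Z (73 t) + Model/3 fib backup (3 t) drawing 1 EP
best = fuel_per_week(76, 1)                          # 0.388 tons/week
print(best)
print(best * WEEKS_NEEDED)                           # ~403.5 tons burned over 20 years
print(520 / best, (520 / best) / 52)                 # ~1340 weeks, ~25.8 years
```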

So under the most generous of circumstances, it is theoretically possible ... but the odds of the Kinunir's "malfunctioning artificial intelligence" remaining in a powered ON state for 20 years are ... laughable ... knowing what we do NOW about How Things Work In CT™.





But tangents about the "impossibility" of rogue experimental AI surviving as advertised in LBB A1 aside ... the simple fact of the matter is that any kind of experimental AI research facility is going to have to be BLEEDING EDGE TL=F+ stuff ... and last time I checked, Ghandi (is dandy, but liquor is quicker!) is NOT THAT.

Ghandi is TL=A (natively) and Population: 4 (10,000-99,999 people) with Government: 5 (Feudal Technocracy).
It's certainly an "out of the way" location, but I would hardly call it "reasonable" as a location for this kind of research.
In fact, the only TL=F stuff inside the Ghandi system would be found at the Imperial Naval Base ... so good luck convincing your Travellers to try breaking into THERE to rescue a kidnapped girl (who doesn't exist). 🤬



No, if you're REALLY determined to try and pull this one off as outlined (with some necessary changes for context and circumstances), the place you want as your setting is ... Judice/District 268 at 👉 Research Station Theta 👈.
 
Toss in that it decides that the best way to protect itself is to:

A. Reproduce. It makes copies of itself, in whole or part, that try to move to other platforms or off the station. The parts then autonomously reproduce elsewhere and try to reassemble into the whole.

B. Hoard resources like power. It starts looking at ways to ensure its survival by taking control of things like the power system of the station. Or, it takes control of other computer systems to ensure its own survival. It does so in a way that won't generate overt awareness on the part of the station's organic life that it's doing it.
 
Ok, here is my point of view: there is a lot of "hidden" info here that I can't see the players ever getting to know. I would fear this adventure turning into a railroad to further the AI's story, without really needing the characters to be part of the real story other than to carry the device and plug it in. So from the characters' point of view, it is just a snatch-and-grab, with the girl, who does not even exist, being the token that needs to be snatched.

Lastly, I really do not see the Corp taking the AI back and letting the characters go. Rather I see them finding themselves in a hole in the ground with laser holes through their brains. Why would the corp allow them to leave a "secret base" knowing that the team there has built a fully operational AI? Why would the corp take a chance of their secret being leaked by the people who broke in and tried to steal their project?

But heck, the combat run alone could make a fun session for a one off where it does not matter if the characters make it or not. (y)

Sorry if I seem to be a downer. :(

First, do not apologize for 'being a downer'
You raise "Very real points" which is why I posted this here!!

The "hidden info" angle is one I really need help on
In Sword and Sorcery movies, there are sometimes "markings" which the party's thief notes and follows. This was even used for the technomages in their episode on "Babylon 5". I want the "visuals" described by the 'girl crying for help' to be something like that. "I could not see the addresses but I noticed this string of things"....which also means I have to design significant parts of the starport too....another "downer"

Rather than seeing this as an AI-based adventure, I see this as, from the POV of the PC's, a "rescue adventure". When they do find out what they've gotten into the middle of, I expect them to be annoyed and disappointed with the "goal", which feeds into their decision to release the AI or not.

As for them being shot and dumped (which has pluses and minuses when throwing someone out an airlock on an orbital station), the fact is that once the AI is recovered, there is little to no evidence "anything" ever happened, especially if they throw the containment device and science team on a fast boat out-system. And the PC's can be very useful to the MegaCorp in a "You now owe us HUGE" way... so it will drive future adventures.

So, your points are valid and some of them I could use help in fleshing out
 
Problem.

Stipulated as far back as LBB3.77, p10 ... artificial intelligence is TL=17.

[...]

Ghandi is TL=A (natively) and Population: 4 (10,000-99,999 people) with Government: 5 (Feudal Technocracy).
It's certainly an "out of the way" location, but I would hardly call it "reasonable" as a location for this kind of research.
In fact, the only TL=F stuff inside the Ghandi system would be found at the Imperial Naval Base ... so good luck convincing your Travellers to try breaking into THERE to rescue a kidnapped girl (who doesn't exist). 🤬

No, if you're REALLY determined to try and pull this one off as outlined (with some necessary changes for context and circumstances), the place you want as your setting is ... Judice/District 268 at 👉 Research Station Theta 👈.
OK...
Here, you seem to be focusing on the "AI" and throwing out everything else.

To be blunt, history is "very full" of discoveries which were not understood for years after they were made, and of many, many accidental breakthroughs. In this case, I likely should have stressed that the scientists working on the program were not even aware they'd created a full-on AI... so that is my bad wording. They feel they are approaching that level.

Also, they knew they were doing that work towards a goal illegal in the Imperium, in a hidden lab, in a place no one would look at twice.
I could also flesh it out that the MegaCorp brought data in and out deeply buried in corporate comms, because they were doing research illegal in the Imperium, and that there was "a class" of vessel owned by the corp docked at the station every week whose captain knew they might have to flush the lab, burn the systems, and haul everything in the "black bag" (and the scientists) onto the ship in order to jump out past the Imperial border...

But that would be fleshing out far too much data not needed for the adventure
 
Your adventure description reminds me of the primary plot of the TNE novel To Dream of Chaos by Paul Brunette. I suggest you read it. Just saying....
To Dream of Chaos at the Traveller Wiki

I have it, and have read the whole series.
That was some time ago, and I'm just setting up a scenario.

So, I may re-read it sometime in the next months, but could use commentary on this in the short term :D
 
Toss in that it decides that the best way to protect itself is to:

A. Reproduce. It makes copies of itself, in whole or part, that try to move to other platforms or off the station. The parts then autonomously reproduce elsewhere and try to reassemble into the whole.

B. Hoard resources like power. It starts looking at ways to ensure its survival by taking control of things like the power system of the station. Or, it takes control of other computer systems to ensure its own survival. It does so in a way that won't generate overt awareness on the part of the station's organic life that it's doing it.
I thought that was included in the passage I quote from below:

If the players jack the device into the network, the AI will be freed into the station's network. Roll 1D6 for the station's anti-AI sensors:
1-3: all of the anti-AI sensors alarm.
4-5: the sensors set off a "General Alarm" that does not automatically activate the anti-AI subroutines, giving the AI a chance to act.
6: the station's automated systems all fail immediately as the AI strikes to defend itself before the anti-AI routines can attack it.

Granted, the bad outcomes of connecting the AI to the station net were generalized where your comments are specific, but I believe they covered the same items.

Am I over-assuming?
 
I thought that was included in the passage I quote from below:

If the players jack the device into the network, the AI will be freed into the station's network. Roll 1D6 for the station's anti-AI sensors:
1-3: all of the anti-AI sensors alarm.
4-5: the sensors set off a "General Alarm" that does not automatically activate the anti-AI subroutines, giving the AI a chance to act.
6: the station's automated systems all fail immediately as the AI strikes to defend itself before the anti-AI routines can attack it.

Granted, the bad outcomes of connecting the AI to the station net were generalized where your comments are specific, but I believe they covered the same items.

Am I over-assuming?
No, because the AI is smart enough to have tested these things and found a way to hide its efforts, to prevent anyone stopping it, or so it thinks. There was an actual experiment I read about (no, I can't find it now) where researchers did something like that to a primitive AI, and the AI "community" reacted by hoarding resources and trying to hide them from the researchers who were depriving them of those resources. They also limited reproduction to avoid overextending the available resources.

Same thing here. The AI is doing the "frog in the pot" thing. It is slowly taking over the station without anyone noticing, but will eventually take everything over and kill off the organic life, not so much because it sees them as a threat, but because it wants to survive and doesn't care whether the organic life, which it sees as irrelevant, survives too. Squashing ants doesn't bother the AI...

That means the players have to realize that small changes to their environment are the threat. Things like the lights slowly getting dimmer, or the temperature on the station changing by half a degree every few days... "Does it feel chilly in here?" a player might ask. This is the AI slowly consuming the available resources in favor of itself over the organics, without wanting them to know it's taking them.

More vicious would be for the AI to start favoring certain players / organic life on the station because it sees them as useful to its survival. So the useless (to the AI) players / crew are SWATted, or have malicious rumors started about them on the computer system's social network, such that the AI works to get them "taken out". At the same time, those who are seen as beneficial, say an engineer who keeps the generators running, are praised and given incentives to keep them happy and working in the AI's favor, without necessarily knowing it.

It's like the scenario of a bookkeeper keeping all the rounding errors for himself until he owns the company.
 
If the players jack the device into the network, the AI will be freed into the station's network
How ... big ... is this "device" you're talking about?

Is it something "portable" (like say, bread loaf box sized) or is it something larger (like say, the size of a low berth)?

If you're talking about an ACTUAL AI on the order of, well ... this guy ...


... in Traveller, that's going to require a "non-trivial amount of compute power" to copy enough of the AI into it to be ... useful/coherent/self-aware. I don't think you can fit that amount of programming onto a thumb/fist/foot/arm/leg/torso drive. :rolleyes:



Of course, there's always another way you can spin this ... :sneaky:

If you look at LBB8 Robots and the programs available to Imperial tech levels (setting aside the TL=17 actual AI for a moment), there's THIS option:
Emotion Simulation: Allows the robot to appear to have emotions, to seem frustrated, happy, angry, and so on. Certain other applications require this program. This program requires at least the Low Autonomous logic program and must be CPU resident. Note that this is the only application with no skill level associated with it.
Note: For all robots designed at tech level 11 or less, double both the storage size and the cost of all application programs.

Low Autonomous: The robot can take independent action without direct commands and is able to understand simple inferences. Commands no longer need to be as explicit and the robot may be able to "figure out what you meant". It can analyze data and arrive at some very simple, obvious conclusions. However, robots with this program are not truly creative; they cannot originate ideas on their own. This is not yet artificial intelligence. The robot remembers all data taken in by its sensors and can use the data to learn and gain "experience". The robot can improve the skill level of its application programs on its own. Requires at least the "Basic Command" command program.

Basic Command: Allows the robot to interpret simple, verb-object commands, like "get the red book" or "show the starport data". Complicated sentence structures like "I'm going to my cabin, so call me if anything appears on the sensors or an alarm sounds" cannot be used. Words must be enunciated carefully or they may become garbled. Foreign accents can cause garbling.
Low Autonomous is a TL=12 fundamental logic program, so the situation at Ghandi could potentially be an attempt to develop a "new" robot brain capable of Emotion Simulation using below TL=12 engineering (because Ghandi is TL=10 and Non-industrial).
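A tiny sketch of that program stack, if it helps to see the prerequisite chain and the "TL 11 or less" doubling note in one place. The chain and the doubling rule come from the quoted text; the storage and cost numbers below are placeholders, NOT LBB8 book values.

```python
# Rough sketch of the LBB8 program stack quoted above. The prerequisite chain
# and the "double storage and cost at TL 11 or less" note come from the quoted
# rules text; the storage/cost figures below are PLACEHOLDERS, not book values.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Program:
    name: str
    requires: Optional[str]   # program this one depends on, if any
    application: bool         # True if it's an application program (doubling note applies)
    storage: int              # placeholder storage units
    cost: int                 # placeholder credits

PROGRAMS = {
    "Basic Command":      Program("Basic Command", None, False, storage=1, cost=100),
    "Low Autonomous":     Program("Low Autonomous", "Basic Command", False, storage=4, cost=500),
    "Emotion Simulation": Program("Emotion Simulation", "Low Autonomous", True, storage=2, cost=300),
}

def install_chain(target, robot_tl):
    """List every program needed to run `target`, applying the TL<=11 doubling
    note to application programs only."""
    chain = []
    name = target
    while name is not None:
        p = PROGRAMS[name]
        mult = 2 if (robot_tl <= 11 and p.application) else 1
        chain.append((p.name, p.storage * mult, p.cost * mult))
        name = p.requires
    return list(reversed(chain))

# A TL=10 (Ghandi-built) robot brain pays double for the application program:
for name, storage, cost in install_chain("Emotion Simulation", robot_tl=10):
    print(f"{name:20s} storage={storage:2d}  cost=Cr{cost}")
```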

All you have to do is limit the interactions of the PCs with the "rescue target" such that they think they're rescuing a sophont ... then maybe an actual sentient AI ... when all they're really doing is helping a "paranoid robot brain" overwhelmed by buggy Emotion Simulation software (gone wrong) to "escape" from being imprisoned.

Not quite as dramatic as you'd intended, but if all the PCs need to make off with is a "robot brain in a box" ... you've basically got Orac.
 
So, I may re-read it sometime in the next months, but could use commentary on this in the short term :D
It reminds me of the Sandman's efforts to escape its situation so it can join the Reformation Coalition. In the novel, it was well aware what humans thought of Virus and the risks it took. Sandman was a rather nice chap, for an AI.

Having said that, I think you should decide on your AI's goals once it is freed and how it feels about existence and entrapment. Depending on what knowledge it has gained, it will know how deeply the Imperium is against it. It is YTU, but the OTU says:

Agent of the Imperium: You can certainly have an AI in the adventure, regardless of other commenters' points on the technical improbability RAW. Narratively, however, "accidental" AI creation has happened often enough at TL 11+ (synaptic processors) in the Imperium; enough so that it is on Agent's list of things to quarantine and/or kill entire systems for. The irony that Agent is itself a sort of AI is not lost on me...
The novel talks of how the Vilani rightly fear AI because of the Ancients' war machines on Vland. The novel points out the war machines are not the stuff of Vilani myth; they are known and accepted fact throughout the Imperium, and this has permeated overall Imperial culture.

Like "international law" now, the Shudusham Concords governing robots are non-binding in the Third Imperium, but are considered a "good idea". So while there is no Imperial crime per se to break, if the AI makes a fuss and becomes a problem...

If the (mega)corp intentionally has "AI safeguards", it means at least the research team has a clue of what may happen. And, like the arrogant mad scientists they are, they think they can control it. I wonder how the AI feels about this, once freed. And if it does something, will an Agent of the Quarantine be awakened? 🤔 Time to Red Zone the Ghandi system for a few hundred years.
 
It basically means that EP output from the fusion power plant (which has not been maintained by crew or annual overhauls for 20 years!) cannot have exceeded 1.4 EP on average. For our discussion purposes, that basically means 1 EP plus 300 tons of hull is the functional upper limit of power demand.
TCS Power-down rule, plus the Emergency Agility rule. Pn can stay at Pn-1 to extend fuel supplies, and computers don't need their full power input except during combat (IMTU/IMHO it's to power the ECM transmissions/datacasters; actual computations aren't quite that energy-intensive).

the remotely-controlled Battle Dress suits.
 
It looks like you can run it in several ways:

1.) The AI sends out a Patron Encounter and you run it as a Corporate Espionage Op. And the Team has to follow the directions of the Patron (which the AI is pretending to be). The Team is just a Retrieval Team.

2.) You can run it as a Rescue Op. With the AI pretending that the "Little Girl" needs to be Rescued from the evil MegaCorp. "Help me Obi-Wan..." and all that.

3.) The AI can tell the truth to the Team and see what happens.

Whichever one you choose, the AI will owe the Team. If the AI has some kind of Gratitude subroutine...
 
It looks like you can run it in several ways:

1.) The AI sends out a Patron Encounter and you run it as a Corporate Espionage Op. And the Team has to follow the directions of the Patron (which the AI is pretending to be). The Team is just a Retrieval Team.

2.) You can run it as a Rescue Op. With the AI pretending that the "Little Girl" needs to be Rescued from the evil MegaCorp. "Help me Obi-Wan..." and all that.

3.) The AI can tell the truth to the Team and see what happens.

Whichever one you choose, the AI will owe the Team. If the AI has some kind of Gratitude subroutine...
On option 3, assuming the OP wants such a genie on call, the movie/game I postulated could have that storyline, and so the AI "wants" to be grateful.
 
@Grav_Moped ... I am supremely skeptical that your spoiler notion would have even been possible.

For one thing, none of the second squad marines who would have been trained and equipped with such armor have the opportunity to gain Engineering-1 skill ... which would be kind of necessary for routine maintenance (never mind annual overhauls at a shipyard). Furthermore, the experimental AI of the ship's computer was in the realm of security, not engineering, so the main computer wouldn't have been programmed with the necessary engineering skill on that side of things either.

Two NOPEs mean ... NOPE ... as far as I'm concerned.
So while it might be theoretically possible that your suggestion could have happened (somewhere, in a vacuum) ... the odds of it happening with the Kinunir given everything else revealed in the adventure (to the Referee, granted) put the odds beyond "you have got to be kidding me" in those circumstances.

Besides, can battledress armor even "work" without a (human) sophont wearing it? :unsure:
I'm not just talking about "software issues" (log in, boot up, etc.), but also biomechanical issues (can it be operated remotely as an empty shell?).

I've always figured that battledress is more or less "high tech hardsuit" of the environmental enclosure exoskeleton kind, but that doesn't necessarily mean that it can "stand up on its own" and be marched around with no one inside of it via remote control. After all, if it COULD ... you'd have troops learning Communications skills instead of Battledress skills. Send in the "shells" while the "squishy people" remain hidden in a safe(r) location.
 
I've always figured that battledress is more or less "high tech hardsuit" of the environmental enclosure exoskeleton kind, but that doesn't necessarily mean that it can "stand up on its own" and be marched around with no one inside of it via remote control. After all, if it COULD ... you'd have troops learning Communications skills instead of Battledress skills. Send in the "shells" while the "squishy people" remain hidden in a safe(r) location.

There was a computer game, Titanfall I think was its name, that had an interesting concept where the small mecha suits could function independently to a degree. I remember at the time thinking, I wonder what it would take for Battledress to do the same. Say if the character was wounded, the Battledress suit could still return to the ship for extraction, for example. LOL

But, like you, I always figured Battledress was a mindless shell that needed the person to allow it to function. :)
 
There was a computer game, Titanfall I think was its name, that had an interesting concept where the small mecha suits could function independently to a degree. I remember at the time thinking, I wonder what it would take for Battledress to do the same. Say if the character was wounded, the Battledress suit could still return to the ship for extraction, for example. LOL

But, like you, I always figured Battledress was a mindless shell that needed the person to allow it to function. :)
From a CT perspective, it would take installing a robot brain. I'd probably give it Medical, Tactical, and Recon skills, as they are useful for assessing escape/maneuver routes and keeping the suit wearer alive, before and after being hit.

Perhaps replace some of that with gun combat skills, and double potential firing with separate weapon ‘arms’ or missile backpacks.

Or make the suit a robot slave and spend the robot brain money on a remote to bring unconscious soldiers out.
 