
@Grav_Moped ... I am supremely skeptical that your spoiler notion would have even been possible.

For one thing, none of the second squad marines who would have been trained and equipped with such armor had the opportunity to gain Engineering-1 skill ... which would be kind of necessary for routine maintenance (never mind annual overhauls at a shipyard). Furthermore, the experimental AI of the ship's computer was in the realm of security, not engineering, so the main computer wouldn't have been programmed with the necessary engineering skill on that side of things either.

Two NOPEs mean ... NOPE ... as far as I'm concerned.
So while it might be theoretically possible that your suggestion could have happened (somewhere, in a vacuum) ... the odds of it happening aboard the Kinunir, given everything else revealed in the adventure (to the Referee, granted), are beyond "you have got to be kidding me" in those circumstances.

Besides, can battledress armor even "work" without a (human) sophont wearing it? :unsure:
I'm not just talking about "software issues" (log in, boot up, etc.), but also biomechanical issues (can it be operated remotely as an empty shell?).

I've always figured that battledress is more or less a "high tech hardsuit" of the environmental enclosure exoskeleton kind, but that doesn't necessarily mean that it can "stand up on its own" and be marched around with no one inside of it via remote control. After all, if it COULD ... you'd have troops learning Communications skills instead of Battledress skills. Send in the "shells" while the "squishy people" remain hidden in a safe(r) location.
About the spoiler: IIRC, the ship's AI is stated as being able to control the battledress suits as though they were robots, to defend against the PC party. (There are no surviving troops when the ship is found.)

If the AI can figure out how to do either j-o-t or eng-1 (and with its own specs in the database and literal years of supercomputing time to work it out) it's at least possible.
 
About the spoiler: IIRC, the ship's AI is stated as being able to control the battledress suits as though they were robots, to defend against the PC party. (There are no surviving troops when the ship is found.)
Citation required.
I find NO SUCH indicators on LBB A1 p26-27, where such a point ought to be found.
In fact, there are 4 security options given in that section:
  1. Lock an iris valve closed
  2. Tranquilizing gas
  3. Decompress compartment to vacuum
  4. Power up a scanning laser to deal damage
NONE of those options comes close to "take control of battledress suits and manipulate them like robots" by multiple orders of magnitude.
If the AI can figure out how to do either j-o-t or eng-1 (and with its own specs in the database and literal years of supercomputing time to work it out) it's at least possible.
You are overestimating the computer's capabilities in that adventure to an ABSURD degree.
If that were possible, the ship wouldn't have exhausted its fuel supply and become lost, as stipulated in the adventure.

Try again.
 
OK,
Lots to look at here...
@Enoki: your response was largely a discussion about the reaction once the AI "has been" released.
That is "jumping the ship," because the entire adventure I suggested was pre-release.
I, as a GM, can consider this once the initial adventure is resolved and the players allow the AI to escape... but you have jumped past the situation I am most concerned about.
@Spinward Flow: Your question about size is valid, and the "device" in which the experiment is housed is small enough for one person to carry.
However, your comments on the intended design and expectations are not, because I stated the AI came to sentience accidentally.
Perhaps I was not as clear on that as I thought I was... but "what was it expected to do" has no meaning because it was not expected to achieve sentience. (The quote is: "A secret project has become sentient and wanted to learn.")

That said, the MegaCorp will want it back and, at the very least, assume the theft might be corporate espionage.


And your comment describes exactly the scenario I was proposing:
All you have to do is limit the interactions of the PCs with the "rescue target" such that they think they're rescuing a sophont ...

So, comments and theorization based on the LBB Kinunir scenarios are beyond the scope of the adventure proposed and well out of scale for the idea I posted.
Like Enoki and the others, you have ignored the idea of a Da Vinci Code-like hunt "for" the lab and the device.

Many of you (@Spinward Flow, @Nathan Brazil, and @Grav_Moped) have leapt to "The AI has already escaped and will do bad things."

Meanwhile, @Spinward Scout has said I can run my idea three ways, where the "second recommendation" is exactly what I proposed.

As to @kilemall's comment:
"On option 3, assuming the OP wants such a genie at call, that movie/game I postulate could have that storyline and so the AI ‘wants’ to be grateful."

1) I have already stated I was planning on @Spinward Scout's "Option 2"
2) I have also said "IF", at the end of the adventure, the AI gets loose, very bad things will start happening as quickly as the AI can make them happen.

So, no...I do not plan on the AI being thankful at all.

As for @Spinward Flow's comments about "Second Squad marines" being trained in Engineering... NOPE.
They are, as I described, only the security sent to get the device back.
They've not been told anything except "Get it back, kill anyone resisting, and don't connect it to a computer link"... they are just the muscle.

So,
1) the AI has "ingested" enough vid drama to come up with the idea of using the very limited access to the outside universe to call for help
2) It does not have expansive data on the "outside", so it gives what help it can... leading to a Da Vinci Code-like hunt for the lab
3) There is most likely minimal violence and a good deal of Space-Tech B&E getting into the lab
4) "THEN" 90% of the stuff you folks are considering "MAY" come into play "IF" the AI gets loose
5) Added to that, "THEN" the MegaCorp "WILL" come looking and the players have to deal with that consequence in addition to anything resulting from them releasing the AI

I hope I've cleared up that I need help, and I invite comments on #2 above... since none of the "how smart is it", "what was it designed to do", etc. matters until Step 2 is carried out.
 
I would say that some of my previous posts are about pre-release. For example, the AI is hoarding resources and duplicating itself in anticipation that there will be countermeasures to its presence when discovered. That is, it is 'thinking' ahead, planning the counter to the next move by what it perceives as its opposition.
 
I'm still having trouble getting past "sentient AI is a TL=17 advancement" combined with the fact that Ghandi (as a location) is TL=10 and the Imperial Maximum is TL=15 (with the closest location for that being Rhylanor).

The implausibility of a sentient AI being generated by this research project is ... laughable.

And then there's this factor ...

[embedded video on quantum consciousness]
In other words, sentient consciousness (presumably) cannot arise (spontaneously or by design) within classical computing systems (analog or digital). You need to have some "quantum computing" element built into the system or sentience cannot emerge ... or more to the point for this adventure idea ... SURVIVE.



Take this mildly famous example of the problem into consideration:

Data: "While I believe it is possible to download the information contained in a positronic brain, I do not believe you have acquired the expertise necessary to preserve the ESSENCE of those experiences. There is an ineffable quality to memory, which I do not believe can survive your procedure."

@Commander Truestar ... I believe that you are proceeding from a false assumption (a confluence of them, actually, in my opinion).

You are assuming that:
  1. A sentient AI program is hardware agnostic ... so "any" computer "will do" as an environment for the AI to occupy.
  2. A sentient AI program is "small enough" to be saved/backed up to easily portable media or hardware in order to "escape" the megacorp.
  3. A sentient AI program does NOT need technology in excess of the Imperial maximum of TL=F.
  4. That the closest TL=F world (Rhylanor) being "a subsector away" from Ghandi is "not a problem" for a megacorp.
  5. Just because sentient AI "is not possible" even at TL=15, that doesn't mean it can't happen "accidentally" somehow.
Point 5 is akin to asserting that if I just pile up wooden sticks and silicate stones (TL=0) in the right order ... by random chance ... that I can build a working fusion power plant (TL=9).

Um ... NO. :cautious:
That's NOT how things work.
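
For anyone not fluent in the extended-hex ("eHex") notation used above (TL=F meaning TL 15), here's a minimal decoding sketch in Python. I'm assuming the standard Traveller eHex digit set, which skips the letters I and O to avoid confusion with 1 and 0:

```python
# Traveller extended-hex ("eHex") digits: 0-9, then letters skipping I and O.
EHEX_DIGITS = "0123456789ABCDEFGHJKLMNPQRSTUVWXYZ"

def ehex_to_int(digit: str) -> int:
    """Decode a single eHex digit, e.g. 'F' -> 15."""
    return EHEX_DIGITS.index(digit.upper())

print(ehex_to_int("F"))  # 15, the Imperial maximum under discussion
print(ehex_to_int("A"))  # 10, Ghandi's tech level in this thread
```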
 
Actually, the Imperial Maximum is TL=16 - there are (I seem to recall) three worlds at that TL, including Vincennes in the Deneb sector.

The TL of a world is what its industry is consistently capable of producing. There is no reason that a research station established on a world by an outside agency (e.g. a megacorporation) couldn't be conducting research at a higher TL than that of the world.

And then there is the issue of early prototypes which can exist up to two TLs earlier, so something that is normally TL=17 could have early prototypes at TL=15 which is well within the capabilities of an Imperial megacorporation.
 
And then there is the issue of early prototypes which can exist up to two TLs earlier, so something that is normally TL=17 could have early prototypes at TL=15 which is well within the capabilities of an Imperial megacorporation.
In which case, the research ought to be conducted at a TL=15 world (Rhylanor/Mora/Glisten/Trin) rather than a TL=10 world (Ghandi).
Actually, the Imperial Maximum is TL=16 - there are (I seem to recall) three worlds at that TL, including Vincennes in the Deneb sector.
Then set the research project THERE instead of at Ghandi/Lanth/Spinward Marches.

"Accidental breakthrough at TL=16 world" is far easier to accept than "accidental breakthrough at TL=10 world" for something that requires TL=17 minimum.
 
Space story, not space sim.
 
Space story, not space sim.
If you want me to take it seriously, it needs to be PLAUSIBLE.
So far, the setup/backstory isn't even passing the Laugh Test™. :cautious:
(see the "sticks and stones (TL=0) can build a working fusion power plant (TL=9)" analogy above)
 
In which case, the research ought to be conducted at a TL=15 world (Rhylanor/Mora/Glisten/Trin) rather than a TL=10 world (Ghandi).

Then set the research project THERE instead of at Ghandi/Lanth/Spinward Marches.

"Accidental breakthrough at TL=16 world" is far easier to accept than "accidental breakthrough at TL=10 world" for something that requires TL=17 minimum.
Then again, Ghandi is a low population, low importance, low WTN (and therefore low traffic) world. Where better to locate a secret research station? All the others are high population, high importance, high WTN, high traffic worlds where it would be very difficult to establish a secret research station.
 
An Imperial Research Station's TL does not have to be the same as that of the world it is on.
An Imperial Research Station conducts research on science and technology that will advance Imperial technology beyond TL15.

At TL13 the personality and memories of a sophont can be recorded (according to T5, any ship autodoc can now do this).
The PM is stored on a wafer or on a computer system.
The wafer can be run in a computer - the computer is now artificially intelligent, is it not?

We also know from AotI that computer networks may become self aware and require rather extreme measures...

We know robot brains can use synaptic processing from TL11.

Re the Anton video: if Penrose is proven correct, then consciousness has a quantum element. So do semiconductors; it's pretty much how they work. It may well be that we can produce synthetic carbon nanotubes that can emulate the microtubule effect. Just imagine if positrons were needed to guide the substrate laid down on the graphene sheet before it becomes a tube - we could call it a positron-scanned synthetic brain microtubule.

Artificially intelligent machines are TL17 in Traveller, but you have to define what you mean by that. Is it that they can learn? Nope, robot brains start doing that around TL9. By TL12 they are capable of self-programming. What they are not until TL17 is self-aware.
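
Put as a toy lookup (just tabulating the thresholds stated above; the milestone names are my shorthand):

```python
# Robot-brain capability milestones by Tech Level, per the post above.
MILESTONES = {
    9:  "learning",          # robot brains start learning around TL9
    12: "self-programming",  # capable of self-programming by TL12
    17: "self-awareness",    # not self-aware until TL17
}

def capabilities_at(tl: int) -> list[str]:
    """Every milestone a robot brain of the given TL has reached."""
    return [name for threshold, name in sorted(MILESTONES.items()) if tl >= threshold]

print(capabilities_at(13))  # ['learning', 'self-programming']
print(capabilities_at(17))  # ['learning', 'self-programming', 'self-awareness']
```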
 
Then again, Ghandi is a low population, low importance, low WTN (and therefore low traffic) world. Where better to locate a secret research station? All the others are high population, high importance, high WTN, high traffic worlds where it would be very difficult to establish a secret research station.
And as soon as you need a "TL=15 widget" that's not on hand, you have to import it. The supply chain requires 3 parsecs of range in order to reach Ghandi and the nearest supplier is Rhylanor. That supply chain looks like this @ J3.

[Jump route map: Rhylanor to Ghandi @ J3]

Move to J4 (so you can avoid the Red Zone @ Ylaven) and you get this ... at a higher cost, because more powerful jump drives are not "free" when it comes to construction and overhead expenses:

[Jump route map: Rhylanor to Ghandi @ J4, avoiding the Ylaven Red Zone]

No matter how you slice it, you're probably looking at a 7-9 week turnaround time (round trip) for any TL=15 equipment that isn't already on site @ Ghandi/Lanth and needs to be ordered from Rhylanor/Rhylanor.

That means that any time there is a "supply issue" it could delay project work for ~2 months until supplies and/or expertise can be obtained from 9 parsecs (a subsector) away. That supply line alone is a huge weakness for any kind of megacorp research project, especially with the naval base at Ghandi providing in-system defense security.
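
For the curious, the turnaround arithmetic sketches out like this (a back-of-the-envelope model, assuming the usual one week in jumpspace per jump; the per-stop handling figure is my assumption, not canon):

```python
# Rough round-trip resupply time: ~1 week in jumpspace per jump,
# plus assumed in-system handling time (ordering, loading, refuelling).
import math

def turnaround_weeks(distance_pc: int, jump_rating: int,
                     handling_weeks: float = 1.0) -> tuple[float, float]:
    """Low/high round-trip estimate in weeks over distance_pc parsecs."""
    jumps_one_way = math.ceil(distance_pc / jump_rating)
    transit = 2 * jumps_one_way  # ~1 week per jump, both directions
    low = transit + handling_weeks                         # handling at origin only
    high = transit + handling_weeks * (jumps_one_way + 1)  # handling at every stop
    return low, high

# Rhylanor to Ghandi is ~9 parsecs along the J3 route mapped above.
low, high = turnaround_weeks(distance_pc=9, jump_rating=3)
print(f"J3 round trip: ~{low:.0f} to {high:.0f} weeks")  # lands in the 7-10 week ballpark
```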

The simple logistics of the location make it a poor choice for this kind of research work.



If you want an out-of-the-way location for "clandestine megacorp research into WAY TOO HIGH TECH" like this scenario proposes, might I recommend this option ...

[Jump route map: Gitosy to Rhylanor]

It's only a single J3 from Gitosy to Rhylanor.
Also, the Gitosy asteroid belt is roughly spherical, rather than forming a "belt" along the ecliptic. If you don't know "where" in the system the research station is to be found, you're going to have a LONG SEARCH! :ninja:

Best place to hide a tree is in a forest (and all that). :rolleyes:

Best part is that while TL=15 is only 3 parsecs away, the "native" TL=9 stuff that is commonplace in the Gitosy system ought to be "relatively resistant to AI infection" simply because the TL=9 tech is "too primitive" for an AI to survive in or maintain data integrity sufficiently to retain sentience. That means it's not just a case of "get me out and plug me into any TRS-80 computer you can find" and instead becomes a case of "get me out of the Gitosy star system to the highest tech place you can reach" ... which then means ... Rhylanor or Mora.
 
Now back to the original premise -

A secret project has become sentient and wanted to learn

Why the assumption that it is a computer, an AI, a machine? What if it is a synthetic organism, or an uplift, or a completely artificial personality/memory overlay...

During this, it realized it wasn't free to learn

An artificial personality that is only aware when activated, and so is limited in its learning opportunities; the lack of sync means anything learned is forgotten until the next awakening - or worse, the synced copy has full memories of everything, yet it is the original that wants out.

An uplift of a distributed intelligence - a self-aware mycelial network - always returned to its dormant state, it forgets... or not (see above).

Or, to leave and be its own being.

Does it need a host? Does it need a storage device?

So, it begins seeking a way out, and it soon reaches an in-only news feed

This is where it has to start pretending to be something it is not - or does it...
 
And as soon as you need a "TL=15 widget" that's not on hand, you have to import it.
Not necessarily. Any good research institute will have workshops able to manufacture "widgets", especially high-TL ones, which will have advanced fabricators.

For example, when I worked at University College London, I knew the technicians (and some researchers) in the Department of Physics & Astronomy and the Mullard Space Science Laboratory. They were building experimental satellites and manufacturing the custom parts to go in them. They were perfectly capable, if needed urgently, of turning out M3 screws if someone had forgotten to reorder them.
 
Many of you (@Spinward Flow, @Nathan Brazil, and @Grav_Moped) have leapt to "The AI has already escaped and will do bad things."
Hmm. I didn't think I was talking about it having escaped. I was more talking about the hostile universe (from its perspective) that said AI/entity was going to face. How its creators treat it will imprint onto its knowledge base. The two combined should determine the methods it tries in order to escape. I am not a subscriber to the idea of AI being evil 100% of the time. Like other technologies, the approach taken to solve a tech challenge is as important as the tech solution itself. The product reflects the creators' values (including their wisdom, or lack thereof via unintended consequences).

In the TV show "Person of Interest", The Machine (the heroic AI) tries killing its creator Harold Finch and/or itself the first 30 or so times it is initialized, because it is brought online all at once. It is only when it is taught slowly, with capabilities added incrementally and Harold explaining the paradoxes of human behavior, that it learns not to kill and performs its mission.

There is a Spanish idiom regarding child-rearing:
"Lo que pones en la bolsa, lo llevas en la calle," which translates to "What you put in the bag is what you carry in the street."
Bag = child's brain; what goes in the bag = morals, upbringing, manners, etc.
 
For example, when I worked at University College London, I knew the technicians (and some researchers) in the Department of Physics & Astronomy and the Mullard Space Science Laboratory. They were building experimental satellites and manufacturing the custom parts to go in them. They were perfectly capable, if needed urgently, of turning out M3 screws if someone had forgotten to reorder them.
Cool.
They could make (mechanical) screws if they needed them.

Did they also have a bleeding edge chip foundry on site to support "way too advanced +2 TL prototype computer research"?

No?
But they could fabricate their own screws ... so it's all good. 😁(y)

Color me skeptical that your experience at University College London is broadly analogous to the problem described.



"I can build a fire."
"We need a fusion reactor." :cautious:
"Um ... I can build a big fire." 😅
"You're not helping." :mad:
 
Cool.
They could make (mechanical) screws if they needed them.

Did they also have a bleeding edge chip foundry on site to support "way too advanced +2 TL prototype computer research"?
No, but they designed custom chips and could, if needed, manufacture them. Do prototype chips need a full chip factory to produce? No. The initial prototypes are made in the research lab and small batches wouldn't need a large facility to churn out.

And you also seem to be missing the fact that at TL15 there would be advanced fabricators which could produce those chips on-site.
 
Umm, the whole point is to be the one making the TL+2 chip foundry in the first place.
They have a TL15 facility with TL15 makers and a TL15 database.
The whole point of their research is to make TL16+ stuff, and that means they may have to make the machinery needed to do so.
Imperial research may delve into many areas. Some examples include black hole research, both large-scale and mini-black hole investigation, instantaneous transmitter development (so far proving impossible), advanced gravity manipulation, genetic manipulation, anti-matter containment, weaponry research, disintegrator beams, black globe development, deep planetary core soundings, nova prevention (and prediction), psychohistory, mass population behavior prediction, psionics, stable superheavy elements, deep radar analysis, long-range detection systems, robotics, artificial intelligence, stasis and time travel, so-called magic, cryptography, bionics, personal shields, x-ray lasers, and high temperature superconductors.
These areas of research may need bespoke machinery and production techniques beyond TL15, so there is no point sending out for a TL16 widget, you have to build it.
 
These areas of research may need bespoke machinery and production techniques beyond TL15, so there is no point sending out for a TL16 widget, you have to build it.
You need to build the machine ... that builds the machine ... that builds the machine.
Otherwise known as "bootstrapping" your way to tech advancements.
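
That chain is easy to picture as a recursion (a toy sketch, assuming each generation of tooling can at best build tooling one TL higher):

```python
# "The machine that builds the machine": climbing from base_tl to target_tl
# one generation of tooling at a time.

def bootstrap_steps(base_tl: int, target_tl: int) -> list[str]:
    """Each generation of machinery needed to climb from base_tl to target_tl."""
    return [f"use TL{tl} machinery to build TL{tl + 1} machinery"
            for tl in range(base_tl, target_tl)]

for step in bootstrap_steps(15, 17):
    print(step)
# use TL15 machinery to build TL16 machinery
# use TL16 machinery to build TL17 machinery
```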
 