
CT/HG Computer Intelligence

kilemall

Got to thinking about how I would deal with the HG computer per tech level hit modifier in the context of the CT rules, and came up with this concept that you may find useful in general.

Decided that the model number IS the computer's intelligence.

The model number gets applied as in HG: a positive DM on the to-hit roll when firing, and a negative DM when the ship is fired upon, in all situations including anti-missile shots and return fire (since these are highly automated firing processes).
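As a quick illustration of how this model-number-as-DM idea might resolve in play (the 2d6 mechanic and the target number 8 here are placeholders for the sketch, not quotes from HG):

```python
import random

def net_dm(attacker_model: int, defender_model: int) -> int:
    """House-rule DM: the firer adds its computer model as a positive DM,
    and the target's model applies as a negative DM against the shot."""
    return attacker_model - defender_model

def to_hit(attacker_model: int, defender_model: int, base_target: int = 8) -> bool:
    """Resolve one shot on an assumed 2d6-vs-target roll. The same DMs
    apply to anti-missile shots and return fire."""
    roll = random.randint(1, 6) + random.randint(1, 6)
    return roll + net_dm(attacker_model, defender_model) >= base_target

# A Model/7 dreadnought firing at a Model/2 free trader enjoys a net +5 DM;
# the trader's return fire suffers a net -5.
```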

The intelligence part also indicates to what extent the computer can interact with the crew and problem solve.

Most computers the players will interact with will be INT 1-4, somewhere between a poodle and a young child.

Most naval services would of course have a standard personality for their high end computer, and likely not allow it HAL levels of 'mission control' much less autonomous combat.

Not planning on having Robot skill type software built in as I see the ship computers as being primarily deterministic engineering/maneuver/fire control systems, more specialized in 'doing what it does'.

The smarter systems may be taught to do unusual maneuvers or engineering tasks, so one could order a 'Picard maneuver' or 'crazy Ivan' and the ship would execute it.
 
I have always considered simple computers as being basic, without any sort of problem-solving ability. Responses are simple: "acknowledged," "unable to process command," etc. If they have expert programs, the computer just uses the program rating as its only skill.

Until you get to a dedicated AI, the computer has no problem-solving capacity without specific skill software, and then it is limited only by the flexibility programmed in by the software developer, sort of like the AI of a good video game. Nothing more.
 
I have always considered simple computers as being basic, without any sort of problem-solving ability. Responses are simple: "acknowledged," "unable to process command," etc. If they have expert programs, the computer just uses the program rating as its only skill.

Until you get to a dedicated AI, the computer has no problem-solving capacity without specific skill software, and then it is limited only by the flexibility programmed in by the software developer, sort of like the AI of a good video game. Nothing more.

In one sense that's true, and in another it's not.

Consider the computer of a 1960s era plane- mostly tied up in radar and fire control, most controls still analog of some sort, all very deterministic and absolute input/output results, multiple engine planes handled by a dedicated engineer.

By the 1970s you have fly by wire in military planes (which migrates into the civilian sphere by the 80s) and the F-16, an inherently unstable design that REQUIRES the computer to be running and adjusting everything for a smooth flight. The engineer position goes away, handled by inherently more stable engines and more computing power.

By the 90s you have ubiquitous fly by wire, a high level of automation, B2 stealth planes that require that much more avionics to even fly.

Still deterministic? Mostly, but more and more you have to have fuzzy logic and less rigid responses to inputs, where the responses to conditions are less about X input results in Y action, but more X input results in smooth Y action to Z range.

Basically, an evolution of autonomic nervous system responses which may not respond precisely the same every time and can respond to a wider and wider array of situations.
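The shift from rigid to smoothed responses can be sketched in a few lines; the gain and limit values here are purely illustrative:

```python
def deterministic_response(error: float) -> float:
    """Rigid rule: X input produces Y action, all or nothing."""
    return 1.0 if error > 0.5 else 0.0

def smoothed_response(error: float, gain: float = 2.0, limit: float = 1.0) -> float:
    """Softer rule: X input produces a smooth Y action within a Z range.
    A proportional correction, clamped to the actuator's limit."""
    return max(-limit, min(limit, gain * error))

# A small disturbance gets a small, proportionate correction rather than
# either no response or a full-throw one.
```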

Problem solving.

Is it Turing problem solving? Certainly not at the Model 1-4 level; this is more like dedicated engineering expert systems. But they have built-in expertise, specific to ship handling and fighting, that substitutes for human judgment on simple problems that can be handled at speed.

In just the two 'tech levels' we have seen, computing has come a long way, allowing for plane designs that were literally impossible to field effectively without the control expertise captured in the logic provided by their human expert designers.

I am assuming the trend continues to greater computer responsibility and problem solving, especially when you have the TL constraints of the computer determining how big your ships can get. Taking that as a gameplay mechanism and not gospel truth, it still is an interesting control point, literally saying the ships get too complex without a big enough central computer to handle them.


So by TL13, when you would have a Mod/7 with intelligence 7 on your latest dreadnought, it may still not be a Turing AI in the full sense of the word, or as per the Robot Brain rules, but it is an expert at what it does: controlling a ship.

Fits a lot of points for me, including the HG +/- tech thing, and the pricing for the higher-end models makes sense given the high level of expert-system programming and performance you should get for the money.

And another way to get across to players that they are in the FUUUUUUUUUUTURE and differentiating between being in a small fry ship and the Big Time.
 
I agree with what you have said as basically correct. However, the computer's rating, processing speed, memory capacity, and ability to process multiple programs at once don't make the software any "smarter" or "dumber."

If you run a Model 1 TL-12 computer with a rating of 10, or a TL-15 Model 7, the software it runs is still a Fire Control-1... the fact that the commands can be processed more rapidly, more data can be accessed, and several other programs can be run at the same time does not affect the ability of the software to carry out its functions... as long as the base rating of the program is present.


A computer's model number/power/rating/tech level won't improve the effectiveness of the software. Now, if you wanted to say that installing an intellect, or an intelligent interface, in the system gives the computer an intellect rating based on the software's TL/rating, that would work.
 
I agree with what you have said as basically correct. However, the computer's rating, processing speed, memory capacity, and ability to process multiple programs at once don't make the software any "smarter" or "dumber."

Within the context of the extant game rules, yes, but I am proposing that could change with this rule.

If we are talking about increases in power allowing for real life 'smarter' computers, I utterly disagree. It's precisely because there are so many resources that greater complexity is possible.

If you run a Model 1 TL-12 computer with a rating of 10, or a TL-15 Model 7, the software it runs is still a Fire Control-1... the fact that the commands can be processed more rapidly, more data can be accessed, and several other programs can be run at the same time does not affect the ability of the software to carry out its functions... as long as the base rating of the program is present.

I agree that the CT rules, and quite possibly all that follow, certainly have that paradigm. I also think it's an utterly incorrect model, both from a RL perspective on just the two TLs we've seen improvements across, and from the standpoint of having significant differences between TLs.

My approach to LBB2 vs. HG re: tech is that the standard drives of LBB2 are effectively universal industrial standards, so a TL9 drive A can be plugged into a TL14 ship and it will work, and vice versa, a TL14 power plant can go into a TL9 ship (the ultimate in backwards compatibility).

HG, on the other hand, is fully custom and a full implementation of the TL the ship is built at (among other things, requiring its TL for parts and service), more appropriate to a specific navy's or megacorp's high-end use.

In that framework, the computers need to have that same backwards compatibility, especially the low-end models that are ubiquitous throughout LBB2 standard ships, since they are a key part.

At INT 1 or 2 they aren't going to be doing anything significantly better than a purely deterministic model.

But for the high-end ones, and especially the ones destined for major warships, the HG rules re: +/- mods need to be honored if one is mixing systems like I am, and this seems the best shorthand to explain the difference in performance.


A computer's model number/power/rating/tech level won't improve the effectiveness of the software. Now, if you wanted to say that installing an intellect, or an intelligent interface, in the system gives the computer an intellect rating based on the software's TL/rating, that would work.

I agree that the game rules have it that way in CT and perhaps most of the other editions. But HG sure as heck doesn't, and I'm looking for elegant shortcuts.

One could carefully build up a new catalog of programs to highlight the progression of computing power and its tactical capability increases. But rather than do that, it seems easier to say a Predict-3 program running on a Mod/7 gets a +7 bonus than to run through creating Predict-5 through Predict-10 or something like that.
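The proposed shorthand reduces to a one-line formula; whether the program's own rating still stacks as a DM on top of the model bonus is my assumption here, not part of the quoted proposal:

```python
def predict_dm(program_rating: int, computer_model: int) -> int:
    """Shorthand house rule: keep the small program catalog and let the
    computer's model number carry the growth as a flat bonus.
    Stacking the program's own rating is an assumption for this sketch."""
    return program_rating + computer_model

# Predict-3 on a Mod/7: +3 (program) + 7 (model) = +10 total.
```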

I'm having a tough enough time wrapping my head around what happens when you are managing computer programs like LBB2 on a 100K cruiser with the facing zonal damage system. Ship design and fleet operations become radically different.

In effect I am saying the computer is getting an intellect rating, if I understand what I read in some of the material for the later Traveller versions. But that would not be the same program at TL9 that is running at TL13: the OS, its requirements, and thus the compiler would be different, not to mention the greater capability to do whatever the program is designed to do with a 'smarter' OS executing it.

The program would be structured differently and have many more nuances possible even if the core algorithm is the same.
 
While it's MT, not CT, it's IMHO compatible enough as to make the comparison valid.

In 101 Vehicles a robot brain is used instead of a computer for a recon drone (vehicle 23), and on pages 1-2 it is explained that the CP equivalence used was a 250 CP multiplier per IQ point. It is also explained that this extrapolation puts TL16 computers (Model 10), with a CP multiplier of 200, on the verge of IQ 1 (consistent with CT:LBB8 Robots, where at TL16 you can use 50%-reliable synaptics), while lower-TL computers are still far from it, and higher ones (the TL17 Model 11 has a CP multiplier of 1000, so IQ 4) are really intelligent.

As said, this is consistent with what is told in CT:LBB8, where it is said that starship computers use parallel processing, and so are purely deterministic, as synaptics are not yet reliable enough at TL15 for such a critical system.
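The MT:101 Vehicles conversion described above is simple enough to express directly (250 CP multiplier per IQ point, rounding down):

```python
def iq_from_cp_multiplier(cp_multiplier: int, cp_per_iq: int = 250) -> int:
    """Convert a computer's CP multiplier to an equivalent IQ,
    per the 250-CP-per-IQ-point rate cited from MT:101 Vehicles."""
    return cp_multiplier // cp_per_iq

# Model 10 (TL16), CP multiplier 200  -> IQ 0, "on the verge" of IQ 1.
# Model 11 (TL17), CP multiplier 1000 -> IQ 4.
```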
 
Has anyone taken the size of Traveller ship computers, accounted for cooling and shielding, and done Tflop projections based on size, calendar date, and Moore's Law?
 
While it's MT, not CT, it's IMHO compatible enough as to make the comparison valid.

In 101 Vehicles a robot brain is used instead of a computer for a recon drone (vehicle 23), and on pages 1-2 it is explained that the CP equivalence used was a 250 CP multiplier per IQ point. It is also explained that this extrapolation puts TL16 computers (Model 10), with a CP multiplier of 200, on the verge of IQ 1 (consistent with CT:LBB8 Robots, where at TL16 you can use 50%-reliable synaptics), while lower-TL computers are still far from it, and higher ones (the TL17 Model 11 has a CP multiplier of 1000, so IQ 4) are really intelligent.

As said, this is consistent with what is told in CT:LBB8, where it is said that starship computers use parallel processing, and so are purely deterministic, as synaptics are not yet reliable enough at TL15 for such a critical system.

I was always a big fan of the LBB8 bot brain model, other than the limited, purely mechanical approach rather than the wetware bots that run throughout scifi.

However, LBB8 can get you INT levels higher than INT 1 at TL17; I've intentionally designed such.

Hmmm, just broke out the book, Low Autonomous at TL12 for instance requires Full Command level of Fundamental Command, alone it gives INT 2.

Heck 'stupid' linear processors get 1 INT.

Yep, looking at this, at TL12 you can get 20 linear processors, 50 parallel processors, and 10% synaptics, so 7 of those are possible (although INT is divided by 2 for synaptics, so 6 would be more likely).

Works out to 1 INT for the linear, 10 INT for the parallel, and 3 for the synaptic, plus 2 INT for the Full Command, for an accessible level of 16 INT.
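The tally above can be sketched as a function. Note the processors-per-INT ratios are inferred from the worked numbers in this post (20 linear -> 1 INT, 50 parallel -> 10 INT, synaptic INT halved), not quoted from LBB8 itself:

```python
def brain_int(linear: int, parallel: int, synaptic: int, command_int: int = 0) -> int:
    """Tally INT for an LBB8-style robot brain.
    Assumed ratios: 20 linear processors per INT, 5 parallel per INT,
    synaptic INT divided by 2; command programming adds flat INT."""
    linear_int = linear // 20
    parallel_int = parallel // 5
    synaptic_int = synaptic // 2
    return linear_int + parallel_int + synaptic_int + command_int

# TL12 example from the post: 20 linear + 50 parallel + 7 synaptic,
# plus Full Command (+2), gives 1 + 10 + 3 + 2 = 16 INT.
```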

So my proposal actually has the ship's computer as 'stupider' (but far more reliable and survivable re: damage).

The cost is not casual, Cr810,000 for the brain alone (less software/chassis/power/interfaces, etc.), but cheap compared to the ship computers.

INT is not the governing limitation, Fundamental Logic is.

Low Data is the non-learning deterministic bit, TL8.

High Data, the robot can already learn, TL9.

Low Autonomous, the bot can take limited action on its own and sometimes 'figure out what you meant', TL12.

High Autonomous, the bot can normally figure out what you meant, TL13.

The Low and High AI (TL17+) has to do with creativity, originating ideas, drawing conclusions, and full 'personhood' and self awareness.

So, I am on firm ground here with what's possible if we take LBB8 as a guidepost.

I thought about integrating Robots and LBB2 computers once upon a time, simply using the Robots rules to build the ship computers, but decided the model/program/pricing-to-combat-effect paradigm was too deeply embedded, and went with separate tools for separate functions.

The ship's computer wouldn't be able to formulate a sneaky tactical plan, but could potentially optimize stealth configuration/emissions on command for instance.

HAL's lip reading and autonomous plans for dicing up Discovery's crew may not be possible without extensive 'tactical' programming giving him that capability and leaning towards that 'solution set', but you can certainly expect talking to the computer and having it do more smart things within its realm than I gather most players are used to.
 
Robot brains are quite a bit cheaper than ship's computers. You can assume that a ship's computer must take care of many more things at once than a robot brain.

I'm afraid I cannot find my papers about it, but many years ago (in MT times) I designed a fighter drone by using LBB8 for the brain and the same rules as told above for using it as a computer (according to MT:101 Vehicles, a robot brain can substitute for a computer and a crewmember), and the cost was on the order of 65-84 kCr (about the same as a Model 0 computer).

The problem is how to rate it for HG modifier purposes...
 
Robot brains are quite a bit cheaper than ship's computers. You can assume that a ship's computer must take care of many more things at once than a robot brain.

I'm afraid I cannot find my papers about it, but many years ago (in MT times) I designed a fighter drone by using LBB8 for the brain and the same rules as told above for using it as a computer (according to MT:101 Vehicles, a robot brain can substitute for a computer and a crewmember), and the cost was on the order of 65-84 kCr (about the same as a Model 0 computer).

The problem is how to rate it for HG modifier purposes...

Well, HG has stats for drones of varying TL; they are pretty basic and have only limited flexibility, but that might be a rough guide. However, as for the scheme of giving the computer an intellect based on its rating, as an extra I can see that being feasible.
 
Moore's law went bust last year.

They likely missed a dev cycle with the 2008 crisis, and on top of that it gets harder to do anything in silicon from here on out, meaning either a move to rare metals to get better performance or another basic technology.

Various non-volatile memory technologies can help, along with motherboard on a chip.
 
Robot brains are quite a bit cheaper than ship's computers. You can assume that a ship's computer must take care of many more things at once than a robot brain.

I absolutely do, again different tools for different purposes.

I'm afraid I cannot find my papers about it, but many years ago (in MT times) I designed a fighter drone by using LBB8 for the brain and the same rules as told above for using it as a computer (according to MT:101 Vehicles, a robot brain can substitute for a computer and a crewmember), and the cost was on the order of 65-84 kCr (about the same as a Model 0 computer).

The problem is how to rate it for HG modifier purposes...

The reason I ordered a set of Striker is to merge a robot brain with a powered suit with legs and powered arms/tentacles (not BD, but not mecha either, more the personal light tank), so the soldier has his robot buddy managing sensors and/or weapons use while being the base of fire for a fire team.

I'm figuring this is about right-sized for a lot of the covert-actions/colonial-unrest/portable-firepower low-level stuff, cheaper than a grav tank, that the players are more likely to be involved in, and it allows for that BD fun without breaking the TL curve.
 
Well, HG has stats for drones of varying TL; they are pretty basic and have only limited flexibility, but that might be a rough guide. However, as for the scheme of giving the computer an intellect based on its rating, as an extra I can see that being feasible.

I don't remember CT:HG having any rules for drones (MgT:HG does, but the thread title says CT/HG; maybe I should have specified, though...)

The reason I ordered a set of Striker is to merge a robot brain with a powered suit with legs and powered arms/tentacles (not BD, but not mecha either, more the personal light tank), so the soldier has his robot buddy managing sensors and/or weapons use while being the base of fire for a fire team.

I'm figuring this is about right-sized for a lot of the covert-actions/colonial-unrest/portable-firepower low-level stuff, cheaper than a grav tank, that the players are more likely to be involved in, and it allows for that BD fun without breaking the TL curve.

I don't own Striker, but I guess it should have rules for warbots, if the Zhodani are featured in it (as I guess they are).
 
Has anyone taken the size of Traveller ship computers, accounted for cooling and shielding, and done Tflop projections based on size, calendar date, and Moore's Law?

I don't know that I would want to exactly quantify the various models to specific teraflops; among other things, ship computers are likely not the Big Dogs of the computing world, more like the hardworking, grungy, blue-collar/hard-fighting brother.

Also, those program sizes seem odd compared to what they are doing, which I just take to be one of those 'its for the gameplay' design decisions.

If I were to try and match the Models to real life stats, guess I would do some sort of multiplier effect of model, tech level and capacity.

I would also treat storage as 'virtual memory', programs temporarily swapped out and not active but able to be brought active immediately, rather than 'on disk' as the original seemed to be shooting for.

Don't have time now, but I think I'll work something out if you like. I'd likely start with the Mk I fire control computer on the Iowa class and the AN/UYK series as examples of TL5 and TL6-7 Model/1-3 computers.

To me, there is no better model for ship's computers than ship's computers.

https://en.wikipedia.org/wiki/Mark_I_Fire_Control_Computer

http://digitalcollections.library.cmu.edu/awweb/awarchive?type=file&item=361154

http://vipclubmn.org/Articles/AnUyk43Computer.pdf

This is quite a lovely article on the development of naval computing and tactical control.

http://ethw.org/First-Hand:Legacy_of_NTDS_-_Chapter_9_of_the_Story_of_the_Naval_Tactical_Data_System
 
I don't know that I would want to exactly quantify the various models to specific teraflops; among other things, ship computers are likely not the Big Dogs of the computing world, more like the hardworking, grungy, blue-collar/hard-fighting brother.
...

Reliability, uptime, and ease of maintenance are a few points your hard-fighting brother needs. In commercial IT, manufacturers produced their more reliable, high-uptime product lines as well. Eventually, failover and other techniques overshadowed these product lines. These products were never bleeding edge.
 
Moore's law went bust last year.

Wasn't it just revised to reduce the rate of change?

Wouldn't his Second Law, Rock's Law, be even more relevant now anyway?

I did a fair bit of detailing with my group of players about their ship's computer, its behaviour and parameters. In short, though, it probably comes down to how one wants their game to play. 1977 in space? No problem. On track for The Culture? Fill your boots!
 
IBM just built 7nm processing chips and 1000x-faster non-volatile memory chips, this year. Maybe last year's "bust" was just a temporary blip. There's no technical reason that Moore's Law can't continue for hundreds of years.

Let's pretend.

Tianhe-2 in China is capable of 33.86 petaflops. It has 3.12 million cores and draws 24 megawatts including cooling. It occupies 720 square meters, let's say 4 meters high (I've seen pictures).

Can we safely say that cooling technology doubles in cooling-ability-per-size every two years, too? If so, then we can treat the whole thing as a unit. It's a big assumption, but let's pretend.

Every two years, that thing halves in size and keeps the same power.


0 years --> 720 sq m (30 m x 24 m)
2 years --> 360 sq m
4 years --> 180 sq m
6 years --> 90 sq m
8 years --> 45 sq m
10 years --> 22.5 sq m (less than 5m x 5m)
12 years --> 11.25 sq m
14 years --> 5.6 sq m
16 years --> 2.8 sq m
18 years --> 1.4 sq m
20 years --> 0.7 sq m (a square 33 inches on a side, pretty flat -- basically a big PC)
22 years --> 0.35 sq m
24 years --> 0.18 sq m
26 years --> 0.09 sq m
28 years --> 0.04 sq m
30 years --> 0.02 sq m (a square 5.5 inches on a side -- basically a tablet)
32 years --> 0.01 sq m
34 years --> 0.005 sq m (a square 2.8 inches on a side)
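The table above is just repeated halving from Tianhe-2's footprint; a few lines reproduce it, including the side length of the equivalent square:

```python
# Halve the footprint every two years, starting from Tianhe-2's ~720 sq m,
# and report the side of the equivalent square at each step.
def shrink_table(start_sq_m: float = 720.0, years: int = 34, step: int = 2):
    rows = []
    area = start_sq_m
    for year in range(0, years + 1, step):
        side_m = area ** 0.5
        rows.append((year, round(area, 3), round(side_m, 2)))
        area /= 2
    return rows

for year, area, side in shrink_table():
    print(f"{year:2d} years --> {area} sq m (square {side} m on a side)")
```

At the 20-year mark this gives about 0.703 sq m, a square roughly 0.84 m (33 inches) on a side, matching the table.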

Pardon me for converting some things to inches. I'm pretty comfy with metric, but as a dumb American, I still think naturally in the English measurement system.

The other way to do this is to take the 26 year mark, round a bit and say it's a 1-meter cube, and then double its computational power every two years.

100 years of doubling power every two years multiplies power by a factor of just over 10^15.

In 2016, we have a 34-petaflop (34 x 10^15 operations per second) computer that fits in a huge room.
In 2050, we can hold that same amount of computational power in our palm.
In 2150, that same handheld gives us 34 xennaflops (34 x 10^30 operations per second).
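The arithmetic behind that last jump checks out as 50 doublings over the century:

```python
# 100 years at one doubling per two years is 50 doublings.
doublings = 100 // 2
factor = 2 ** doublings
print(factor)  # 1125899906842624, i.e. a bit over 10^15

ops_2050 = 34e15            # the handheld 34-petaflop machine, ops per second
ops_2150 = ops_2050 * factor  # on the order of 34 x 10^30 ops per second
```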

And that isn't even considering quantum computing leaps.
 
I disagree with the hundreds-of-years assertion due to heat/electron limits already in sight with current designs, coupled with cost issues (silicon isn't the best electronics material at all; it's just insanely cheap for what we have gotten out of it).

Plenty of other directions to go, wetware like Cordwainer Smith, laser cube/boards that end up looking like HAL, motherboard on a chip, possibly LCD like one novel had, as noted quantum computing, etc. but it may be an interruption while we reset to a new technology base.

Also, again, supercomputing for specialized engineering/science machines is not what we are putting in our hulls; these are ship computers built to solve common ship-operations problems and also take extraordinary punishment before failing.

Two different tools with as different 'problem sets' as ship computers and robots.

Finally, flops are not the be-all end-all of computing power; after all, that is just measuring the capacity for one kind of calculation.

Effective instruction execution, I/O, communication rates, the types of problems the computer is built to work on, and anticipatory pre-processing/storage retrieval/resource allocation are all elements that can greatly affect the capacity of a given machine.
 
IBM just built 7nm processing chips and 1000x-faster non-volatile memory chips, this year. Maybe last year's "bust" was just a temporary blip. There's no technical reason that Moore's Law can't continue for hundreds of years.
Yes, there is a reason: continued decreases in circuit size increase electron tunneling errors, and continued increases in overall chip size increase the error rate in production.
 