
Ship Computer Size

I started getting the itch to run CT again. While poring over the books to refresh my memory on how everything fits together, I was hit with the same twitch that hit me the last time around. Namely, as an IT guy, the idea of monster-sized computers with severely limited memory and storage really rubs me the wrong way.

How much would updating the computer rules change the structure of ship building/combat/etc.? Obviously it would make a bit of a difference with ship construction (reducing the tonnage by up to 20 tons).

I guess the real question is, would it be worth it to work up the house rules, or am I biting off more than I realize with this?
 
First, what makes you think the memory is severely limited? Running two programs can be very taxing - if the programs require enough memory and computing power. Personally, I don't see the jump calculation as being a simple geometry problem. (I think it involves relative energies - so you have to calculate how to arrive at 0 energy relative to the new system.)

Second, there's nothing saying the computer displacement is all circuit boards. It includes some access space (which could be as much as 50% of the space, if you want access to all sides of the machinery). This is especially true, given that the Vilani are 10x worse than any OSHA bureaucrat when it comes to rules and regs. And, OSHA/code requires 1m leeway around the front and the back of any rack. It also could include some self-contained "life support". (Otherwise, when your life support dies - so does your computer.) It probably includes some Faraday protection, as well.

This ain't the first time this has been questioned. :) One of the answers previously given is that jump space is particularly hard on ICs, and that necessitates a different structure for the computer. Some have even suggested that tubes are required to deal with jump (or the energy discharge on entry and exit).

(EDIT: BTW, I am involved in a large, complicated, system of systems that uses several dTons of rack space with virtualized servers and such, and I can imagine a lot of redundancy and such built into a ship's computer.)
 
Option 1: (the Mongoose option) Computers take 0 tons for the first and 1 ton each for any additional ones, mostly for the master console. Frees up a few tons on certain ships.

Option 2: The HG option - ignore the program rules

Option 3: Total Bonus Option - Ignore the detailed computer program rules.
Divide the model number between Jump/Generate, Maneuver/Evade, Predict, Multi-Target.
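For anyone wanting to prototype Option 3 at the table, here's a minimal sketch in Python. The four function names come from the option above; the point-pool and validation logic are my own assumption about how the split would work, not anything from a rulebook:

```python
# Option 3 sketch: treat a computer's model number as a pool of points the
# crew divides among the four combat functions each turn.
# Function names are from the thread; the validation rules are assumed.

FUNCTIONS = ("Jump/Generate", "Maneuver/Evade", "Predict", "Multi-Target")

def allocate(model_number, allocation):
    """Check that a split of DM points is legal for this computer."""
    if set(allocation) - set(FUNCTIONS):
        raise ValueError("unknown function in allocation")
    if sum(allocation.values()) > model_number:
        raise ValueError("allocation exceeds computer model number")
    # Unallocated points are simply unused this turn.
    return {f: allocation.get(f, 0) for f in FUNCTIONS}

# A Model/4 computer splitting its points between evasion and gunnery:
print(allocate(4, {"Maneuver/Evade": 2, "Multi-Target": 2}))
```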

Option 4: Computers are micro-vacuum-tube based in order to survive jump. Tonnages, prices, and capabilities are correct as written.
 
When Traveller was first written, the PC did not really exist, and the Cray was the standard for supercomputers. Presently, the laptop that I am typing this on is probably running faster than the Cray of 1982. In 1992, you could not export IBM486 processors running at about 100 MHz to the former Soviet Union. Now, the processors in your cell phones are running faster than that. Based on all of that, the computers in Traveller should get a thorough revision. I am going with smaller, more capable, and a lot cheaper ones, along with a lot cheaper software packages, at least for commercial ships. Military ships are another breed of creature entirely.
 
Second, there's nothing saying the computer displacement is all circuit boards. It includes some access space (which could be as much as 50% of the space, if you want access to all sides of the machinery).

A workstation alone could take up ½ a dT.

This ain't the first time this has been questioned. :) One of the answers previously given is that jump space is particularly hard on ICs, and that necessitates a different structure for the computer. Some have even suggested that tubes are required to deal with jump (or the energy discharge on entry and exit).

An explanation that unhappily founders on the twin facts that computers for non-jump space vessels are just the same as computers for starships, and that fiber-optic backup computers survive trips through jumpspace perfectly well.

(EDIT: BTW, I am involved in a large, complicated, system of systems that uses several dTons of rack space with virtualized servers and such, and I can imagine a lot of redundancy and such built into a ship's computer.)

It would be better, if such be the case, to set out the cost and tonnage of one computer and then note that starships normally go for triple redundancy (or whatever), thus allowing special designs where those safety concerns have been sacrificed for whatever overriding concerns.


Hans
 
I started getting the itch to run CT again. While poring over the books to refresh my memory on how everything fits together, I was hit with the same twitch that hit me the last time around. Namely, as an IT guy, the idea of monster-sized computers with severely limited memory and storage really rubs me the wrong way.

MGT moved a bit into the late 20th century by eliminating the tonnage and making mass storage unlimited, but it still has CPU limitations on running programs.
 
Hi,

While I understand the concerns about limited memory, I've always assumed that the large space requirements for computers could be for things like UPS's, redundant server rooms and the extra ventilation/cooling that those spaces would likely require.

I know at the old office where I worked, they ran some webservers in addition to the network servers for the office, and they eventually ended up blocking off a space about the size of a standard office (maybe about 10'x10' or so) and made it a server room. In Traveller terms, a 10' x 10' space would be about 2 dtons (assuming a standard deck height).

As such, devoting a few dtons for a small ship, or even more for a starship of tens or hundreds of thousands of tons, doesn't necessarily strike me as too bad (in concept).
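The 10' x 10' estimate above checks out. A quick conversion, assuming a 3 m deck height and 14 cubic metres per displacement ton (some editions use 13.5, which doesn't change the rounding):

```python
# Rough check of the "10' x 10' server room ~ 2 dtons" estimate.
# Assumptions: 3 m deck height, 14 m^3 per displacement ton.

FT_TO_M = 0.3048
M3_PER_DTON = 14.0

floor = (10 * FT_TO_M) ** 2        # 10' x 10' floor area in m^2
volume = floor * 3.0               # times a 3 m deck height
dtons = volume / M3_PER_DTON

print(f"{volume:.1f} m^3 is about {dtons:.1f} dtons")
```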
 
When Traveller was first written, the PC did not really exist, and the Cray was the standard for supercomputers. Presently, the laptop that I am typing this on is probably running faster than the Cray of 1982. In 1992, you could not export IBM486 processors running at about 100 MHz to the former Soviet Union. Now, the processors in your cell phones are running faster than that. Based on all of that, the computers in Traveller should get a thorough revision. I am going with smaller, more capable, and a lot cheaper ones, along with a lot cheaper software packages, at least for commercial ships. Military ships are another breed of creature entirely.

Let's see... Cray-1... 64-bit, 8 megs of RAM, 160 MIPS maximum, and 130-250 MFLOPS - from a single processor with 12 functional units.

The Galaxy S III pushes about 1.5 GFLOPS and carries 1 GB of RAM. And it does this with 4 CPU cores plus a multi-core GPU.

The current cell-phones outperform the Cray.
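Taking the quoted figures at face value, the gap works out to about 6x on floating point alone:

```python
# Quick ratio using the figures quoted above (taken at face value).
cray_1_mflops = 250        # upper end of the quoted Cray-1 range
galaxy_mflops = 1500       # the 1.5 GFLOPS quoted for the phone

print(f"phone / Cray-1 is about {galaxy_mflops / cray_1_mflops:.0f}x")
```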
 
Can you imagine computing power ~2K years from now? And in what sized package? :eek:

Probably only about 100x current on the raw numbers. At that point, limits of physics preclude further improvements. Raw power per processor has been slowed in growth.

Superior architecture, however, will possibly push things up to about 3-10x that by optimization of 3D hardware, and improved QC.
 
Probably only about 100x current on the raw numbers. At that point, limits of physics preclude further improvements. Raw power per processor has been slowed in growth.

Superior architecture, however, will possibly push things up to about 3-10x that by optimization of 3D hardware, and improved QC.

How about something like quantum architecture? Or, are you factoring that in the "3-10x" part?

In any event, pretty wild stuff.
 
Probably only about 100x current on the raw numbers. At that point, limits of physics preclude further improvements. Raw power per processor has been slowed in growth.

Superior architecture, however, will possibly push things up to about 3-10x that by optimization of 3D hardware, and improved QC.


QC-quantum computing? That will likely be where the next big jump comes in computing.
 
How about something like quantum architecture? Or, are you factoring that in the "3-10x" part?

In any event, pretty wild stuff.

Quantum Architecture probably won't be a miracle... if it even stabilizes as a viable technology rather than a purely mathematical branch.

Neural networks on-chip are far more likely to make inroads - and 3D processor architectures are darned near essential for them to work effectively on large scales. Voice recognition and text reading are two areas where massive neural net processing is a highly viable solution.

Heck, the TL differences of the main computers really don't make much sense to me, either, since the odds are that there won't be a whole lot of improvement past TL10... and if there is, it's most likely to be reductions in manufacturing cost (by reduced waste and improved automation).

I accept them as an artifact of some hidden technology essential to jump drive astrogation...
 
I'm thinking it's the optical processors that will be the next thing.

An electrical transistor switches in about 7 nanoseconds, or about 2 meters at lightspeed; a photonic switch switches as soon as the interference pattern or polarization effect kicks in, however they are doing their switching. Ideally they use light to effect the switching, so the switching speeds may be <1 cm at lightspeed, for a ~200x speed improvement over a silicon electrical transistor.

So my 3.6 GHz overclocked i7 6-core proc I'm using today becomes a 720 GHz processor when translated into photonics, placing petaflop performance into the realm of workstations and teraflop into mobile devices.
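The arithmetic here holds up; a quick check of the light-travel distances and the resulting speedup:

```python
# Back-of-envelope check of the photonic-switching numbers above.
C = 299_792_458                    # speed of light, m/s

electrical_switch_s = 7e-9         # ~7 ns transistor switch time
print(f"light travels {C * electrical_switch_s:.2f} m in 7 ns")   # ~2.1 m

photonic_switch_s = 0.01 / C       # switch "within 1 cm at lightspeed"
speedup = electrical_switch_s / photonic_switch_s
print(f"speedup is about {speedup:.0f}x")                         # ~210x

print(f"3.6 GHz x 200 = {3.6 * 200:.0f} GHz")                     # 720 GHz
```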

The next step is to co-locate memory with the processors - large gobs of it, not the paltry few MB of cache we have today, but TB-scale memory as the cache.
Long-term storage is in 3D crystals with no moving parts, like our SSDs of today (EqualLogic arrays with 48 SSD drives are on the market now). The access speed of these items gets faster the smaller they are and the closer they sit to the processors needing the storage. So you print the processor on the six faces of the 3D storage cube, and the cube serves as the cache for the proc: there you have your program loaded into the processor, with additional storage cubes a few cm away holding the other programs that do not fit.
 
The computer in CT was also an abstraction for the ship mounted sensors.

Think about what the ship's computer has to do.

It has to monitor the operation of a multi-MW fusion reactor, it has to monitor the artificial gravity and acceleration compensation, it has to be able to solve the n-body problem in seconds, and it has to be able to solve the jump space version of the n-body problem in minutes.

That sort of stuff is going to require a bit more than a smart phone... ;)
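To make the "solve the n-body problem in seconds" point concrete, here's the O(n^2) heart of a direct n-body step in Python. Every body feels the pull of every other body, which is why the work explodes as you add bodies; this is illustrative physics, not any edition's actual astrogation rule:

```python
# Illustrative O(n^2) core of a direct n-body step - the kind of work the
# thread suggests a ship's computer grinds through for astrogation.
# Bodies are (mass_kg, x, y, z); output is the acceleration on each body.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(bodies):
    acc = []
    for i, (_, xi, yi, zi) in enumerate(bodies):
        ax = ay = az = 0.0
        for j, (mj, xj, yj, zj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy, dz = xj - xi, yj - yi, zj - zi
            r2 = dx * dx + dy * dy + dz * dz
            f = G * mj / (r2 ** 1.5)   # G*m/r^2, with an extra /r to scale dx
            ax += f * dx; ay += f * dy; az += f * dz
        acc.append((ax, ay, az))
    return acc

# Sun and Earth: Earth's pull toward the Sun has magnitude ~5.9e-3 m/s^2,
# pointing in the -x direction here.
sun   = (1.989e30, 0.0, 0.0, 0.0)
earth = (5.972e24, 1.496e11, 0.0, 0.0)
print(accelerations([sun, earth])[1][0])
```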
 
Wow! Hadn't expected 2 pages(ish) of replies in just a few hours. Definitely ran into some info here that I hadn't considered before when thinking in terms of raw tonnage. This is me thinking out loud, since it's well past time I crashed for the night, but here goes...

1. The space/tonnage includes not just the computer/server, but a backup system, at least one UPS, life support systems for the server room, Jefferies tubes (to borrow a term), the terminals used to access the ship's systems, all the wiring involved in linking this together, as well as the sensors linking everything inside/outside the ship together.

2. While the systems themselves have gotten smaller and faster (photonic processing becoming necessary at sub-light speeds and beyond), programs have done what they always do, and have continually expanded to fill the available space/storage/processing power.

3. ... I know there was a 3, but I'm on hour 21 and just had a mental page fault.

Other than 3, does that sound about right? See, now I want to run the calculations for what we'd expect the weight of things like the UPS and cabling would be. Till tomorrow all, and thank you for the incredibly fast responses.
 
As a related note, in the '90s, while playing an MT campaign, my players wanted an engineer robot and we designed it with the Book 8 rules.

When designing the robotic brain, one of my players complained about its size, saying that, judging by the size, we were building a mechanical brain instead of a microchip one.

Then, when we added the rest of the robot (arms, wheels, and so on), it ended up as an R2-D2-sized robot, and he stopped complaining. So we assumed that the robotic brain included most of the controls and that, while the brain itself was oversized, the whole robot was fine.

Maybe something like this happens with the computers, where the size (displacement) used for them also includes the space needed by such things as consoles (they can only be so small if someone has to read them), interfaces (ditto, if someone has to use them), and workstations, aside from the other things already said in this thread.
 
Fibre optic cabling is less dense than copper, thinner, and typically carries three orders of magnitude more data per cable.
As tech level increases, you go from analog serial over a low-quality copper line (300-baud phone modem from 1975, anyone?), to parallel data over short distances via ribbon cable, to high-speed serial over medium distances via high-quality shielded cable (1981 coax, Token Ring at 1.5 Mb), to 10BASE-T, 100BASE-T, gigabit, and with fibre optics 10 gigabit and higher is possible. Larger data streams are handled by aggregating multiple interfaces and cables. The fibre optic trunk in an undersea cable is not one fibre - it's thousands, distributed between hundreds of servers on both ends, which route to all of the myriad local fibre optic trunks, on to the local ISPs, and from there to the end users/generators of content.

So we have the ship's computer core with fibre optic ports to the outside; it might be 1, 2, 4, 8, 16, or more based on the computer model number, each model having double the number of I/O channels of the previous one. These ports feed into a router that connects to the various equipment, sensors, and control panels throughout the ship. Civilian ships may use wireless hotspots for mobile devices to connect into the ship's systems, reducing the need for physical wiring. By defining the computer model by the number of data ports it can support, we get a sense of how each model's capability to handle inputs scales. A Model 9's 512 ports could eat half of one of the undersea fibre optic cables, and could clearly manage a battleship's myriad devices and data streams.
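The port-doubling scheme sketched above reduces to a power of two per model; a one-liner, assuming ports double with each model number as proposed:

```python
# Port-doubling scheme from the post above: each computer model has twice
# the I/O ports of the previous one, so a Model/n core exposes 2**n ports.
def ports(model_number):
    return 2 ** model_number

for m in range(10):
    print(f"Model/{m}: {ports(m)} ports")
# Model/0 has a single port and Model/9 has 512,
# matching the battleship example in the post.
```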

Computer systems designed for redundancy have no single point of failure that knocks the computer systems offline; triple redundancy has no two points of failure that can knock them offline. That is, two ports are required on each computer to establish double redundancy - a Model 0's single port can't support it. A Model 1 cannot support triple redundancy, as that needs three ports. The cabling and hardware needed for double redundancy is twice that of a single non-redundant system, in that every device has two data paths to it, and three data paths for triple redundancy. Military systems will route these paths through different cable runs or use different nodes to broadcast the signal. Every device needs two/three interfaces to support the ship's level of redundancy. This does not mean there are three EMS sensors; it means that the one sensor has three data ports, and if there is a second or third sensor installed, each of them would have the extra interfaces as well.

UPS battery systems get smaller/lighter for a given power capacity as tech increases. (I realize this is CT, but T4's FF&S Table 224 gives you battery sizes and costs per MWh by TL to get you into the ballpark.)
 
Warwizard: double redundancy should require 1 port on each machine, not two, for the redundant interface. Triple should require two each - one to each other computer.

Some ascii art should help.
Code:
             S                S
A=========== Y   A=========== Y
| \          S   |            S
|   C======= T   |            T
| /          E   |            E
B=========== M   B=========== M
             S                S
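Following the correction above, the port arithmetic for a full mesh comes out simply: with k computers cross-linked, each spends k-1 ports on its peers, and each device carries one data path per level of redundancy. A tiny sketch (the function names are mine):

```python
# Port arithmetic from the correction above: k computers in a full mesh
# each need k-1 ports for their peers, and every device needs one data
# path per level of redundancy.

def peer_ports(computers):
    """Ports each computer spends on links to every other computer."""
    return computers - 1

def device_paths(redundancy):
    """Independent data paths each sensor/panel needs."""
    return redundancy

print(peer_ports(2))   # double redundancy (A-B): 1 peer port each
print(peer_ports(3))   # triple redundancy (A-B-C mesh): 2 peer ports each
print(device_paths(3)) # a triply redundant ship wires 3 paths per device
```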
 
Warwizard mentions explicitly what I didn't, as part of the package: infrastructure like routers. Especially routers. You might have a cute little box - all 'aerodynamically shaped' and such - sitting on your desk or a shelf. This system of systems I work with has something more like this. Yeah, that's 10U or 14U or some such, and it's some of the newest stuff. (My little 4-port is jealous, now.)
 