
Revamp of ships' computers

Exactly. There might be some size/weight loss at the early TL's, but at a certain point -- maybe TL9 or 10? -- the size/weight should stabilize, and just the performance keeps going up.
 
Originally posted by atpollard:
I like the size of Traveller starship computers.
Me, too. My vision of ships' computers has been forever burned into my brain by the computer rooms aboard the Liberator and the London of Blake's 7 infamy.
 
Originally posted by BillDowns:
This problem of increasing TL only making something available, but not more effective, smaller, or less pricey is a systemic problem with CT. Jump drives, maneuver drives, etc. all suffer from that problem. My solution is to reduce the size and/or cost, usually both, by 10% per succeeding TL.
It's a nice idea, but presumably a pain in the neck to own. Imagine squeezing those TL14 jump drives into your Type S to make it jump 6. Lovely until they need some maintenance. Then you'll have to take them to a TL14 starport for the work to be done. Any lower, and you can't get the parts. Even if you could, the workshop isn't equipped to fit them & the engineers don't understand how to service them.

You could literally be stranded at a Class A starport with TL12, and have to pay to get your Scout shipped as freight back to somewhere they can mend it. Kinda like owning a Maserati (or so I hear).
 
Originally posted by Tanstaafl:
I have Book 2 and The Traveller Book, and the only reference to computer media is the self-erasing cassette for navigation flight plans. I don't see where it says "magnetic tape", and even if it is, why is that the only media available?
It's a funny thing, but 20 years ago I bought software on audio cassettes, which would fit in a pocket, and now I get it on DVDs, which don't.

Granted the chart-shop in the starport could sell my flight plan on the equivalent of a memory stick / SD card, which is tiny, but if they wanted to put it on a shelf in a shop then it'd be in a cardboard box about 6"x4"x1" minimum. That's to allow room on the box for a scannable barcode, attractive pictures & descriptive text, and to avoid shoplifting.

Similarly, the volume we currently need for a desk to support a keyboard and screen could be replaced by a wall-mounted flat screen and voice control. But by TL14 we'll use holographic 3D displays which need a volume to present the data in. And the friendly AI that discusses the flight plan with you in Galanglic occupies more data storage on the cassette than the actual coordinates.

I'd compare it with Microsoft Office, which gets bigger in every edition and needs a more powerful computer. Not because typing a letter becomes a more difficult activity, but because it now includes spelling-correction and a talking paperclip to teach me how to use the software. So by TL14 you'll need a teraflop supercomputer to write a shopping list. You'll probably dictate the shopping list, and it'll remind you to buy more beer for the poker session in your calendar. Luckily it'll be small enough to fit into the same space under your desk.
 
So the net-net is that we are back to the same spot: lots of complaints and few ideas that stand up to scrutiny. My two cents is this: personal computers get smaller (the only things limiting their size are the screen and the keyboard), but office/building computers have stayed roughly the same size while their capacity has grown.

I work for a global company that handles large amounts of data every day, and our mainframe fits in a 10 x 12 room. That system handles all of our workload, from our 26 manufacturing sites to ten times as many sales locations. We run eight different heavy programs at once, all the time. Our systems are about 5 years out of date; we are only now catching up and trying to reduce the number of programs. I guess my point is this: it takes a room-sized computer to run a global company. It should not take one to run a starship.
 
I could disagree with the idea that a room-sized computer can run a company but a starship would need less (if I am reading that correctly). A starship navigation system would have to calculate real-time distance/time/displacement for transit, and maintain immediate-response systems for life support, regular space transit, and sensors. A business system does not need significant fail-safes beyond good backups and perhaps a backup system to fall back on.
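Just for a sense of scale on the distance/time part: the usual accelerate-to-midpoint, flip, decelerate transit works out to t = 2 * sqrt(d/a). A toy sketch (the function name and the numbers are purely my own, for illustration only):

code:
import math

def transit_time(distance_m: float, accel_ms2: float) -> float:
    """Constant-boost transit time in seconds: accelerate to midpoint, flip, decelerate."""
    return 2.0 * math.sqrt(distance_m / accel_ms2)

AU = 1.496e11   # one astronomical unit, in metres
G = 9.81        # 1 G of acceleration, in m/s^2

hours = transit_time(AU, G) / 3600
print(f"1 AU at a steady 1 G: about {hours:.0f} hours")   # roughly 69 hours

The raw arithmetic is trivial; it's doing it continuously, reliably, and alongside everything else that takes the hardware.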

Heck, it took three supercomputers 11 months to get the factors of 2^2039 - 1. A nav comp can calculate the jump coordinates in a few hours, if I recall correctly, and at a guess that would be significantly more complex than factoring a really big number.

I agree with Tinker: there is a lot more supporting stuff going on, especially in a starship computer, than a bare-bones system might need. And perhaps they are running the latest MS operating system - since the requirements seem to go up exponentially it could require a planet-sized computer to run!
 
Originally posted by CaptBrazil:
I could disagree with the idea that a room-sized computer can run a company but a starship would need less (if I am reading that correctly). A starship navigation system would have to calculate real-time distance/time/displacement for transit, and maintain immediate-response systems for life support, regular space transit, and sensors. A business system does not need significant fail-safes beyond good backups and perhaps a backup system to fall back on.
Total agreement, except I feel that TL should reduce size and/or cost of systems. An IBM 3083 central unit of the late 70's cost about $1.5M, was roughly 1m wide x 2m high x 4m long, required a water chiller to cool it, and was, IIRC, 10 MHz, 64k cache, 1 Gb main memory max. A 9672 of 5 years ago cost about $250k, is 1m wide x 2m high x 2m long and is air-cooled. Processor is 2+ GHz, 64k L1 cache, 8M L2 cache, and I'm not sure about main memory, but at least 4 Gb, maybe more. And the 9672 is faster at most things.

WalMart's primary data center has a cluster of 10 mainframes bigger than a 9672, BTW.

...snip... And perhaps they are running the latest MS operating system - since the requirements seem to go up exponentially it could require a planet-sized computer to run!
Jeez, how would you like having to reboot in the middle of a fight?
 
Originally posted by Tinker:
It's a funny thing, but 20 years ago I bought software on audio cassettes, which would fit in a pocket, and now I get it on DVDs, which don't.
For all of the "gosh, wow" factor of new memory storage devices, I think that the old 3.5" floppies were the optimal size medium. They had a durable case (unlike the old 5.25" floppies) and they fit nicely in a shirt pocket.

The best part of memory sticks is the fact that they can survive a trip through the washer and dryer without losing data - but they seem a little too small for my taste.
 
Originally posted by CaptBrazil:
I could disagree with the idea that a room-sized computer can run a company but a starship would need less (if I am reading that correctly). A starship navigation system would have to calculate real-time distance/time/displacement for transit, and maintain immediate-response systems for life support, regular space transit, and sensors. A business system does not need significant fail-safes beyond good backups and perhaps a backup system to fall back on.

Heck, it took three supercomputers 11 months to get the factors of 2^2039 - 1. A nav comp can calculate the jump coordinates in a few hours, if I recall correctly, and at a guess that would be significantly more complex than factoring a really big number.

I agree with Tinker: there is a lot more supporting stuff going on, especially in a starship computer, than a bare-bones system might need. And perhaps they are running the latest MS operating system - since the requirements seem to go up exponentially it could require a planet-sized computer to run!
As previously stated, the ship's computer does not handle the life support systems, the engines, or the weapons. These systems are minimally affected by the destruction of the ship's computer. So the only real task it has to handle is navigation, and you don't need a room-sized computer for that. Also, forget about using MS for an operating system. Go with a Mac. I'm sure my MacBook Pro could handle the calcs right now.
 
Your MacBook Pro may be able to handle the necessary calcs... heck, any computer can handle the necessary calcs. The issue is how fast it can do those computations and how accurate those computations are. I'm sure we all know machine eps is different on older machines, so accuracy is an issue. And without a) a great processor, L2 cache, and RAM, and b) AN EFFICIENT, ACCURATE ALGORITHM to compute the necessary results, the output could end up very dated by the time it is spit out. The OS has nothing to do with this conversation. I'm actually surprised no one has advocated one of the Linux OSes.
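To put a number on the machine eps point, here's a toy example of my own (nothing to do with any actual nav software): the same running total drifts badly in single precision, and a better algorithm (compensated summation) fixes most of it without any extra hardware.

code:
import numpy as np

print("float32 machine eps:", np.finfo(np.float32).eps)   # ~1.2e-07
print("float64 machine eps:", np.finfo(np.float64).eps)   # ~2.2e-16

def naive_sum(values):
    # Straightforward accumulation; rounding error piles up as the total grows.
    total = type(values[0])(0)
    for v in values:
        total += v
    return total

def kahan_sum(values):
    # Compensated summation: carry the rounding error forward explicitly.
    total = type(values[0])(0)
    c = type(values[0])(0)
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

# A million tiny increments of 0.0001; the answer should come out to 100.
vals = np.full(1_000_000, 0.0001, dtype=np.float32)
print("naive float32:", naive_sum(vals))    # visibly off
print("kahan float32:", kahan_sum(vals))    # very close to 100

Same processor, same OS - the precision and the algorithm are what decide whether the answer is usable.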

I have a very efficient (handcrafted) laptop right now that I use to run my calculations on (I am a numerical analyst). Right now, it outstrips our campus mainframe by a large margin (calcs that take hours on the mainframe typically take minutes for me). I use XP as my OS. No crash issues.

Now, if the assumption is that literally ALL the ship's comp has to do is navigation, then we should be able to deal with this issue with a laptop of similar power to what I have. Even if some dtons are then added for avionics wiring and interfaces, that is still considerably smaller than what's listed.

The computer control system, I suspect, is closer to the TNE situation, where there are separate computers running the various regions... i.e. one or two comps running engineering, one comp per turret/battery running gunnery, one running life support, and a separate main computer that holds most of the library data (star charts, for example) and deals with everything else.

In light of this kind of situation, it becomes easily explainable why so many dtons are used up by computers, why the cost is what it is, and why things like engineering, gunnery, and life support are minimally affected by the loss of the main ship's computer.
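Something like this toy model is what I have in mind (purely my own illustration - the class and the subsystem names are made up): knock out the main computer and the subsystems keep ticking over on their own local comps, but you lose the library data and the jump plots.

code:
class ShipComputers:
    def __init__(self):
        self.subsystems = {            # independent local computers
            "engineering": True,
            "gunnery_turret_1": True,
            "life_support": True,
        }
        self.main_computer = True      # library data, nav, coordination

    def hit_main_computer(self):
        self.main_computer = False

    def subsystem_operational(self, name: str) -> bool:
        # Local control keeps working without the main computer.
        return self.subsystems.get(name, False)

    def can_plot_jump(self) -> bool:
        # Jump plots need the library data and nav software on the main box.
        return self.main_computer

ship = ShipComputers()
ship.hit_main_computer()
print(ship.subsystem_operational("life_support"))  # True: still running locally
print(ship.can_plot_jump())                        # False: no jump plots for now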
 
Originally posted by Renard Ruche:
RoS:
You might find this thread helpful... :rolleyes:
http://www.travellerrpg.com/CotI/Discuss/ultimatebb.php?ubb=get_topic;f=46;t=000093;p=3#000037
You're going to have to help me out here.

What does the 1130 topic--about what would have happened with Dulinor's coronation fleet attack if Virus had never existed, and the fleet hadn't gone haring off after it but had proceeded properly straight for Capital/Core/Core--have to do with the Revamp of ships' computers topic?
 
Originally posted by RainOfSteel:
Originally posted by BillDowns:
An IBM 3083 central unit of the late 70's cost [...]
Worked on one! :D In 1994, no less.
Big sucker, wasn't it? :D

Mainframe-wise, I've worked on a militarized version of a Univac 1108, and on IBM 4341, 4361, 4381, 3083, 9612, and 9672 mainframes. I would love to work on a z370. (How do you put in a drool??)
 
Originally posted by BillDowns:
Originally posted by RainOfSteel:
Originally posted by BillDowns:
An IBM 3083 central unit of the late 70's cost [...]
Worked on one! :D In 1994, no less.
Big sucker, wasn't it? :D

Mainframe-wise, I've worked on a militarized version of a Univac 1108, and on IBM 4341, 4361, 4381, 3083, 9612, and 9672 mainframes. I would love to work on a z370. (How do you put in a drool??)
The 3083 (4 CPU) was rolled out about four months after I arrived and was replaced by a 3084.

The 3084 (3 CPU) lasted from around Nov-1994 to approx. Jun-1996, when it was replaced with, I believe, a 9672 (2 CPU). That was there until I left in 1999.

We lost a CPU with each move, but gained tremendous horsepower.

The move from the 3083 to the 3084 required an entire team of 16+ individuals who tiger-teamed the job in 36 hours. Some were coolant techs who came in, pulled up the floor panels, and repiped the cooling system.

They were putting the main cabinets together when they suddenly discovered that the main "grouper" cables (high-speed data transmission for communication between CPUs in different cabinets) were not present. They had to do counter-to-counter airline delivery shipping to get a pair from that mainframe's original location (they'd been left behind by accident).

IIRC, the setup was like this (ASCII art to follow):

Note: Each # is the size of a 7-foot-high large fridge.

code:
# | #########
#
# | #########
----- -------

#
##
##
# ###

# ###


##### #
#
##### #

#####

#####

##

The top vertical set of 3 symbols was the cooling pump.

The top two horizontal sets of 9 symbols were the uninterruptible power supply (UPS).

The rows of hyphens and vertical lines represent walls, as the top two sets of equipment were in different rooms. The pumps made a huge amount of noise (thunderous).

The odd offset cross-shape below the UPS was the mainframe.

The lone symbol was the master start-up CPU that controlled mainframe bootup and shutdown.

The two horizontal sets of 3 symbols were the power conditioners; they were there to produce absolutely smooth electrical power for the mainframe. (Think of them as gigantic and spiffy power supplies.)

The bottom four horizontal rows of symbols were DASD (Direct Access Storage Device) strings. Each was filled with hard drives, and each hard drive was the size of a suitcase and weighed 70 lbs. I should know: when a service tech showed up to replace one, he had to co-opt me to help him lift the old one out and the new one in (it wasn't that 70 lbs. was too heavy, it was that they were awkward and placed oddly in the cabinets). I believe each HD unit had a capacity of 250/350 MB, or something like that (I can't remember for sure).

The vertical set of three symbols next to the DASD strings was the DASD controller.

The bottom two symbols were the . . . hold on to your socks . . . reel-to-reel tape drives. Used in multiple daily operations, no less. Including for transfers to a Unix server (which had a specialized SCSI reel tape drive on a flatbed mounted in a rack next to the servers). When asked why they just weren't ethernetted/token-ringed together, the response was, "Do you have any idea how expensive that peripheral is for a mainframe," or, "The mainframe is just too old." Both were untrue, and I always laughed my rear end off, privately, of course.

The mainframe was connected to the DASD controller by 36 specialized cables, each about one inch thick and typically, I think, about 10-15 yards long, with giant hand-sized plugs at each end.

The mainframe also had 36 extra computers called "channels" (on top of the 3 main CPUs). Each channel computer CPU ran communication requests from the mainframe to storage over one of the 36 cables connected to the DASD controller (at the time, 4MB/sec. was each cable's max speed).

This gave the system an aggregate transfer capacity of 144MB/sec. That was quite a bit back in 1994.

It also wasn't a "hypothetical" maximum, where you didn't actually achieve that limit (like ethernet or modem "so-called" maximum speeds). You actually got that transfer rate.

Because there were 36 extra computers asynchronously handling all I/O requests, the 3 main CPUs were left to do real work.

That mainframe, small and limited as it was for the breed (the 3084 could be expanded quite a bit over what we had), was extremely powerful. It could handle 500 concurrent users plus continually executing batch jobs.

Oh, and I think I missed out discussing one, and possibly two, cabinets and their functions. (I'm thinking there may have been a cabinet that received the underfloor cooling pipes, and then redistributed them.)
 
We had around six 3375 DASD units on ours - 4Gb each. Plus a pair of 3390 tape cartridge drives. And we did have the Ethernet adapter, with 3 Novell SNA servers (or whatever Novell called them) to service the roughly 500 PCs running 3270 emulation.

The 9672 - last I heard, about 1 1/2 years ago - had 1 Gb main memory, 11 3385 (??) DASD drawers, 1 Shark with 2 Tb hooked up to 10 channels, 2 3395 multi-cartridge tape drives, and 2 Ethernet adapters with IP support. Single processor, but 2+ GHz CPU & bus speed. And all for under $500k, supporting almost 1,000 PCs.

Can you imagine how many Citrix servers it would take to replace that?
 
Computer idea

OK, let's assume a Model 1 computer is a multi-CPU, multi-core unit. That computer has to oversee all ship functions. Yes, individual light switches and environmental controls will still function, but if the main computer goes down, central control of those systems also goes down. Think of having an automated sprinkler system: you can still run around and shut down or open valves manually, but at central control you can open and close each zone with a command or program. This allows combat damage to appear not to immediately take life support offline just because the computer was hit; however, if something in life support goes wrong, the computer can't fix it, so in the end we are the last backup system on the ship.

If we assume that the computer sizes in canon Traveller account for human spaces, so the PC or NPC can be that last backup system, then the size and space required are accurate.

But what happens if you put two Model 1 computers on the ship, with one of them dedicated to only one job, i.e. a fire control computer or a jump computer? I think it should be able to perform faster than normal at that one task. I also think that if you "overload" a computer it should slow down but still get the job done... for example, a Model 1 computer calculating a jump-2 would require at least twice the time if it was also doing the rest of its normal functions.

Diverging for a moment, I like the T20 idea of breaking out sensors, commo, avionics, and CPU. Each "sub-item" is independent, giving you the possibility of a Model 2 computer with a Model 1 commo and Model 2 sensors, etc.

Why not have a rule addition for dedicated systems, making them smaller in size, or just effectively running at higher model numbers, based on TL and the number of items they are controlling under normal load?
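Something like this, maybe (the function name, numbers, and the doubling rule are just my own guess at how to write it down, nothing official): the computer always gets the job done, it just slows down when the task exceeds its rating, and dedicating it to one job counts it as one model higher.

code:
def task_time(base_hours, task_rating, model, dedicated=False):
    """Hours to finish a task of a given rating on a given computer model."""
    effective_model = model + (1 if dedicated else 0)
    overload = max(0, task_rating - effective_model)
    # Double the time for every point the task exceeds the computer's rating.
    return base_hours * (2 ** overload)

# Model 1 working out a jump-2 on top of its normal duties: twice the base time.
print(task_time(base_hours=1.0, task_rating=2, model=1))                   # 2.0
# A second Model 1 dedicated purely to jump calculations: no penalty.
print(task_time(base_hours=1.0, task_rating=2, model=1, dedicated=True))   # 1.0

You could scale the doubling by TL or by how many other systems the box is juggling, per the suggestion above.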

Just some random rambles.
 