
embracing retro 'puters

Are SDBs at an advantage in real-space combat, then, as they can pack more versatile and powerful computers into the same tonnage as a starship of the same size?
Not only no jump drive, but a computer that doesn't need all that hyperspace functionality.
 
I am a bit curious, with all of this discussion of "retro computers", as to how many of the forum can remember doing all of their programming via 80-column Hollerith punch cards, and dealing with card readers and massive quantities of hard-copy printout in order to debug a program? Those were the computers that I worked on in high school and that my roommate and I worked on in college. Those would be your Tech Level 5 and 6 machines that were still around when Traveller was first written.

That is what comes to my mind when I hear the term "retro computer".

Given that background, I have drastically shrunk computer size and cost, and assume that when you buy your life support system, you are buying the computers to manage it as part of the cost, with the same going for the engineering section.
 
I am a bit curious, with all of this discussion of "retro computers", as to how many of the forum can remember doing all of their programming via 80-column Hollerith punch cards
[ . . . ]
There were folks running nuclear reactors with IBM 1800s until 2010. These were machines with just a few K of RAM, built in the 1960s using the sort of technology you're describing.1

I think that the notion of needing a supercomputer to fly a spaceship is a bit silly.

[ . . . ]
Those were the computers that I worked on in high school and that my roommate and I worked on in college.
[ . . . ]
Uphill both ways ...


1 - http://ibm1130.org/today/
 
I think folks underestimate just how powerful today's commodity hardware really is. Really powerful™ computers don't have to be very large at all. A modern GPU has throughput measured in trillions of calculations per second.

The Cray-1 had 160 MFLOPS on an 80 MHz 64-bit processor. And 8.3 MB of memory. For $7.9 million.

My phone has a Qualcomm MSM8937 Snapdragon 430: an octa-core 1.4 GHz Cortex-A53 64-bit CPU... up to 3.44 GFLOPS, tho' typically around 0.5 to 1.1 GFLOPS depending upon task type. And 4 GB of RAM... for $100. My phone can totally outperform the Cray-1 using a cheap processor... except for working databases.
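A quick ratio check on the figures quoted here. Both numbers are the post's own (Cray-1 peak versus the phone's quoted sustained low end), not benchmarks I've run myself:

```python
# Compare the Cray-1's peak throughput to the low end of the phone's
# quoted sustained range. Figures are the post's, taken at face value.
cray1_mflops = 160               # Cray-1 peak, ~1976
phone_gflops_low = 0.5           # low end of the quoted sustained range
phone_mflops_low = phone_gflops_low * 1000

print(phone_mflops_low / cray1_mflops)   # ~3x Cray-1 peak, even at the low end
```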
 
Cassette is a form factor; it doesn't necessarily mean there's tape media inside.

Similarly, "tape" can just be the vernacular from the old days like "dialing the phone" or why we have floppy disk icons on modern computers.

If the information was at the TB level, then a cassette the size of one of the old audio ones could contain the data needed to assist in a jump: not just the astrogation data, but data on J-space between the two points, further assisting in the jump and explaining why the difficulty of making the jump is lower than without the cassette.
 
I am a bit curious, with all of this discussion of "retro computers", as to how many of the forum can remember doing all of their programming via 80-column Hollerith punch cards, and dealing with card readers and massive quantities of hard-copy printout in order to debug a program? Those were the computers that I worked on in high school and that my roommate and I worked on in college. Those would be your Tech Level 5 and 6 machines that were still around when Traveller was first written.

That is what comes to my mind when I hear the term "retro computer".

Given that background, I have drastically shrunk computer size and cost, and assume that when you buy your life support system, you are buying the computers to manage it as part of the cost, with the same going for the engineering section.

My IBM 360 assembly class used punch cards, and the actual computer was several blocks away. We had teletype machines for the Pascal programming classes, so I had reams of green-bar paper, and finally real screens when I took the OS class (Unix; one of the many who got snared and stuck in vi, although now I use vim on pretty much all my machines. Old habits, and it works on pretty much everything).

My CS degree is from 1986, basically 5 years prior to the WWW. And to tell the truth, I hate web programming. It changes so fast (when I started, you got an OS update every 3-6 years, not what seems to be weekly, mostly to correct what they screwed up in the last update because they are in far too much of a rush).

And get off my lawn!
 
The oldest computer I worked with consistently in college was a DEC VAX-11/730 that I did my Pascal homework on. It had disk packs and a 132-column green-bar paper printer.

The community college I had transferred from had Apple ][+s running Apple BASIC and, from a 20-megabyte Corvus networked hard drive, Apple FORTRAN 1.0.

The community college students who were business programmers used punch cards for their programming, but I am not sure what machine they had. Maybe some model of IBM 360.

About the same time I had an Amiga A1000 at home I ran AmigaBASIC programs on. I used a 1200 bps modem to dial in to my university account to read Bitnet email.
 
The Cray-1 had 160 MFLOPS on an 80 MHz 64-bit processor. And 8.3 MB of memory. For $7.9 million.

My phone has a Qualcomm MSM8937 Snapdragon 430: an octa-core 1.4 GHz Cortex-A53 64-bit CPU... up to 3.44 GFLOPS, tho' typically around 0.5 to 1.1 GFLOPS depending upon task type. And 4 GB of RAM... for $100. My phone can totally outperform the Cray-1 using a cheap processor... except for working databases.
Well, the Cray-1 did use an IBM mainframe as an I/O processor.

Depending on the speed class, a micro-SD card goes up to about 100 MB/sec, as compared to the 800 KB/sec or so you got from a mid-1970s IBM DASD. A modern smartphone probably has an order of magnitude quicker I/O than the subsystem likely to be hanging off a Cray-1.

You'd probably be surprised just what you could crank the transaction processing throughput up to on a smartphone. In 1976 an IBM 360/167 (a machine with a CPU about as fast as a 386) got 97 tps on a TPC/A-style benchmark (just shy of 6,000 TPM).1 This would have had an I/O subsystem of similar spec to that installed on a Cray-1.
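A back-of-envelope check of the conversion in the quoted figure (the 97 tps number is the post's, not one I've verified):

```python
# Convert the quoted 1976 throughput from transactions per second
# to transactions per minute.
tps_1976 = 97
tpm_1976 = tps_1976 * 60
print(tpm_1976)   # 5820 -- "just shy of 6,000 TPM"
```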

Some of the demo configurations folks use today (which have nothing to do with a real machine spec, but hey) can push millions of TPC/C transactions per minute, and TPC/C is considerably harder on the machine than TPC/A.


1 Fun fact: on this benchmark they used a product called IMS Fast Path, which posted transactions with a message queuing system: the call returned before the transaction was committed to disk, and you could potentially read an obsolete version of the data. That's NoSQL with eventual consistency in 1976; who would have thought?
 
It is equivalent to today's supercomputers, not an iPad.

What a model 1 can do:
  • run a nuclear fusion reactor (ever seen the size of the server rooms at CERN?)
  • run the environmental systems - this includes gravity and acceleration compensation
  • run the avionics, sensors, comms
  • control a maneuver drive
  • run or plot an n-body hyperdimensional transit
Now harden the thing so exposure to radiation in space isn't going to cause it to go belly up...
No. The things that made supercomputers "super" were their I/O bandwidth and rate, and their parallel processors. They weren't vastly faster than mainframes. The calcs they were used for required huge sets of data, modeling thousands of points.


My lamentably short second career was in SCADA (Supervisory Control and Data Acquisition). The operator's computer doesn't control anything. The local control station doesn't control anything. The equipment itself does the controlling, and accepts standardized input from the local control station, which communicates with the operator's computer, which displays information in a useful way and transmits commands through the chain to the equipment. I actually saw a text-only version of an operator's interface from the '80s.
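A minimal sketch of that layered arrangement (all class names here are hypothetical, not from any real SCADA product): each layer only relays standardized commands downward and status upward, and the equipment's own controller is the only thing that actually controls anything.

```python
class Equipment:
    """The equipment itself: its built-in controller does the real work."""
    def __init__(self):
        self.setpoint = 0.0

    def apply(self, setpoint):
        # Only here does anything actually change state.
        self.setpoint = setpoint
        return "OK"

class LocalControlStation:
    """Accepts standardized input and passes it to the equipment."""
    def __init__(self, equipment):
        self.equipment = equipment

    def command(self, setpoint):
        return self.equipment.apply(setpoint)

class OperatorStation:
    """Displays information usefully and relays commands down the chain."""
    def __init__(self, station):
        self.station = station

    def request(self, setpoint):
        return self.station.command(setpoint)

ops = OperatorStation(LocalControlStation(Equipment()))
print(ops.request(42.0))   # "OK" -- the command was relayed, not executed here
```

The computing power needed at the operator's end is trivial, which is the point: intelligence lives in the equipment, not in a central ship's computer.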



So, no, the ship's computer doesn't do any of these tasks. The equipment will be autonomous, and the operators just need an interface to tell the equipment what they want it to do. When you buy the equipment, you buy everything that equipment needs: its own controller and interface, code for the local control station, and code for the operator's station.


The reactor runs itself. It responds to demand. Anything else happens so fast no human operator can respond. There would be some power level monitors and switching systems that would be controllable from the bridge.


As for maneuver, most of those controls are built into the ship. Again, the operator needs an efficient interface, and the computing power required for that is trivial.



As for jump, MWM himself said the calcs can be done on a '77-vintage hand calculator... One only needs data on the positions of major gravitational bodies, and there are none between the stars. Most system pairs aren't aligned with each other's orbital planes, so planets are not likely to interfere. No n-body calc needed.



Now, many players rationalize the computer size as including the avionics, sensors, and comms. They aren't anywhere else in ship design. But really they should be entirely separate systems. Computer sizes are also rationalized as work stations (seat, desk, keyboard or other physical interface, display, all ergonomically arranged). But that isn't going to be changed based on the capabilities of the computer, only by how many crew need to monitor and command those processes. That is what the bridge space is for, which is separate from the computer.



What about shielding? Again, shielding is going to be built into the ship. It's called the "hull." Yes, there will also be shielding in whatever case the computer comes in. And a means of passing data through the shielding (typically optical).


None of this takes anywhere near 14 m³ of space, a small bedroom, much less multiples of it.
 
Let's see what is written:
The computer installed on a ship controls all activity within, and is especially used to enhance weapons fire and defensive activity. It also transmits control impulses for maneuver and jump drives, and conducts the routine operation of all ship systems. What the computer actually does is based on the programs installed and operating at any one time.
Routine operation of all ship systems - grav plates, acceleration compensation, environmental controls, waste heat management etc.

A computer which is not operating effectively paralyzes a starship.

Due to the fact that you need to be running a maneuver program to use the maneuver drive, a target program to shoot weapons, etc.

Manual control is not an option, nor is using localised controls for ship systems.

As to running a jump program on a calculator: you can't make a jump without a cassette or a generate program, and a computer that can handle the jump number. What MWM actually wrote is:
Computer: Jump drives have precise power requirements which can only be met if the power is fed under computer control. In addition, the calculations needed for a jump require a high level of accuracy.
 
Let's see what is written:

Routine operation of all ship systems - grav plates, acceleration compensation, environmental controls, waste heat management etc.
[ . . . ]
None of which implies that it has to be especially powerful. It's just big, for some reason. It doesn't make any representation about the architecture beyond a requirement that you have to run certain software on the computer to control it. Also, this size doesn't vary between starships and non-starships, so there's no reason to infer that controlling a jump drive is particularly computationally intensive beyond the CPU slots the Jump program takes up. History tells us that we can fly a non-starship with a computer somewhat less powerful than a ZX Spectrum.

Neither LBB2 nor LBB5 break out sensors or avionics separately so we have to assume that they're rolled up in the computer and bridge. LBB5 abstracts software packages away to a DM for computer size. A lot of folks (myself included) just bundle sensors into that and assume that other avionic systems live in some arbitrary split between computer and bridge.

N-body simulation is only needed if you are simulating bodies large enough to perturb the orbits of the other bodies in the simulation. In practice, your little free trader is not going to have a material (or even measurable, for that matter) effect on the ephemeris of a planet.

My computer does a perfectly fine job of running Kerbal Space Program, which abstracts away n-body mechanics in favour of calculating the gravitational effects of the bodies on your spaceship. However, there is at least one third-party plugin that does n-body calculations, and can therefore (at least in theory) simulate Lagrange points, halo orbits and other weird and wonderful effects of complex gravitational systems. The effect your ship has on the orbits of the planets is below the resolution of a double-precision float. This will also run just fine, in realtime, on an ordinary PC.

Traditionally, mathematicians considered the N-body problem hard because it can't be solved algebraically (i.e. there's no nice closed-form solution, much like, say, PDEs) and therefore has to be solved numerically. N-body simulation has gotten a reputation for being computationally intensive because folks have tried to use it to simulate things like galaxies or the Big Bang, where the number of interactions is proportional to the square of the number of bodies. For the number of bodies you actually care about if you're flying a spaceship, it's a non-issue.
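To make that concrete, here's a minimal sketch (2D only, illustrative masses and positions) of why flying a ship through a star system is computationally cheap: treat the ship as a massless test particle, and the per-step cost is a single acceleration sum over the handful of bodies that matter, rather than an O(n²) all-pairs interaction.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accel_on_ship(ship_pos, bodies):
    """Sum the gravitational acceleration each massive body exerts on the ship.

    The ship is treated as a massless test particle, so it perturbs nothing
    and the cost per step is O(len(bodies)), not O(n^2)."""
    ax = ay = 0.0
    sx, sy = ship_pos
    for bx, by, mass in bodies:
        dx, dy = bx - sx, by - sy
        r2 = dx * dx + dy * dy
        r = r2 ** 0.5
        a = G * mass / r2          # magnitude of pull toward this body
        ax += a * dx / r
        ay += a * dy / r
    return ax, ay

# Illustrative system: a Sol-mass star at the origin, an Earth-mass planet
# at 1 AU, and the ship halfway between them on the x-axis.
bodies = [(0.0, 0.0, 1.989e30), (1.496e11, 0.0, 5.972e24)]
ax, ay = accel_on_ship((7.48e10, 0.0), bodies)
# ax is dominated by the star's pull (~0.024 m/s^2 toward the origin);
# the planet's contribution is nine orders of magnitude smaller.
```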
 
...

Traditionally, mathematicians considered the N-body problem hard because it can't be solved algebraically (i.e. there's no nice closed-form solution, much like, say, PDEs) and therefore has to be solved numerically. N-body simulation has gotten a reputation for being computationally intensive because folks have tried to use it to simulate things like galaxies or the Big Bang, where the number of interactions is proportional to the square of the number of bodies. For the number of bodies you actually care about if you're flying a spaceship, it's a non-issue.
It could become an issue if one posits that there's something about Jumpspace navigation that requires simulating a lot more bodies than necessary for normal-space maneuvering.
 
It could become an issue if one posits that there's something about Jumpspace navigation that requires simulating a lot more bodies than necessary for normal-space maneuvering.

However, as he pointed out, a NON-starship does not require substantially less computer than a starship. So Jumpspace appears no more complex than real space.
 
However, as he pointed out, a NON-starship does not require substantially less computer than a starship. So Jumpspace appears no more complex than real space.

True by hull size in LBB5. Not the case in LBB2, where a Mod/1 will suffice for any non-starship up to 5KTd (the limits of that design system).
So at least under LBB2, computing Jump 1 is no more complex than maneuvering in real space, but Jump 2 and up is progressively more complex.

Under LBB5, something about a 1000Td hull makes it as tricky to control as doing a Jump-2. Etc.
 
So if one wanted to use more modern knowledge on computers/computing when designing a ship, how would you represent it? That they take up no tonnage or space or what?

As for jump cassettes, are they like these types of magnetic data storage devices?
 
Let's see what is written:
Hmmmm, yes. All based on limited knowledge of TL7 computers. All based on TL7 spacecraft. Everyone believed that computer control of Apollo's maneuvering thrusters was required, yet Apollo 13 managed to do it manually.


The F-16 (entered service in '78, but well publicized in the early '70s) has that kind of computer control, but it is one integrated system for a single-seat aircraft. Aircraft with multiple crew have those systems divided up, and in some cases functionally independent. Traveller spacecraft are more like naval ships, with even more independence among systems.


If the ship's computer really controls all that, then why can't larger ships be run by a crew of one? Why are gunners needed at all? Why are jump navigators needed? The description is more than a little self-contradictory...
 
The Cray-1 had 160 MFLOPS on an 80 MHz 64-bit processor. And 8.3 MB of memory. For $7.9 million.

My phone has a Qualcomm MSM8937 Snapdragon 430: an octa-core 1.4 GHz Cortex-A53 64-bit CPU... up to 3.44 GFLOPS, tho' typically around 0.5 to 1.1 GFLOPS depending upon task type. And 4 GB of RAM... for $100. My phone can totally outperform the Cray-1 using a cheap processor... except for working databases.


True, but don't underestimate the programmer genius that got far more out of that Cray than your phone does.


Cray programming did serious engineering and science work. Your phone gives you maps, text/email and cat videos.


That's the thing more power is usually used for today: far lower prog/dev costs for commodity uses.
 
[ . . . ]
The F-16 (entered service in '78, but well publicized in the early '70s) has that kind of computer control, but it is one integrated system for a single-seat aircraft. Aircraft with multiple crew have those systems divided up, and in some cases functionally independent. Traveller spacecraft are more like naval ships, with even more independence among systems.
F-111s had fly-by-wire controls well before the F-16, and were famous for crashes caused by a bug in the software. IIRC the F-16's claim to fame is that the airframe is not naturally stable in flight (which gives it agility), so it needs active control to remain flying.

The computer running the F-111's fly-by-wire system had 4K of memory.
 
True, but don't underestimate the programmer genius that got far more out of that Cray than your phone does. [ . . . ]
Actually, you can run LINPACK on both. Most numerical computing libraries (including those behind systems such as MATLAB, R or NumPy) use LINPACK, LAPACK or a BLAS-compatible backend these days.

That state of the art big data platform you're using - yep, that's FORTRAN in the back end. Now get off my lawn.
 
It could become an issue if one posits that there's something about Jumpspace navigation that requires simulating a lot more bodies than necessary for normal-space maneuvering.
Still, you only have to bother with it if you're simulating a body that is sufficiently massive to perturb the orbits of the other bodies you need to simulate. In comparison to the mass of stars or planets, your 200-ton free trader is inconsequential, to the point that the perturbation it causes is within the measurement error of your instruments.
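A rough back-of-envelope for that claim. The ship's mass here is an assumption (Traveller tonnage is displacement, not mass; I've used 1,000 tonnes as a generous figure) and the distance is arbitrary:

```python
# Compare the acceleration a ship induces on a planet to the acceleration
# the planet induces on the ship. Ship mass and distance are assumptions.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_ship = 1e6           # kg (~1,000 tonnes, assumed)
m_planet = 5.972e24    # kg, Earth mass
r = 1e7                # m, 10,000 km separation

a_on_planet = G * m_ship / r**2    # what the ship does to the planet
a_on_ship = G * m_planet / r**2    # what the planet does to the ship

print(a_on_planet)   # ~6.7e-19 m/s^2 -- utterly negligible
print(a_on_ship)     # ~4.0 m/s^2 -- the only term worth simulating
```

Nineteen orders of magnitude separate the two; only the planet-on-ship term matters, which is exactly the test-particle simplification.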
 