
Revising the science of Traveller

I should possibly have mentioned the drawbacks to the T4 computer rules.
Firstly, the various ship programs weren't given ratings in accordance with the new computer rules, so I made them up.
Secondly, the ship computers themselves aren't rated under those rules anywhere I can find, so I made that up too.
T4 could have been a really great system if it had been playtested properly IMHO.
 
Hmmmm, the computer I sit in front of today is at least 1000 times as powerful as the 286 I bought for a huge hunk of cash about 17 years ago. It is about 50 times as powerful as the 486 I upgraded to several years later. That is fairly close to Moore's law.

The idea that a whole TL represents a mere factor of 10 increase doesn't fit, unless you're saying we have advanced 3 TL in less than 20 yr.

The idea of linking 10 desktops to equal the typical mainframe is off, too. It would probably be closer to 100, and that assumes gigabit+ connection that would not induce too many wait states in the cooperating processors.

:rofl: The idea that a purpose-built supercomputer would be only 10 or even 100 times as fast as a general desktop system is loony! Here's an August '03 announcement, "11.8 Tflop PNNL supercomputer fastest open system," describing a machine built from 2,000 Itanium 64-bit processors (each more powerful than today's 32-bit desktops). That's an unclassified machine that would be available to private businesses; the latest news on a Cray using 64-bit AMD Opterons at Sandia says 40 Tflops. Astrophysicists are hungering for similar power from a Thinking Machines system.

Tflop machines are already 1,000,000 times faster than the Mflop supercomputers of the pre-PC days, which would be the previous TL in Traveller.

So, if we are looking at TL advances in computers a factor of 1000 would be foolishly conservative. I think a factor of 1,000,000 (only 30 years by Moore's law) would be reasonable, and 10^9 not wildly optimistic assuming we're still short of TL 8 maturity and a long way (more than one 20yr generation) from TL 9.
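For the curious, here's the raw Moore's-law arithmetic behind those factors as a quick Python sketch. It assumes the classic 18-month doubling period; the exact period is of course debatable, so treat the numbers as ballpark.

[code]
import math

DOUBLING_YEARS = 1.5   # assumed 18-month doubling period

def years_to_reach(factor, doubling_years=DOUBLING_YEARS):
    """Years for raw compute power to grow by `factor` at a fixed doubling period."""
    return math.log2(factor) * doubling_years

for factor in (1e3, 1e6, 1e9):
    print(f"x{factor:>13,.0f} -> {years_to_reach(factor):.1f} years")
# x        1,000 -> 14.9 years
# x    1,000,000 -> 29.9 years
# x1,000,000,000 -> 44.8 years
[/code]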
 
Just a side note, for those thinking that ol' Moore has been running out of steam as systems get faster: the Tflop barrier was broken by Cray in spring of 1997, the 10T barrier in spring 2003.

10 = 2^3.32 doublings, and six years for 3.32 doublings works out to about 1.8 years per doubling. Right on schedule! :D
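A one-liner to check that arithmetic, if anyone cares (a sketch; the milestone dates are the ones quoted above):

[code]
import math

# 1 Tflop (Cray, spring 1997) to 10+ Tflop (spring 2003): a factor of 10 in ~6 years
factor, years = 10, 2003 - 1997
print(f"{years / math.log2(factor):.2f} years per doubling")   # ~1.81 -- close to Moore's pace
[/code]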
 
Originally posted by Straybow:
Just a side note, for those thinking that ol' Moore has been running out of steam
The end of Moore's Law is kind of like the arrival of fusion power -- always a few years around the corner. ;)
 
Since I am in the middle of a sub-genius-izing brainfart, would you be so kind as to reveal what dt, dm, and Ve are, and what units they are supposed to be expressed in?

I know that M=Mass, but not what units (g? kg? Mg?)
I know A = Acceleration, but not what units (Gee's? m/s^2?)
I assume V is for velocity, but the 'e' subscript is throwing me.

Other than that, I seem to be following your presentation, though I can't say I like the final result, as I'm pretty sure even modern chemical rockets are more efficient than that, able as they are to sustain over 1 G for several minutes when launching stuff into orbit. I personally have no problem with relativistic exhaust velocity, though physics doesn't really care what I'm comfortable with. :D
 
Brain fart excused (just light a match to cover the smell) ;) It's in the preceding 2 paragraphs, but my pedantic dissertation is long enough to obscure anything important.

First, let me dispose of a misunderstanding about chemical rockets. The Shuttle takes off at about 2G. The Shuttle uses a huge tank that is 8-10 times the volume of the Shuttle proper. That's an eyeball measurement, but it's close enough for this evaluation. So that means it takes ~90% of the total volume (probably 50% of total mass) just to reach orbit. That doesn't bode well for sustained multi-G maneuver.
…which allows an exhaust velocity Ve of…

…for rocket thrust we use dm/dt, the mass flow rate, and Ve.

F = M·a = dm/dt·Ve
I may go back and bold the definition parts so they stand out better.

Big M is ship mass, expressed in whatever units are consistent with the rest of the equation, and likewise a is expressed in whatever units are convenient. I mixed units (tons and kg, km and meters), and you can check to make sure I did the proper conversions to get the final numbers.

[calculus lecture mode] dm/dt is the first derivative of mass with respect to time, the instantaneous change in mass per unit time. That means reaction mass flow rate. And obviously that reaction mass is exhausted at a velocity relative to the ship, Ve.

The calculus for a complete analysis would integrate the change of mass in the ship due to the reaction mass spent, but we're not that picky at this stage of analysis.
[/calculus]

Voila:
Ship_mass × acceleration = rocket_flow_rate × exhaust_velocity
M·a (kg·m/s²) = dm/dt·Ve (kg/s·m/s)
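If it helps, here's a minimal Python sketch of that bookkeeping. The ship mass, acceleration, and exhaust velocity below are illustration values of my own, not the figures from the write-up above, and (as noted) the ship's mass is treated as constant rather than integrating the propellant spent:

[code]
G = 9.81  # m/s^2 per gee

def mass_flow_rate(ship_mass_kg, accel_gees, exhaust_vel_ms):
    """Reaction mass flow rate dm/dt (kg/s) from F = M*a = (dm/dt)*Ve."""
    thrust_n = ship_mass_kg * accel_gees * G     # F = M*a, in newtons
    return thrust_n / exhaust_vel_ms             # dm/dt = F / Ve

# Illustration only: a ship massing 1,000 metric tons holding 1 G
# with a 10 km/s exhaust velocity (roughly chemical-rocket class).
dm_dt = mass_flow_rate(1_000_000, 1.0, 10_000)
print(f"{dm_dt:,.0f} kg/s of reaction mass")                           # ~981 kg/s
print(f"{dm_dt * 2 * 86_400 / 1_000_000:.0f}x ship mass in two days")  # ~170x
[/code]

Which is just the thrust equation with numbers plugged in; the absurd two-day total is exactly why sustained high-G chemical maneuver doesn't work.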

:eek: Hmmm, I see I said I was calculating "two days of constant acceleration at an easy 1G" but then used 2G in the equation; but then I also used deuterium instead of plain hydrogen, so that cancels out.

The problem with relativistic exhaust velocity is we can't get there from here. Thermodynamic heating only pushes the gas so fast, and after that, acceleration through the supersonic nozzle is strictly an area-ratio effect, and we can only build nozzles so big. Unless you want giant bell nozzles bigger than the rest of the ship. :D
 
Boy do I feel dumb! I come in from a link to a post half-way through a thread and comment on it, and not think to look if there is more to read, and the next day I notice my post sticks out like a sore thumb! :rolleyes:

Well, now that I've caught up on the thread: I have been thinking about this subject since I started this whole thing a while back. Before I screw up what I want to say, let me show you the things I have been thinking about:

Graphical operating system that runs on a C64 (a 20-year-old computer): it fits into 64K of RAM. Using it means lots of disk access; you really needed two drives to use GEOS effectively, and that's why I never used it.

Modern graphical operating system: Windows XP REQUIRES 64M of RAM, about 1000 times as much, and that's without running anything else. Really, you need about 256M, AND 1000 times as much storage space as GEOS (if not more).

C64 used a 1 MHz CPU, most people have at least a 1 GHz CPU now.

My SID (music) collection was quite extensive, but it fit on a few ~150K floppies. A good song was about 5K. My MP3 collection is considerably larger, and a typical 128 kbps file takes up about 5M. Some people can't stand anything less than 256 kbps, so double that for them.

Pictures on the C64 were 320x200x1, or 8kB (well, that's really not accurate, but its video display had many modes, and that was the most memory-intensive one). A modern desktop takes up 1600x1200x32, or 7.5MB (well, MINE does), and much larger pics are not terribly rare like they were back then. Remember also that the C64 was a few years ahead of its time in comparison to the "PC" of the day.
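(For anyone checking my picture figures, the arithmetic is just width × height × bit depth, a quick sketch:)

[code]
def frame_bytes(width, height, bits_per_pixel):
    """Uncompressed framebuffer size in bytes."""
    return width * height * bits_per_pixel // 8

print(f"{frame_bytes(320, 200, 1):,} bytes")     # 8,000 bytes -- C64 hi-res bitmap
print(f"{frame_bytes(1600, 1200, 32):,} bytes")  # 7,680,000 bytes, ~7.5 MB give or take how you count a megabyte
[/code]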

Ok, enough of the examples.

While computer power has improved by three orders of magnitude, so have the demands placed upon computers to do the same things they used to do. While it's true that my modern 1 GHz machine could calculate pi to the millionth digit a lot faster than my C64 (indeed, I would have to get a memory expansion hack to get the 64 to calculate that many digits), the modern computer is asked to do a lot of other stuff.

My current music collection sounds better now than it did 20 years ago. My pictures are much prettier too. Overall, I have gained quality, but it still takes a while to boot (about like a C64 loading GEOS did; a plain C64 boots in under a second), and I can still store about the same NUMBER of songs and pictures in equivalent space.

So the MEASURABLE gains are what we should be paying attention to.

First, newer computers can do some things that older ones can't. TL border right there. My C64 cannot host a website (and yes, I've heard of the guy who did it anyway). My current computer cannot render cartoon-level movies in realtime. A computer 20 years from now SHOULD be able to render fairly realistic scenes in realtime. Three eras of about 20 years apiece, three TLs, right on schedule.

Second, the overall gain is not NEARLY what Moore's Law says. What have I gained in the last 20 years? My FPS games are more detailed, my pics and music are more detailed, I still don't have the ability to do cheap or fast backups because I have too much stuff, memory is still volatile... So we can say QUALITY can also define TL borders.

And finally, what real gain is there to be made from having ultra-precise computer TL rules anyway? The CP multipliers from the TNE sequence were used to reduce crew needed. I still think they are a bit conservative, but not so much as I used to. My own experience in the Navy shows that you will never man a battleship with just computer brains. Who's going to polish the deck when the brass comes aboard? Who's going to do the officers' laundry? It's hard to look down your nose at a computer, or to call a non-working computer 'stupid' in front of a repair droid. :D

At any rate, I no longer think that the Traveller computer problem is as serious as I first thought.

TL is about doing NEW things. Calculating pi a second faster is nothing, going from vector graphics to realistic holoprojections IS something.
 
Ever do a large spreadsheet with Lotus 1-2-3 on a DOS 3.3 PC-XT? A recalc could take 10 minutes or more. Now do the same spreadsheet on your XP box with Excel or OpenOffice. The 1000-fold increase in clock speed shows itself: the recalc finishes in about 5 seconds, and disk access now takes up the majority of the time required.

Do engineering-type number crunching and the 1000-fold increase in power is dramatic. One of the projects I worked on in grad school simply could not be modeled on any 386 computer that could be built. No motherboard could handle the physical memory, etc. A specialized multiprocessor desktop system costing 10 times as much could, although processing might take an hour or so.

A mainframe required an overnight batch job; the actual CPU time was measured in seconds but the queue and loading took much time. Now it can be done in minutes on a desktop, with actual CPU time comparable to the 20 yr old mainframe vector processor.

On the other hand, an engineer is likely to run a much finer simulation with a new system, which means improved accuracy and eventually lower costs and higher safety. Now the increased demand on resources is not just a trivial difference in color depth and resolution on the monitor.

Storage is a separate matter. I helped somebody set up a computer for an insurance company 20 years ago. The actuarial tables and similar data took up over 20 meg of the full-height 80 meg drive. Today the actuarial databases take about the same amount of space. The application that uses them takes more, but the remainder of the 80 gig hard drive is excess.

The computer at work will never have all the megs of miscellaneous pics and archived emails and junk I've got on mine. The OS and application will become antiquated and major hardware components will fail before the drive ever fills up.

You could say that although the raw power increases by 10^9 the resource demand increases by 10^6, leaving a factor of 1000 net increase per TL. I still think 1000 is foolishly underestimating the impact of truly radically new computer implementation tech that a TL increase represents. A factor of 10 increase is just plain stupid.

I don't think we can even imagine something more than two TLs above our current level. It would involve physics our generation of super-geniuses can't yet imagine.

;) PS: The TL12 BB will have self-polishing finishes, and the closet in each officer's stateroom will have built-in nanotech cleaning systems. Crewmen will do more important tasks.
 
I remember a school project where we used Soundex (a phonetic-similarity code) to take a name and generate a number of possible alternatives, which we then used to hit a 5,000-name phonebook and find likely matches. Hashing the phonebook into shape took something on the order of 20-25 minutes on a 386 and 7 minutes on my 486. That was a notable improvement. I imagine on my P4 it'd take something under 30 seconds.

And that's just within 20 years.
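For anyone curious, here's a minimal sketch of the classic American Soundex coding in Python; it's not the exact code we wrote back then, just the standard rules:

[code]
def soundex(name: str) -> str:
    """Classic American Soundex: first letter plus three digits."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    first, digits, prev = name[0], [], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:     # skip vowels/H/W/Y and adjacent duplicate codes
            digits.append(code)
        if ch not in "HW":            # H and W don't break a run of duplicates
            prev = code
    return (first + "".join(digits) + "000")[:4]

# e.g. soundex("Robert") == soundex("Rupert") == "R163"
[/code]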
 
I think you guys are missing the point. Effectively, those tasks that took hours to do then and minutes to do now are NEW ABILITIES. You are doing something you couldn't before, which is the whole point of Tech LEVELS.

Let me use another analogy. You can make a small sail-powered raft, and with it, you could cross the Atlantic (I think people do this with personal sailing craft now). A sail-raft is pretty low tech, right?

Now let's say we make a galleon. It sails faster and is safer than the little sail-raft, it lets us carry large cargoes, and it allowed the mass colonization of the rest of the world from Europe. You could technically do that with rafts, but who would want to?

Get yourself a clipper ship, and now you can more cheaply and efficiently get cargo and people across the ocean. It's possible to travel around the world in a much shorter period of time than it took people like Magellan and Drake. It took them a year or two, a clipper can do it in a couple months. (A steamship can do it in a couple weeks (international treaties notwithstanding), and a jet can do it in a day, but these aren't sail-powered.)

Rafts are like TL1. Galleons are like TL 2 or 3. Clippers are like TL3 or 4. But they all have the same basic functionality: they all FLOAT and are moved by the WIND. It's just that each does the job faster and has a higher capacity and level of safety.

By the same token, certain specific tasks we put to our computers are going faster at a geometric rate, but then we put to them tasks that simply weren't possible or practical with earlier generations, and THAT is a tech level.

C64: No/Slow graphical operating system, little space, slow, cannot do much more than vector drawing in realtime. TL7

Windows 2000 + 1G x86 CPU: fast, full color OS, plenty of space, EXPANDABLE, can play full-screen video at full speed. TL8

TL9 will be a machine that can do 3D VR interfaces in realtime, have a reconfigurable instruction set (like RISC on acid), probably be asynchronous...

DON'T define TL by a superficial number like MHz; define it by what new tasks can be done that couldn't be done before, or were impractical. There's always going to be research into newer things. We have proven that we can make stable antimatter atoms; does that mean we are TL17? No, because we cannot use it or make it cheaply. Twenty years later we might find that we can, but the odds are against it. Are we TL9 because we have experimental fusion reactors? No, because they must be PRACTICAL before we can say we've moved up. (Or at least we have to be able to turn one on and leave it on.)

Improvements within a TL should not be ignored either. The black-powder design sequence from World Tamer's is plenty of precedent for that. TL2, 2M, 3, and 3M.... some TLs could stand to be subdivided into 2 or more pieces, if we don't want to define Y2000 as being any higher than 10 (my original suggestion, in first post).
 
If you'll notice, MHz isn't what I was looking at for the rating, but Floating Point Operations Per Second. FLOPS require multiple instructions, and are a good estimator of real-world performance. My 286 was 12 MHz, my new computer is 2.4+ GHz (can't remember exactly), only a factor of 200. But the processing power is far more than that.

8088 PCs required up to 30 clock cycles per instruction; the 286 about 12; the 386 roughly 5; the 486 about 2; and parity between instructions and clock cycles was achieved with the Pentium. Chips now perform multiple instructions per cycle in several "pipelines," so that on-chip math processors approach 1 MFLOP per MHz. In terms of FLOPS and other real-world operations per second, this Athlon is well over 1000 times as fast as my 286.
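Here's that reasoning as a rough sketch. The clock speeds and cycles-per-instruction figures are the approximate ones quoted above, and the "two instructions per clock" for a modern chip is my own rough assumption:

[code]
def rough_mips(clock_mhz, cycles_per_instruction):
    """Very rough instructions-per-second estimate: clock / CPI, in MIPS."""
    return clock_mhz / cycles_per_instruction

old = rough_mips(12, 12)        # 286 @ 12 MHz, ~12 clocks per instruction -> ~1 MIPS
new = rough_mips(2400, 0.5)     # modern chip, multiple pipelines, assume ~2 instructions/clock
print(f"clock ratio:   {2400 / 12:.0f}x")     # 200x
print(f"rough speedup: {new / old:,.0f}x")    # ~4,800x
[/code]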

T20 is slightly different from the CSC: it has "Units" and "CPU" and "PP." I don't understand what each controls, but I think Units refers to storage (hard drive or TL equivalent). Each of these quantities goes up by a differing factor, always considerably less than x10 per TL.

Not only less than x10 per TL, half the TLs are ignored and assumed the same as the previous TL, as computers are divided into 5 processor architectures over 10 TLs. Each of these represents an increase of rarely more than x10 in units or cpu. It is as though game designers are afraid of computers (no, they'll take over the world and start a war on the humans just like Terminator :eek: ).

TL07 Linear . . . . . . 10 Units . . . 20 CPU . .. 5/3 PP
TL09 Parallel . . . . . 25 Units . . . 250 CPU .. 13/7 PP
TL11 Synaptic . . . . 50 Units . . . 500 CPU .. 18/9 PP
TL13 Adv Synaptic . 100 Units . . 1000 CPU . 28/11 PP
TL16 Positronic . . . 1000 Units .. 2500 CPU . 28/11 PP

These values are for Desktop computers; Hand models are 1/100 as powerful, Portable models are 1/10 as powerful, and Miniframe models 10 times as powerful (except PP, which shifts up one category in the progression).
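For reference, here's that table and the form-factor scaling sketched as data. The values are just the ones I quoted above, and I may well be misreading the PP column:

[code]
# Desktop baseline values from the T20 table quoted above
T20_COMPUTERS = {
    # TL: (architecture,   units, cpu,  pp)
    7:  ("Linear",         10,    20,   "5/3"),
    9:  ("Parallel",       25,    250,  "13/7"),
    11: ("Synaptic",       50,    500,  "18/9"),
    13: ("Adv Synaptic",   100,   1000, "28/11"),
    16: ("Positronic",     1000,  2500, "28/11"),
}

# Form-factor multipliers for Units/CPU (PP instead shifts one category; not modeled here)
FORM_FACTOR = {"Hand": 0.01, "Portable": 0.1, "Desktop": 1, "Miniframe": 10}

def units_and_cpu(tl, form="Desktop"):
    arch, units, cpu, _pp = T20_COMPUTERS[tl]
    m = FORM_FACTOR[form]
    return arch, units * m, cpu * m

print(units_and_cpu(9, "Miniframe"))   # ('Parallel', 250, 2500)
[/code]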

I can understand why they might not want to strain their brains coming up with 5 more specific processor architecture/design types in between, but why ignore the intervening levels completely?

If that table were instead TLs 7,8,9,10 and 11 it would be more believable for me but still underestimating the progression of computer power.
 
CSC = T4 Central Supply Catalog, right? Considering T4 is considered the most-broken version of our fine pastime, try to hold your disappointment in check.

Even GURPS, which has a much more aggressive TL tree, does not give computers as much power as they really have. As long as you're not talking straight Mflops, though, it gives a fairly reasonable picture of what to expect electronics to be capable of doing.

There IS a limit to how much computer power can be harnessed. Before too long, "Moore's Law" is going to run into a speed bump, and at that point, we can probably say TL9 or 10 begins. Coaxing more computing power beyond that point is going to be difficult. Hooking CPUs up in parallel is only so useful; getting a dual CPU system is not 2 times as powerful as a single CPU system; the task must be divided properly to come close to that. But the chart you've reproduced is not necessarily far-fetched.

At TL9 or 10, we will have to go massively parallel to increase CPU power, and at TL11-12 we'll probably run into the practical limits of that and need something new. While their specific numbers may not please you, if you think of the kinds of tasks that can be accomplished, the numbers given are not all that insane.

You cannot continue to think that 2 Mflops is ALWAYS twice as good as 1 Mflop. Considering the use we will put that doubling of raw speed to, the actual gain may only be 10-20%; heck, you might LOSE some in the bargain if someone comes up with a killer app that cannot run on anything less than 2 Mflops.

How is a jump calculated, for instance? Is it a single calculation, which can be done with an abacus (given enough time), or is it something that really requires a lot of calculations to be running continuously? The second implies that you must have a specific amount of computing power and that anything less simply won't work. BOOM! Tech level. Maybe J2 requires a whole lot more on-the-fly power. It's like trying to play an FPS with a crappy system or with a top-notch system. I can play Doom on any machine made in the last 5 years, but I cannot run UT2004 on my 5-year-old computer AT ALL. It simply will not work. (You could call that a tech level, but this might really only be a fractional TL.)

Anyhow, you are still stuck on absolute values. I am sorry I am not a better teacher, to get you to free your mind.
 
Computers are fun and all, but I think I would be more interested in grav tech being brought down to earth.
 
Jump calculations: for one thing, you need to predict where the planets of the destination system will be at the time of arrival. You could do that with a simple multisystem astronomical calendar program that a TL7 computer can handle in real-time.

After all, canon jumps are uncertain by ±16.8 hours. Earth orbital vel wrt Sol is 107k km/h, which would translate to positional uncertainty of 1.8M km. At one parsec that is a little more than 0.01 second of arc. Then add in rel vel of the origin and destination stars: easily ten times that much but likely in a different plane from the planet's orbit. Lastly you have the vector of the ship's path oriented independent of either.

Uncertainty wipes out any modelling computation advantage of fancier computers.
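The arithmetic, for anyone who wants to check it (a quick sketch; the 107,000 km/h and ±16.8 hour figures are the ones above):

[code]
PARSEC_KM = 3.086e13
ARCSEC_PER_RADIAN = 206_265

orbital_speed_kmh = 107_000      # Earth's orbital velocity with respect to Sol
jump_window_h = 16.8             # canon jump-time uncertainty

position_error_km = orbital_speed_kmh * jump_window_h
angle_arcsec = (position_error_km / PARSEC_KM) * ARCSEC_PER_RADIAN

print(f"{position_error_km:,.0f} km")            # ~1,797,600 km
print(f"{angle_arcsec:.3f} arcsec at 1 parsec")  # ~0.012 arcsec
[/code]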
 
Originally posted by TheDS:
There IS a limit to how much computer power can be harnessed. Before too long, "Moore's Law" is going to run into a speed bump

Presumably you mean when transistors are down to tens of atoms.

and at that point, we can probably say TL9 or 10 begins. Coaxing more computing power beyond that point is going to be difficult. Hooking CPUs up in parallel is only so useful; getting a dual CPU system is not 2 times as powerful as a single CPU system; the task must be divided properly to come close to that.

But some tasks divided among ten processors can run in a hundredth the time of one processor. Moore's law does not say chips will keep getting smaller, but it does include things like multiprocessors, neural-network processing, and holographic systems to increase overall power.

You cannot continue to think that 2 Mflops is ALWAYS twice as good as 1 Mflop. Considering the use we will put that doubling of raw speed to, the actual gain may only be 10-20%; heck, you might LOSE some in the bargain if someone comes up with a killer app that cannot run on anything less than 2 Mflops.

Which suggests that 2 Mflops (let us say, rather, 2 GFlops) is likely to be more than twice as powerful as 1 GFlop once you pass the threshold for new apps. But that is why we are talking flops rather than Hz. For at least some classes of tasks it is meaningful.

How is a jump calculated, for instance? Is it a single calculation, which can be done with an abacus (given enough time), or is it something that really requires a lot of calculations to be running continuously?

According to Book 2 it can be run in 1 "space" on a TL5 Computer 1 (a 1940s Eniac). That means a 4 MHz 8088 with 256K RAM is overqualified to calculate Jump 1. Jump 6 takes up six times the capacity, so a 386 with 1.5 MB of RAM should manage quite nicely.

Anyhow, you are still stuck on absolute values. I am sorry I am not a better teacher, to get you to free your mind.

He is not stuck on absolute values, but he prefers objective benchmarks to fuzzy handwaving. I think you have articulated your position well, we just don't find it well reasoned or compelling.

I think we are looking at one of the problems in Traveller TL. Once you get more than 2-3 TL beyond present art, our imaginations start to fail us.

TL5 vacuum tubes
TL6 transistors and ferrite cores
TL7 microprocessors
TL8 parallel processing
TL9 Neural network (3D) processing
TL10 Quantum processing
TL11 Holographic processing?
TL12 ????

IMHO the six levels of "future tech" all together should only be about three.
 
Originally posted by Uncle Bob:
He is not stuck on absolute values, but he prefers objective benchmarks to fuzzy handwaving. I think you have articulated your position well, we just don't find it well reasoned or compelling.
Ok then, I'll stop trying to elevate your consciousness :D

I think we are looking at one of the problems in Traveller TL. Once you get more than 2-3 TL beyond present art, our imaginations start to fail us.

TL5 vacuum tubes
TL6 transistors and ferrite cores
TL7 microprocessors
TL8 parallel processing
TL9 Neural network (3D) processing
TL10 Quantum processing
TL11 Holographic processing?
TL12 ????

IMHO the six levels of "future tech" all together should only be about three.
Well, we could say the same thing about just about any game, and just about any tech. The recent past has shown that raw computer power increases at a rather startling pace; nothing else has even come close, and I doubt the designers had any reason to expect computers to do what they've done. Who could have predicted disposable home computers 30 years ago? That particular development drove the industry; with only a few hundred (thousand?) customers to spread expenses across, computer development went slowly. Once there were millions of users... Now we're in the hundreds of millions, approaching billions if not already there. It becomes possible to spread the cost of development out among them such that they are so dirt cheap it's not even funny.

But I digress. I simply don't feel you can put absolute values on this particular area. I once did; heck read the initial post that started this whole thing. One must take the CT computer rules with a grain of salt. They work quite nicely as a generalization, they do not - as you have pointed out - work as an absolute.

Considering the demands placed on higher tech ships - on-board environment, emission control, bio-toxicity management, compatibility with multiple species, larger and more detailed star charts, better spoken-language parsing (a better chance of recognizing WHAT you mean when you tell it to "secure the building"*), and God knows what else - well, a high-tech ship's computer is going to be handling a lot more stuff than a low-tech one, such that it's going to take a more powerful computer to perform a hi-tech J1, even though the calculation itself takes up the exact same number of CPU cycles (or less, assuming better architecture).

* "Secure the building" means different things to different people. What does it mean to you? Say it to a Dogface (army), and he'll blow it up and kill everyone in sight. Say it to a Jarhead (marine) and he'll go inside and kill everyone, then defend it against being retaken. Say it to a Squid (navy) and he'll turn off the lights and lock the doors before he leaves. Say it to a Wingnut (air force) and he'll get a rental contract with an option to purchase. And if you say it to a Puddle Pirate (coast guard) he'll stand out front and tell all passers-by that they can't go in, but do nothing to stop them from doing so. (This sounds like a joke, but it's really not so much of one as you think.)
 
Originally posted by TheDS:
Considering the demands placed on higher tech ships - on-board environment, emission control, bio-toxicity management, compatibility with multiple species, larger and more detailed star charts, better spoken-language parsing (a better chance of recognizing WHAT you mean when you tell it to "secure the building"*), and God knows what else - well, a high-tech ship's computer is going to be handling a lot more stuff than a low-tech one, such that it's going to take a more powerful computer to perform a hi-tech J1, even though the calculation itself takes up the exact same number of CPU cycles (or less, assuming better architecture).
I am not so sure. Look at the tasks you are assigning this computer. Except for speech recognition (which is rapidly getting VERY good), we have systems and computers that do these chores now, and have done for the last 20 years.

Emissions control meaning what? Radiation? This is handled quite easily with an on/off switch and proper design of the ship. Turn the comm set off, turn off any unneeded electronics, and put the rest in a Faraday cage, and you are set.

Bio-toxicity and internal systems monitoring are already a done deal, and not that difficult to implement. Multi-species compatibility for the computer interface, granted, is not done, but there are plenty of systems that operate in multiple languages and multiple display and ergonomic modes.

Databases are simply a function of memory and search algorithms, so the star charts and such are easy enough to implement. So you have more data. The only problem is finding it, and for that you would need a faster computer, but not really all that much faster.

Besides which, for jump calcs, most of this stuff does not matter. You know what star you are going to, where you are at, and your star charts should have all the required information for your calc. Unless the formulas are incredibly complex, or recursive, it shouldn't take that much computing power to perform the calculations. And those calcs have nothing to do with toxicity, internal monitoring, or what language you are talking in.
 
Originally posted by TheDS:

Considering the demands placed on higher tech ships - on-board environment, emission control, bio-toxicity management, compatibility with multiple species, larger and more detailed star charts, better spoken-language parsing (a better chance of recognizing WHAT you mean when you tell it to "secure the building"*), and God knows what else - well, a high-tech ship's computer is going to be handling a lot more stuff than a low-tech one, such that it's going to take a more powerful computer to perform a hi-tech J1, even though the calculation itself takes up the exact same number of CPU cycles (or less, assuming better architecture).
Except that none of these systems suffer adversely if the Ship's Computer takes battle damage or a malfunction. The Ship's Computer only affects maneuver, jump, navigation and fire control, and the rules explicitly state that a TL5 vacuum tube Eniac can do all the functions of a ship's computer.

All other systems are independent of the Ship's Computer. They are under local control, whether that is a rheostat, a tape drive, a microprocessor, or a population of nanites.
 