
*Another* System?

Originally posted by Straybow:
By the factor-of-ten standards we'd be around TL13 for computers if the original vacuum tube machines are TL5. We may be more advanced in computers than an average culture at our general TL, but not by that much.

Depends how close we are to AI. Don't "semi-intelligent robots" come in at about TL12, around the time of personal datalinks and advanced translators? Mobile phones are almost there in their current state... next step is holocrystal storage (TL13).
 
Originally posted by robject:
I agree with your first point -- computing hardware is going to be invisible and ubiquitous in the Far Future -- and your last point -- ten PCs do not equal a mainframe.

As far as orders of magnitude go, Porter's rules concern the whole computer, not just how many teraflops it can perform. And that's the way to do it. I think the rules are the best I've seen yet... unless you count Classic Traveller, which might only need some size changes (and might not even need that).

My Commodore 64 is an 8-bit machine, cost $1000 (that's with its disk drive, which was as bulky and heavy as the C64... in fact, it has its own 8-bit 6510 processor, just like the C64!), uses the same amount of power as a modern PC, and runs at a blistering one megahertz.

My home computer is about the same mass and price as the C64 plus its disk drive. CPU went from 8 to 32 bits -- half an order of magnitude? Speed increase is 3 orders of magnitude. I/O, price, power, and size are all flat -- essentially no change.

Average them all out, and you get 0.6 pseudo-orders of magnitude. So, perhaps my desktop is an R0, just like my Commodore 64.

One could argue that a 32 bit machine versus an 8 bit machine is 100 times more complex, bringing the rating difference up to 0.83, but again I'm not sure that really matters in the big scheme of things...

No, if you are going to "average" the "whole computer" and lump in CPU bit-width, you must use base 2 rather than base 10. Doubling the CPU path is about equivalent to increasing clock rate x10 (without changing wait states and other implementation specs). So, your 32-bit home computer is 2 orders of magnitude greater than the C64's 8-bit processor.

Cutting-edge commercial CPUs are 64-bit (3 orders higher), and clock speeds have gone from MHz to GHz (3 orders higher) in the same time-frame.
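The two accounting schemes give exactly the figures quoted above. Here's a sketch in Python; the spec ratios are rough values taken from the posts, not measurements:

```python
import math

# Rough spec ratios, C64-plus-drive -> later desktop, as described above.
specs = {
    "cpu_bits": 32 / 8,     # 8-bit -> 32-bit
    "clock":    1e9 / 1e6,  # ~1 MHz -> ~1 GHz: three orders of magnitude
    "io":    1.0,           # essentially flat
    "price": 1.0,
    "power": 1.0,
    "size":  1.0,
}

def pseudo_orders(ratios, bits_base=10):
    """Average the per-spec orders of magnitude. With bits_base=2, each
    doubling of CPU width counts as a full 'order', per Straybow's argument."""
    total = 0.0
    for name, ratio in ratios.items():
        if name == "cpu_bits" and bits_base == 2:
            total += math.log2(ratio)
        else:
            total += math.log10(ratio)
    return total / len(ratios)

print(round(pseudo_orders(specs), 2))               # 0.6  (robject's figure)
print(round(pseudo_orders(specs, bits_base=2), 2))  # 0.83 (the base-2 variant)
```

Note that counting the 8-to-32-bit jump as "100 times more complex" (robject's alternative) and counting it as two doublings (Straybow's base-2 view) happen to contribute the same 2 orders, hence the same 0.83.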

The purpose of looking at FLOPS is to encompass clock-cycle efficiency, bit width, and clock speed in a single parameter. It works.

If you read more in the other thread, even if you account for significant OS interface burden and programming slop you still get far more than one base 10 order of magnitude useful processing increase between Traveller TLs.

If "average" cultures advance at a lazy doubling each decade you'd have a 1000-fold increase over a century. In the OTU timeline the average is almost 1000 years to a TL. You'd have to make computer design a capital crime to slow it down to 1k/TL.
 
Perhaps we should also consider that there are, and will be, certain limits to growth and advancement.
So extrapolating growth rates is a very risky business, and belief in a constant increase is perhaps really just that: a belief.

Regarding miniaturization/integration, we will soon hit some limits: not that certain production steps become impossible, but simply that they stop being economical.

Regarding advances in computing, a major scientific fear is that, even if technology keeps advancing for a couple of decades, there will be no intellectual capacity available to really keep up with it (like a humble average mechanic looking at the engine bay of a BMW).

All in all, the OTU is very optimistic about tech advancement, and that is much more sympathetic to me than a long future period of real stagnation caused by missing resources :\
Maybe we should be glad of 1k/TL?

Regards,

Mert
 
Going much smaller than current Gallium Arsenide lower limits gets to the point where electron tunneling can occur through resistors, and semiconductance is unreliable in variable magnetic fields (according to an article I read last year, but sadly don't have to hand).

There are undoubtedly ways to improve below this threshold, but they are not binary-digital in nature, and may be revolutionary.

Moore's "Law" is neither a law nor unbounded. Yes, I suspect we'll see higher and higher bit-width systems. No, I don't see Traveller computers as viable under the semiconducting paradigm (see below), but by the same token, I think computing power will max out in around a century.

I suspect, however, that traveller computers acquired their size by virtue of Doc Smith's assumptions... micro-vacuum-tube ceramic blocks.

IMTU, semiconductance doesn't work in jump-space, but vacuum tubes do.... and their physical limit-out is much larger than semiconductor based circuitry. So anything reliant upon semiconductance is useless for ships...
 
I don't have AI (as in true artificial intelligence) IMTU.

Likewise, MicroVacuumTube tech is far more resistant to radiation and EMP. SDB's are likely to use it for the ability to hide in strong magnetics...

And no: fire control, once you can crank the formulae faster than you can move the turrets, becomes a set point. It can't get any better than fast enough to put rounds on target; at the ranges listed, it's not detect-and-aim, it's put sufficient rounds into his most probable location. I COULD run that on a TRS-80 hand computer (1 kHz, 8-bit, with a 2x80 character display); it becomes slightly more problematic with non-beam weapons....

Therefore, SDB's running non-MVT do not get accuracy bonuses for excessive computing. They do, however, get much smaller computers.

Likewise, most yards won't install semiconductor stuff IMTU, as most yards are building starships; a small percentage may be sans a jump drive, but all the hulls and installed equipment are jump-spec'd. Imperial Law.
 
So Aramis, what you have done is trivialize the computers. What space combat rules are you using? Directing lasers at a rapidly maneuvering target at a significant fraction of a light-second is not a trivial problem.

I assume the "Computer" tonnage is actually the fire control/sensor array, and all the real data processing is transparent and distributed.
That means a trivial change to my deckplans, but I can use Mayday or HG2 rules with no change.
 
Actually, directing lasers at a rapidly maneuvering target at a significant fraction of a light-second is a fairly trivial problem -- it's a combination of Newtonian mechanics and, at extreme range, statistical analysis.

I can't think of any way for a computer to give a significant edge there. I also can't think of any way for a gunner to give a significant edge there.

Obviously, having skill be entirely irrelevant is a bad thing for a role-playing game.
 
If the acceleration is high enough to matter (i.e., enough to displace the ship by more than its own smallest dimension within greaterOf(2 x distance in LS, solution + training time in seconds + distance in LS) seconds), you can't actually aim for the ship. You put a sufficient number of shots, each separated by less than the target's length, into a cross-section of the ovoid of probable location.

Hitting with lasers is dependent not upon computing power (any 32-bit computer in the current speed regimes is powerful enough); the real limitation becomes the time needed to train the lasers on the given coordinates.

At short ranges, it's mostly going to be missiles you need to pattern-fire at; edge-on, however, some ship designs become quite capable of being missed at 0.1 LS or more.

The math can be done in tiny fractions of a second on a 100 MHz 32-bit math coprocessor. A dedicated circuit could do it rather easily. Even MVT tech can do it fast enough not to matter much, and massive parallelism can overcome the lesser speed inherent in MVT over similar gross-architecture semiconductor systems.


Computational speed is important, but becomes "No further benefit possible" quite readily, using realistic numbers and traveller ranges. Predict programs shrink the "Probable" ellipsoid based upon prior behaviours, thereby reducing shots needed to assure an intercept. Check the 1997 archives of the TML for the detailed formulae. I know I worked it out and posted there...
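A minimal sketch of that geometry (not the TML formulae referenced above, which aren't reproduced here; 1 LS of range is treated as 1 s of light lag, and the target is assumed to pull full lateral acceleration for the entire lag):

```python
import math

def shots_to_saturate(range_ls: float, accel_g: float, target_size_m: float,
                      extra_time_s: float = 0.0) -> tuple[float, float]:
    """Radius of the 'probable location' disk (m) and a rough shot count
    to blanket its cross-section with shots spaced one target-length apart.
    Worst case assumed: full lateral acceleration for the whole lag time."""
    g = 9.81
    t = range_ls + extra_time_s          # light lag (1 LS = 1 s) + solution time
    radius = 0.5 * accel_g * g * t * t   # max lateral displacement, s = a*t^2/2
    if radius <= target_size_m / 2:
        return radius, 1.0               # can't dodge out of the beam: aim directly
    shots = math.pi * radius**2 / target_size_m**2
    return radius, shots

r, n = shots_to_saturate(range_ls=0.1, accel_g=6, target_size_m=50)
print(r, n)   # under a metre of displacement at 0.1 LS: one aimed shot
```

With these assumptions, a 50 m ship at 0.1 LS can't displace itself even a metre before the beam arrives, so direct aim works; add a 10 s solution-and-train time at 1 LS and the probable-location disk grows to kilometres, with the shot count in the many thousands.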

Now, of course, the above assumes that
1) LS sensing is the maximum possible
2) nothing similar to 2300's Stutterwarp is involved.
3) laser fire is subject to lightspeed lag, too.

Of these three, 1 might be breakable, as might 3; recent scientific results have pushed the speed of information over the speed of light, and light's speed may not be quite as constant as once thought, based on some recent experiments, though I've only read the articles about their paper.

Note that 3 is breakable in some SciFi settings; Star Trek especially.
 
Hi !

Well, I agree with Aramis that targeting itself could be done with even minor computing power,
if correct coordinates are available.

On the other hand, getting and keeping those coordinates up to date is the complicated part, needing more computing power for real-time simulations and for an optimized connection between sensing and targeting. (It's the brain doing the interface job between eye, body, finger, and trigger.)
As such, I consider the computer a very vital part of future weapon tech (just as it already is now).

Regarding human skill: it might not be important anymore as computer power rises, at least for tasks like sensing/targeting/firing.
I guess human skill is more relevant for the tactical side here.

Regards,

Mert
 
Very little computational power is needed to get and keep the coordinates up to date, either -- the electronics used in a standard modern adaptive-optics system have most of the capability needed (though they aren't equipped for range-finding).

If you read computer factor as indicating sensor systems, some bonus for a better computer may be appropriate.
 
Anthony, is that true for sensor systems working at Traveller ranges, too?

Honestly, I have no idea about the resolution and precision of real-life sensor systems.
I just have an engineer's feeling that it's damn complicated to locate a 50 m object at some multiple of 15,000 km with +/-10 m precision, unless it's a beacon.


I considered Traveller sensors to give only a blurred pattern of located objects and related directional information at longer ranges.
The main computer's job might be to run simulations of ship types, maneuvering actions, and the resulting emission patterns, which are compared against the actual sensor data in real time, so that the best fit is taken as the right one.
That's just a method I know from semi-fuzzy object recognition systems...
 
Yeah, it should be true for sensor systems working at Traveller ranges and precisions. It's just a matter of tracking a dot, and is mostly a function of the camera system being used to track the target. You need lots of computing power to search the sky for anomalies, but very little to track an already-detected object.
 
Well, sensors pretty much max out in utility too. Once you have sensors good enough to determine position to around 10 meters at a light-second (pretty much doable with sensors the same size as laser optics), you don't really need better sensors. Basically, any ship with the weapons to hit at those ranges will also have the sensors to do it.
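A quick Rayleigh-criterion check of what that figure implies for aperture (assuming visible light, lambda ~ 500 nm; that wavelength is an assumption, since nothing in the thread fixes the band):

```python
C = 299_792_458.0  # m/s; 1 light-second = C metres

def aperture_for_resolution(resolution_m: float, range_m: float,
                            wavelength_m: float = 500e-9) -> float:
    """Aperture diameter (m) needed to resolve `resolution_m` at `range_m`,
    using the Rayleigh criterion: theta ~= 1.22 * lambda / D."""
    theta = resolution_m / range_m   # required angular resolution (radians)
    return 1.22 * wavelength_m / theta

print(aperture_for_resolution(10, C))  # ~18 m aperture for 10 m at 1 LS
```

An ~18 m aperture is large, but tracking an already-detected point source by centroiding can beat the formal resolution limit considerably, so smaller optics may still hold a track.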
 
Aside to Bob: Bob, would you quit double posting?

On the detection thread...

Current computers are capable of processing the radar return and vector information as fast as it comes in. The problem is that NASA doesn't use radar, merely passives.

PASSIVE detections are a bit of a bitch, but actives are far from it. The pulses have a timecode, and it's timecode, freq-shift, and time-lag calcs. All of which can be done very fast. On fairly old computers.
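Those calcs really are one-liners each; a sketch in Python (the example frequencies below are made up for illustration):

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range_m(time_lag_s: float) -> float:
    """Range from echo delay: the pulse travels out and back, hence the /2."""
    return C * time_lag_s / 2

def closing_speed_ms(f_transmit_hz: float, f_return_hz: float) -> float:
    """Radial closing speed from Doppler shift (positive = closing).
    Non-relativistic approximation; the echo is shifted twice, hence the /2."""
    return C * (f_return_hz - f_transmit_hz) / (2 * f_transmit_hz)

print(radar_range_m(2.0))                      # 1 s out, 1 s back: one light-second
print(closing_speed_ms(10e9, 10e9 + 667_000))  # ~10 km/s closing
```

Both scale trivially; it's the detection of the faint echo, not the arithmetic, that costs anything.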

NASA uses a Unix box for processing their passive-detection samples. The exposures they use are long (20+ minutes), and the computer needs to compare across multiple nights to pick out the differences. NASA is hunting needles in haystacks: non-emitting, low-albedo, non-thrusting objects at ranges measured in multiple AU. The worst of all worlds for sensors... Once an object is detected, the orbit has to be deduced from multiple fixes...

If NASA launched a high-power radar, however, in 30 minutes we'd probably add at least one more body to the catalogue, as that would be a 2-AU return. Give it an hour and it's a 4-AU return. With not just bearing and apparent motion, but bearing, distance, apparent motion, and close/fade rate (those last two can be combined into a 3D directional vector), all from a two-ping emission. Of course, radar is likely to be detectable at HUGE ranges....
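The light-time arithmetic behind those figures can be checked directly (AU and c are the standard constants; the post's "30 minutes" is a round number for the ~33-minute 2-AU round trip):

```python
AU_M = 1.495978707e11   # metres per astronomical unit
C = 299_792_458.0       # speed of light, m/s

def round_trip_minutes(range_au: float) -> float:
    """Minutes for a radar pulse to reach an object at `range_au` and return."""
    return 2 * range_au * AU_M / C / 60

print(round(round_trip_minutes(2), 1))  # ~33 min for a 2-AU return
print(round(round_trip_minutes(4), 1))  # ~67 min for a 4-AU return
```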
 
What would that high-powered radar beam do to a ship or satellite passing through it at close range?
Would there be any harmful effects?
 