
*Another* System?

There's a reason NASA doesn't use active sensors for searching for targets. Active sensors suck badly unless you already know the location of the target (knowing the location generally increases effective range by 1-3 orders of magnitude). The 4th-order signal dropoff with range (vs. 2nd-order for passives) means they really aren't useful as broad-area sensors at space ranges.
 
Most real-life actives are limited primarily by atmospheric noise and surface curvature.

And yes, the 4th-order dropoff is a problem. But at in-system ranges, it's not a big issue.

We were/are able to receive a 25W signal at ranges of double-digit light-hours (Voyager 2). IIRC, it used multiple 3-5m dishes.

Most military radars emit huge (kW-range) pulses: partly to rise clearly above the terrestrial RF noise, partly due to albedo issues, partly due to size issues for the receiver, as most military radars are actually quite small antenna dishes... so signal strength is important.

Someone did the "Definitive Sensor Rules" for T4... quite realistic...

And NASA has never really discussed using active sensors for broad-field work. At ranges beyond in-system, they have little use; however, they are quite viable for in-system search. NASA does, however, have a history of atmosphere-penetrating radars for planetary mapping. Venus was mapped this way. There was discussion of mapping Titan this way; I don't recall if it made the launch or not. Several cometary and asteroid mapping mission plans (which may have been scrapped) included radar and ground-penetrating radar imaging...

As for visual imaging, accuracy is down to small fractions (IIRC, hundredths) of an arc-second. Radio astronomy is, IIRC, about the same...

It is said that Arecibo could "pick up a cell phone on Jupiter", or somewhere around 2-5W at the source over 4-6 AU. Since Voyager 2 could transmit at a peak of 26 watts, that's a very reasonable assumption; Voyager could be picked up past the orbit of Neptune with Arecibo (and, IIRC, the VLA, and several of the 34m dishes...). Now, the 70m dishes are required.

So: 26 watts, from a 3.66m dish, being picked up over 12 light-HOURS, by a 70m dish.
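As a sanity check on that link budget, here's a rough sketch. The antenna gains and efficiencies are my assumptions (X-band at 8.4 GHz, 55% efficiency on the spacecraft dish, 70% on the ground dish), not official DSN figures:

```python
import math

# Rough Voyager 2 downlink budget; frequency and efficiencies are assumptions.
P_t = 26.0                        # transmitter power, W
D_t = 3.66                        # spacecraft dish diameter, m
D_r = 70.0                        # DSN dish diameter, m
wavelength = 3e8 / 8.4e9          # ~0.036 m at X-band
r = 12 * 3600 * 3e8               # 12 light-hours, in meters

G_t = 0.55 * (math.pi * D_t / wavelength) ** 2   # transmit gain over omni
A_r = 0.70 * math.pi * (D_r / 2) ** 2            # effective receive area, m^2

P_r = P_t * G_t * A_r / (4 * math.pi * r ** 2)   # received power, W
print(f"received power ~ {P_r:.1e} W")
```

That lands around 2e-18 W at the dish, consistent with the astronomy-grade 10^-18 W detection floor mentioned later in the thread.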

Radio accuracy of the DSN antennae (NASA press release):
An analogy of successfully getting a DSN transmitter signal into Voyager-2's limited receiver window, would be like throwing a baseball across thousands of miles of ocean, and being able to get that baseball to pass through a very small port-hole window of a moving cruise ship.
The spacecraft's speed is about 30 light-minutes per year.

So, a 5m dish (1/14th the size, so 1/196 the reception power), at 1kW, (40x the power, or 6x the signal range) should get half a light hour or so (roughly 23lm) of viable return... so that's 12lm, or so of viable round trip, assuming perfect albedo; let's chop the signal again, for that, and we still get a decent wide-field radar at roughly 6LS, or just under an AU. In a single shuttle mission, one could, in theory, put up a 10m dish...

Perhaps it's time to write to Virgin Galactic, and see if they want to put NASA to shame on the "Asteroid Threat" scene, too... Oh, wait, how are we going to power a 1kW radar system...


Citations:
Voyager II transmitter data: http://ringmaster.arc.nasa.gov/voyager/datasets/rss/vg2uinst.html
May 04 commo with both voyagers http://voyager.jpl.nasa.gov/news/profiles_dsn.html
 
Just an update on the requested tech specs: "...A basketball at 20 million miles" (Discover TV "Magazine", subject - earth-crossing asteroids) using optical telescopes.
 
Originally posted by Aramis:
So, a 5m dish (1/14th the size, so 1/196 the reception power), at 1kW, (40x the power, or 6x the signal range) should get half a light hour or so (roughly 23lm) of viable return...
Except you're using completely wrong math.

First of all, the 25W signal being picked up is directional -- per the citation you give, +36 dB or about 4000x as much, and +47dB for the shorter wavelength. So, this is equivalent to a 100 to 1000 kW omnidirectional signal.

Ok, now, our 5m receiver can pick up a 100kW omnidirectional signal at about 6000 light-seconds, and a 1 kW omnidirectional signal at 600 light-seconds. At a range of 1 light-second, it can pick up a signal of 2.8e-3W.

Now, let's assume we're using a 1 W omnidirectional transmitter, that the range is one light-second, and that the target has a radar cross-section of 10 square meters. A 1 ls bubble has a surface area of 1.1e18 m^2, so the average power level is 9e-19 W/m^2, and the 10 square meter target is reflecting 9e-18W. This is not exactly an omnidirectional signal; it probably reflects to only half the sky, and it slightly favors reflecting towards the emitter, so we'll call the total reflected signal 2e-17W, or 10^14 times less than the minimum signal our 5 meter dish can detect. In order to get the signal to a level that the 5 meter dish can detect, we need to reduce the range by a factor of 10^3.5, or to about 100 km.
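That arithmetic can be sketched as follows; the 1 W transmitter power is an assumption implied by the 9e-19 W/m^2 figure, and the factor of 2 for reflection toward the emitter follows the text:

```python
import math

c = 3e8
r = 1 * c                     # one light-second, in meters
P_t = 1.0                     # assumed omnidirectional transmitter power, W
rcs = 10.0                    # target radar cross-section, m^2

sphere = 4 * math.pi * r ** 2             # ~1.1e18 m^2
flux = P_t / sphere                       # ~9e-19 W/m^2 at the target
reflected = flux * rcs * 2                # ~2e-17 W equivalent omni source

# Minimum detectable omni transmitter power at 1 ls for the 5m dish (from above):
mds_equivalent = 2.8e-3
shortfall = mds_equivalent / reflected    # ~1.6e14

# Round-trip signal scales as 1/r^4, so closing range by f gains f^4:
detect_range_km = r / shortfall ** 0.25 / 1000
print(f"{detect_range_km:.0f} km")        # ~85 km ("about 100 km" in the text)
```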

Now, the reason radar is usable at interplanetary distances is because you can focus it. Using a 2.2GHz radar with a wavelength of 14 cm, and a 70m dish, you can focus the signal to 0.002 radians, which makes it approximately 3 million times brighter than an omnidirectional transmitter. Use a megawatt pulse instead of a kilowatt, and note that we're using a 70m dish instead of a 5m dish, and we've just increased the signal strength by a factor of 6e11. Also, note that we're typically interested in an asteroid with a radar cross-section of closer to 10^6m^2 than 10^1m^2 and we gain another 5 orders of magnitude, for 6e16x stronger, or the equivalent of a 1W signal, which won't be any problem to detect at 1 ls, and shouldn't be a significant problem at 10 ls.
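The focusing gain can be sanity-checked the same way, treating the beamwidth as simply lambda/D and the beam solid angle as beamwidth squared (crude approximations):

```python
import math

wavelength = 0.14        # m, for 2.2 GHz
D = 70.0                 # dish diameter, m

beamwidth = wavelength / D                  # ~0.002 radians
solid_angle = beamwidth ** 2                # rough beam solid angle, steradians
gain_vs_omni = 4 * math.pi / solid_angle    # ~3e6: "3 million times brighter"

# MW pulse instead of kW (1000x), and a 70m receive dish instead of 5m (196x area):
total_gain = gain_vs_omni * 1000 * (70 / 5) ** 2
print(f"{total_gain:.0e}")                  # ~6e11, matching the text
```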

Unfortunately, in order to do this, we're only looking at a tiny tiny part of the sky, which is completely useless for initial detection.

For numbers which are more linked to reality: a good minimum detectable signal for a radar receiver is 10^-15 to 10^-16W, though I think some astronomy systems go down to 10^-18W; we'll use that number from here on out. If you have a 12m dish (area 36pi) picking up an omnidirectional signal at 1 ls (area 3.6e17pi) a 1W omnidirectional signal at the transmitter works out to a 10^-16W signal at the receiver, which is 100x MDS. If we have an omnidirectional emitter, we would need to be outputting 10^15W to detect an object with a 10 m^2 RCS; however, a focusing effect of 50 dB is not unreasonable, which brings it down to 10^10W (and means we're only scanning 1/100,000 of the sky). Upgrading from 12m to 70m brings it down to 10^7W. Targeting an object with an RCS of 10^5 m^2 drops this down to 10^3W. Using a shorter wavelength can give more focusing.
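Those numbers can be put together in a short sketch; the 1e-18 W MDS and the 50 dB focusing figure are the assumptions stated above:

```python
import math

c = 3e8
r = 1 * c                          # one light-second
mds = 1e-18                        # assumed minimum detectable signal, W
A_r = math.pi * 6 ** 2             # 12m dish receive area, 36*pi m^2
sphere = 4 * math.pi * r ** 2      # surface area of the 1 ls sphere

# A 1 W omnidirectional beacon at 1 ls:
P_received = 1.0 * A_r / sphere
print(P_received / mds)            # ~100x MDS, as stated

# Transmitter power needed to detect a target of a given radar cross-section:
def needed_power(rcs, focus_gain=1.0):
    # Outbound spread, reflection off the RCS, then inbound spread to the dish.
    flux_needed = mds / A_r
    return flux_needed * sphere * sphere / (rcs * focus_gain)

print(f"{needed_power(10):.0e}")       # ~1e15 W with an omni emitter
print(f"{needed_power(10, 1e5):.0e}")  # ~1e10 W with 50 dB of focusing
```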

Still, this math should demonstrate why active sensors aren't used to find asteroids...
 
Hi folks !

Pretty interesting remarks!
So, using these stats, passive sensing would at least allow one to say "there is something".
What object "resolution" and positional resolution could be expected from such a passive scan?

Same question regarding active scanning.
For distance measurement, the active approach should be high-precision (+/- a few mm).
But what precision could be reached regarding angular position, which might be quite useful for targeting lasers?
 
Active sensor resolution depends on the wavelength of the active sensor; radar generally won't have sufficient angular resolution at space combat ranges, but simple optical sensors (or lidar) do have sufficient angular resolution, and if you add radar to determine range you have enough information to hit.
 
So you have to hit the target with a LIDAR beam first in order to get the information to hit it with the laser weapon?
 
robject,
You mention that I/O is about the same between your C64 and your Mac. I would disagree - your C64 probably couldn't render images anywhere near as well (though the C64 was really good, and this is coming from an Atari guy
) as your newer computer. It also couldn't handle nearly the throughput of data that a simple USB connection will handle, now. (I use a 20GB USB HDD often, and it is just as fast as an internal HDD.) But, we are moving so much more data now that the speed is not necessarily noticeable. We talk/fight/slander alot about UWPs on these forums - they were made to condense data down to an easy-to-use size. Now, we can transmit all the sector data (multiple worlds, satellite info, perturbed orbits, etc.) in a spreadsheet that doesn't take very long, at all, to open.

Frankymole,
I saw an article just last week concerning holographic video discs. 1GB transfer rates and 1TB of storage on a CD-sized format. Cool, huh.

Aramis,
You are right. Nobody is going to work too hard at a computer program that can target a ship faster than the turret can slew. But, of course, you might make a program that can track multiple bogies, and predict which one you should shoot next, after this turret has come around to kill the first guy.

Of course, the point of Science Fiction is (often) to assume that all the limits we currently know are not really limits. And, then, work from there!
 
Turret-slewing is hardly going to be your main concern anyway. At distances of 1 space hex (30,000 km) a ship moving at butt-hauling speeds is going to change its orientation toward the turret at what, a microradian per second, if that much? It was a concern for high rates of fire for large guns on WW2 ships, not for space combat. So long as the turret can slew as fast as ownship can rotate, I'm not concerned.
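The line-of-sight rate is easy to sketch; the 10 km/s transverse velocity here is just an illustrative pick, not a canonical figure:

```python
import math

r = 30_000e3          # one space hex, in meters
v_t = 10_000          # assumed transverse velocity, m/s (illustrative)

omega = v_t / r                            # line-of-sight rotation rate, rad/s
print(f"{omega * 1e6:.0f} microrad/s")     # ~333
print(f"{math.degrees(omega):.3f} deg/s")  # ~0.019 deg/s
```

Even at 10 km/s of crossing speed this is a tiny fraction of a degree per second, well below any plausible turret slew rate; a literal microradian per second corresponds to a crossing speed of only about 30 m/s.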

Computers have been put to more and more complicated tasks in the past 50+ years. Sure, ENIAC is infinitesimally powerful compared to your modern x86 CPU, but it was not put to comparable tasks either. On the EXACT same tasks, yes, you'll make your computations in a trillionth the time.

Straybow mentioned large databases from 30 years ago, and running the same databases today, which they sort lightning-quick. However, quite a few companies have expanded their databases from keeping track of a few variables to very inclusive ones. They no longer care so much about JUST the name, address, pay rate, and number of days of employment for their employees (and customers); now they have HUGE databases containing every gatherable aspect of a person's life, and if the company has grown, then there are more people to track, too.

FLOPS is as useful in comparing computer power as MHz is: useful for some things, but not a good indicator of all-around performance. The tasks we put computers to have evolved as much as the computers. You cannot play Doom on ENIAC, nor on a C64, but you could play it on a 386, IIRC (I never played Doom, nor used any x86 older than a P2). But now there's Doom 3, and you can't play it on a 386, nor a P2; you need a full-bore modern computer to handle it.

Why?

Because there's a lot more detail in it than there used to be. The biggest eater is graphics. The graphics are a lot prettier, more realistic. Sound eats a lot of performance too; the new sounds are much better, but we've about gotten to the peak of sound technology, in that we can't make it any more lifelike or take up less computer power, without getting REALLY expensive. The enemy AIs are smarter. They have more realistic reactions to being shot. The physics of the game world are much more detailed.

All this means that we are doing something new. I'm not talking necessarily revolutionary-new, but evolutionary change is as much a part of TL as is revolution. Why do you think that with each new generation of games/hardware that they now compare them based on delivered framerates? It is well known that MHz =/= performance, and it is equally true that the fastest MFLOPS =/= the fastest task completion, else we wouldn't see so many setups vying for first place in the same review article.

I don't think it's unfair at all to say computer power increases by a factor of 10 for every TL, so long as you don't think of it in the ultra-narrow-visioned ability to add a pair of numbers together, and rather take a broader view that computing is more a function of the new kinds of tasks that can be performed IN A REASONABLE TIME. Could you fold a protein with a C64? Maybe, given the right hardware hacks and a few centuries to wait. Can you do it with a modern computer? Sure, again, if you're willing to wait a few years for it to complete. A modern supercomputer can do it pretty quickly in comparison, but why do they break it down into smaller chunks and distribute it to people with nothing better to do with their wasted computer cycles? Because it would simply take too long to do it with even a supercomputer.

Maybe next TL, when raw computational power (FLOPS) has increased another 1000-fold, we'll be able to build supercomputers that fold proteins all by themselves, without needing to distribute the task; but the home user experience is only going to improve 10x (if that). Maybe we'll get those transparent computers Straybow has promised us, but I think that'll have to wait another TL.
 
Because there's a lot more detail in it than there used to be.
A good case, but it breaks down on a couple of points. What if someone says they don't need all the pretty sounds and graphics and auto-spell-checking, that they just want something that gets the job done? Also, navigation, gunnery, targeting, auto-evade, and numerous other tasks don't seem as computationally intensive as protein folding.

Maybe the Traveller universe runs on Windows.
 
Optical resolution example from recent program on Science Channel: "We are tracking items as small as a basketball at 20 million miles"

Modern radars can be built which can track 5-7mm objects out to a mile (CIWS can do this, based upon published data, and probably better).

Resolution limit is immaterial, provided it is smaller than the expected beam divergence at the same range...

Number of targets: again, not a terribly relevant issue; the F-15 can track 25 simultaneously with its 1980 computer set.

A modern computer can do far, far more.

Mind you, modern combat aircraft do not have modern computing technologies, due to implementation issues. Most are lagging 5 years behind, or worse. They don't NEED to be top of the line; they just need to do the job in time and under budget.

One of the biggest changes I foresee is a return to backplane style architecture, or something comparable. The Box ceases to be a complete computer, but instead a host for a number of dedicated specialized computers sharing access to a human interface point. Software and hardware again merging back to cartridges, albeit more permanently installed. Need to do 3D video, fine, you get a 3D video processor adhering to a standard codec.
Need a fire-control subsystem and a radar subsystem? Fine, plug them both into adjacent slots on the token-ring backplane (radar, then firecon; each time the token arrives, the radar passes a frame to the firecon).

Dedicated hardware is usually more efficient for large tasks. Multi-processing is usually better than single processor, once the 3 processor level is reached. Redundant arrays of inexpensive computers, often specialized.

Video is a key example. A dedicated graphics processor can do FAR more graphically than a significantly faster general processor.

Note, however, that sometimes specialized processors are not used. The Rocket video board was a popular Mac II peripheral; it was far from the most powerful video card available for the Mac II. It was, however, a second fully capable CPU, and could be software-configured to run a second instance of the computer.

But in general, the faster machines now use specialized sub-processing, leaving CPUs for more generalized tasks: to wit, video, audio, and I/O. It used to be common to hand off heavy math, too. (It probably will be again...)
 
Originally posted by Fritz88:
robject,
You mention that I/O is about the same between your C64 and your Mac. I would disagree: your C64 probably couldn't render images anywhere near as well as your newer computer (though the C64 was really good, and this is coming from an Atari guy). It also couldn't handle nearly the throughput of data that a simple USB connection will handle now.
Oh, no doubt; otherwise the C64 would still be on the shelves. (Aside: the Atari 800 seemed to have essentially the same hardware and capabilities as the C64. Does that mean they dropped the ball, or was Commodore lucky?)

I guess what I was thinking of is the nature of the I/O itself: a handful of ports, serial, daisy-chained, parallel, whatnot, and that's about it. The C64 and the modern PC are designed to do the same type of things.

It seems that, compared to the throughput of a Nortel DMS 250 Switch, they're piddly-squat. And beyond that, the mainframe is surely the One I/O Platform to Rule Them All. If rumor be true, Mainframes are all about throughput. Lots of it. They put PCs to shame.

Optical resolution example from recent program on Science Channel: "We are tracking items as small as a basketball at 20 million miles"

Modern radars can be built which can track 5-7mm objects out to a mile (CIWS can do this, based upon published data, and probably better).
OK, these are good numbers. What are the ratios, then, assuming the sensors are at TL8 or so? Approximating:

basketball = 0.3m @ 3mkm
= 1m @ 10mkm.

And, say, 5mm objects at 1.5 km?
= 0.005m @ 1.5km
= 1m @ 300km

Looks like a 5mm object may be trackable to 333,000 km... is that incorrect? How much 5mm dust is there in 333,000 cubic km of space?
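The scaling in question is just linear in angular size: if a sensor resolves size s0 at range r0, it resolves size s at range r0 * (s / s0). A sketch taking the quoted figures at face value (20 million miles is roughly 3.2e7 km, and a mile roughly 1.6 km):

```python
# Linear angular-size scaling, using baselines from the posts above:
# optical: a 0.3 m basketball at ~3.2e7 km; radar: 5 mm at ~1.6 km.
def max_range_km(s, s0, r0):
    return r0 * s / s0

print(max_range_km(0.005, 0.3, 3.2e7))   # optical: km at which 5 mm is resolvable
print(max_range_km(1.0, 0.005, 1.6))     # radar: km at which a 1 m object is resolvable
```

This baseline gives roughly 5e5 km for a 5 mm object optically, in the same ballpark as the 333,000 km in the post; the exact figure depends on which quoted number you scale from.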
 
I don't know exact numbers, but the Air Force tracks orbital objects down to 5mm with radar from surface radio telescopes on Maui; there are over 10,000 (yes, 10K) objects being tracked by USAF's Space Command in orbital space. Over 5,000 objects are tracked with optical telescopes from the same observatory. They track well past geosynch, but not into interplanetary space. The publicly announced radar resolution is 1cm; the items being tracked prove that to be larger than the actual resolution. (They can track the screws lost by shuttle crews.)

(Mixed sources, including "Commander's unclassified briefing to US Space Command", ca. 1995; "Guided Missiles", USAF Academy Press; several confirmations of data on Discovery and Science channels.)
 
Note: it is possible to track, with radar (or optical sensors), objects which are smaller than the resolution of the sensor, as long as the background noise is low. As an obvious example, the optical resolution of the eye is around 1 minute of arc, and Alpha Centauri has a width of about 0.002 seconds of arc, but Alpha Centauri is visible to the naked eye.
 
Got some real-world stats, which are maybe a bit more valid for combat purposes.

As the link below describes, the Eurofighter's "Captor" radar system enables it to target other fighters at distances up to 185 km, and jumbo-sized craft up to 360 km. Up to 20 separate targets can be tracked independently.

Naturally, there seems to be a large difference between scientific detection capabilities and actual usability for combat/navigation.

A vast amount of computing and database power apparently is used for the Non-Cooperative Target Recognition System (active) and the Passive Sensor System, both of which compare sensor input patterns to database-stored patterns in order to get information about the specifications and actions of the target.

Here's the link:
http://www.airpower.at/flugzeuge/eurofighter/sensorik.htm#captor
 
Cool. Does that mean, then, that a rough estimate of TL8 sensors (for combat, navigation) would be

Size 6? craft: 200 km
Size 8? craft: 400 km

(What's the size of a fighter? Under 10 displacement tons? And by 'jumbo', I assume you mean like a Boeing 767 -- a Size Code 8 ship, in other words)


It would be nice if sensing scaled nicely like that.

TL  Size  Range (km)
 8    3       1
 8    4      10
 8    5     100    Person-sized
 8    6     200    Grav fighter
 8    7     300    Fighter
 8    8     400    Scout, Trader, etc.
 8    9     500    Escorts
 8    A     600    Cruisers
 8    B     700    Dreadnoughts
 8    C     800    Monsters
 
Originally posted by robject:
Oh, no doubt; otherwise the C64 would still be on the shelves. (Aside: the Atari 800 seemed to have essentially the same hardware and capabilities as the C64. Does that mean they dropped the ball, or was Commodore lucky?)
In a way. Commodore went to the game stores and mass market. Atari, opposite what you would expect, went for business / home office customers through a network of dealers. My step-father carried them at his store and they sold well, but they were not taken very seriously when sitting next to all of those business-bred CP/M machines. Atari had the name and could have commanded the gamer market, but they didn't think that it was a big enough segment. The 800 was an amazing machine in its day.
 