Err, my problem with this particular line of reasoning (and the entire you-can-feel-and-interact-with-the-objects-tactilely idea) is that the computational power necessary to seamlessly cause the 2 or 3 different systems to interact surely does exist in the OTU. It's somewhat analogous to modern computers seamlessly displaying high-definition (are we to "3D" yet?) graphics and the accompanying sounds in any video game.
Dean,
It seems my problem with HDLs not also being "trojan holodecks" is very hard to explain. I happen to be in complete agreement with what you wrote above. What I'm cautioning about is the "next step" so to speak. Let me try another analogy.
We've all used 2D touchscreens, right? They've become fairly ubiquitous over the last 15 years or so. They're part of ATMs, control systems, cellphones, all sorts of things. There's even a new PC on sale with a touchscreen display.
As you know, a touchscreen consists of a video display with a transparent touch sensor layered over it. When the display presents an object, you tap the touch sensor panel above that object. The computer monitors both the positions of the objects on the screen and your finger's position on the sensor panel, compares and correlates them, and then acts accordingly.
In the case of an ATM, your finger taps the screen over the X,Y location of the displayed "Withdrawal From Savings" box and the computer notes you wish to take money from your savings account and brings up the correct window to continue your transaction.
Here's the part of that example I want everyone to remember, because it's the very small and subtle point I'm trying to make regarding HDLs. When the ATM user tapped the box to withdraw money from their savings account, they didn't actually *tap* the box. They placed their finger in the region of the withdrawal box(1) they saw; the computer then detected their finger's position, calculated that it was in the region of the withdrawal box, concluded they had chosen that box, and took the appropriate actions.
Here's the first important point: the ATM user didn't *physically touch* the displayed 2D object. Remember that.
Instead, the computer sensed that the user's finger was near enough to the displayed 2D object and undertook the actions it was programmed to perform when that object was selected. The user didn't physically interact with the displayed object; they physically interacted with the touch sensor instead. Understand?
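That compare-and-correlate step is simple enough to sketch in a few lines of code. Everything here is invented for illustration (the button names, coordinates, and screen layout are made up); the point is only that the computer tests a sensed finger position against regions it drew, never "the box" itself:

```python
# A minimal sketch of 2D touchscreen hit-testing with made-up
# screen regions. The sensor reports a finger position; the
# computer compares it to where it drew each box.

# Each on-screen button: name -> (x_min, y_min, x_max, y_max)
BUTTONS = {
    "Withdrawal From Savings": (10, 100, 200, 150),
    "Withdrawal From Checking": (10, 160, 200, 210),
}

def hit_test(finger_x, finger_y):
    """Return the button whose region contains the finger, if any."""
    for name, (x0, y0, x1, y1) in BUTTONS.items():
        if x0 <= finger_x <= x1 and y0 <= finger_y <= y1:
            return name
    return None  # finger tapped empty screen

print(hit_test(50, 120))   # inside the savings box
print(hit_test(300, 300))  # outside every box -> None
```

Notice the finger and the displayed box never meet anywhere except inside this comparison.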
Now let's jump to TL14-15 HDL control panels in *Traveller*. The holographic display system creates and displays a 3D object, just as our ATM's video display creates and displays a 2D object. Just as a touchscreen overlaid the 2D object displayed on the ATM's screen, a system of magnetic, gravitic, or whatever fields overlays the 3D object displayed by the HDL. When the HDL user attempts to touch the displayed 3D object, his fingertip physically interacts with those fields instead, just as the ATM user's finger interacted with the touch sensor and not the displayed 2D object.
Here's the second important point: the HDL user doesn't *physically touch* the displayed 3D object. Remember that.
Instead, and just as with the ATM, the HDL computer senses that the user's finger is near enough to the 3D object and undertakes the actions it was programmed to perform when that object is selected. The user doesn't interact with the displayed 3D object; they interact with magnetic, gravitic, or whatever fields instead. Understand?
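It's the same logic as the ATM sketch, with one more axis. A hypothetical sketch of how the HDL's field sensors might resolve a "touch" on a displayed 3D object (the object names, positions, and the near-enough radius are all made up for illustration):

```python
import math

# Displayed 3D objects: name -> (center_x, center_y, center_z, radius).
# Made-up values; the display knows where it projected each object.
OBJECTS = {
    "jump drive toggle": (0.10, 0.20, 0.30, 0.03),
    "fuel valve slider": (0.40, 0.20, 0.30, 0.03),
}

def field_hit_test(fx, fy, fz):
    """Return the 3D object the finger is 'near enough' to, if any.

    The finger never touches the object (it's only light); the field
    sensors report a position and the computer does the rest.
    """
    for name, (cx, cy, cz, r) in OBJECTS.items():
        if math.dist((fx, fy, fz), (cx, cy, cz)) <= r:
            return name
    return None  # finger is in empty air

print(field_hit_test(0.11, 0.21, 0.30))  # within the toggle's radius
```

Strip out the `OBJECTS` table and the fields, and the function has nothing to compare against, which is the whole point: no fields, no interaction.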
What's more, if those fields weren't present, the HDL user's finger would simply pass through the displayed 3D object, because the object is made of nothing but light. The user can't grab the 3D object because it is made of photons, not *Star Trek*'s techno-babble "photonic matter".
That's the point I'm trying to make here. HDL panels *do not* create 3D objects out of photons that a user can then directly interact with in a physical manner. Instead, the user interacts with the sensor and feedback fields projected by the HDL panel in conjunction with the 3D objects. No fields, no interaction, no matter what the HDL displays.
In the case of *Star Trek*'s holodecks, photons are somehow manipulated to produce objects that people can directly interact with in a physical manner. That's why I don't want HDLs in the OTU to work in the same way: people will take HDLs and quickly extrapolate holodecks from them.
We've plenty of examples of VR in canon. The group who "invented" HDLs even wrote about VR systems in their Vincennes article, but they didn't write about holodecks, anything that worked like holodecks, or anything even resembling holodecks, despite *ST:TNG*'s broadcast run overlapping *MT*'s production run(2).
I've suggested that HDLs operate in a certain fashion so that the technological assumptions behind that operation cannot also be used as an argument for the TL16-maximum OTU to have holodecks. *Traveller* may have holodecks at tech levels beyond 16, but that's a question for another day.
I hope this rather lengthy post finally explains my position and the ideas behind it.
Regards,
Bill
1 - Some of the first touchscreens I worked with consisted of two sheets separated by one ten-thousandth of an inch; that's 0.0001 inches in decimal.
2 - *MT* ran from 1986 to 1991; *ST:TNG* from 1987 to 1994.