
HDLs - Holodynamic Linked

I've suggested controlling air density and/or graviton manipulation. Bill doesn't quite like that idea.


S4,

Not exactly. I'm worried about HDLs using what Star Trek's techno-babble called "photonic matter" or some such nonsense(1). I don't want it said that HDLs create 3D objects that people can then physically interact with directly. I don't want holodecks.

What I do envision is what you're suggesting. The HDL projects a hologram of a roller ball. You could pass your hand right through it, except that the HDL also overlays a dense layer of air or uses gravitons or some other such "trick". That means your finger interacts physically with the air layer or graviton layer or whatever and does not interact physically with the holographic 3D object.

In the holodeck model, the HDL's 3D object has an inherent physical presence that you directly interact with. In the hologram-only model, which I believe to be the correct TL16 max OTU model, the HDL's 3D object has no inherent physical presence you interact with; instead, you physically interact with magnetic, graviton, or whatever fields that are overlaid on the 3D projection.

It's a small point but a telling one.


Regards,
Bill

1 - IIRC, there was a Trek episode in which "photonic" beings "invaded" the ship's holodecks believing the projections there were "real". Then again, there were probably several Trek episodes like that. Given the continual trouble the holodecks gave the various Trek crews, I don't know why the damned things were ever installed.
 
Not exactly. I'm worried about HDLs using what Star Trek's techno-babble called "photonic matter" or some such nonsense(1). I don't want it said that HDLs create 3D objects that people can then physically interact with directly. I don't want holodecks.

I'm with you, and agreed.

You may have skimmed through it, but I suggested, above, the idea of one of the systems going "out".

Say you're gripping a joystick, but the holo part of the system is damaged and goes out. You look down. You still feel the stick in your hand, but you see nothing. It looks as if you're gripping air. But, it still feels like you're holding the stick...

I think that'd make for some cool gaming.

It's the details, you know.
 
S4,

Not exactly. I'm worried about HDLs using what Star Trek's techno-babble called "photonic matter" or some such nonsense(1). I don't want it said that HDLs create 3D objects that people can then physically interact with directly. I don't want holodecks.

What I do envision is what you're suggesting. The HDL projects a hologram of a roller ball. You could pass your hand right through it, except that the HDL also overlays a dense layer of air or uses gravitons or some other such "trick". That means your finger interacts physically with the air layer or graviton layer or whatever and does not interact physically with the holographic 3D object.

In the holodeck model, the HDL's 3D object has an inherent physical presence that you directly interact with. In the hologram-only model, which I believe to be the correct TL16 max OTU model, the HDL's 3D object has no inherent physical presence you interact with; instead, you physically interact with magnetic, graviton, or whatever fields that are overlaid on the 3D projection.

It's a small point but a telling one.


Regards,
Bill

Err, my problem with this particular line of reasoning (and the entire you-can-feel-and-interact-with-the-objects-tactilely idea) is that the computational power necessary to seamlessly cause the 2 or 3 different systems to interact surely does exist in the OTU. Somewhat analogously, modern computers are able to seamlessly display high definition (are we to "3D" yet?) graphics and the accompanying sounds in any video game.

There's no reason at all to think that the holographic, gravitonic, or whatever other systems that actually allow you to "feel" and "interact" with the objects cannot be programmed and coordinated in such a way that the user actually perceives that they are manipulating and feeling the holographic image, rather than some other system.

One way to explain this whole thing away is to make the tactile system "weak". That is to say, it takes only a light touch, and any gross movement breaks the illusion and pushes your hand through the gravitonic force-feedback system. It would simply be unable to handle forces over a few newtons. This would prevent holodecks and still work as described.
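
(To make that "weak field" idea concrete, here's a rough sketch of how such a breakthrough threshold might behave. The figure of a few newtons and the names are invented for illustration; nothing in canon gives actual numbers.)

[code]
BREAKTHROUGH_NEWTONS = 3.0  # hypothetical limit: only a light touch is supported

def field_response(applied_force_newtons):
    """Return how the HDL's gravitonic feedback field reacts to a touch."""
    if applied_force_newtons <= BREAKTHROUGH_NEWTONS:
        # The field resists, so the finger "feels" the projected control.
        return "resists: the holographic control feels solid"
    # Gross movements overpower the weak field; the hand passes through
    # the hologram and the illusion breaks -- so no holodeck is possible.
    return "collapses: the hand passes through the projection"

print(field_response(1.5))   # a gentle touch -> feels solid
print(field_response(20.0))  # a shove -> illusion breaks
[/code]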

My .02
 
There's no reason at all to think that the holographic, gravitonic, or whatever other systems that actually allow you to "feel" and "interact" with the objects cannot be programmed and coordinated in such a way that the user actually perceives that they are manipulating and feeling the holographic image, rather than some other system.

I completely agree (of course, I do, based on what I've been saying up-thread).

Especially at TL 14-15. HDLs are supposed to be the best-of-the-best in the way of control units. They're the top-o-the-line technology.

I mentioned before that some TL 14 or 15 ships may not use HDLs in favor of DL units to save money. I'm not sure if this is a viable suggestion, though, as I haven't looked at the MT Ref's book to see what the cost savings is, if any. It may just be a tech thing.

Also, remember that Traveller tech has all but mastered the manipulation of the graviton via inertial compensators, grav drives on vehicles, grav belts, starship thruster plates, interior grav plates, and repulsor drives that keep entire cities aloft in the clouds. Heck, I suspect that ripping a hole in N-Space in order to cross over into J-Space probably involves manipulation of the graviton, too.

Gravitational manipulation could very well be at work, on a small level, on these high tech HDL control panels.





One way to explain this whole thing away is to make the tactile system "weak". That is to say, it takes only a light touch, and any gross movement breaks the illusion and pushes your hand through the gravitonic force-feedback system. It would simply be unable to handle forces over a few newtons.

I like this. This type of thing would be very interesting to explain to players. Maybe even put it in the game with something like this...

The PCs are looking for a new pilot. They're in a bar. They overhear one pilot tell another not to push too hard on the HDLs.

Instantly, they know which of the two pilots is more experienced.
 
Err, my problem with this particular line of reasoning (and the entire you-can-feel-and-interact-with-the-objects-tactilely idea) is that the computational power necessary to seamlessly cause the 2 or 3 different systems to interact surely does exist in the OTU. Somewhat analogously, modern computers are able to seamlessly display high definition (are we to "3D" yet?) graphics and the accompanying sounds in any video game.


Dean,

It seems my problem with HDLs not also being "trojan holodecks" is very hard to explain. I happen to be in complete agreement with what you wrote above. What I'm cautioning about is the "next step" so to speak. Let me try another analogy.

We've all used 2D touchscreens, right? They've become fairly ubiquitous over the last 15 years or so. They're part of ATMs, control systems, cellphones, all sorts of things. There's even a new PC on sale with a touchscreen display.

As you know, a touchscreen consists of a video display with a transparent touch sensor layered over it. When the display presents an object, you tap the touch sensor panel above that object. The positions of all the objects on the screen and your finger's position on the sensor panel are monitored by the computer. Those positions are compared and correlated, then the computer makes adjustments as necessary.

In the case of an ATM, your finger taps the screen over the X,Y location of the displayed "Withdrawal From Savings" box and the computer notes you wish to take money from your savings account and brings up the correct window to continue your transaction.

Here's the part of that example I want everyone to remember because it's the very small and subtle point I'm trying to make regarding HDLs. When the ATM user tapped the box to withdraw money from their savings account, they didn't actually tap the box. They placed their finger in the region of the withdrawal box(1) they saw; the computer then detected their finger's position, calculated that their finger was in the region of the withdrawal box, concluded they had chosen that box, and took appropriate actions.

Here's the first important point: The ATM user didn't physically touch the displayed 2D object. Remember that.

Instead, the computer sensed that the user's finger was near enough to the displayed 2D object and undertook the actions it was programmed to do when the 2D object was selected. The user didn't physically interact with the displayed object, they physically interacted with the touch sensor instead. Understand?
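
(If it helps to see it spelled out, here's a rough sketch of the kind of hit-test the ATM's computer performs. The button regions, coordinates, and names are all invented for illustration; a real touchscreen controller is of course more involved.)

[code]
# Invented button regions: label -> (x1, y1, x2, y2) in screen coordinates.
BUTTONS = {
    "Withdrawal From Savings": (100, 200, 300, 260),
    "Withdrawal From Checking": (100, 300, 300, 360),
}

def hit_test(finger_x, finger_y):
    """Return the label of the displayed box whose region contains the finger, if any."""
    for label, (x1, y1, x2, y2) in BUTTONS.items():
        if x1 <= finger_x <= x2 and y1 <= finger_y <= y2:
            return label  # the computer concludes this box was chosen
    return None  # the finger touched the sensor, but no displayed object

print(hit_test(150, 230))  # -> "Withdrawal From Savings"
print(hit_test(50, 50))    # -> None: nothing selected
[/code]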

Now let's jump to TL14 - 15 HDL control panels in Traveller. The holographic display system creates and displays a 3D object just as our ATM's video display creates and displays a 2D object. Just as a touchscreen overlaid the 2D object displayed on the ATM's video screen, a system of magnetic, gravitic, or whatever fields overlays the 3D object displayed by the HDL. When the HDL user attempts to touch the displayed 3D object, his fingertip physically interacts with the magnetic, gravitic, or whatever fields instead, just as the ATM user's finger interacted with the touch sensor and not the displayed 2D object.

Here's the second important point: The HDL user doesn't physically touch the displayed 3D object. Remember that.

Instead, and just as with the ATM, the HDL computer senses that the user's finger is near enough to the 3D object and undertakes the actions it is programmed to do when that object is selected. The user doesn't interact with the displayed 3D object; they interact with magnetic, gravitic, or whatever fields instead. Understand?

What's more, the user's finger interacts with the magnetic, gravitic, or whatever fields; if those fields weren't present, the HDL user's finger would simply pass through the displayed 3D object because it is made of nothing but light. The HDL user can't grab the 3D object because it is made of photons and not Star Trek's techno-babble "photonic matter".

That's the point I'm trying to make here. HDL panels do not create 3D objects out of photons that a user can then directly interact with in a physical manner. Instead, the user interacts with the sensor and feedback fields projected by the HDL panel in conjunction with the 3D objects. No fields, no interaction, no matter what the HDL displays.
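
(The same sort of sketch, extended to three dimensions and the overlay fields, captures the "no fields, no interaction" point. Again, every value and name here - the field radius, the coordinates, the function - is made up purely for illustration.)

[code]
import math

def hdl_contact(finger_xyz, object_center_xyz, field_radius_m, fields_active):
    """Report whether the HDL registers a 'touch' on the projected 3D object."""
    if not fields_active:
        # No fields, no interaction: the finger simply passes through light.
        return False
    # The computer senses the finger entering the field shrouding the object
    # and treats that as manipulation of the displayed control.
    return math.dist(finger_xyz, object_center_xyz) <= field_radius_m

ball = (0.0, 0.0, 0.0)  # center of the projected roller ball (metres)
print(hdl_contact((0.02, 0.0, 0.0), ball, 0.05, fields_active=True))   # True
print(hdl_contact((0.02, 0.0, 0.0), ball, 0.05, fields_active=False))  # False
[/code]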

In the case of Star Trek's holodecks, photons are somehow manipulated to produce objects that people can directly interact with in a physical manner. That's why I don't want HDLs in the OTU to work in the same way, people will take HDLs and quickly extrapolate holodecks from them.

We've plenty of examples of VR in canon. The group who "invented" HDLs even wrote about VR systems in their Vincennes article, but they also didn't write about holodecks, anything that worked like holodecks, or anything even resembling holodecks, despite the broadcast of ST:TNG being co-existent with MT's production run(2).

I've suggested that HDLs operate in a certain fashion so that the technological assumptions behind that operation cannot also be used as an argument for the TL16 maximum OTU to have holodecks. Traveller may have holodecks at tech levels beyond 16, but that's a question for another day.

I hope this rather lengthy post finally explains my position and the ideas behind it.


Regards,
Bill

1 - Some of the first touchscreens I worked with consisted of two sheets separated by one ten-thousandth of an inch. That's 0.0001 inches as a decimal.

2 - MT from 1986 to 1991 and ST:TNG from 1987 to 1994.
 
Sounds like there are two ways to go (if you don't like gloves ;)): either some form of repulsor tech that interacts with your hands, or some form of hypnosignal that fools your brain.

I suppose we shouldn't ask how the repulsor tech works...? :smirk:

Bill, excuse my dimness, but for all of your explanation, I can't see what aspect of holodecks you object to. Maybe it's because my Trek is largely limited to TOS, so I don't know much about the things.

You could make a holodeck out of the stuff proposed above. You would see objects, be able to handle them and interact with them, the 'objects' could even be humanoid.

What I'm missing is what makes this tech a non-holodeck from your POV? What can you do with a Trek holodeck, that you can't do with a HDL holodeck?
 
What I'm missing is what makes this tech a non-holodeck from your POV? What can you do with a Trek holodeck, that you can't do with a HDL holodeck?


Icosahedron,

I went to the Star Trek wiki to get the specific techno-babble terms I was looking for, just so there wouldn't be any additional confusion.

In Trek's holodecks, the crew doesn't interact with holograms as we know them. The Trek holograms are actually "a construction of photons, force fields, and holomatter created inside a holodeck, holosuite, or by another type of holographic projector". Please note the "Trekno-babble" term "holomatter".

Trek's holomatter itself is a partially stable substance giving the illusion of solid matter, held together by force fields created by hologenerators.

So unlike our holograms which are made up of photons exclusively, Trek holograms are made up of both photons and a partially stable substance called holomatter.

Traveller's HDLs use photons and force fields. We all agree on that.

Trek's holodecks use photons, force fields, and holomatter. The holomatter is the difference here. A subtle difference but a telling one.

HDLs create a 3D object with photons and then display it via holography. Manipulation of that object is done at one remove. You do not physically touch the 3D object because it contains only light. Instead, you touch the force fields around the 3D object. This is similar to an ATM. You don't physically touch the 2D withdrawal box display, you touch the touchscreen above it.

Unlike HDLs, holodecks create 3D objects with photons, force fields, and holomatter. The holomatter allows you to touch the 3D object directly and then physically interact with that object.

In HDLs, your finger interacts with force fields "shrouding" the 3D object and the HDL computer measures that interaction between your finger and the force field to make changes in the 3D object. In holodecks, your finger interacts with the 3D object directly and the computer measures the interaction between your finger and the 3D object to make the necessary changes. HDLs have an intermediate step, holodecks do not.

The reason I've been repeatedly attempting to explain this is that holodecks do not belong in the OTU and a mistaken idea about how HDLs and holodecks work could lead people to posit the existence of holodecks in the OTU. Holodecks, as they appear in Star Trek, have never been mentioned in the OTU. We've many examples of VR systems, but nothing resembling a holodeck.

What's more, the people who invented HDLs did so after holodecks appeared on Star Trek. If HDLs were simply small holodecks masquerading as control panels, DGP would have also written about larger Trek-style holodecks being used for entertainment and training purposes. They did not and that strongly suggests that they felt HDLs and holodecks were different things and used different technologies.

Holodecks might very well appear at TLs above 16. However, they've never appeared in the OTU and I don't want to give some bright light the tiniest excuse to further muddle canon. Canon is muddled enough, so we needn't add to the problem.

HDLs are not holodecks and HDL technology cannot be used to create holodecks.


Regards,
Bill
 
I went to the Star Trek wiki to get the specific techno-babble terms I was looking for, just so there wouldn't be any additional confusion.

Wow, you know it's gonna be a wild ride down the rabbit hole when you start a post with that sentence! :D
 
In a nutshell, what Bill is against is the HDLs producing 3D objects that are real objects (as with Star Trek's holodeck).

He's against the Terminator 2+ liquid metal idea, because that would produce a real object.

He's for the holographically generated 3D objects with tactile response provided by another system (gravitic manipulation, for example).

I'm on board with that, too. I think we all are, now, after investigating the HDLs in the MT tomes.

The question is: How is the tactile response generated?
 
I don't believe any OTU technology can create a "holographic" joystick that would feel like a physical one. This is the sticking point. It requires considerable matter (or mental) manipulation, and opens a big can of holoworms.

Buttons and switches are far simpler (eg it doesn't matter if your finger goes through them).

Joysticks give you holodecks. Buttons don't.
 
I could see a holographic representation that you could manipulate without feedback by the use of multiple sensors (motion, light, etc). Someone has already hacked the Wii to do this with a 2D screen, and MS' Surface is really just XP with several cameras to figure out what you are doing - it is not a touch screen at all.
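
(For what it's worth, here's a toy sketch of that sort of sensor-only tracking: several sensors each give a noisy estimate of where the hand is, the system fuses them, and compares the result against the displayed object. All readings and tolerances are invented for the example.)

[code]
def fuse(estimates):
    """Average several (x, y, z) sensor estimates into one hand position."""
    n = len(estimates)
    return tuple(sum(axis) / n for axis in zip(*estimates))

def is_grabbing(hand_pos, object_pos, tolerance=0.03):
    """Decide whether the fused hand position is close enough to 'grab' the object."""
    return all(abs(h - o) <= tolerance for h, o in zip(hand_pos, object_pos))

# Three noisy readings of the same hand from three invented sensors (metres).
readings = [(0.11, 0.20, 0.31), (0.09, 0.21, 0.29), (0.10, 0.19, 0.30)]
hand = fuse(readings)
print(hand, is_grabbing(hand, (0.10, 0.20, 0.30)))  # close enough -> grabbed
[/code]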

So a 3D manipulative system is do-able, I just don't buy the tactile feedback, even with projected force screens or what not. And in the case of piloting a ship - sorry, I want a physical controller that won't disappear if the power fluctuates. I prefer the ST style of a touch-sensitive board that is configurable by user and/or function for data entry/reading, but I prefer a physical device for actual ship steering (perhaps more than one as per current aircraft to control pitch, yaw, direction, etc. Hate to see the Hiver version...)

And 3D is good for a lot of things, but some things are better in 2D. I can see 3D for data visualization, topo-mapping, that sort of thing. For controlling a ship, 2D is probably faster to resolve into meaningful data (studies show that the analog speedometer is more effective at conveying your speed than a digital readout. Now that may just be due to what we are used to, and perhaps if you color-coded the digital readout (faster becomes more red or something) it may be more meaningful). But I still want a physical control to hold on to. Makes it easier to lean into the curves... :D
 
A somewhat tangential post: This has nothing to do with holograms, but it is something that might substitute for holosuites:

MindGames is a wholly owned subsidiary of ISMM Corporation of Trin/Trin's Veil. For several decades, ISMM, a cutting-edge computer software laboratory, has been experimenting with total-immersion computer simulations where the user experiences fully realistic stimulation of all five senses. Three years ago, they opened MindGames franchises on Glisten, Mora, Rhylanor, Trin, and several other GTL12 worlds in the Domain of Deneb. Here, visitors can enter highly realistic computer simulations offering a variety of adventures.

The original simulations ran with a 1-to-1 correspondence between game time and real time, so participants had to lie in special "experience tanks" designed to keep the body well-fed and healthy during the game if it was to last for more than half a day. Today, thanks to sophisticated psychological techniques used to stretch the time perception of the players, the simulations run with an average time-scale factor of 12-to-1 -- that is, for each hour spent in a MindGame, the player experiences roughly 12 hours of elapsed time. Experience tanks are still used for long games, but many games run for 4 to 12 hours (two to six days of game time), and these need only induction helmets.

This is from a Variant article I wrote for JTAS Online (that is, I meant it as a canon-compatible article, but Loren published it as a variant; I'm not sure why). It's based on a company mentioned in the Traveller section of a Thieves' World module. The article includes a number of adventure seeds related to MindGames.


Hans
 
Seemingly a tangent to the basic thrust of this thread, yet, it helps to explain the cautionary note that Bill is injecting into this...

Imagine being able to focus gravitic beams to produce a kind of proto-material, making virtual objects that are more or less "real" to the operator.

Now, imagine me as a player saying "I wish to invent the first prototype gravitic restraint system." Imagine, if you will, the good old-fashioned 18th century "stocks", where you bend over, place your neck in the neck groove on the lower board, your left wrist in the wrist groove of the lower board, and your right wrist in the other wrist groove. Then, snap the whole "gravitic stocks" construct down so it encloses the entire victim's neck in a circular area smaller than the head's diameter, yet larger than the neck's diameter. Our poor "victim" cannot escape from the gravitic stockade, and is immobilized in that location and position until released.

Now imagine such a field being mobile. It is placed in a brig such that you insert the victim to be held in the brig, and at any moment, in the flash of an eye, this gravitic field can be formed such that it immobilizes the target victim on the spot. The victim can't move because he's wrapped up by focused gravitic beams that won't let him move beyond the confines of the field-projected "protomatter".

One thing that could be a natural consequence of advanced technology might be something that interacts directly with the brain's own patterns. Perceived objects that do not exist in reality can be beamed directly into the cortex, so that the mind accepts the stimulation in the same coded fashion that the brain accepts REAL information from REAL senses in REAL hands. So the coded response to a pinprick in a real hand can be recorded by a computer, and a pattern beamed directly to the cortex might produce the same signal. Was that signal received by a real hand, or projected by the computer itself?

More on this topic in the next post.
 
Now for the problem that everyone is forgetting...

The controls we're discussing here are interfaces between the computer and the sapient being (almost said human, but there ARE aliens in the Traveller universe right?). Imagine the following situation...

You're grasping at a holographically generated control. The computer's sensors note the position of the hand and its motion, plot that motion, and respond as programmed. There is no real direct connection between the user's hand motion and the computer itself, right?

Now, what happens for error correction in that system? If you're moving a real-life steering column, your hand is held in place not just by your muscles, but by the interaction of your hand on the physical wheel itself. If you move your hand while gripping the wheel at a rate of turn of 1 degree per second, it will take you 60 seconds to move that wheel 60 degrees' worth of turn, right? But what if you meant to move that wheel at 3.5 degrees per second over 10 seconds' worth of time? That's a 35-degree turn you intend to make at a rate of 3.5 degrees per second. Your hand is held in place by the wheel itself as well as by your muscles moving the wheel in a sharply limited range of motion (i.e., the wheel can't move forward toward you, at an oblique angle up and to your left, and so on). Do you have enough muscle control to keep your hands at precisely the same spot at all times? Can you make a perfect circular motion with your hand without something to guide it?

Hand motions are seemingly simple, yet they are the product of feedback between the hand's actual motion, the eye's focus on that motion, and the tactile sensation of the hand on a given object. Place it on a physical wheel, and you can move your hands in the circular motion required to move that physical wheel. Now, try a simple trick...

Take a video camera. Move your hands as if you had an imaginary wheel in them, and turn that wheel. Close your eyes and turn that wheel again. Superimpose the footage of you moving the wheel the first time over the second sequence where you turned the wheel.

Want to bet that your hand motions will not be precisely the same, nor have the same precision of control over that imaginary wheel?

This is why I suspect that actual physical controls will be required in addition to simulated controls for a starship.

Imagine something worse occurring...

Your hands have no real support as they grasp the fake steering column. You're tired, and your arms are experiencing muscle fatigue. As a consequence, your tremors are read directly by the computer as instructions, but they are not intentional instructions. You sneeze, and because your hand interacted with the virtual controls, the ship does something you don't want it to do. Real physical objects, which require inertia to be overcome, might not respond to your sneeze with as much sensitivity as the virtual controls would.
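
(A crude numerical illustration of that worry, with entirely made-up numbers: without a physical wheel to steady it, every tremor of the hand is read straight into the commanded turn, so the 3.5-degrees-per-second, 35-degree turn from above arrives with error baked in unless the software deliberately filters it out.)

[code]
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def sensed_turn(intended_deg_per_s, seconds, tremor_deg):
    """Simulate the wheel angle a purely virtual control reads from an unsupported hand."""
    angle = 0.0
    for _ in range(seconds):
        # The sensors cannot tell intent from tremor; both become input.
        angle += intended_deg_per_s + random.uniform(-tremor_deg, tremor_deg)
    return angle

read = sensed_turn(intended_deg_per_s=3.5, seconds=10, tremor_deg=2.0)
print(f"intended turn: 35.0 degrees, turn actually read: {read:.1f} degrees")
[/code]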

Which brings me to my final thought on the topic...

While it looks neat to have a virtual control, the failure of any single component - a sensor, a gravitic focusing element, and so forth - will begin to degrade the control system itself. If you need thousands of elements to maintain all of the components of your control system, and a lot of software to run a single station, multiply that by the number of stations you intend to have aboard your ship. Then add in the fact that someone has to maintain these elements. Then toss in the fact that wherever focused gravitic fields can do the jobs that used to be done with real physical parts, they probably will, and you begin to see what can and will come into being in a Traveller universe as speculated upon by those in this thread trying to describe how the holographic controls work...

Moving parts will cease to exist except as gravitic fields designed to do the same job the physical components did.

How much energy is going to be required to make all those gravitic based objects?
 
I'm sorry.

I want a joystick.

No, I want 2 joysticks.

:)

Just not ST:TNG iPod touch screens. You never see people clean those buggers, do you? And my iPod needs cleaning all the damn time! Did Data clean his own screen, d'you think?
 
I don't believe any OTU technology can create a "holographic" joystick that would feel like a physical one. This is the sticking point. It requires considerable matter (or mental) manipulation, and opens a big can of holoworms.

I disagree. Traveller tech allows a person to strap on a grav belt and go zipping around. I don't have a problem thinking that holograms can be used as controls.
 
Two points:

First, I still don't see the operational difference between HDL and Holodeck. I'm trying to imagine a Turing Test: I'm in a room that appears to be a holodeck. How do I decide whether what I'm interacting with is 'HDL image + force field' or 'photonic matter'?
And if I can't tell the difference, why would someone accept one but vehemently oppose the other? It seems you don't object to Holodecks per se, but Photonic Matter, Bill. We would agree on that.

Second, As Hal and Andrew pointed out, the tech that produces tactile feedback on holograms opens a whole 'alternative use' can of worms, unless you can figure a rationale of why it doesn't. If we're exploring repulsor tech for HDL this needs sorting out.
 
Having read through the SOM, it only mentions HDLs simulating controls in one place (page 5), and even that does not spell out that they can produce controls with haptic feedback.

All other descriptions of the workstations mention physical control elements for things that need tactile feedback. So maybe "controls" is used a bit more generically and refers to hovering elements that respond to drag/drop gestures etc., without the haptic feedback.
 
So, to continue the thread...

How does the hand become fooled that it is touching something real? A hologram roller ball appears on the panel. How is it, when we touch it, that it feels like a real ball under our fingertips? What makes that happen?
I open the floor to you...

Why is Tactile Feedback even important? In a car or aircraft, it is used to convey information on outside forces acting on the vehicle, but in space ... with inertial compensation and artificial gravity ... there are virtually no external forces acting on the ship and most of the internal forces seem to be dampened from the operator's perception.

On a simple line, I can convey data of a certain complexity. On a two dimensional graph I can convey data of greater complexity than the simple line. On a 2D representation of a virtual 3D space (like a photograph or TV Screen) I can convey data of still greater complexity. An interactive 3D hologram would seem (IMHO) to represent an interface capable of conveying Data of still greater complexity.

Since Starship operations are clearly more like 'fly by wire' than direct control (even in the Apollo Era - more so today), it seems perfectly reasonable to manipulate a holographic object that responds to 'touch' without the need to simulate a tactile feedback. The Holographic interface is an advancement of the Star Trek-like flat displays, so the operators have long since abandoned the need to feel a switch 'click' or the resistance of turning a knob. Why would the 'Next Generation' interface require a nostalgic step back to 'virtual' 1950's technology?
 
Quite a few studies with Virtual/Augmented Reality show the need to provide some feedback. For certain actions this can be visual (a color change, etc.), but for some actions, specifically those that are fully or partially "out of sight", tactile feedback is needed. The same goes for certain manipulations; it's at least easier to "feel" something.
 