TomcatHeavy
New Member
One thing that I haven't seen brought up is that the human eye can only focus at one distance at a time.
Regardless of how elaborate this proposed HUD is, it will never be in focus unless your eyes are focused on the two to three inches it sits from your face, which means everything beyond that will be formless and blurry (friends, the convention you're at, the trees you're running through while making a fan film, etc.).
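To put rough numbers on that (this is just a back-of-envelope sketch, assuming a display sitting about 6 cm from the eye and a scene effectively at optical infinity), the focusing power the eye would need for each is:

```latex
% Back-of-envelope focus calculation (illustrative values, not measurements).
% The optical power needed to focus at distance d is P = 1/d, in diopters.
P_{\text{visor}} \approx \frac{1}{d} = \frac{1}{0.06\ \text{m}} \approx 17\ \text{D}
\qquad
P_{\text{scene}} \approx \frac{1}{\infty} = 0\ \text{D}
```

A young adult eye can typically add something on the order of 10 diopters of accommodation at most, and it can only hold one focus value at a time, so symbology sitting on the visor surface and the world beyond it can never both be sharp.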
Real digital targeting systems overcome this in one of two ways: a fully 2D display, and what I'll call 'engineered positioning.'
So first, the 2D approach: these targeting systems are simply drawn in two dimensions alongside a visual representation of the target. Depth is eliminated entirely because everything is displayed on one screen, so your eyes focus at a single distance regardless of how near or far the target or the HUD symbology is supposed to be. Obviously this takes serious computing power, and it means giving up your natural sight: you're not using your eyes to see the target, you're watching a sensor's video feed of the target inside your helmet. Suits that do this would likely have completely solid helmets, or helmets without visors (with a sensor array mounted to the helmet in some fashion).
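As a rough illustration of that 2D approach (not a real targeting system; the camera index, colors, and hard-coded target box below are just placeholders), here is what compositing symbology onto a single flat video frame looks like in Python with OpenCV:

```python
# Minimal sketch of the "2D" approach: the wearer never looks through the
# visor at all; a camera feed and the targeting symbology are composited
# onto one flat screen, so the eye only ever focuses at the screen's distance.
import cv2

def draw_overlay(frame, target_box):
    """Draw a reticle and a target bracket on top of a single video frame."""
    h, w = frame.shape[:2]
    cx, cy = w // 2, h // 2

    # Center reticle -- it lives in the same 2D plane as the video itself,
    # so there is no focus conflict between symbology and scene.
    cv2.line(frame, (cx - 20, cy), (cx + 20, cy), (0, 255, 0), 1)
    cv2.line(frame, (cx, cy - 20), (cx, cy + 20), (0, 255, 0), 1)

    # Target bracket: in a real system this box would come from a tracker;
    # here it is just a hard-coded rectangle for illustration.
    x, y, bw, bh = target_box
    cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 0, 255), 2)
    return frame

def main():
    cap = cv2.VideoCapture(0)  # stand-in for a helmet-mounted sensor
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = draw_overlay(frame, target_box=(200, 150, 80, 80))
        cv2.imshow("helmet display", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
```

The key point is that the reticle, the target bracket, and the scene all live in the same 2D image plane, so the wearer's eyes only ever focus at the distance of the screen inside the helmet.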
And now 'engineered positioning.' A fighter cockpit is the ideal proof of concept for a functioning, 'natural vision' HUD. The pilot uses his own eyes to observe both the HUD and the targets, but the HUD is positioned several feet in front of his face so it sits much more cleanly in his field of view. The placement of the cockpit also matters: all of the craft's force, thrust, and speed acts through the cockpit at all times, so there is no maneuver the craft can make that won't be felt at the cockpit, and thereby at the HUD. Finally, cockpit HUDs are pretty narrow compared to the full wrap-around visor of a helmet. The fighter's weapon systems are all pointed through the same 3D space the HUD occupies, so it doesn't really matter that the HUD is 'small'; its boundaries also help define the maximum useful envelope of the weapons it cues. And if the HUD were made larger and wider instead of sitting perfectly square to the pilot's seat, parallax error would go all to hell.
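It's also worth noting that real cockpit HUDs collimate the symbology so it appears at optical infinity, which is a big part of why both the symbols and the target stay in focus and why parallax stays manageable. A rough small-angle sketch (illustrative numbers only) of how far a symbol drifts off a distant target when the eye moves sideways:

```latex
% Small-angle parallax estimate: a symbol focused at distance L shifts against a
% distant target by roughly \theta = \Delta x / L when the eye moves sideways by \Delta x.
\theta \approx \frac{\Delta x}{L}
% Collimated cockpit HUD (L \to \infty): \theta \to 0, the pipper stays on target.
% Marker on a visor ~7 cm from the eye, head shifts 1 cm:
\theta \approx \frac{0.01\ \text{m}}{0.07\ \text{m}} \approx 0.14\ \text{rad} \approx 8^{\circ}
```

That is why a mark statically displayed on the visor surface can never stay lined up with anything down-range, while a narrow combiner bolted square to the seat can.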
But what I'm really trying to say is that investing in an in-helmet HUD that doesn't digitally project an image in a way that tricks your eyes into believing it exists in the 3D space in front of you (a la Google Glass) is probably going to be disappointing. It's a great idea, and it would be the envy of anyone who supposes themselves to have a Halo suit, no questions asked; unfortunately, a static display just won't perform the way you want it to. I'd rather see you spend your time, money, and energy on something that will fulfill your expectations, but best of luck with whatever you choose.