This short essay is also included in the 2024 update of my Survey of Alternative Displays, but I figured it was worth highlighting on my site as well.


The word hologram is probably the ultimate misleading term in the experience design and AV industries. True holograms, by definition, are recordings of light's interference patterns/wavefronts - typically captured as a photographic film process.

In the language of digital display tech, everything from light field to transparent to autostereoscopic to persistence-of-vision displays is branded as a "hologram" to suggest something more than 2D, even if that's (usually) all it is. Still, it can be helpful to look at the characteristics people most desire in a futuristic display. Can we identify commonalities among what exists as a "hologram" display today, and what might be needed to get to a next stage?


The characteristics of a display marketed as (heavy air quotes) "holographic" often include:

  • An image that seems to float against a real-world background, using parallax to trick us into a sense of depth (Pepper's ghost, persistence of vision, projection on scrim, transparent LCD/OLED, etc.)
  • Some sense of stereoscopic depth/alternate views of an object to tickle the brain into thinking it is a real object (autostereoscopic, light field displays)
  • Some combination of the above (swept volume displays, volumetric projection)

However, I often get the sense that to pass as a "real hologram" - to become the ideal holographic display - the tech needs to go much further into science fiction. Current displays are cool, but many fall well short of the common requests and needs below:

  • The ability to show a full black-to-white image in any ambient lighting environment, no matter how dark or bright.
  • Be able to scale from the smallest detail to a very large size while remaining viewable (handling changes in depth cues across that range would be a major challenge).
  • Be variably transparent to the environment, and not necessarily be contained within a rectangular frame or require looking into the display (like seeing a floating object from a full 180° or even 360° around the display surface).
  • Render things from diffuse points of light to crisp details.
  • React to their environment (more on this below).

The Promise of a Hologram

Holograms as a pop-culture concept seem to hold some sort of power or promise of a certain type of experience, but I've always been curious about what the actual end goal of using holographic displays might be.

The drive to market a digital display as a hologram seems to stem from a couple of factors. First, there's the "cool factor" of a novel display that enhances or spices up otherwise less exciting content. People might also want something more immersive or mysterious than a glowing rectangle on a wall, and selling something as a hologram feels like selling a previously unattainable goal - like cracking cold fusion, breakthrough battery technology, or anti-gravity. The future is now, etc etc.

Second, there is an element of holding a sort of promise of utility and function. Some research suggests that holograms can streamline information processing. Our brains evolved to see and process fully 3D or 4D content much faster than flat representations, and this is part of why the medical and defense industries look toward these displays for an enhanced ability to see and process complex scenes. A flat 2D map takes longer to parse and understand, and may contain less critical information than a fully 3D one.

Moving one step further into the future - I think that even if we were to achieve the famous Princess Leia and "Minority Report" holograms, some people wouldn't consider them "good enough". Someone would want something crisper, more colorful, more opaque, larger - the arms race for 2D displays certainly went that route, and even head-mounted/AR displays strive for a similar goal. To be fair, making most of the above happen would essentially require many new levels of understanding, not only of our physical reality, but also of our ability to manipulate and steer photons in mid-air. Additionally, with floating images we would probably need to add physical sensations as well - touch, texture, heat, pressure, etc.

What does the endgame of a "perfect holographic display" seem to be? Is it just a desire to essentially manipulate visual reality itself? Let's imagine we had truly holographic displays tomorrow that could manipulate realistic visual reality at room scale or larger (some AR headsets are already working toward this). As dystopian or sci-fi tin-foil-hat as it sounds, that seems like borderline dangerous technology for anyone to have in their possession - there are already concerns about AR layers disrupting the physical world, and unmediated reality itself being subject to a shift sounds wild. There are obviously utopian takes as well, but it would need some serious reckoning about the appropriate way to use the tech, much like our current discussions around ethical AI usage. Achieving anything close to that level of realism still feels decades off, but it can be fun/terrifying to think about where this could all be headed.

Moving from Output to Input

While much of the alternative displays survey focuses on displays as an output mechanism, I think that to truly reach the realism of a promised hologram, there is still nascent research to be done around the inputs to displays and their content. By inputs, I don't just mean HDMI cables, physical sensors, touchscreens, and other interactive elements, but rather the capture of the light field surrounding the display.

Most displays these days are essentially blind to the world around them. Some current displays and devices have built-in tricks for sensing ambient light and adjusting brightness or color, but that's about the extent of popular adjustment capabilities. Some displays today (like the Sony Spatial Reality Display) even rely on head or eye tracking to create their 3D illusion. If we got to the point of realistic holograms but still had to light the 3D scenes the same way we do today, I think everyone would quickly realize that the next hurdle to true realism is capturing the world around the display.
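To give a sense of how shallow today's "environment awareness" is, here's a minimal sketch of the kind of ambient-light-to-brightness mapping a display might do. The sensor interface, lux-to-brightness curve, and smoothing constant are all assumptions for illustration, not any real device's API:

```python
# Hypothetical sketch: mapping an ambient light sensor reading (in lux)
# to a 0.0-1.0 panel brightness, with simple exponential smoothing to
# avoid visible flicker between sensor polls. The curve and constants
# are illustrative assumptions, not a real display's behavior.
import math

def target_brightness(lux: float) -> float:
    """Map ambient lux to brightness on a log curve, since human
    brightness perception is roughly logarithmic; ~10,000 lux is
    treated as full daylight here."""
    lux = max(lux, 1.0)                            # avoid log(0)
    level = math.log10(lux) / math.log10(10000.0)
    return min(max(level, 0.05), 1.0)              # clamp to a usable range

def smooth(current: float, target: float, alpha: float = 0.1) -> float:
    """Exponentially smooth brightness changes between sensor polls."""
    return current + alpha * (target - current)

# Example: a dim room (~50 lux) vs. bright daylight (~10,000 lux)
print(round(target_brightness(50.0), 2))      # 0.42
print(round(target_brightness(10000.0), 2))   # 1.0
```

This is roughly the whole story for most devices today: one scalar in, one scalar out - nothing about where the light is coming from, which is exactly what a reactive hologram would need.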

In an ideal scenario, the hologram should react to changes in environmental lighting conditions just as a regular physical object would. I should be able to shine a flashlight on it, cast a shadow onto it, bring it inside or outside, see myself in the reflections of shiny objects, see through transparencies, and have opaque parts block light. All of this would need to happen with extremely low latency.
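To make the flashlight idea concrete, here's a minimal sketch of how a sensed real-world light direction could re-shade a virtual object each frame, using basic Lambertian diffuse shading. The "sensed" light direction stands in for whatever an environment or light field sensor would actually report - it, along with the albedo and ambient values, is a hypothetical input:

```python
# Minimal sketch: re-shading a point on a virtual sphere from a sensed
# real-world light direction using Lambertian (N · L) diffuse shading.
# The sensed direction is a stand-in for real environment capture.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, albedo=0.8, ambient=0.1):
    """Diffuse intensity at a surface point: ambient + albedo * max(0, N·L)."""
    n = normalize(normal)
    l = normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, ambient + albedo * ndotl)

# A point on the sphere facing straight up, relit as a (pretend)
# flashlight moves from directly overhead to the side:
overhead = lambert_shade((0, 1, 0), (0, 1, 0))   # 0.9 - fully lit
side     = lambert_shade((0, 1, 0), (1, 0, 0))   # 0.1 - ambient only
print(overhead, side)
```

The shading math itself is trivial; the hard, unsolved part is the sensing side - recovering light directions, shadows, and reflections from the real room fast enough to feed a loop like this at display frame rates.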

Right now we have displays we send pixels to, and some may have sensors, but a full 360° light field sensor integrated into a light field display would make things that much more impressive. Showing a nice shiny rendered sphere floating in space that reacted to all real-world lighting cues would feel next level. XR Studios understand a bit of this technique in terms of using environmental LED volumes to cast 3D scene light onto the IRL subjects being filmed, but turning it all inward feels a bit far out with our current understanding. There are also examples of things like a fully spatially tracked flashlight being used to cast light or shadow onto a 3D scene - it's a clever trick and looks really cool, but it obviously still has its limitations. AR headsets are getting there in terms of reality capture as well, and I'm sure AI models will also help with some of this. While there has been research on light field cameras like the Lytro from years past, I haven't come across many projects that attempt to incorporate light field capture AND light field output.

Thanks for reading!