AI-designed material produces super-resolution images using a low-resolution display
Holographic image displays are among the most promising technologies for next-generation augmented/virtual reality (AR/VR). They use coherent light illumination to reconstruct the 3D optical waves representing, for example, the objects within a scene. Such holographic displays could simplify the optical setup of a wearable device, resulting in a compact, lightweight design.
An ideal AR/VR experience would require high-resolution images displayed across a wide field of view, matching the human eye's viewing angle and resolution. The capabilities of holographic projection systems, however, are limited primarily by the small number of independently controllable pixels in existing image projectors.
In a recent Science Advances study, researchers reported that a transmissive material designed using deep learning can project super-resolved images from low-resolution displays. Researchers at UCLA, led by Professor Aydogan Ozcan, published a paper entitled "Super-resolution Image Display Using Diffractive Decoders," in which they used deep learning to spatially engineer transmissive diffractive layers at the wavelength scale. The result is a material-based physical image decoder that achieves super-resolution image projection as light passes through its layers.
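To make the idea concrete, the sketch below simulates the basic physics such a diffractive decoder relies on: a low-resolution pattern is illuminated coherently and then propagates through a stack of thin phase-modulating layers, modeled here with the standard angular spectrum method. This is a conceptual illustration, not the authors' implementation; the layer count, spacings, wavelength, and (random) phase profiles are hypothetical stand-ins for the deep-learning-optimized design described in the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex optical field a distance z through free space
    using the angular spectrum method (evanescent components suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical simulation parameters (illustrative only)
wavelength = 0.75e-3   # meters
dx = 0.4e-3            # grid spacing, meters
n = 64                 # simulation grid size
rng = np.random.default_rng(0)

# Low-resolution display pattern (8x8 pixels), upsampled onto the grid
low_res = rng.random((8, 8))
field = np.kron(low_res, np.ones((8, 8))).astype(complex)

# Two diffractive layers modeled as thin phase masks. In the actual work
# these phase profiles are optimized with deep learning; random here.
phase_layers = [np.exp(1j * 2 * np.pi * rng.random((n, n))) for _ in range(2)]

# Propagate layer to layer, then to the output plane; record intensity
for phase in phase_layers:
    field = angular_spectrum_propagate(field, dx, wavelength, 3e-3)
    field *= phase
output_intensity = np.abs(angular_spectrum_propagate(field, dx, wavelength, 3e-3))**2
```

With trained (rather than random) phase layers, the output intensity would reproduce a higher-resolution target image than the 8x8 input can encode directly, which is the super-resolution decoding effect the study demonstrates.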