In the latest camera research, there is a shift away from increasing megapixel counts and toward combining camera data with computational processing. This is not the Photoshop style of processing, where filters and effects are applied to a finished photo, but a radically new approach in which the incoming data may not look like an image at all. It becomes an image only after a series of computations, which often involve complex mathematics and modelling of how light travels through the scene.
This additional layer of computational processing frees us from the constraints of traditional imaging. We may not need cameras, in the conventional sense, at all. Instead we will use light detectors that, until a few years ago, we would never have considered for imaging. And they will be able to do remarkable things, like seeing through fog, inside the human body, and even behind walls.
Single-pixel cameras
The single-pixel camera relies on an incredibly simple principle. Conventional cameras use many pixels (tiny sensor elements) to capture a scene that is typically lit by a single light source. But you can also do things the other way around: capture information from many different light sources, or illumination patterns, using just a single pixel.
To do this you need a controllable light source, such as a simple data projector. It illuminates the scene one spot at a time, or with a sequence of patterns. For each spot or pattern, you measure the total amount of light reflected back, and then combine all of the measurements to reconstruct the final picture.
The disadvantage is that it takes many illumination spots or patterns to form a single image, where a normal camera needs only one exposure. In exchange, this technique lets you build cameras that would otherwise be impossible, such as ones operating at wavelengths beyond the visible spectrum, where good single detectors exist but cannot be made into multi-pixel cameras.
These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.
Even light particles (photons) that have never interacted with the object you want to photograph can be used to create images. This exploits the concept of “quantum entanglement”, where two particles are connected so that whatever happens to one also happens to the other, even if the particles are far apart. It opens up fascinating possibilities for imaging objects like the eye, whose properties may change under different lighting conditions. Does a retina, for example, look the same in light as in darkness?
Multisensor imaging
Single-pixel imaging is one of the simplest innovations in future camera technology, and it still relies on the traditional idea of what constitutes a picture. Beyond it, we are seeing a surge of interest in systems that collect large amounts of data while using traditional image-forming techniques for only a fraction of it, or not at all.
One such approach is multisensor imaging, in which multiple detectors are pointed at the same scene. The Hubble Space Telescope was a pioneer here, producing pictures by combining images taken at different wavelengths. You can now buy commercial versions of this kind of technology, such as light-field cameras that record both the intensity of incoming light and its direction of travel, which allows an image to be refocused after the picture has been taken.
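The refocus-after-capture trick can be sketched numerically. In this made-up one-dimensional “light field” (a grid of sub-aperture views, invented purely for illustration), a point at a given depth shifts between views by a disparity proportional to the aperture offset; shifting the views back into alignment and averaging brings that depth into focus, while other settings blur it:

```python
import numpy as np

# A toy light field: five sub-aperture views, each a 1-D "image".
# A point source at disparity 2 shifts by 2 pixels per unit of aperture offset.
width, true_disparity = 64, 2
views = {}
for u in range(-2, 3):                        # aperture positions
    img = np.zeros(width)
    img[width // 2 + true_disparity * u] = 1.0
    views[u] = img

def refocus(views, d):
    """Shift each view by -d*u and average (synthetic-aperture refocusing)."""
    acc = np.zeros(width)
    for u, img in views.items():
        acc += np.roll(img, -d * u)
    return acc / len(views)

# Refocusing at the true disparity re-aligns the point into a sharp spike...
sharp = refocus(views, true_disparity)
# ...while the wrong focal setting spreads it into five faint copies.
blurred = refocus(views, 0)
print(sharp.max(), blurred.max())  # → 1.0 0.2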
The Light L16 camera.
The next-generation camera will likely look like the Light L16. This camera features groundbreaking technology based on ten different sensors, whose data a computer combines into a single professional-quality image of roughly 50 megapixels that can be refocused and re-zoomed after capture. The camera itself looks like an exciting Picasso interpretation of a crazy cell-phone camera.
These are only the first steps in a new generation that will revolutionize the way we take and think about images. Researchers are also hard at work on solving the problems of seeing through fog and imaging deep within the body and brain. These techniques all rely on the combination of images and models that describe how light travels around or through different substances.
Artificial intelligence is another method gaining traction. Here, systems “learn”, through training on example data, to recognise objects in a scene. These techniques are inspired by the learning processes of the human brain and will likely play a significant role in future imaging systems.
Single-photon and quantum imaging technologies have also matured to the point where they can take pictures at extremely low light levels and record video at up to a trillion frames per second, fast enough to capture images of light itself travelling across a scene.
It may take some time for these applications to be fully developed. Still, now that we understand the underlying physics, we should be able to solve these and other challenges by combining new technology with computational ingenuity.