An illustration of the interior of a driverless car, rendered in purples, blues and reds.

Travelling light: The technology behind the 3D vision of the future

When we think about future technologies, our mind’s eye conjures up vast and complex scenarios, where even the most far-fetched ideas become real and take on lives of their own. Take Virtual and Augmented Reality, for example. They already have huge implications for the world, so it’s easy to overlook the many thousands of research hours and technologies that go into bringing such grand concepts to life – and continually improving them. Everything begins in the lab, and breakthroughs like the world's first 1-megapixel SPAD image sensor are how the gap between science fiction and science fact closes.

Professor Edoardo Charbon of the École Polytechnique Fédérale de Lausanne (EPFL) is a distinguished figure in this field of research and has guided dozens of PhD students through their theses since joining EPFL in 2002. Most recently, he has been joined by Mr Kazuhiro Morimoto from Canon Inc. in Japan, who has authored a paper detailing this exciting development – one with real implications for applications that require 3D images, such as VR, AR and driverless vehicles.

“Our project was about creating very small pixels and very large arrays of such pixels in a technology known as ‘single photon avalanche diode’ or SPAD,” Prof. Charbon explains. “In 2003 we were the first to demonstrate SPAD in CMOS technology. This was a revolution because SPADs are highly sensitive to single photons, but they are also very fast and therefore can detect very fast events. This is particularly useful for applications involving timing.”

But what is a SPAD image sensor?

Simply put, it’s a sophisticated image sensor, capable of detecting and counting photons – the basic units of light – incredibly quickly. To look at, it’s a semiconductor chip made up of pixels, each with its own electronics. When a single photon reaches one of these pixels, it frees an electron (a charged particle), which is then multiplied many times over (forming the ‘avalanche’ referenced in the name) to create one big electrical pulse. So it stands to reason that the more pixels are available, the more electrical pulses can be converted into digital form, and the greater the sensitivity in image capture and the precision in distance measurement.
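To make the counting idea concrete, here is a minimal toy model in Python – a sketch only, with an assumed detection efficiency and photon count rather than figures from the paper. Each photon either triggers an avalanche (one digital pulse) or is missed, and a larger array simply digitises more pulses per exposure.

```python
import numpy as np

# Toy model of a SPAD array as a grid of photon counters.
# All numbers here are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(seed=0)
PDE = 0.1  # assumed photon detection efficiency (fraction of photons that trigger an avalanche)

photons = rng.poisson(lam=50, size=(1000, 1024))  # photons landing on each pixel during an exposure
pulses = rng.binomial(photons, PDE)               # each detection becomes one digital pulse

print(pulses.sum(), "digital pulses read out from ~1 million pixels")
```

Doubling the number of pixels doubles the number of pulses available from each exposure, which is the intuition behind chasing the megapixel mark.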

A diagram of the 1-megapixel chip – a green 11mm x 11mm square, with a thin frame showing the row controllers on the left and gate/recharge binary trees on top and bottom. The chip is divided into two, with each magnified (right) to show the different positioning of the pixels in each section.
The 1-megapixel chip is divided into two sections of 1024 x 500, with each organised slightly differently.

Today’s breakthrough – tomorrow’s world

So, what does this mean in real terms? Well, there are any number of applications in varying stages of technological infancy that would benefit from the ability to construct 2D and 3D images using very large numbers of pixels. As already mentioned, we are all familiar with VR and AR, both of which require 3D vision in order to capture and reconstruct images or – as in the case of Augmented Reality – to create and accurately position objects that don’t exist. We already use VR and AR in areas such as healthcare, education, entertainment and design, and at a time when most everyday activities are undergoing huge transformational shifts, we can only expect their adoption in our day-to-day lives to accelerate.

Another application for which this technology holds huge potential is drones. Combining the two will give us the ability to better capture and reconstruct 3D images of the world, which is essential for global mapping and could also have great implications for navigation in the future. In the world of medicine, a kind of imaging called near-infrared (NIR) optical tomography is used to study brain activity; it requires the synchronisation of a laser with a detector capable of registering single photons at very high speed – a perfect match for Charbon and Morimoto’s 1-megapixel SPAD sensor.

But the applications that excite laypeople, the media and the scientific community alike are driver assistance and autonomous vehicles. The ability to accurately and swiftly detect obstacles and obstructions at a distance and in motion can only be achieved with 3D vision. When you drill down into the complexity of what is required, it becomes obvious that there are near-endless calculations that humans make almost automatically. Replicating human perception in 3D vision requires a perfect combination of clarity of vision, accuracy of depth perception and speed of reaction.

A LiDAR image of several objects (an Oski, a cup, a ball and three cars) represented in colours that represent their relative distances. Red for close up, blue for further away.
In this LiDAR image, captured by MegaX, the distance of each object is represented by colour, from red (close up) to blue (further away). There are also areas of blackness where an obstruction prevents photons from reaching the sensor. This is deliberate, and it proves that even when conditions are not perfect, the image can still be reconstructed with great accuracy.

How does distance measurement work and why does it matter?

The immense speed of the SPAD sensor is important in sensitive 2D imaging, but when applied to a 3D camera (as Professor Charbon and Mr Morimoto have done in building their ‘MegaX’ camera), it allows for very fast, very precise measurement of distance. And this is why the development of the 1-megapixel SPAD is so important.

Built into the chip is something called a ‘gate’, or shutter, which – as you would expect – opens and closes, allowing photons to be detected within a limited window of time. In the new 1-megapixel SPAD, the gate opens and closes in just 3.8 nanoseconds. To put this in context, one nanosecond is a billionth of a second. Opening and closing the gate at a precise moment as light returns to the sensor helps in the reconstruction of a 3D image, as Professor Charbon explains:

“Imagine that you have a flash of light. The light emits photons and the photons propagate towards a target, which are then reflected back. That round trip time is called ‘time of flight’ and is the time that the photons take to go from the light source and back. By placing the gate in a certain position, you can capture all the reflections that happen at the very precise depth.”
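As a back-of-the-envelope sketch (assuming nothing beyond distance being the speed of light multiplied by half the round-trip time), the time-of-flight arithmetic looks like this:

```python
C = 299_792_458  # speed of light in metres per second

def distance_m(tof_ns: float) -> float:
    """Distance to the target, given a round-trip time of flight in nanoseconds."""
    return C * tof_ns * 1e-9 / 2

print(f"{distance_m(10):.2f} m")  # a 10 ns round trip puts the target about 1.5 m away
print(f"{C * 3.8e-9 / 2:.2f} m")  # the 3.8 ns gate covers a depth slice of roughly 0.57 m
```

Stepping the gate through successive time windows sweeps that slice through the scene, which is how the depth information for a full 3D image is gathered.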

The result is that MegaX can capture an extraordinary 24,000 frames per second – a thousand times faster than a movie camera. And because each frame contains information gathered from photons passing through one million pixels, the reconstructed image measures distance to within a few millimetres – which is very good indeed. The trinity is completed by an extremely high dynamic range, so images can show very dark and very light objects simultaneously without saturation.
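To get a feel for those figures, some illustrative arithmetic (not numbers taken from the paper): light covers a two-millimetre round trip in under seven picoseconds, so millimetre precision implies picosecond-scale timing, and reading a million pixels 24,000 times a second means tens of billions of pixel readings every second.

```python
C = 299_792_458  # speed of light in metres per second

print(f"{2 * 1e-3 / C * 1e12:.1f} ps")  # round-trip time across 1 mm of depth: ~6.7 ps
print(f"{1 / 24_000 * 1e6:.1f} microseconds per frame")  # frame period at 24,000 fps: ~41.7
print(f"{1e6 * 24_000 / 1e9:.0f} billion pixel readings per second")  # ~24
```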

“All three are important, particularly in automotive,” says Prof. Charbon. “But this device can also construct multiple reflections and to do that you need to be able to detect more than one surface.” He gives the example of a driverless car that is moving towards glass. “You want the car to see that there is an obstacle and it happens to be transparent, but there’s also a second obstacle. But the car has to stop before the first obstacle.” Essentially, the camera can detect layers of obstruction, even when one of them is not clearly visible.

A black and white diagram used by Prof. Charbon and Mr Morimoto to demonstrate the megapixel capability of their sensor.
The world’s first megapixel image captured with a SPAD sensor. By resolving the individual lines (shown zoomed in), Mr Morimoto proved megapixel resolution.

As PhD research goes, Charbon and Morimoto took an exceptionally short amount of time to reach their goal. “Kazu (Mr Morimoto) managed to do the work that would take many people four years in just two and a half,” says Prof. Charbon, who also proudly and fondly jokes that his student “did not sleep very much”. Since publishing the paper in Optica, a scientific journal published by The Optical Society, they have been “flooded” with emails from other researchers in their field, interested in the camera for all manner of applications. The pair continue to be in daily online contact, working towards their next paper on the theory and practicality of scaling pixels down even further. Mr Morimoto has already demonstrated, for the first time ever, that pixels can be shrunk further still – to 2.2 microns – and currently holds the world record for the smallest SPAD-based pixel.

The implications are huge, and Prof. Charbon acknowledges that “the race is on” to change history through research. “These experiments show us, in action, things that we have known from Einstein for many years and that we can now reconstruct. This is truly scientifically exciting.”

The paper can be read in full at Optica, with a summary of the key takeaways available through Canon Inc.

Written by Marie-Anne Leonard

