One of the main challenges in computer vision stems from the fact that visual data is highly ambiguous: for any given two-dimensional image, there are infinitely many hypotheses of real-world scenes that would explain what we see. In large part, this ambiguity can be traced back to the very mechanism of image capture: each pixel value recorded by a camera is an integral of the so-called plenoptic function over space, angle, wavelength and time. Only in the last 15 years have we seen the development of novel imaging modalities that sample the temporal, angular and spectral dimensions to produce high-dimensional plenoptic images, facilitating many vision tasks that are considered very hard on monocular 2D image data.
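The pixel-as-integral observation can be made explicit. The following is a sketch, not the paper's own notation: the symbols $P$ (plenoptic function), $w$ (a sensor/optics weighting term), and the integration domains are assumed for illustration.

```latex
% Sketch of a pixel value as an integral of the plenoptic function P
% over angle (\theta,\phi), wavelength \lambda and exposure time t.
% w denotes an assumed sensor response / aperture weighting.
I(x, y) \;=\; \int_{T} \int_{\Lambda} \int_{\Omega}
    P(x, y, \theta, \phi, \lambda, t)\,
    w(\theta, \phi, \lambda, t)\;
    \mathrm{d}\omega \,\mathrm{d}\lambda \,\mathrm{d}t
```

Each novel imaging modality mentioned above can be read as refusing to integrate over one of these dimensions: light-field cameras sample $(\theta,\phi)$, hyperspectral cameras sample $\lambda$, and transient imaging samples $t$.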
The research direction of transient imaging operates at a time resolution fast enough to resolve non-stationary light distributions in real-world scenes. It enables the discrimination of light contributions by the optical path length from light source to receiver, a dimension unavailable in mainstream imaging to date. Until recently, such measurements required high-end optical equipment and could only be acquired under extremely restricted lab conditions. By introducing a computational imaging technique operating on standard time-of-flight image sensors, we were, for the first time, able to "film" light in flight in an affordable, practical and portable way. Just as impulse responses have proven a valuable tool in almost every branch of science and engineering, we expect light-in-flight analysis to impact a wide variety of applications in computer vision and beyond.