Figure 1 shows a generic flash LiDAR system in which the scene of interest is uniformly illuminated by an active light source, typically a wide-angle pulsed laser beam whose coverage area on a target plane depends on the field of view (FOV) of the beam. Uniform illumination can be achieved with an optical diffuser, which spreads the incoming laser energy equally over the coverage area, or by illuminating the scene with a predefined light pattern, for instance a matrix of light points covering the target area, where the number of points may be chosen based on the desired effective spatial resolution and the pixel count of the LiDAR sensor itself. Irrespective of the method used for uniform illumination, the radiance of light on the target comprises both a direct component and a global component [17]. The direct component is the direct illumination of the target area by the laser source, which results in a single reflection onto the sensor, while the global component may include indirect and/or multiple reflections caused by various physical phenomena. These could arise from inter-surface reflections, simply due to the nature and geometry of the different targets in the FOV, or from light propagating through a scattering medium, such as fog or cloud, resulting in ballistic and/or scattered photons.

Figure 1. Flash LiDAR operation.

Illuminating every point on a target scene using a patterned light source allows us to mitigate the effects of global components, thus minimizing errors in range estimation and target recovery by relying primarily on direct illumination [17,18]. However, it is important to note that light transport involves interaction with targets of diverse materials and geometries. While the direct component is easier to model, the global component, due to its aforementioned nature, is complicated to model. Prior work shows many efforts toward modeling and separating the direct and global components of light [17,18,19]. Analyzing this further is beyond the scope of this paper; therefore, only direct illumination from Lambertian targets is considered throughout. Furthermore, while the physical phenomena related to scattering media such as fog or cloud will not be discussed, a time-gating feature in the proposed architecture will be presented in Section 4.4 as a possible way to detect photons in such scenarios, particularly exploiting the advantages of the DTOF sensing method.

As shown in Figure 1, a pulsed laser with wavelength λlaser and repetition rate flaser uniformly illuminates a target scene at distance d. The target is assumed to be Lambertian in nature, thus reflecting photons over a diffuse hemisphere of radius d [20,21]. The horizontal and vertical FOV, θH and θV respectively, define the footprint on the target plane through basic trigonometry, and the corresponding coverage area, Acov, over the scene is then written as,

Acov = 2d·tan(θH/2) × 2d·tan(θV/2) = 4d²·tan(θH/2)·tan(θV/2)

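The coverage-area relation above can be sketched numerically. The helper below is a minimal illustration, assuming a rectangular illumination footprint whose sides are set by the horizontal and vertical FOV half-angles; the function name and parameters are chosen for this example and are not from the original text.

```python
import math

def coverage_area(distance_m, fov_h_deg, fov_v_deg):
    """Illuminated coverage area Acov on a target plane at distance d.

    Assumes a rectangular footprint: each side is 2*d*tan(FOV/2),
    so Acov = 4 * d^2 * tan(thetaH/2) * tan(thetaV/2).
    """
    side_h = 2.0 * distance_m * math.tan(math.radians(fov_h_deg) / 2.0)
    side_v = 2.0 * distance_m * math.tan(math.radians(fov_v_deg) / 2.0)
    return side_h * side_v

# e.g. a 60 x 30 degree FOV at 10 m covers roughly 62 m^2
area = coverage_area(10.0, 60.0, 30.0)
```
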
The photons back-reflected from the target are collected through a receiving lens with diameter Dlens, assuming that the light propagation path does not undergo any scattering, after which they are detected by the sensor. Because the entire scene is illuminated at once, a sensor with an array of pixels is needed to reconstruct the image covered by the FOV [13,14]. A SPAD-based DTOF sensor has been considered for all analyses in this paper. A block diagram of the DTOF sensing principle is shown in Figure 2. The returning photons are detected by the SPAD, and a time-stamping circuit, typically a time-to-digital converter (TDC), measures the time of arrival of these photons with a certain resolution, TDCres.
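For a Lambertian target that fills the beam, the fraction of transmitted photons gathered by the receiving lens can be estimated with a simple free-space link budget. The sketch below assumes the common Lambertian approximation η = ρ·Alens/(π·d²), with no atmospheric scattering; the function and its parameters are illustrative, not from the original text.

```python
import math

def collection_fraction(d_lens_m, distance_m, reflectivity=1.0):
    """Approximate fraction of transmitted photons collected by the lens.

    Assumes a Lambertian target filling the beam and a lossless path:
    eta = rho * A_lens / (pi * d^2), where A_lens = pi * (Dlens/2)^2.
    """
    a_lens = math.pi * (d_lens_m / 2.0) ** 2
    return reflectivity * a_lens / (math.pi * distance_m ** 2)

# e.g. a 25 mm lens at 10 m collects only ~1.6e-6 of the light
eta = collection_fraction(0.025, 10.0)
```

The d² falloff in the denominator is why photon-counting detectors such as SPADs are attractive at long range.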

Figure 2. Principle of a direct time-of-flight (DTOF) sensor.

A DTOF measurement is generally performed using TCSPC, where detected events are accumulated over multiple laser pulses shone onto the target. The recovered signal is a train of pulses represented as a histogram corresponding to the time-of-arrival of individual photons incident on the SPAD with a distinguishable peak centered around the target location [8,10,11]. The measured timing information (ΔT in Figure 2) is then combined with the speed of light, c, to estimate the distance, d, of the target from the sensor.
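The TCSPC procedure above can be sketched in a few lines: arrival times are binned at the TDC resolution, the histogram peak gives the round-trip time ΔT, and d = c·ΔT/2. The simulated photon stream and the 100 ps TDC resolution are assumptions for this toy example only.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
TDC_RES = 100e-12   # assumed TDC bin width: 100 ps

def estimate_distance(timestamps_s, tdc_res=TDC_RES):
    """Estimate target distance via TCSPC histogramming.

    Arrival times (relative to the laser pulse) are binned at the TDC
    resolution; the peak bin gives the round-trip time dT, and the
    range follows as d = c * dT / 2.
    """
    bins = np.floor(np.asarray(timestamps_s) / tdc_res).astype(int)
    peak_bin = np.bincount(bins).argmax()
    dt = (peak_bin + 0.5) * tdc_res  # center of the peak bin
    return C * dt / 2.0

# toy scene: a target at 5 m (round trip ~33.4 ns) with timing jitter,
# plus uncorrelated background counts spread over the full range
rng = np.random.default_rng(0)
true_dt = 2 * 5.0 / C
signal = rng.normal(true_dt, 50e-12, size=2000)  # jittered returns
noise = rng.uniform(0.0, 100e-9, size=500)       # background photons
d_est = estimate_distance(np.concatenate([signal, noise]))
```

Accumulating over many laser pulses is what makes the signal peak stand out against the uniformly distributed background counts.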
