Can Light’s Depth Perception Technology Be an Alternative to Lidar?

For the safe operation of autonomous vehicles, the perception system is critical. It is one of the fundamental pillars of autonomous driving, enabling the car to detect moving and static objects in the surrounding environment. Sensors play a major role in perception systems, and a combination of sensors, primarily camera, radar, and lidar, is usually used to give the car a view of its surroundings.


LiDAR, which stands for Light Detection and Ranging, uses lasers to see the surrounding environment. Most companies rely on it for its accuracy and precision. But cost is one of LiDAR’s major disadvantages; although it has come down significantly, it remains an issue. Jamming and interference are also potential problems that are likely to surface as the number of vehicles using the technology increases.


Against this backdrop, Light, a depth-sensing and perception technology company, has developed a Depth Mapping technology that could challenge lidar. Its camera determines a depth value for each pixel in the scene, allowing the software to create a three-dimensional map of the objects in the image.
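To make the per-pixel-depth claim concrete: once a depth value is known for every pixel, each pixel can be back-projected into 3D space with the standard pinhole camera model. The sketch below is illustrative only; the intrinsics (`fx`, `fy`, `cx`, `cy`) and the depth values are made-up assumptions, not anything from Light's system.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (meters) into 3D camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (h, w, 3) map of 3D points

# Toy example: a flat 2x2 depth map at 10 m with hypothetical intrinsics
pts = depth_to_points(np.full((2, 2), 10.0), fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

The resulting point map is exactly the kind of three-dimensional scene representation the article describes the software building from the camera's depth output.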

The company claims that its depth perception platform can provide long-range (1,000 meters, compared with roughly 200 for lidar), accurate sensing for ADAS and AVs. It says it can offer precise object detection, definition, and tracking over extended ranges, and in real time.

According to the company, the features of the technology being offered are:

  • The 3D perception system detects and perceives objects at ranges well beyond 100 m, with the ability to detect relevant objects beyond half a kilometer.
  • It uses multiple solid-state cameras to acquire data from the driving environment and build a detailed map of distances across the scene based on the physics of multi-optic perception.
  • Machine learning and signal processing are used to fully understand the environment and to identify and track objects within it. The system can also determine depth in flat, textureless regions and at object edges.
  • Computational algorithms are used to determine exact geometry and depth across the entire optical field of view.
  • Algorithms can also be used to customize baselines to a manufacturer’s specific requirements, and online calibration keeps the system working under any circumstances.
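The features above rest on classic multi-view geometry: depth comes from the disparity between cameras, and a wider baseline between cameras improves depth accuracy at long range. A minimal sketch of that relationship follows; the focal length, baseline, and matching-error figures are illustrative assumptions, not Light's actual specifications.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.
    A wider baseline B produces a larger disparity at a given distance,
    which is why customizable baselines can extend usable range."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=0.25):
    """First-order depth uncertainty: dZ ~ Z**2 / (f * B) * delta_d.
    Error grows with the square of distance, so range is limited by
    how precisely disparity can be matched."""
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

# Illustrative numbers (assumed, not measured):
z = depth_from_disparity(focal_px=2000.0, baseline_m=1.0, disparity_px=4.0)  # 500.0 m
err = depth_error(2000.0, 1.0, 500.0)  # 31.25 m at quarter-pixel matching
```

The quadratic growth of `depth_error` with distance is the basic reason long-range camera depth sensing demands wide baselines and sub-pixel matching, which is consistent with the article's emphasis on multi-optic arrays and precise calibration.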

At present, the company is testing its first complete prototypes, which use different numbers of cameras in the array, with a variety of focal lengths, to optimize the design. Its engineers are driving an unmarked white van around the San Francisco Bay Area. The prototype is still in stealth and may be unveiled later this year, along with some partnership announcements.

Light is better known for its mobile cameras in the smartphone industry, particularly for its sixteen-lens L16 camera, but in 2019 it bowed out of smartphones to focus on sensing systems for autonomous vehicles.

