A vision system is an essential component of an autonomous vehicle. A real-time, accurate perception module that detects vehicles and pedestrians and estimates their distance from the camera helps a self-driving car plan safe maneuvers. Deep networks have greatly advanced high-accuracy detection, but small-object detection remains challenging. Most existing techniques rely on complicated networks or larger input images, which generally increases computation cost.
With our visual perception technology, a high-efficiency deep network detects vehicles and pedestrians and determines an accurate distance between camera and object using low-cost stereo image sensors, requiring only an inexpensive device with low computation cost. To achieve this, we utilize a modularized feature fusion detector (MFFD), a lightweight deep network model for road object detection, especially when objects are far from the camera and small in size. The proposed method is efficient in both model size and computation cost, making it applicable to resource-limited devices such as embedded systems for advanced driver assistance systems (ADAS).
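The distance estimation above relies on the standard stereo geometry relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched point. The sketch below illustrates this conversion; the focal length and baseline values are illustrative assumptions, not parameters of the actual system.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Convert a stereo disparity map (pixels) to metric depth (meters).

    Implements Z = f * B / d. Pixels with zero (or negative) disparity
    correspond to points at infinity and are returned as inf.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example with assumed camera parameters: a 700 px focal length and a
# 12 cm baseline. A distant pedestrian producing a 4 px disparity lies
# at 700 * 0.12 / 4 = 21 m; a 0 px disparity maps to infinity.
d = depth_from_disparity([[4.0, 0.0]], focal_length_px=700.0, baseline_m=0.12)
```

Because depth is inversely proportional to disparity, distant (small) objects produce only a few pixels of disparity, which is why accurate detection of small, far-away objects matters so much for range estimation.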
The stereo sensing technology can be deployed on GPU-enabled systems such as PCs, workstations, the TX-2, and the PX-2, and can be used in multi-sensor systems or front-facing ADAS cameras. It requires low computation power while maintaining high accuracy and performance.
Visual perception technology has high potential for advanced driver assistance systems (ADAS) and autonomous vehicles (AV). Its modular design allows integration with multiple sensor technologies to improve road safety for autonomous vehicles.
Furthermore, the technology can be applied to several other computer vision applications, such as: