The Semantic Depth Prediction System (SDPS) is a vision-based navigation solution that uses a single monocular camera to reconstruct a 3D scene, fusing object detection, semantic segmentation, and depth estimation for indoor navigation in GPS-denied environments. This eliminates the need for the expensive, heavy sensors such as LiDAR that are commonly used in indoor autonomous drones and Autonomous Ground Vehicles (AGVs). By relying on a monocular camera, the solution allows indoor drones to be cheaper and smaller, enabling safer operation in confined indoor spaces.
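To illustrate the fusion idea, the sketch below back-projects a per-pixel depth map and a semantic label map into a labeled 3D point cloud using the standard pinhole camera model. This is a minimal illustration, not the SDPS implementation: the intrinsics (`FX`, `FY`, `CX`, `CY`) are toy values, and in practice the depth and label maps would come from learned networks rather than hand-written arrays.

```python
# Hypothetical pinhole intrinsics (toy values, not SDPS parameters).
FX, FY, CX, CY = 200.0, 200.0, 2.0, 1.5

def back_project(depth, labels):
    """Fuse a per-pixel depth map and semantic label map into a
    labeled 3D point cloud via the pinhole camera model."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip pixels with no valid depth estimate
                continue
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            points.append((x, y, z, labels[v][u]))
    return points

# Tiny 2x2 example: three valid depth readings, one invalid pixel.
depth = [[4.0, 4.0], [4.0, 0.0]]
labels = [["wall", "wall"], ["floor", "door"]]
cloud = back_project(depth, labels)
```

A navigation stack would then reason over the labeled cloud (e.g. treating "floor" points as traversable and "wall" points as obstacles).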
This makes the solution a cheaper alternative for commercial use, and it can be integrated with other available sensors to provide a more robust navigation solution for indoor operations.
The navigation solution is delivered as a set of modular APIs and can support any drone whose flight computer exposes navigational functionality. Components include:
The overall architecture is based on TensorFlow 1.2 and is easily extendable for future enhancements.
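As an illustration of the modular-API idea, one possible shape is a common interface that each perception module implements, with a thin navigation facade fusing their outputs. All class and method names below are hypothetical, and the module bodies are stand-ins for the actual networks:

```python
class PerceptionModule:
    """Common interface for vision modules (hypothetical, for illustration)."""
    def process(self, frame):
        raise NotImplementedError

class DepthEstimator(PerceptionModule):
    def process(self, frame):
        # Stand-in for a learned monocular depth network:
        # returns a constant 1.0 m depth per pixel.
        return [[1.0 for _ in row] for row in frame]

class SemanticSegmenter(PerceptionModule):
    def process(self, frame):
        # Stand-in for a segmentation network:
        # labels every pixel "floor".
        return [["floor" for _ in row] for row in frame]

class NavigationAPI:
    """Runs each registered module on a frame; a flight computer
    that exposes navigational functionality would consume the result."""
    def __init__(self, modules):
        self.modules = modules

    def perceive(self, frame):
        return {name: m.process(frame) for name, m in self.modules.items()}

nav = NavigationAPI({"depth": DepthEstimator(), "semantics": SemanticSegmenter()})
result = nav.perceive([[0, 0], [0, 0]])  # dummy 2x2 grayscale frame
```

Keeping modules behind one interface is what makes the system easy to extend: a new detector or a different depth network can be swapped in without touching the navigation layer.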
Drones are increasingly deployed in outdoor environments, where GPS has been the key solution for localization and mapping in autonomous flight. However, autonomous drone systems for indoor localization and mapping are still maturing, with most systems still under research and development.
Unlike existing autonomous indoor systems, which rely on expensive sensors that are costly for commercial businesses to operate, our technology is cost-effective and can be integrated with other available sensors.
Applications include (but are not limited to):
Benefits of autonomous UAVs for indoor use:
Many indoor localization techniques rely on sensors such as lasers, sonars, or computer vision for navigation, since GPS is unavailable. Highly accurate autonomous systems, however, tend to be heavy and/or to demand substantial processing power.