We offer a patented video analytics technology that opens up new possibilities for intelligence and surveillance tasks, especially in highly dense urban areas. Unmanned systems such as drones are nowadays equipped with a broad spectrum of cameras that allow wide areas to be monitored from the sky. Exploiting the potential of drones for surveillance is, however, a highly challenging task: a huge amount of video data is produced and needs to be assessed in real time. The sensors record hours of video in which only a fraction of the footage contains events of interest, and identifying such events manually in this volume of material is hard for any analyst. This is especially true for wide-area motion imagery (WAMI) sensors, which generate an even higher volume of data.
New capabilities are needed to automatically extract only the relevant information to be sent to the ground. Our software allows for the automatic tracking of objects and user-defined events in high volumes of video data recorded from the sky.
The technology is a software video-processing tool based on a smart algorithm that combines a chain of computer vision detectors with machine learning techniques. The software is tuned to process video streams collected by WAMI sensors installed on board aerial platforms (drones).
A WAMI sensor is an imagery sensor employing a combination of optical sensors (typically medium-resolution cameras) to achieve maximum coverage when mounted on an aerial platform. An example of a WAMI sensor is the CorvusEye® 1500 (from Harris Corporation), which records full-motion video from 4 RGB cameras, generating a stitched image of 13200 x 8800 pixels at a rate of 2 Hz. Depending on the platform's flight altitude, this sensor can cover an area of multiple square kilometres in one frame. This WAMI sensor can be mounted on aerial platforms such as medium-altitude long-endurance (MALE) UAVs, helicopters and aerostats.
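To put these figures in perspective, a back-of-the-envelope calculation shows why such a stream cannot simply be downlinked and screened by hand. The assumption of 3 bytes per pixel (uncompressed RGB) is ours for illustration; the sensor's actual on-board encoding and compression are not specified in this offer:

```python
# Raw data rate of a CorvusEye®-1500-class WAMI stream.
# Assumption for illustration: 3 bytes per pixel (uncompressed RGB);
# the sensor's real on-board encoding is not specified here.
width, height = 13200, 8800        # stitched image size in pixels
frame_rate_hz = 2                  # stitched frames per second
bytes_per_pixel = 3

pixels_per_frame = width * height                           # ~116 Mpx
raw_bytes_per_s = pixels_per_frame * frame_rate_hz * bytes_per_pixel
print(f"{pixels_per_frame / 1e6:.1f} Mpx per frame")        # 116.2 Mpx per frame
print(f"{raw_bytes_per_s / 1e6:.0f} MB/s uncompressed")     # 697 MB/s uncompressed
```

At roughly 0.7 GB/s of raw pixels, even a short mission produces terabytes of imagery, which is why relevant information must be extracted automatically rather than reviewed frame by frame.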
The newest-generation WAMI sensors are miniaturized and lighter, weighing below 35 pounds (~16 kg), such as the Redkite (from Logos Technologies). The performance is lower (50 Mpx at 2 Hz), but the low weight makes them deployable on smaller aerial platforms, such as tactical unmanned aircraft (tactical drones).
Our technology is optimized to maximize detection and tracking performance for moving vehicles and moving people in dense urban areas. With similar techniques, the tool can be adapted to detect other user-defined events of interest in WAMI video streams. For instance, the extracted vehicle movement can be used to detect more complex events, such as traffic jams and speeding vehicles.
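As an illustration of how a complex event such as a speeding vehicle could be derived from extracted tracks, the sketch below flags tracks whose ground speed exceeds a limit. The track format, units, and thresholds are our assumptions for this example, not the product's actual interface:

```python
from math import hypot

def segment_speeds(track):
    """Speeds (m/s) between consecutive (t, x, y) track points.
    Assumed format: time in seconds, positions in metres in a
    local ground frame (illustrative only)."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(hypot(x1 - x0, y1 - y0) / dt)
    return speeds

def is_speeding(track, limit_mps, min_hits=2):
    """Flag a track if at least `min_hits` segment speeds exceed the
    limit, suppressing single noisy position fixes."""
    return sum(s > limit_mps for s in segment_speeds(track)) >= min_hits

# Toy tracks sampled at 2 Hz (0.5 s spacing), as a WAMI sensor would.
fast = [(0.0, 0.0, 0.0), (0.5, 12.0, 0.0), (1.0, 24.0, 0.0), (1.5, 36.0, 0.0)]
slow = [(0.0, 0.0, 0.0), (0.5, 5.0, 0.0), (1.0, 10.0, 0.0)]
print(is_speeding(fast, limit_mps=13.9))  # 24 m/s > 13.9 m/s (~50 km/h) -> True
print(is_speeding(slow, limit_mps=13.9))  # 10 m/s -> False
```

Requiring several consecutive over-limit segments rather than one is a simple way to keep single noisy detections from triggering false alarms.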
The output is visualized on a GUI (see image) that allows the user to quickly navigate through the processed video data and to zoom in for a closer inspection of the target of interest. Additionally, the GUI allows third-party location information to be coupled in, so that the user can formulate location-based queries, for example to select routes around a particular region of interest on a map. Further queries can be made on other characteristics of the tracked vehicle, such as its colour or its “driving behaviour”. Finally, individual tracks can be selected and the corresponding raw video data extracted and played.
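A location-based query of the kind described, selecting routes that pass through a region of interest, could be sketched as follows. The track and region representations here are assumptions for illustration, not the GUI's real data model:

```python
def passes_through(track, bbox):
    """True if any (t, x, y) point of the track lies inside the
    axis-aligned region of interest (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    return any(xmin <= x <= xmax and ymin <= y <= ymax for _t, x, y in track)

def query_by_region(tracks, bbox):
    """Location-based query: keep only tracks crossing the region."""
    return {tid: tr for tid, tr in tracks.items() if passes_through(tr, bbox)}

# Two toy tracks; only "car_1" crosses the region (0..50, 0..50).
tracks = {
    "car_1": [(0.0, 10.0, 20.0), (0.5, 30.0, 40.0)],
    "car_2": [(0.0, 90.0, 90.0), (0.5, 95.0, 95.0)],
}
print(sorted(query_by_region(tracks, (0, 0, 50, 50))))  # ['car_1']
```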
(Courtesy of Harris Corporation for making available the CorvusEye® 1500 aerial images shown in this tech offer.)
Product (bundle of products/services):
The attractiveness comes from the uniquely high tracking performance on moving objects, detected in near real time from moving aerial platforms (see the 'Improvement on State of the Art' section).
Market size: still unclear
Market opportunities: see attached infographic
Improvement on State of the Art:
The smart algorithm has several advantages over other methods available in the literature: it does not depend on image motion stabilization, it counters the inaccuracy of the GPS data embedded in the video stream, and it can find matches even when the vehicle detector misses a detection. As a result, it is attractive to the software developers who build the readout displays (typically the same company that manufactures the sensor itself).
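To make the "missed detection" point concrete: a generic way to bridge short detector gaps in a track is to interpolate between the surrounding detections. The sketch below only illustrates that general idea; it is not the patented algorithm, and the frame-indexed detection format is an assumption:

```python
def fill_gaps(detections, max_gap=2):
    """Generic illustration (NOT the patented method): bridge short
    detector gaps in a track by linear interpolation.
    `detections` maps frame index -> (x, y); absent frames are gaps."""
    frames = sorted(detections)
    filled = dict(detections)
    for f0, f1 in zip(frames, frames[1:]):
        gap = f1 - f0
        if 1 < gap <= max_gap + 1:  # bridge gaps of up to `max_gap` frames
            x0, y0 = detections[f0]
            x1, y1 = detections[f1]
            for k in range(1, gap):
                a = k / gap
                filled[f0 + k] = (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return filled

# The detector missed frame 2; interpolation restores a plausible point.
track = {0: (0.0, 0.0), 1: (10.0, 0.0), 3: (30.0, 0.0)}
print(fill_gaps(track)[2])  # (20.0, 0.0)
```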
Our smart algorithms prove capable of detecting and tracking vehicles and persons in challenging multi-camera WAMI data, and of extracting additional meta-information on the “behaviour” of these objects. The algorithms can be adapted for applications in the fields of surveillance and traffic monitoring, and can form the basis for event and anomaly detection in the surveillance of large or complex areas; for instance, they can be used to automatically detect events such as traffic jams or speeding vehicles.
Additionally, the proposition can be enhanced if the information extracted from the WAMI sensor stream is fused with other sensor streams, for example for cross-cueing applications.