This technology leverages deep learning and computer vision to deliver a generic video analytics platform that can learn and infer different human activities and behavioural patterns automatically. The pipeline streams images and automatically identifies abnormalities in human poses using a trained AI model. Identified use cases include monitoring of autistic children, specifically:
- identifying behaviours such as self-inflicted harm,
- recognising actions such as banging hard objects with bare hands, and
- detecting movements that indicate sudden running or dashing.
Technology Features & Specifications
The technology is based on a configurable Long Short-Term Memory (LSTM) network combined with a human-pose estimation AI model. The system tracks human skeletal positions across frame sequences. With its short-term and long-term memory components, the LSTM model has been validated in research as capable of solving problems with sequential dependencies.
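The short- and long-term memory behaviour described above comes from the LSTM's gating mechanism, which can be sketched in a few lines of plain Python. This is a toy, single-feature illustration only: the scalar weights, the `LSTMCell` class, and `run_sequence` are illustrative assumptions, not part of the actual platform, where the gates would be learned from labelled pose sequences.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class LSTMCell:
    """Toy single-feature LSTM cell with scalar gates (illustrative only).

    The weights are fixed placeholders for demonstration; a real model
    learns them from labelled skeletal-pose sequences.
    """

    def __init__(self, w_i=0.5, w_f=0.9, w_o=0.5, w_c=0.8):
        self.w_i, self.w_f, self.w_o, self.w_c = w_i, w_f, w_o, w_c

    def step(self, x, h, c):
        z = x + h                          # combine new input with short-term state
        i = sigmoid(self.w_i * z)          # input gate: how much new info to admit
        f = sigmoid(self.w_f * z)          # forget gate: how much long-term memory to keep
        o = sigmoid(self.w_o * z)          # output gate: how much state to expose
        c_new = f * c + i * math.tanh(self.w_c * z)  # cell state = long-term memory
        h_new = o * math.tanh(c_new)       # hidden state = short-term memory
        return h_new, c_new


def run_sequence(cell, xs):
    """Feed a sequence of scalar features (e.g. per-frame joint angles) through the cell."""
    h = c = 0.0
    for x in xs:
        h, c = cell.step(x, h, c)
    return h, c
```

In the full system, each timestep's input would be a vector of skeletal keypoints from the pose-estimation model rather than a single scalar, and the final hidden state would feed a classifier over behaviour labels.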
Potential Applications
- Monitoring of high-risk events in hospitals and nursing homes, e.g., patients falling or tripping.
- Monitoring of suspicious behaviors in retail stores and malls e.g., shoplifting, pickpocketing.
- Monitoring of public places for incidences of unlawful activities e.g., fighting.
- Monitoring of schools for suspicious activities e.g., bullying, extortion.
- Conducting and evaluating human skills-related training in sports and other jobs e.g., assessing correct running postures for optimal performance.
Customer Benefits
- scalable across different human activity scenarios and settings
- able to handle crowded environments
- supports live streaming from commercial CCTV cameras via standard protocols, e.g., Real Time Streaming Protocol (RTSP)
- able to infer multiple abnormal behavioural patterns within a single video feed
Make an Enquiry