Effective large-scale safety and security surveillance of public places and key installations is highly challenging, and requires continual innovation and improvement in technology to combat ever-changing and growing threat scenarios.
In the area of real-time sensing of safety and security threats from video streams, our research focuses on visual understanding of human and crowd actions, behaviors, and emotional states through 3D spatio-temporal analytics. We develop a unified supervised learning framework that learns from appearance, motion, posture, and trajectory to classify a wide range of actions and behaviors, from extreme actions such as human aggression to subtle cues such as negative emotional states. Our research also covers end-to-end anomaly detection based on an unsupervised learning framework, including the handling of high-dimensional data where similarity measures become ill-defined because distances between vectors tend toward a constant. This framework can learn and adapt online to detect potential safety and security events that are not predefined. We also work on metadata mining to discover insights and potential security threats from a wider perspective, such as across a network of cameras.
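The distance-concentration effect mentioned above can be demonstrated directly. The sketch below (illustrative only, not part of our framework) measures the relative contrast between the nearest and farthest random points from a query: as dimensionality grows, this contrast collapses, which is why naive nearest-neighbor similarity becomes ill-defined.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=500):
    """Relative contrast (d_max - d_min) / d_min of distances from a random
    query to n random points in [0, 1]^dim. As dim grows, distances
    concentrate around a constant and this contrast shrinks toward zero."""
    points = rng.random((n, dim))
    query = rng.random(dim)
    dists = np.linalg.norm(points - query, axis=1)
    return (dists.max() - dists.min()) / dists.min()

for dim in (2, 100, 10_000):
    print(f"dim={dim:>6}  contrast={relative_contrast(dim):.3f}")
```

In 2 dimensions the nearest and farthest points differ by a large factor; in 10,000 dimensions they are nearly equidistant, motivating alternative similarity measures or learned embeddings for anomaly detection.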
We develop capabilities in audio detection and classification, source localization, far-field spoken keyword detection, and speech recognition under noisy, attenuated, and reverberant conditions. Our aural analytics capabilities include multi-language speech recognition as well as large-scale real-time beamforming and localization using IoT acoustic sensor networks. Besides large-area surveillance, we also develop unique capabilities to monitor a specific subject or group of subjects of interest. For example, combining speech text and emotion with nonverbal behaviors such as micro-facial cues, expressions, and body posture can reveal much about a subject's internal state. Moving ahead, we plan to develop analytical capabilities to extract and integrate intelligence from other forms of sensor sources.
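To illustrate the kind of acoustic localization involved, here is a minimal frequency-domain delay-and-sum beamformer for a linear microphone array. It is a textbook sketch, not our deployed system: it steers the array over candidate far-field angles and picks the angle that maximizes output power.

```python
import numpy as np

def delay_and_sum_doa(signals, mic_positions, fs, c=343.0, angles=None):
    """Estimate the direction of arrival (degrees) of a far-field source.

    signals:       array of shape (n_mics, n_samples), one row per microphone.
    mic_positions: positions (meters) of the mics along a line, shape (n_mics,).
    fs:            sampling rate in Hz; c: speed of sound in m/s.
    """
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 181)  # 1-degree grid
    n_samples = signals.shape[1]
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for ang in angles:
        # Far-field propagation delay at each mic for this candidate angle.
        tau = mic_positions * np.sin(np.deg2rad(ang)) / c
        # Undo the delays in the frequency domain, sum coherently, take power.
        steered = spectra * np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
        powers.append(np.sum(np.abs(steered.sum(axis=0)) ** 2))
    return angles[int(np.argmax(powers))]
```

At the true source angle the per-microphone delays cancel, so all channels add coherently and the output power peaks; real deployments add robustness to reverberation, multiple sources, and spatial aliasing that this sketch omits.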
For translation and deployment, we have the capabilities to develop intelligent solutions that integrate different types of sensor modalities, and to configure system solutions of sensors, networks, and edge devices to address customers' specific requirements.
These solutions perform the security roles of Patrolling and Response, Forensics and Investigation, and Buddy Assistance, in collaboration with our security agencies. The design and development of these security robots require the effective integration of Artificial Intelligence, Robotics Manipulation and Navigation, Communication, and Sensor capabilities from our sister departments, as well as Advanced Edge Computing Hardware capabilities from our sister RI.
Robotic systems currently under conceptualization and technology development comprise the “Force Multiplier Robots”, multiple autonomous robots capable of collaborating with humans on security tasks such as patrolling, added presence, sensing, and response, and the “Forensics Robots”, with legged locomotion and rich tactile capabilities for handling and inspection at emergency sites, as well as dual-arm manipulation and remote tele-operation.