Robotic perception is the module that endows a robotic system with the ability to understand and reason about its surrounding environment. This involves multimodal sensory data processing, environment modelling, and machine learning, which are the key areas for the perception group. Depending on application requirements, we work with different sensing modalities and collaborate closely with the other groups to meet goals such as robot localization, obstacle avoidance, object-aware path planning, safety, and human-robot interaction.
Active areas of research include optimal multimodal sensor fusion algorithms for a suite of commonly used sensors, such as 2D/3D lidar, RGB-D cameras, and sonar arrays, as well as new sensing modalities such as tactile elements and mm-wave radar, applied to object detection, obstacle detection, and environment understanding. Other active research topics include continuous perception under varying lighting and weather conditions, robust object detection and affordance analysis, interactive perception, and combining perception with task learning.
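As a minimal illustration of the kind of multimodal fusion described above, the sketch below combines two noisy range estimates of the same obstacle (e.g. from lidar and sonar) by inverse-variance weighting, so the more reliable sensor dominates the fused estimate. The sensor names and noise values are illustrative assumptions, not measurements from our platforms.

```python
import numpy as np

def fuse_measurements(means, variances):
    """Inverse-variance weighted fusion of independent range estimates.

    Each sensor reports a mean range and a noise variance; the fused
    estimate weights each reading by 1/variance, so a low-noise sensor
    (e.g. lidar) contributes more than a high-noise one (e.g. sonar).
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical readings: lidar says 2.00 m (var 0.01), sonar 2.30 m (var 0.25)
mean, var = fuse_measurements([2.00, 2.30], [0.01, 0.25])
print(mean, var)  # fused range is pulled close to the lidar reading
```

Real deployments extend this idea with temporal filtering (e.g. Kalman filters) and data association across sensors, but the weighting principle is the same.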
As robotic perception forms a closed loop with navigation and manipulation, we work closely with the other groups to deploy our solutions across application domains including hospitals, immigration checkpoints, perimeter and premises patrolling, and infrastructure and aircraft inspection.