My research focuses on perception, machine learning, sensor-based motion planning and control. What I love most is computer vision and its mathematics.
RGB-D SLAM, volumetric reconstruction and 3D incremental segmentation
PLVS is a real-time system that combines sparse RGB-D SLAM, volumetric mapping, and 3D unsupervised incremental segmentation. PLVS stands for Points, Lines, Volumetric mapping, and Segmentation. The underlying SLAM module is sparse and keyframe-based, and relies on the extraction and tracking of keypoints and keylines. Different volumetric mapping methods are supported and integrated into PLVS. A novel "reprojection" error is proposed for bundle-adjusting line segments: it stabilizes the position estimates of the mapped line segment endpoints and improves SLAM performance. An incremental segmentation method is implemented and integrated into the PLVS framework. PLVS is a work in progress.
The software will be fully released as open-source. More details soon!
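To make the line-segment error concrete, below is a minimal sketch of one standard formulation, in which the 3D endpoints are projected with a pinhole camera and penalized by their point-to-line distance from the detected 2D segment. The function names and the exact residual are illustrative assumptions; PLVS's actual formulation may differ.

```python
import numpy as np

def project(K, T_cw, P_w):
    """Project a 3D world point into the image with a pinhole model."""
    P_c = T_cw[:3, :3] @ P_w + T_cw[:3, 3]   # world -> camera frame
    p = K @ P_c
    return p[:2] / p[2]                       # perspective division

def line_reprojection_error(K, T_cw, P_w, Q_w, p_det, q_det):
    """Point-to-line residuals for a 3D segment (P_w, Q_w) against a
    detected 2D segment (p_det, q_det). Hypothetical formulation; the
    residual actually used in PLVS may be defined differently."""
    # Normalized 2D line through the detected endpoints: l = (a, b, c)
    l = np.cross(np.append(p_det, 1.0), np.append(q_det, 1.0))
    l /= np.linalg.norm(l[:2])
    # Signed distances of the projected 3D endpoints from that line
    d_p = l @ np.append(project(K, T_cw, P_w), 1.0)
    d_q = l @ np.append(project(K, T_cw, Q_w), 1.0)
    return np.array([d_p, d_q])
```

In bundle adjustment these residuals are minimized jointly with the camera poses and the endpoint positions. Note that the plain point-to-line distance lets the endpoints slide freely along the infinite line, which is exactly the kind of instability a modified endpoint reprojection error can remove.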
Perception and motion planning for UGVs
• Point cloud segmentation and 3D mapping using RGB-D and LIDAR sensors (a minimal segmentation sketch follows this list)
• Sensor-based motion planning in challenging scenarios
• Robot dynamics/kinematics learning and control
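As a rough illustration of the first item, here is a minimal RANSAC plane-segmentation sketch over a raw point cloud. It is only a didactic example, not the pipeline actually used; a real UGV stack would rely on libraries such as PCL or Open3D.

```python
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.02, rng=None):
    """Segment the dominant plane from an (N, 3) point cloud with RANSAC.
    Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Fit a candidate plane to 3 random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                    # degenerate (collinear) sample
        n /= norm
        # Inliers: points within `thresh` meters of the candidate plane
        inliers = np.abs((points - p0) @ n) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                 # mask of the dominant plane
```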
Learning algorithms for UGVs
• Online WiFi mapping using Gaussian processes (sketched after this list)
• Algorithms for learning models of UGV dynamics/kinematics (Gaussian processes and neural networks)
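To give an idea of the WiFi mapping item, here is a minimal sketch of GP regression over robot positions, with hypothetical data and kernel choices (not the project's actual setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: robot (x, y) positions and RSSI readings [dBm]
X_train = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 0.5], [1.0, 3.0]])
y_train = np.array([-40.0, -55.0, -70.0, -60.0])

# RBF kernel for spatial correlation + white noise for measurement noise
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel).fit(X_train, y_train)

# Posterior mean and standard deviation at an unvisited location
mean, std = gp.predict(np.array([[3.0, 2.0]]), return_std=True)
print(f"predicted RSSI: {mean[0]:.1f} dBm +/- {std[0]:.1f}")
```

The posterior standard deviation is what makes the approach attractive online: the robot can steer its exploration toward the locations where the map is most uncertain.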
Computer vision and visual servoing for UAVs
During my time at Selex ES MUAS (now Leonardo), I designed and developed advanced real-time applications and managed applied research projects for UAVs (CREX-B, ASIOB, SPYBALL-B). In particular, one of these projects focused on near real-time 3D reconstruction from a live video stream. See more on my LinkedIn profile.
Computer vision and visual servoing for mobile robots
• Intercepting a moving object with a mobile robot
• Appearance-based nonholonomic navigation from a database of images
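As a rough sketch of the appearance-based idea: the robot localizes itself by matching its current view against a database of images acquired during a teach phase, then uses the retrieved reference image as a visual goal for the nonholonomic controller. The descriptor and names below are hypothetical simplifications.

```python
import numpy as np

def global_descriptor(image):
    """Tiny global descriptor: downsampled, normalized grayscale patch.
    Assumes all images share the same resolution; real systems would use
    more robust features (e.g., ORB or learned descriptors)."""
    small = image[::16, ::16].astype(np.float64).ravel()
    return (small - small.mean()) / (small.std() + 1e-9)

def localize(current_image, database):
    """Return the index of the database image closest in appearance.
    `database` is a list of (image, pose_hint) pairs from a teach run."""
    d = global_descriptor(current_image)
    dists = [np.linalg.norm(d - global_descriptor(img)) for img, _ in database]
    return int(np.argmin(dists))
```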
Robot navigation and mapping with limited sensing
In this work, we characterize the information space of a robot moving in the plane with limited sensing. The robot has a landmark detector, which provides the cyclic order of the landmarks around the robot, and a touch sensor, which indicates when the robot is in contact with the environment boundary. The robot cannot measure precise distances or angles, and it has neither an odometer nor a compass. We characterize the information space associated with such a robot through the swap cell decomposition, and we show how to construct this decomposition through its dual, called the swap graph, using two kinds of feedback motion commands based on the sensed landmarks (a small simulation of the sensor model follows the paper list below). See more in these papers:
• Learning Combinatorial Map Information from Permutations of Landmarks
• Learning combinatorial information from alignments of landmarks
• Using a Robot to Learn Geometric Information from Permutations of Landmarks
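To make the sensor model concrete, here is a small simulation of the cyclic landmark ordering such a robot observes. The 2D coordinates are used only to simulate the detector; the robot itself never has access to them.

```python
import numpy as np

def cyclic_landmark_order(robot_xy, landmarks_xy):
    """Simulate the landmark detector: return landmark indices sorted by
    bearing around the robot. Only this cyclic order is available to the
    robot; no distances or absolute angles are ever measured."""
    deltas = np.asarray(landmarks_xy) - np.asarray(robot_xy)
    bearings = np.arctan2(deltas[:, 1], deltas[:, 0])
    return list(np.argsort(bearings))

landmarks = [(0.0, 2.0), (3.0, 1.0), (1.0, -2.0)]
# Crossing the line through landmarks 0 and 1 swaps them in the observed
# order; such swap events are what the swap cell decomposition records.
print(cyclic_landmark_order((5.0, 0.2), landmarks))  # [2, 1, 0]
print(cyclic_landmark_order((5.0, 0.5), landmarks))  # [2, 0, 1]
```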
Warning: the material (PDF or multimedia) that can be downloaded from this page may be subject to copyright restrictions. Only personal use is allowed.