My research focuses mainly on perception and on sensor-based motion planning and control. What I love most is computer vision and its mathematics.
Perception and motion planning for UGVs
• Point cloud segmentation and 3D reconstruction using RGB-D and LiDAR sensors
• Sensor-based motion planning in challenging scenarios
• Robot dynamics/kinematics learning and control
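A classic building block of the point-cloud work listed above is segmenting the dominant plane (e.g. the ground) out of an RGB-D or LiDAR scan. The sketch below is a minimal, generic RANSAC plane fit in NumPy; it is illustrative only, not the pipeline used in these projects, and the function name and parameters are mine.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, seed=0):
    """Fit the dominant plane n.x + d = 0 to an Nx3 cloud via RANSAC.

    Returns (n, d, inlier_mask), where inlier_mask flags points whose
    distance to the plane is below `thresh` (illustrative defaults).
    """
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        # Sample a minimal set of 3 points and fit a candidate plane.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        # Count points close to the candidate plane.
        mask = np.abs(points @ n + d) < thresh
        if mask.sum() > best_inliers:
            best_inliers, best = mask.sum(), (n, d, mask)
    return best
```

The inlier mask gives one segment (the plane); segmenting the remaining points would proceed on `points[~mask]`.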
Computer vision and visual servoing for UAVs
During my time at Selex ES MUAS (now Leonardo), I designed and developed advanced real-time applications and applied-research projects for UAVs. In particular, one of these projects focused on near-real-time 3D reconstruction from a live video stream. See more on my LinkedIn profile.
Computer vision and visual servoing for mobile robots
• Intercepting a moving object with a mobile robot
• Appearance-based nonholonomic navigation from a database of images
Robot navigation and mapping with limited sensing
In this work, we characterize the information space of a robot moving in the plane with limited sensing. The robot has a landmark detector, which provides the cyclic order of the landmarks around the robot, and a touch sensor, which indicates when the robot is in contact with the environment boundary. The robot cannot measure any precise distances or angles, and has neither an odometer nor a compass. We propose to characterize the information space associated with such a robot through the swap cell decomposition. We show how to construct this decomposition through its dual, called the swap graph, using two kinds of feedback motion commands based on the sensed landmarks. See more in these papers:
• Learning combinatorial information from alignments of landmarks
• Using a Robot to Learn Geometric Information from Permutations of Landmarks
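As a small illustration of the sensing model above: if the landmark positions were known, the cyclic order reported by the detector would simply be the landmarks sorted by bearing around the robot. The sketch below is mine, not from the papers, and in the actual setting the robot never computes these angles; it only receives the resulting cyclic order.

```python
import math

def cyclic_order(robot, landmarks):
    """Return landmark indices sorted counter-clockwise around the robot.

    Models the landmark detector: only the cyclic (angular) order is
    reported, never distances or absolute bearings.
    """
    angles = [(math.atan2(y - robot[1], x - robot[0]), i)
              for i, (x, y) in enumerate(landmarks)]
    angles.sort()
    return [i for _, i in angles]

# Three landmarks seen from the origin.
print(cyclic_order((0.0, 0.0), [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]))
# → [2, 0, 1]
```

When the robot crosses an alignment of two landmarks, two adjacent elements of this cyclic order swap; those swap events are the combinatorial information from which the swap graph, and hence the swap cell decomposition, is built.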