I am working hard in my free time, and having fun with PLVS, to finalize the upcoming paper:
PLVS: An Open-Source RGB-D and Stereo SLAM System with Keypoints, Keylines, Volumetric Mapping, and 3D Incremental Segmentation
PLVS(*) is a real-time system that combines sparse RGB-D and Stereo SLAM, volumetric mapping, and 3D unsupervised incremental segmentation. PLVS stands for Points, Lines, Volumetric mapping, and Segmentation. The system can run entirely on the CPU or exploit available GPU resources for specific tasks. The underlying SLAM module is sparse and keyframe-based; it relies on the extraction and tracking of keypoints and keylines. Several volumetric mapping methods are supported and integrated in PLVS. A novel "reprojection" error is proposed for bundle-adjusting line segments. This error better stabilizes the position estimates of the mapped line segment endpoints and improves SLAM performance. An incremental segmentation method is implemented and integrated in the PLVS framework.
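The exact formulation of the proposed reprojection error is left to the paper. For context, a common baseline error for line segments in bundle adjustment is the distance of each projected 3D endpoint to the infinite line through the detected 2D segment; the sketch below illustrates that baseline with a simple pinhole model (all function names and parameters here are illustrative, not PLVS API):

```python
import math

def project(K, point_cam):
    """Pinhole projection of a 3D point in the camera frame.
    K is (fx, fy, cx, cy); returns pixel coordinates (u, v)."""
    fx, fy, cx, cy = K
    X, Y, Z = point_cam
    return (fx * X / Z + cx, fy * Y / Z + cy)

def line_reprojection_error(K, P, Q, p_det, q_det):
    """Baseline line-segment reprojection error (illustrative):
    distance of each projected 3D endpoint (P, Q) to the infinite
    2D line through the detected segment endpoints (p_det, q_det)."""
    p = project(K, P)
    q = project(K, Q)
    # line coefficients a*u + b*v + c = 0 through the detected endpoints
    a = q_det[1] - p_det[1]
    b = p_det[0] - q_det[0]
    c = q_det[0] * p_det[1] - p_det[0] * q_det[1]
    n = math.hypot(a, b)
    d_p = abs(a * p[0] + b * p[1] + c) / n
    d_q = abs(a * q[0] + b * q[1] + c) / n
    return d_p, d_q
```

Note that this baseline only penalizes deviation from the infinite detected line, so it leaves the endpoints free to drift along the line; a stabilizing term on the endpoint positions, as proposed in the paper, addresses exactly that degree of freedom.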
(*) You can pronounce it /plʌs/ (in Latin it would be read as “PLUS”, ‘U’ and ‘V’ were allographs).
PLVS is a work in progress. The software will be released as open source along with our upcoming paper very soon!
The following image shows some details of a 3D reconstruction: lines, normals, point cloud and segments.
Some of the following RGB-D videos were shown at the last TRADR Review (Yr4). No GPU acceleration was used in their making; what you see are real-time screen recordings. An Xtion camera was used to shoot the TRADR dataset.
The following videos show PLVS working in real time on the KITTI and EuRoC stereo datasets. PLVS processes stereo images directly, without the need for pre-generated depth maps.
PLVS for Circus Maximus Mixed-Reality Experience
I am currently testing my PLVS framework on an NVIDIA Jetson TX2 board which was kindly donated by NVIDIA Corporation.
More details about the work and the performance obtained on the TX2 board will be reported in the upcoming paper.