Sensors, Vol. 25, Pages 5448: MINI-DROID-SLAM: Improving Monocular Visual SLAM Using MINI-GRU RNN Network
Sensors doi: 10.3390/s25175448
Authors:
Ismaiel Albukhari
Ahmed El-Sayed
Mohammad Alshibli
Recently, visual odometry and SLAM (Simultaneous Localization and Mapping) have shown tremendous performance improvements compared to LiDAR and 3D sensor techniques. Unfortunately, these improvements come with numerous challenges, owing to system complexity and limited suitability for real-time environments. This paper presents an enhanced deep-learning-based SLAM system, primarily for monocular visual SLAM, built around a Mini-GRU (gated recurrent unit). The proposed system, MINI-DROID-SLAM, demonstrates significant improvements in accuracy and robustness through iterative refinement of the camera pose. Like the original DROID-SLAM, the system computes pixel-wise depth maps and refines them using bundle adjustment (BA). The architecture introduced in this research reduces runtime and computational complexity compared to the original DROID-SLAM network. The model is trained locally on a single GPU using monocular camera images from the TartanAir dataset. Training time and reconstruction accuracy, assessed using ATE (Absolute Trajectory Error), show robustness and high performance compared to the original DROID-SLAM.
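The abstract does not give the paper's exact recurrent update, but the "Mini-GRU" it names is commonly understood as a GRU variant whose gates depend only on the current input rather than on the previous hidden state, which cuts per-step computation. The following NumPy sketch illustrates that idea under this assumption; all class and function names here are illustrative, not taken from the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MiniGRUCell:
    """Sketch of a minimal GRU cell (assumed formulation): the update gate
    z_t and candidate state h~_t are computed from the input x_t alone,
    removing the h_{t-1} terms a standard GRU uses inside its gates."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(input_dim)
        # Two weight matrices instead of the standard GRU's six.
        self.Wz = rng.uniform(-s, s, (hidden_dim, input_dim))  # update-gate weights
        self.Wh = rng.uniform(-s, s, (hidden_dim, input_dim))  # candidate-state weights

    def step(self, x_t, h_prev):
        z_t = sigmoid(self.Wz @ x_t)   # gate from x_t only (no h_prev term)
        h_tilde = self.Wh @ x_t        # candidate state from x_t only
        # Convex blend of previous state and candidate, as in a standard GRU.
        return (1.0 - z_t) * h_prev + z_t * h_tilde

def run_sequence(cell, xs, h0):
    """Unroll the cell over a sequence of feature vectors."""
    h = h0
    for x_t in xs:
        h = cell.step(x_t, h)
    return h
```

Because the gates no longer depend on the previous hidden state, each step is cheaper, which is consistent with the paper's claim of reduced runtime relative to the full recurrent update operator in DROID-SLAM.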
Source: www.mdpi.com