Abstract: Visual SLAM enables a mobile robot to estimate its position in an environment in real time under suitable conditions and to build and update sparse or dense 3D maps of that environment. This capability improves the robot's perception of, and adaptability to, unknown and complex environments, allowing it to perform more demanding tasks. However, the localization and mapping accuracy and stability of visual SLAM, which relies on cameras as sensors, depend largely on the quality of the captured images, and existing visual SLAM algorithms struggle to work effectively in low-light environments. To address the reduced positioning accuracy and tracking loss that visual SLAM suffers in low light, this paper proposes RLMV-SLAM, a visual SLAM algorithm designed for low-light environments. The algorithm preprocesses input images with a lightweight neural network that enhances brightness, contrast, and color while suppressing noise. It also applies a map point supplement strategy, sparse bundle adjustment (Sparse BA), and a real-time incremental loop closure detection method to improve the accuracy and robustness of localization and mapping. The algorithm is verified experimentally on public datasets and self-collected datasets and compared with other mainstream visual SLAM methods. The results show that the proposed method increases the effective tracking time in low-light environments by more than 30% and significantly reduces the pose estimation error on public datasets, demonstrating its effectiveness and providing a reference for simultaneous localization and mapping in low-light environments.
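
To make the front-end role of the enhancement step concrete, the following is a minimal sketch in Python, assuming a PyTorch/OpenCV prototype. `TinyEnhancer`, its layer sizes, and `enhance_then_extract` are hypothetical stand-ins, not RLMV-SLAM's actual network or interface; an untrained toy network is used only to show where low-light enhancement sits relative to feature extraction in a visual SLAM front end.

```python
# Hypothetical sketch: enhance a low-light frame with a lightweight CNN,
# then extract ORB features as a visual SLAM front end would.
import cv2
import numpy as np
import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    """Hypothetical lightweight enhancement net (stand-in for the paper's model)."""
    def __init__(self, channels=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),  # output in [0, 1]
        )

    def forward(self, x):
        return self.body(x)

def enhance_then_extract(frame_bgr, net, orb):
    """Enhance one frame, then run ORB detection on the enhanced image."""
    # HWC uint8 image -> NCHW float tensor in [0, 1]
    x = torch.from_numpy(frame_bgr).float().div(255).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        enhanced = net(x)[0].permute(1, 2, 0).mul(255).byte().numpy()
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return enhanced, keypoints, descriptors

net = TinyEnhancer().eval()
orb = cv2.ORB_create(nfeatures=1000)
frame = np.random.randint(0, 40, (480, 640, 3), dtype=np.uint8)  # dark dummy frame
_, kps, _ = enhance_then_extract(frame, net, orb)
print(f"{len(kps)} ORB keypoints after enhancement")
```

In such a design, the enhancement network runs once per frame before tracking, so keeping it lightweight is what makes real-time operation plausible on a mobile robot.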