Fusing Direct and Indirect Visual Odometry for SLAM: An ICM-Based Framework

Jeremías Gaia, Javier Gimenez, [+3 authors], Fernando Ulloa-Vásquez

2025 · DOI: 10.3390/wevj16090510
World Electric Vehicle Journal

TLDR

A method is presented that fuses visual odometry outputs from both direct and feature-based methods using Iterated Conditional Modes (ICM), an efficient iterative optimization algorithm that maximizes the posterior probability in Markov random fields, combined with uncertainty-aware gain adjustment, to perform pose estimation and mapping.

Abstract

The loss of localization in robots navigating GNSS-denied environments poses a critical challenge that can compromise mission success and safe operation. This article presents a method that fuses visual odometry outputs from both direct and feature-based (indirect) methods using Iterated Conditional Modes (ICM), an efficient iterative optimization algorithm that maximizes the posterior probability in Markov random fields, combined with uncertainty-aware gain adjustment, to perform pose estimation and mapping. The proposed method enhances the performance of visual localization and mapping algorithms in low-texture or visually degraded scenarios. The method was validated on the TUM RGB-D benchmark dataset and through real-world tests in both indoor and outdoor environments. Outdoor experiments were conducted on an electric vehicle, where the method maintained stable tracking. These initial results suggest that the technique could be transferable to electric vehicle platforms and applicable in a variety of real-world conditions.
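
For intuition, the sketch below illustrates the kind of ICM-style fusion the abstract describes: under a Gaussian (quadratic) posterior, the ICM update reduces to coordinate-wise conditional minimization, with each odometry source weighted by its inverse covariance (the uncertainty-aware gain). The function name fuse_icm, the 3-DoF pose parameterization, and the numbers in the usage example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_icm(x_direct, P_direct, x_indirect, P_indirect, n_iters=10, tol=1e-9):
    """Illustrative ICM-style fusion of two pose estimates (hypothetical helper).

    x_direct, x_indirect : (d,) pose vectors from direct and feature-based VO.
    P_direct, P_indirect : (d, d) covariances (uncertainty of each source).
    Each ICM sweep updates one pose component at a time to the value that
    maximizes the Gaussian posterior given all other components fixed.
    """
    W_d = np.linalg.inv(P_direct)      # information (weight) of direct VO
    W_i = np.linalg.inv(P_indirect)    # information (weight) of indirect VO
    H = W_d + W_i                      # combined information matrix
    b = W_d @ x_direct + W_i @ x_indirect
    x = 0.5 * (x_direct + x_indirect)  # initial guess

    for _ in range(n_iters):
        x_prev = x.copy()
        for k in range(x.size):
            # 1-D conditional optimum for component k (others held fixed).
            residual = b[k] - H[k, :] @ x + H[k, k] * x[k]
            x[k] = residual / H[k, k]
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Usage example: fuse a 3-DoF pose (x, y, yaw) from two VO sources,
# where the direct estimate happens to carry lower uncertainty.
x_d = np.array([1.02, 0.48, 0.10])
x_i = np.array([0.98, 0.52, 0.12])
P_d = np.diag([0.04, 0.04, 0.01])
P_i = np.diag([0.10, 0.10, 0.02])
print(fuse_icm(x_d, P_d, x_i, P_i))
```

Because the weights come from the inverse covariances, the source reporting lower uncertainty pulls the fused pose more strongly, which is the essence of uncertainty-aware gain adjustment; the paper's actual formulation over Markov random fields is richer than this two-node toy case.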