OmniShape: Zero-Shot Multi-Hypothesis Shape and Pose Estimation in the Real World

Katherine Liu, Sergey Zakharov, 4 Authors, Rares Ambrus

2025 · DOI: 10.1109/ICRA55743.2025.11128589
IEEE International Conference on Robotics and Automation

TLDR

This work proposes OmniShape, the first method of its kind to enable probabilistic pose and shape estimation, based on the key insight that shape completion can be decoupled into two multi-modal distributions.

Abstract

We would like to estimate the pose and full shape of an object from a single observation, without assuming a known 3D model or category. In this work, we propose OmniShape, the first method of its kind to enable probabilistic pose and shape estimation. OmniShape is based on the key insight that shape completion can be decoupled into two multi-modal distributions: one capturing how measurements project into a normalized object reference frame defined by the dataset, and the other modelling a prior over object geometries represented as triplanar neural fields. By training separate conditional diffusion models for these two distributions, we enable sampling multiple hypotheses from the joint pose and shape distribution. OmniShape demonstrates compelling performance on challenging real-world datasets. Project website: https://tri-ml.github.io/omnishape.
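The abstract's two-stage factorization can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the score functions, names, and dimensions are all hypothetical stand-ins for the trained conditional diffusion models (one mapping measurements into the normalized object frame, one modelling a shape prior over triplane latents), showing only how multiple (pose, shape) hypotheses would be drawn from the decoupled joint distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_diffusion(score_fn, cond, shape, steps=50):
    """Toy ancestral sampler: start from Gaussian noise and iteratively
    denoise. Stands in for a trained conditional diffusion model."""
    x = rng.standard_normal(shape)
    for t in np.linspace(1.0, 1.0 / steps, steps):
        x = x + (1.0 / steps) * score_fn(x, t, cond)                     # drift toward the conditional mode
        x = x + np.sqrt(1.0 / steps) * 0.1 * rng.standard_normal(shape)  # small stochastic kick
    return x

# Hypothetical stand-in scores; a real system would use trained networks.
def projection_score(x, t, depth):
    # Stage 1: measurements -> normalized object reference frame (pose).
    return -(x - depth.mean())

def shape_prior_score(z, t, proj):
    # Stage 2: prior over object geometry as a triplane latent (shape).
    return -(z - proj.mean())

def sample_hypotheses(depth, k=5, latent_dim=16):
    """Draw k (pose, shape) hypotheses from the decoupled joint:
    sample the projection first, then the shape conditioned on it."""
    hypotheses = []
    for _ in range(k):
        proj = sample_diffusion(projection_score, depth, depth.shape)
        z = sample_diffusion(shape_prior_score, proj, (latent_dim,))
        hypotheses.append((proj, z))
    return hypotheses

depth = rng.standard_normal((8, 8))   # fake single-view measurement
hyps = sample_hypotheses(depth, k=3)  # 3 distinct pose/shape hypotheses
```

Because each hypothesis re-samples both stages, multi-modality in either distribution (ambiguous pose or ambiguous geometry) surfaces as distinct samples rather than a single averaged estimate.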
