
UMERegRobust

Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration

ECCV 2024

Ben-Gurion University


Abstract

We adopt the Universal Manifold Embedding (UME) framework for the estimation of rigid transformations and extend it so that it can accommodate scenarios involving partial overlap and differently sampled point clouds. UME is a methodology designed for mapping observations of the same object, related by rigid transformations, into a single low-dimensional linear subspace. This process yields a transformation-invariant representation of the observations, with its matrix form representation being covariant (i.e., equivariant) with the transformation. We extend the UME framework by introducing a UME-compatible feature extraction method augmented with a unique UME contrastive loss and a sampling equalizer. These components are integrated into a comprehensive and robust registration pipeline, named ${\it UMERegRobust}$. We propose the RotKITTI registration benchmark, specifically tailored to evaluate registration methods in scenarios involving large rotations. UMERegRobust surpasses state-of-the-art performance on the KITTI benchmark, especially when the strict precision of $(1^\circ, 10cm)$ is considered (with an average gain of $+9\%$), and notably outperforms SOTA methods on the RotKITTI benchmark (with a $+45\%$ gain compared to the most recent SOTA method).
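The covariance/invariance property described above can be illustrated with a minimal numpy sketch. This is our own simplified construction for intuition, not the paper's implementation: a $K \times 4$ moment matrix is built from points weighted by transformation-invariant "colors" (here, random labels standing in for the learned features). Under a rigid motion the matrix is covariant ($\mathbf{H}' = \mathbf{H}\mathbf{M}$), its column space is invariant, and $(R, t)$ can be recovered in closed form from a matched pair, in the spirit of the RTUME estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def ume_matrix(points, colors, K):
    # K x 4 moment matrix: row k holds the zeroth- and first-order
    # moments of the points whose invariant color equals k.
    H = np.zeros((K, 4))
    for k in range(K):
        mask = colors == k
        H[k, 0] = mask.sum()
        H[k, 1:] = points[mask].sum(axis=0)
    return H

# Random rigid motion (R in SO(3), translation t).
Qm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Qm * np.sign(np.linalg.det(Qm))   # force det(R) = +1
t = rng.normal(size=3)

N, K = 500, 6
P = rng.normal(size=(N, 3))
colors = rng.integers(0, K, size=N)   # stand-in for invariant features
Q = P @ R.T + t                       # transformed observation

H_P = ume_matrix(P, colors, K)
H_Q = ume_matrix(Q, colors, K)

# Covariance: H_Q = H_P @ M with M = [[1, t^T], [0, R^T]].
M = np.block([[np.ones((1, 1)), t[None, :]],
              [np.zeros((3, 1)), R.T]])
assert np.allclose(H_Q, H_P @ M)

# Closed-form recovery of the motion from the two descriptors (sketch).
M_hat = np.linalg.lstsq(H_P, H_Q, rcond=None)[0]
R_hat, t_hat = M_hat[1:, 1:].T, M_hat[0, 1:]
assert np.allclose(R_hat, R) and np.allclose(t_hat, t)
```

Because $\mathbf{M}$ is invertible, the columns of $\mathbf{H}_Q$ span the same subspace as those of $\mathbf{H}_P$, which is the transformation-invariant representation UME exploits.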

Method Overview

Overview of our method

$\textbf{UMERegRobust Overview.}$ $\mathcal{P}$ and $\mathcal{Q}$ are the input point clouds; (1) The UME Compatible Coloring Module resamples the point clouds into a uniform grid, generating $\widetilde{\mathcal{P}}$ and $\widetilde{\mathcal{Q}}$, then assigns to each point a transformation-invariant feature vector, creating the colored point clouds $\mathcal{F}_{\widetilde{\mathcal{P}}}, \mathcal{F}_{\widetilde{\mathcal{Q}}}$. (2) Local UME descriptors, generated on down-sampled versions of the point clouds $\widehat{\mathcal{P}}, \widehat{\mathcal{Q}}$, are denoted by $\{\mathbf{H}_{\boldsymbol{p}_k}\}_{k=1}^K,\{\mathbf{H}_{\boldsymbol{q}_k}\}_{k=1}^K$, respectively. A Matched Manifold Detector identifies corresponding local UME descriptors, forming a set of $K$ putative matched pairs $\mathcal{C}$. For each matched pair, an estimated transformation is obtained using the RTUME estimator. (3) Feature correlation is used to select the hypothesis that maximizes the feature correlation between the point clouds.
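The Matched Manifold Detector step can also be sketched: since each local UME descriptor spans a transformation-invariant column space, candidate correspondences can be scored by the principal angles between those subspaces. The moment construction and the numbers below are simplified stand-ins for illustration, not the paper's exact detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def moments(points, labels, K):
    # Simplified K x 4 local UME-style descriptor (see the coloring step).
    H = np.zeros((K, 4))
    for k in range(K):
        m = labels == k
        H[k, 0] = m.sum()
        H[k, 1:] = points[m].sum(axis=0)
    return H

def subspace_distance(H1, H2):
    # Projection-Frobenius distance between column spaces via principal
    # angles: cosines are the singular values of Q1^T Q2.
    Q1, _ = np.linalg.qr(H1)
    Q2, _ = np.linalg.qr(H2)
    s = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 0.0, 1.0)
    return np.sqrt(np.sum(1.0 - s**2))

def rand_rot():
    Qm, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Qm * np.sign(np.linalg.det(Qm))

K = 8
patch_a = rng.normal(size=(200, 3)); lab_a = rng.integers(0, K, 200)
patch_b = rng.normal(size=(200, 3)); lab_b = rng.integers(0, K, 200)
R1, t1 = rand_rot(), rng.normal(size=3)

H_a  = moments(patch_a, lab_a, K)
H_a2 = moments(patch_a @ R1.T + t1, lab_a, K)  # same patch, rigidly moved
H_b  = moments(patch_b, lab_b, K)              # a different patch

d_same = subspace_distance(H_a, H_a2)  # ~0: same invariant subspace
d_diff = subspace_distance(H_a, H_b)   # clearly larger
assert d_same < 1e-6 < d_diff
```

A detector of this form declares a match when the subspace distance falls below a threshold; the selected pairs then feed the RTUME estimator and the correlation-based hypothesis selection described above.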

Results

Citation

If you find this project useful for your research, please cite:
    @InProceedings{haitman2024umeregrobust,
        title     = {UMERegRobust - Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration},
        author    = {Yuval Haitman and Amit Efraim and Joseph M. Francos},
        booktitle = {ECCV},
        year      = {2024}
    }
                    

References

  • Christopher Choy, Jaesik Park, Vladlen Koltun: Fully Convolutional Geometric Features. In ICCV (2019). GitHub.
  • Shengyu Huang, Zan Gojcic, Mikhail Usvyatsov, Andreas Wieser, Konrad Schindler: Predator: Registration of 3D Point Clouds With Low Overlap. In CVPR (2021). GitHub.
  • Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, Slobodan Ilic: CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration. In NeurIPS (2021). GitHub.
  • Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, Kai Xu: Geometric Transformer for Fast and Robust Point Cloud Registration. In CVPR (2022). GitHub.
  • Q. Liu, H. Zhu, Y. Zhou, H. Li, S. Chang, M. Guo: Density-invariant Features for Distant Point Cloud Registration. In ICCV (2023). GitHub.

Acknowledgements

This work was supported by Israel Innovation Authority under Grant 77887.
We thank PaSCo contributors for the web-page template.