Segment Anything in 3D with NeRFs

¹AI Institute, SJTU  ²Huawei Inc.  ³School of EIC, HUST
* denotes equal contributions. † denotes project lead.
NeurIPS 2023

Given a NeRF, just input prompts from a single view and get your 3D model.

Abstract

We propose a novel framework to Segment Anything in 3D, named SA3D. Given a neural radiance field (NeRF) model, SA3D allows users to obtain the 3D segmentation result of any target object via only one-shot manual prompting in a single rendered view. With the input prompts, SAM cuts out the target object from the corresponding view. The obtained 2D segmentation mask is projected onto 3D mask grids via density-guided inverse rendering. 2D masks from other views are then rendered; these are mostly incomplete but serve as cross-view self-prompts that are fed into SAM again. The completed masks are projected back onto the mask grids. This procedure is executed iteratively until an accurate 3D mask is learned. SA3D adapts to various radiance fields effectively without any additional redesign. The entire segmentation process can be completed in approximately two minutes without any engineering optimization. Our experiments demonstrate the effectiveness of SA3D in different scenes, highlighting the potential of SAM in 3D scene perception.
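
The iterative procedure above can be summarized in a short PyTorch-style sketch. This is a minimal illustration under stated assumptions, not SA3D's actual implementation: nerf.grid_shape, nerf.render_rgb, nerf.render_mask, sam.predict, and extract_point_prompts are hypothetical helpers standing in for rendering with the radiance field, volume-rendering the mask grids with density-derived weights, and querying SAM; the overlap loss is a simplification of the paper's projection objective.

import torch

def sa3d_sketch(nerf, sam, views, init_view, init_prompts):
    # Learnable 3D mask grids: one logit per voxel (shape is hypothetical).
    mask_grids = torch.zeros(nerf.grid_shape, requires_grad=True)
    optim = torch.optim.Adam([mask_grids], lr=1e-1)

    def project(view, mask_2d):
        # Density-guided inverse rendering: the mask is volume-rendered
        # with the NeRF's density weights, so the gradient of the overlap
        # loss pushes the 2D mask onto high-density (surface) voxels.
        rendered = nerf.render_mask(mask_grids, view)  # hypothetical helper
        loss = -(rendered * mask_2d).sum()             # simplified projection loss
        optim.zero_grad()
        loss.backward()
        optim.step()

    # One-shot manual prompting: SAM cuts out the target in the user view.
    project(init_view, sam.predict(nerf.render_rgb(init_view), init_prompts))

    # Cross-view self-prompting: render the (usually incomplete) mask in
    # each new view, turn it into prompts, let SAM complete it, project back.
    for view in views:
        incomplete = nerf.render_mask(mask_grids, view)
        prompts = extract_point_prompts(incomplete)
        mask_2d = sam.predict(nerf.render_rgb(view), prompts)
        project(view, mask_2d)
    return mask_grids

def extract_point_prompts(mask, k=3, thresh=0.5):
    # Hypothetical self-prompt extraction: sample a few high-confidence
    # foreground pixels as positive point prompts for SAM.
    ys, xs = torch.nonzero(mask > thresh, as_tuple=True)
    idx = torch.randperm(len(xs))[:k]
    return torch.stack([xs[idx], ys[idx]], dim=-1)  # (k, 2) points as (x, y)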

Results on 3D Segmentation of Single Object

Results on 3D Segmentation of Multiple Objects

Citation

@inproceedings{cen2023segment,
      title={Segment Anything in 3D with NeRFs}, 
      author={Jiazhong Cen and Zanwei Zhou and Jiemin Fang and Chen Yang and Wei Shen and Lingxi Xie and Xiaopeng Zhang and Qi Tian},
      booktitle={NeurIPS},
      year={2023}
}

Acknowledgements

The authors would like to thank Xiaoyu Wu for her assistance in preparing this project page.