Obj-NeRF: Extracting Object NeRFs from Multi-view Images

Tsinghua University, The Chinese University of Hong Kong

Obj-NeRF extracts object NeRFs from multi-view images and enables downstream NeRF editing applications.


Neural Radiance Fields (NeRFs) have demonstrated remarkable effectiveness in novel view synthesis within 3D environments. However, extracting a radiance field of one specific object from multi-view images encounters substantial challenges due to occlusion and background complexity, thereby presenting difficulties in downstream applications such as NeRF editing and 3D mesh extraction.

To solve this problem, we propose Obj-NeRF, a comprehensive pipeline that recovers the 3D geometry of a specific object from multi-view images using a single prompt. Our method combines the 2D segmentation capability of the Segment Anything Model (SAM) with the 3D reconstruction ability of NeRF. Specifically, we first obtain multi-view segmentation masks for the indicated object using SAM with a single prompt. We then use these segmentation masks to supervise NeRF construction, integrating several effective techniques. Additionally, we construct a large object-level NeRF dataset containing diverse objects, which can be useful in various downstream tasks.
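The mask-supervised NeRF step above can be sketched as a photometric loss restricted to the object region. The sketch below is illustrative only and assumes per-view SAM masks are already available as boolean arrays; the function name `masked_photometric_loss` and the white-background convention are our assumptions, not details from the paper.

```python
import numpy as np

def masked_photometric_loss(rendered, target, mask):
    """MSE restricted to pixels segmented as the object (illustrative sketch).

    rendered, target: (H, W, 3) float RGB arrays in [0, 1].
    mask: (H, W) boolean array -- True where SAM marks the object.
    Pixels outside the mask are supervised toward a blank (white) background,
    so the optimized radiance field contains only the object.
    """
    background = np.ones_like(target)  # assumed white background outside mask
    supervised = np.where(mask[..., None], target, background)
    return float(np.mean((rendered - supervised) ** 2))

# Toy example: a 4x4 view where the object occupies the top-left 2x2 block.
H = W = 4
target = np.zeros((H, W, 3))
target[:2, :2] = [1.0, 0.0, 0.0]   # a red object
mask = np.zeros((H, W), dtype=bool)
mask[:2, :2] = True
rendered = np.ones((H, W, 3))      # an all-white render from the NeRF
loss = masked_photometric_loss(rendered, target, mask)
```

In a real pipeline this loss would be evaluated per ray batch during NeRF training, with `rendered` produced by volume rendering; the array form here is only to make the masking logic concrete.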

To demonstrate the practicality of our method, we also apply Obj-NeRF to various applications, including object removal, rotation, replacement, and recoloring.



Object NeRFs Dataset

Extracting object NeRFs from large multi-view datasets with the textual input "chair".

Extracting Object NeRFs from a Large Indoor Scene

Obj-NeRF extracts the "toy", "guitar", and "closestool" objects from a large ScanNet indoor scene with a few prompts on a single image.

Comparison with Existing Works

Scene 1: horns

Obj-NeRF provides higher reconstruction resolution than existing works.

Scene 2: counter

Object NeRFs extracted by Obj-NeRF contain fewer floaters.


BibTeX

@article{li2023objnerf,
  title={Obj-NeRF: Extract Object NeRFs from Multi-view Images},
  author={Li, Zhiyi and Ding, Lihe and Xue, Tianfan},
  journal={arXiv preprint arXiv:2311.15291},
  year={2023}
}