Lift3D: Zero-Shot Lifting of Any 2D Vision Model to 3D
CVPR 2024
- Mukund Varma T¹
- Peihao Wang²
- Zhiwen Fan²
- Zhangyang Wang²
- Hao Su¹*
- Ravi Ramamoorthi¹*
- ¹University of California at San Diego
- ²University of Texas at Austin
* Equal advising
Abstract
In recent years, there has been an explosion of 2D vision models for numerous tasks such as semantic segmentation, style transfer, and scene editing, enabled by large-scale 2D image datasets. At the same time, there has been renewed interest in 3D scene representations such as neural radiance fields built from multi-view images. However, the availability of 3D or multi-view data is still substantially limited compared to 2D image datasets, making it highly desirable, but also very challenging, to extend 2D vision models to 3D data. Indeed, extending even a single 2D vision operator like scene editing to 3D typically requires a highly creative method specialized to that task and often requires per-scene optimization. In this paper, we ask whether any 2D vision model can be lifted to make 3D-consistent predictions. We answer this question in the affirmative; our new Lift3D method trains to predict unseen views on feature spaces generated by a few visual models (i.e., DINO and CLIP), but then generalizes to novel vision operators and tasks, such as style transfer, super-resolution, open-vocabulary segmentation, and image colorization; for some of these tasks, there is no comparable previous 3D method. In many cases, we even outperform state-of-the-art methods specialized for the task in question. Moreover, Lift3D is a zero-shot method, in the sense that it requires neither task-specific training nor scene-specific optimization.
Overview of Lift3D: a) Given multi-view images of a scene, we first extract an intermediate feature map for each view independently using the vision model's encoder, b) using the source-view features, we estimate the target-view feature map via an extended generalizable novel view synthesis pipeline that learns to correct feature-space inconsistencies and fuse information according to 3D geometry priors, and c) we directly apply the decoder from the pre-trained vision model to the estimated feature map to synthesize the final prediction for downstream tasks.
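For concreteness, the sketch below mirrors these three stages in PyTorch. It is a minimal illustration under stated assumptions, not the authors' implementation: the class name `Lift3DSketch`, the small `fuse` module, and the precomputed per-view blend weights are all hypothetical stand-ins for the paper's learned, geometry-aware feature-space novel view synthesis network.

```python
import torch
import torch.nn as nn

class Lift3DSketch(nn.Module):
    """Minimal sketch of the three-stage pipeline (not the official code)."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder  # frozen encoder of a pre-trained 2D vision model
        self.decoder = decoder  # frozen decoder of the same model
        # Stand-in for the learned network that corrects feature-space
        # inconsistency; the real method fuses features according to 3D
        # geometry priors, which this toy module does not attempt.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )

    @torch.no_grad()
    def encode_views(self, images: torch.Tensor) -> torch.Tensor:
        # (a) Extract an intermediate feature map per source view independently.
        # images: (V, 3, H, W) -> features: (V, C, h, w)
        return torch.cat([self.encoder(img[None]) for img in images])

    def forward(self, images: torch.Tensor, view_weights: torch.Tensor) -> torch.Tensor:
        feats = self.encode_views(images)  # (V, C, h, w)
        # (b) Estimate the target-view feature map. Here: a weighted blend of
        # source-view features using precomputed weights (an assumption);
        # Lift3D instead aggregates per ray using 3D geometry.
        target = (view_weights.view(-1, 1, 1, 1) * feats).sum(dim=0, keepdim=True)
        target = self.fuse(target)
        # (c) Decode with the unmodified pre-trained 2D decoder.
        return self.decoder(target)
```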
Gallery of Tasks
We illustrate a representative collection of tasks and 2D vision models that Lift3D can realize in 3D. In practice, our method generalizes to any 2D vision operator out of the box, without extra tuning. In each row, we compare the outputs of the 2D vision model (left), applied independently to two views of a scene, with our 3D "lifted" realization of the same task (right).
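In terms of the sketch above, lifting a new 2D operator zero-shot amounts to swapping in that operator's encoder/decoder pair with no retraining. The toy conv layers below are hypothetical placeholders for a real backbone and task head:

```python
import torch
import torch.nn as nn

# Toy stand-ins for a pre-trained 2D model split at an intermediate feature
# map of width C (the split point is an assumption for illustration).
C = 64
encoder = nn.Conv2d(3, C, 3, padding=1)   # e.g., a DINO/CLIP-style backbone
decoder = nn.Conv2d(C, 3, 3, padding=1)   # e.g., a colorization or style head

model = Lift3DSketch(encoder, decoder, feat_dim=C)
views = torch.rand(4, 3, 128, 128)        # four posed source views of a scene
weights = torch.full((4,), 0.25)          # uniform blend, for the demo only
pred = model(views, weights)              # (1, 3, 128, 128) target-view output
```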