A comprehensive review of pose-free neural rendering and 3D reconstruction, covering NeRF and 3DGS methods that operate with only noisy pose estimates or without any camera pose priors.
Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have significantly advanced 3D scene reconstruction and novel view synthesis in recent years. Despite continuous improvements in accuracy, quality, and application scope, these methods fundamentally rely on a critical assumption: precise camera poses for all input images. Reconstruction pipelines typically employ multi-view geometry techniques such as Structure-from-Motion (SfM) or Simultaneous Localization and Mapping (SLAM) to estimate these camera poses. However, in many real-world settings, SfM/SLAM fails due to weak textures, large viewpoint changes, or non-sequential inputs.
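For reference, this conventional preprocessing step looks roughly like the pycolmap sketch below. The paths are placeholders and the exact API may differ across pycolmap versions; pose-free methods aim to remove this dependency altogether.

```python
import pathlib
import pycolmap  # COLMAP Python bindings

image_dir = pathlib.Path("data/scene/images")   # placeholder paths
output_dir = pathlib.Path("data/scene/sparse")
output_dir.mkdir(parents=True, exist_ok=True)
database = output_dir / "database.db"

pycolmap.extract_features(database, image_dir)   # keypoint detection/description
pycolmap.match_exhaustive(database)              # pairwise feature matching
maps = pycolmap.incremental_mapping(database, image_dir, output_dir)  # incremental SfM
if maps:                      # may fail on weak textures, wide baselines, unordered input
    maps[0].write(output_dir)
```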
When off-the-shelf pose estimation cannot be relied on, the focus shifts to jointly estimating scene structure and camera motion within a unified framework. Understanding and addressing this challenge is essential to unlock the full potential of neural 3D reconstruction in unconstrained real-world environments. To reconstruct 3D scenes without reliable camera parameters, recent work has therefore increasingly adapted and extended neural rendering paradigms (particularly NeRF and 3DGS) toward pose-free or pose-robust reconstruction.
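To make the joint-optimization idea concrete, below is a minimal, self-contained PyTorch sketch. It is illustrative only: the soft point-splatting renderer stands in for a NeRF MLP or a 3DGS rasterizer, the first-order pose update and pinhole intrinsics are simplifying assumptions, and none of the names correspond to any specific surveyed method.

```python
import torch

H, W, FX, FY = 32, 32, 40.0, 40.0   # tiny toy camera
CX, CY = W / 2.0, H / 2.0

def skew(w):
    """Skew-symmetric 3x3 matrix of a 3-vector."""
    z = torch.zeros((), dtype=w.dtype)
    return torch.stack([torch.stack([z, -w[2], w[1]]),
                        torch.stack([w[2], z, -w[0]]),
                        torch.stack([-w[1], w[0], z])])

def pose_from_delta(xi):
    """First-order SE(3)-style update of the identity pose by a 6-vector (t, w)."""
    T = torch.eye(4, dtype=xi.dtype)
    T[:3, :3] = torch.eye(3, dtype=xi.dtype) + skew(xi[3:])  # small-angle rotation
    T[:3, 3] = xi[:3]
    return T

def render(points, colors, pose_w2c):
    """Toy soft point-splatting renderer: project points with a pinhole camera and
    blend colors with per-pixel softmax weights (differentiable w.r.t. the pose)."""
    pts_h = torch.cat([points, torch.ones(len(points), 1)], dim=1)        # (N, 4)
    cam = (pose_w2c @ pts_h.T).T[:, :3]                                    # (N, 3)
    u = FX * cam[:, 0] / cam[:, 2] + CX
    v = FY * cam[:, 1] / cam[:, 2] + CY
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    d2 = (xs[..., None] - u) ** 2 + (ys[..., None] - v) ** 2               # (H, W, N)
    weights = torch.softmax(-d2 / 4.0, dim=-1)                             # soft assignment
    return weights @ torch.sigmoid(colors)                                 # (H, W, 3)

def fit(images, n_points=64, steps=500, lr=1e-2):
    """Jointly optimize a toy point-based scene and one pose delta per image
    from the photometric loss alone."""
    points = (torch.randn(n_points, 3) * 0.3
              + torch.tensor([0.0, 0.0, 3.0])).requires_grad_(True)
    colors = torch.zeros(n_points, 3, requires_grad=True)
    pose_deltas = torch.zeros(len(images), 6, requires_grad=True)          # identity init
    opt = torch.optim.Adam([points, colors, pose_deltas], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(torch.nn.functional.mse_loss(
                       render(points, colors, pose_from_delta(xi)), img)
                   for img, xi in zip(images, pose_deltas))
        loss.backward()   # gradients reach scene AND camera parameters
        opt.step()
    return points, colors, pose_deltas

# Usage (toy data): fit([torch.rand(H, W, 3) for _ in range(5)], steps=50)
```

A real pose-free system replaces the toy renderer with volumetric or Gaussian rendering and typically adds coarse-to-fine schedules, geometric priors, or incremental registration to avoid the poor local minima of purely photometric optimization.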
This survey systematically reviews these pose-free approaches, categorizing them by base model enhancement, strategy, prior, and application.
The following datasets are widely used for evaluating pose-free NeRF and 3DGS reconstruction. Their characteristics differ in pose availability, scene type, trajectory patterns, and scale.
| Dataset | Camera Poses | Synthetic | Scene Type | Trajectory Type | #Views per Scene | #Scenes |
|---|---|---|---|---|---|---|
| NeRF Synthetic | Ground-truth | Yes | Indoor | Object-centric | ~100 | 8 |
| LLFF | Known | No | Indoor | Forward-facing | 20–60 | 8 |
| DTU | Known | No | Indoor | Object-centric | 49–64 | 124 |
| Replica | Known | Yes | Indoor | Complex trajectory | ~2K | 18 |
| Tanks and Temples | Known | No | Outdoor | Complex trajectory | ~200 | 8 |
| RealEstate10K | Unknown | No | Indoor | Complex trajectory | Video | ~70K |
| CO3D V2 | Estimated | No | Mixed | Object-centric | ~200 | ~40K |
| Static Hikes | Estimated | No | Outdoor | Complex trajectory | ~10K | 12 |
Synthetic datasets provide accurate pose annotations, while real‑world datasets capture diverse illumination, large motion baselines, and complex geometry—making them essential for benchmarking robust pose‑free reconstruction.
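Because estimated and ground-truth poses live in different coordinate frames (and, without metric depth, at different scales), pose-free methods are commonly evaluated by first aligning the two camera trajectories with a similarity transform and then reporting errors such as the absolute trajectory error (ATE). The NumPy sketch below illustrates this common protocol with an Umeyama-style Sim(3) alignment; the function names are ours, not from any specific benchmark toolkit.

```python
import numpy as np

def umeyama_align(src, dst):
    """Closed-form similarity transform (s, R, t) minimizing ||s R src + t - dst||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                      # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:    # handle reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (xs ** 2).sum(axis=1).mean()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def ate_rmse(est_centers, gt_centers):
    """Absolute trajectory error (RMSE) over camera centers after Sim(3) alignment."""
    s, R, t = umeyama_align(est_centers, gt_centers)
    aligned = (s * (R @ est_centers.T)).T + t
    return float(np.sqrt(((aligned - gt_centers) ** 2).sum(axis=1).mean()))
```

Given (N, 3) arrays of estimated and ground-truth camera centers, `ate_rmse(est, gt)` returns the ATE-RMSE in ground-truth units; per-camera rotation and translation errors are computed analogously after the same alignment.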
Representative scenes from the major datasets are shown below.
Figure: Dataset gallery
We maintain a comprehensive Awesome Pose-Free NeRF & 3DGS list on GitHub: a curated and continuously updated collection of the papers, datasets, benchmarks, and open-source implementations used in this survey.
Full paper list categorized by Base Model Enhancement / Strategy / Prior / Applications:
→ View Paper List (NeRF)
→ View Paper List (3DGS)
Includes LLFF, DTU, CO3D, Replica, RealEstate10K, Tanks and Temples, etc.
→ View Dataset Links
@article{posefree_survey_2025,
  title={A Survey on Pose-Free Neural Radiance Fields and 3D Gaussian Splatting},
  author={Dongbo Shi and Lubin Fan and Bojian Wu and Shen Cao and Jinhui Guo and Ligang Liu and Renjie Chen},
  year={2025}
}