Fast Learning Radiance Fields by Shooting Much Fewer Rays

TIP 2023

1School of Software, Tsinghua University, 2Department of Computer Science and Technology, Tsinghua University, 3Kuaishou Technology, 4Wayne State University

Abstract

Learning radiance fields has shown remarkable results for novel view synthesis. However, the learning procedure is usually time-consuming, which motivates the latest methods to speed it up by learning without neural networks or by using more efficient data structures. These specially designed approaches, however, do not generalize to most radiance-field-based methods. To resolve this issue, we introduce a general strategy to speed up the learning procedure for almost all radiance-field-based methods. Our key idea is to reduce redundancy by shooting much fewer rays in the multi-view volume rendering procedure, which underlies almost all radiance-field-based methods. We find that shooting rays at pixels with dramatic color change not only significantly reduces the training burden but also barely affects the accuracy of the learned radiance fields. In addition, we adaptively subdivide each view into a quadtree according to the average rendering error in each tree node, which allows us to dynamically shoot more rays into more complex regions with larger rendering error. We evaluate our method with different radiance-field-based methods on widely used benchmarks. Experimental results show that our method achieves accuracy comparable to the state-of-the-art with much faster training.

Method

Our method is a general framework for accelerating the training procedure, and it can be easily integrated with mainstream radiance-field-based methods. This generality comes from how we shoot much fewer rays during volume rendering, a procedure common to all radiance-field-based methods, while still perceiving the radiance field well. Our method consists of two main components: (1) a probability-based sampling function that samples rays according to the input image context (sketched below) and (2) an adaptive quadtree subdivision strategy that learns where to reduce rays in simple regions and where to add rays in complex regions.
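As a concrete illustration of component (1), the minimal sketch below builds a per-pixel probability map from local color change, approximated here by image gradient magnitude, and samples the pixels from which rays are shot. The gradient-based prior, the temperature knob, and all function names are assumptions for illustration, not the exact formulation in the paper.

import numpy as np

def sampling_probabilities(image: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Assign each pixel a probability proportional to local color change.

    `image` is assumed to be an (H, W, 3) array in [0, 1]; the gradient
    magnitude of the luminance is one simple proxy for "dramatic color change".
    """
    gray = image.mean(axis=-1)                   # (H, W) luminance proxy
    gy, gx = np.gradient(gray)                   # finite-difference gradients
    magnitude = np.sqrt(gx ** 2 + gy ** 2)       # edge strength per pixel
    weights = (magnitude + 1e-6) ** temperature  # keep every pixel reachable
    return weights / weights.sum()

def sample_ray_pixels(image: np.ndarray, n_rays: int, rng=None) -> np.ndarray:
    """Draw (row, col) pixel coordinates to shoot rays from, without replacement."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = image.shape
    probs = sampling_probabilities(image).ravel()
    flat = rng.choice(h * w, size=n_rays, replace=False, p=probs)
    return np.stack(np.unravel_index(flat, (h, w)), axis=-1)  # (n_rays, 2)

Because every pixel keeps a small nonzero probability, flat regions are still visited occasionally; the temperature controls how strongly sampling concentrates on edges.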

The figure above illustrates our method. Given an input view, we first (a) compute a probability distribution according to the context around each pixel. We then (b) use it as a prior distribution to sample the pixels from which we shoot rays. These rays are fed into (c) a radiance fields backbone network, and an average rendering loss is accumulated for each quadtree leaf node. Finally, (d) an adaptive quadtree subdivision algorithm is applied to each leaf node according to its average rendering loss to adjust the distribution of sampled rays; a sketch of this step follows.
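The sketch below illustrates step (d) under stated assumptions: each leaf node accumulates the rendering loss of the rays that land in its region, and leaves whose average loss exceeds a threshold are split so that subsequent sampling concentrates rays there. The node layout and the split_threshold and min_size parameters are hypothetical choices; the paper's exact subdivision schedule may differ.

from dataclasses import dataclass, field

@dataclass
class QuadNode:
    """A quadtree node covering the pixel rectangle [x0, x1) x [y0, y1)."""
    x0: int
    y0: int
    x1: int
    y1: int
    loss_sum: float = 0.0                         # accumulated rendering loss
    n_rays: int = 0                               # rays rendered in this region
    children: list = field(default_factory=list)  # empty for leaf nodes

    def record(self, loss: float) -> None:
        """Accumulate the rendering loss of one ray landing in this region."""
        self.loss_sum += loss
        self.n_rays += 1

    @property
    def avg_loss(self) -> float:
        return self.loss_sum / max(self.n_rays, 1)

def subdivide(node: QuadNode, split_threshold: float, min_size: int = 8) -> None:
    """Split leaves whose average rendering loss is high, so the next
    sampling round shoots more rays into those complex regions."""
    if node.children:
        for child in node.children:
            subdivide(child, split_threshold, min_size)
        return
    too_small = min(node.x1 - node.x0, node.y1 - node.y0) < 2 * min_size
    if node.avg_loss < split_threshold or too_small:
        return
    mx, my = (node.x0 + node.x1) // 2, (node.y0 + node.y1) // 2
    node.children = [
        QuadNode(node.x0, node.y0, mx, my), QuadNode(mx, node.y0, node.x1, my),
        QuadNode(node.x0, my, mx, node.y1), QuadNode(mx, my, node.x1, node.y1),
    ]

Splitting only high-loss leaves redistributes a fixed ray budget toward complex regions; a symmetric merge of uniformly low-loss siblings would implement the "reduce rays in simple regions" half of the strategy.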

Visualization Results

Comparison on Synthetic Dataset

Comparison on LLFF Dataset

Comparison on LF Dataset

Comparison on Tanks And Temples Dataset

Comparison on Mip-NeRF 360 Dataset

Comparison on DTU Dataset

Quadtree Visualization And More Comparison Results

BibTeX

@article{zhang2023fast,
  title={Fast Learning Radiance Fields by Shooting Much Fewer Rays},
  author={Zhang, Wenyuan and Xing, Ruofan and Zeng, Yunfan and Liu, Yu-Shen and Shi, Kanle and Han, Zhizhong},
  journal={IEEE Transactions on Image Processing},
  volume={32},
  pages={2703--2718},
  year={2023},
  publisher={IEEE}
}