Neural Radiance Fields
Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, Gordon Wetzstein
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches, however, fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rende...
Cited by 13 · Bibtex · Views 74
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
With qualitative and quantitative results, we demonstrate that the approach achieves high image quality with a greater than 10× improvement in render time compared to the state-of-the-art in neural volume rendering
Cited by 9 · Bibtex · Views 87
Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
3) While we have demonstrated our method on real data from the DTU dataset, it should be acknowledged that the DTU dataset was captured under controlled settings and has matching camera poses across all scenes with limited viewpoints
Cited by 0 · Bibtex · Views 108
Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We demonstrate the benefits of using meta-learned initial weights optimized to reconstruct a specific class of signals
Cited by 0 · Bibtex · Views 91
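The meta-learned-initialization idea in the entry above can be illustrated with a toy Reptile-style loop. This is my own minimal sketch under simplifying assumptions (scalar tasks instead of signal-reconstruction networks; the function name and hyperparameters are made up), not the paper's algorithm:

```python
import numpy as np

def reptile_init(task_targets, meta_steps=300, inner_steps=10,
                 inner_lr=0.1, meta_lr=0.1, seed=0):
    """Toy Reptile-style meta-learning of an initial weight.

    Each 'task' is fitting a scalar w to a target t by gradient
    descent on 0.5 * (w - t)**2. The meta-update nudges the shared
    initialization toward the weights found after a few inner steps
    on a sampled task, so the init ends up easy to adapt to the
    whole task family.
    """
    rng = np.random.default_rng(seed)
    w0 = 0.0
    for _ in range(meta_steps):
        t = rng.choice(task_targets)        # sample a task
        w = w0
        for _ in range(inner_steps):
            w -= inner_lr * (w - t)         # inner gradient step
        w0 += meta_lr * (w - w0)            # Reptile meta-update
    return w0
```

For two tasks with targets 1.0 and 3.0, the learned initialization settles near their mean, from which either task is reachable in a few inner steps.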
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We have presented D-NeRF, a novel neural radiance field approach for modeling dynamic scenes
Cited by 0 · Bibtex · Views 107
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We presented an approach for monocular novel view and time synthesis of dynamic scenes using Neural Scene Flow Fields, a new representation that implicitly models scene time-variant reflectance, geometry, and 3D motion
Cited by 0 · Bibtex · Views 113
Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We showcase free-viewpoint video rendering of several challenging dynamic scenes captured with hand-held cellphone cameras
Cited by 0 · Bibtex · Views 81
Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We have presented DeRF (Decomposed Radiance Fields), a method to increase the inference efficiency of neural rendering via spatial decomposition
Cited by 0 · Bibtex · Views 137
Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (2021)
We present NeRF-W, a novel approach for 3D scene reconstruction of complex outdoor environments from unstructured internet photo collections
Cited by 0 · Bibtex · Views 83
European Conference on Computer Vision (ECCV), pp. 405-421, (2020)
We believe that this work makes progress towards a graphics pipeline based on real-world imagery, where complex scenes could be composed of neural radiance fields optimized from images of actual objects and scenes
Cited by 134 · Bibtex · Views 537
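As context for the volume-rendering entries in this list, here is a minimal sketch of the discrete compositing quadrature that NeRF-style renderers evaluate per ray. The function name and array shapes are my own assumptions, not code from any of the papers above:

```python
import numpy as np

def composite_ray(sigmas, rgbs, deltas):
    """Discrete volume-rendering quadrature for one ray.

    sigmas: (N,) densities at samples along the ray
    rgbs:   (N, 3) colors at those samples
    deltas: (N,) distances between adjacent samples
    Returns the accumulated ray color, shape (3,).
    """
    # opacity contributed by each segment
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                    # per-sample contribution
    return (weights[:, None] * rgbs).sum(axis=0)
```

A fully opaque first sample dominates the ray (later samples receive no transmittance), while zero density everywhere yields a black, empty ray; speed-focused methods in this list (spatial decomposition, sparse voxels) mainly reduce how many of these samples must be evaluated by a network.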
Advances in Neural Information Processing Systems (NeurIPS), (2020)
We present Neural Sparse Voxel Fields, which consists of a set of voxel-bounded implicit fields, where for each voxel, voxel embeddings are learned to encode local properties for high-quality rendering
Cited by 31 · Bibtex · Views 420
NeRF++ improves the parameterization of unbounded scenes in which both the foreground and the background need to be faithfully represented for photorealism
Cited by 14 · Bibtex · Views 96
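The unbounded-scene parameterization mentioned in the entry above can be sketched as mapping points outside a unit sphere to a bounded representation: a unit direction plus an inverse depth in (0, 1). This is a minimal illustration under my own naming, not the paper's code:

```python
import numpy as np

def inverted_sphere_param(x):
    """Map a point outside the unit sphere to a bounded 4-vector.

    A far point x with r = |x| > 1 is represented as (x/r, 1/r):
    the unit direction x/r lies on the sphere, and the inverse
    depth 1/r lies in (0, 1), so the entire unbounded exterior
    maps into a bounded set a network can cover with samples.
    """
    r = np.linalg.norm(x)
    assert r > 1.0, "parameterization covers the exterior of the unit sphere"
    return np.concatenate([x / r, [1.0 / r]])
```

Points arbitrarily far away approach inverse depth 0, so background content at any distance stays within the same bounded input domain.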
Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, Ricardo Martin-Brualla
We present the first method capable of photorealistically reconstructing a non-rigidly deforming scene using photos/videos captured casually from mobile phones. Our approach -- D-NeRF -- augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric de...
Cited by 8 · Bibtex · Views 83
Alex Trevithick, Bo Yang
We present a simple yet powerful implicit neural function that can represent and render arbitrarily complex 3D scenes in a single network only from 2D observations. The function models 3D scenes as a general radiance field, which takes a set of posed 2D images as input, constru...
Cited by 0 · Bibtex · Views 40
Advances in Neural Information Processing Systems (NeurIPS), (2020)
We have introduced Generative Radiance Fields for high-resolution 3D-aware image synthesis
Cited by 0 · Bibtex · Views 144