Neural Inverse Rendering of an Indoor Scene from a Single Image

2019 IEEE/CVF International Conference on Computer Vision (ICCV) (2019)

Citations: 127 | Views: 255
Abstract
Inverse rendering aims to estimate physical scene attributes (e.g., reflectance, geometry, and lighting) from one or more images. As a long-standing, highly ill-posed problem, it has been studied primarily for single 3D objects or with methods that recover only one scene attribute. To our knowledge, we are the first to propose a holistic, CNN-based approach to inverse rendering of an indoor scene from a single image, jointly estimating reflectance (albedo and gloss), surface normals, and illumination. To address the lack of labeled real-world images, we create a large-scale synthetic dataset, SUNCG-PBR, with physically-based rendering, a significant improvement over prior datasets. For fine-tuning on real images, we perform self-supervised learning with a reconstruction loss that re-synthesizes the input image from the estimated components. The key contribution enabling this self-supervision is the Residual Appearance Renderer (RAR), which learns to synthesize complex appearance effects (e.g., inter-reflection, cast shadows, near-field illumination, and realistic shading) that would otherwise be neglected. Experimental results show that our approach outperforms state-of-the-art methods, especially on real images.
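The self-supervised reconstruction loss described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the direct rendering term here is a simple Lambertian model with a single hypothetical directional light, and `residual` stands in for the output of the RAR network (a trained CNN in the paper). All function names and the L1 choice of loss are assumptions for illustration.

```python
import numpy as np

def lambertian_shading(normals, light_dir):
    # Per-pixel max(0, n . l) with a single directional light --
    # a stand-in for the paper's direct rendering term.
    l = light_dir / np.linalg.norm(light_dir)
    return np.clip(normals @ l, 0.0, None)

def reconstruct(albedo, normals, light_dir, residual):
    # Re-synthesize the image as direct rendering (albedo * shading)
    # plus a residual appearance term (the RAR output in the paper,
    # covering inter-reflection, cast shadows, etc.).
    shading = lambertian_shading(normals, light_dir)[..., None]
    return albedo * shading + residual

def reconstruction_loss(image, albedo, normals, light_dir, residual):
    # Mean L1 distance between the input image and its re-synthesis;
    # minimizing this drives self-supervised fine-tuning on real images.
    recon = reconstruct(albedo, normals, light_dir, residual)
    return np.abs(image - recon).mean()
```

In the actual pipeline the albedo, normals, lighting, and residual would all be network predictions, and the loss gradient would flow back through the CNNs; here they are plain arrays to keep the sketch self-contained.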
Keywords
neural inverse rendering, indoor scene, physical attributes, reflectance, lighting, single objects, scene attributes, learning-based approach, Residual Appearance Renderer, complex appearance effects, inter-reflection, self-supervised learning, input image, estimated components, synthetic data, reconstruction loss, SUNCG-PBR, large-scale synthetic dataset, inverse rendering