Specular BSDF Approximation for Efficient Specular Scene Rendering

Guillaume Bouchard, Jean Claude Iehl, Victor Ostromoukhov, Bernard Peroche, Stéphane Albin, Romain Guenegou, Carmen Uson

Annual Simulation Symposium (2012)

Abstract
We propose a simple and robust adaptive specular BSDF evaluation algorithm based on stochastic progressive photon mapping. This algorithm can handle scenes that are considered difficult for most current simulation approaches dealing with mixed diffuse and specular objects. By contrast, our approach can handle highly specular scenes such as car lamps and light guides. The proposed method is simple to implement, requires very little memory for its data structures, and the resulting image does not depend on parameters. The method can produce bias-free and noise-free images. The contribution of this paper is twofold. First, we propose a simple and straightforward method for estimating light transport in highly specular scenes. Second, we demonstrate an efficient approximation of the exact method with a real-time GPU-based implementation. As with progressive photon mapping, the algorithm alternates two passes, an eye pass and a photon pass. The first pass traces rays from the observer through the scene and stores a hit point on the first surface hit. This hit point is associated with a search radius and the BSDF of the surface. In the second pass, photons are traced from the light sources through the scene and their energy is splatted at each hit point. The energy contributed to each hit point depends on its associated BSDF and gathering radius. At the end of each eye pass, the gathering radius of the hit points is reduced. This ensures that the error of the light-density estimate associated with each hit point diminishes and converges to the correct value. In our method, rendering starts with a near-diffuse glossy BSDF and, over the passes, the glossiness is raised so that the BSDF becomes increasingly glossy and converges to a specular BSDF in the limit. The method behaves exactly like progressive photon mapping with respect to convergence properties. The trade-off between initial bias and variance is controlled by the initial gathering radius: the variance that unbiased methods exhibit on this kind of scene is replaced by an initial bias that converges to zero as more photons are used in the approximation. Because the method relies on storing hit points on the first intersected surface, the gathering step can be performed in screen space, which maps well onto a GPU. If bias-free results are not needed, the algorithm becomes viewpoint-independent and is therefore suitable for real-time visualization of scenes, with quality limited only by the number of photons the GPU can store.
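The progressive loop described in the abstract (store hit points in an eye pass, splat photon energy in a photon pass, shrink the gathering radius, raise the glossiness toward a specular BSDF) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy 1-D scene, the Gaussian glossy lobe, the sharpening factor, the value of α, and all helper names (eye_pass, photon_pass, reduce_radius_and_sharpen) are assumptions made for illustration only; just the overall pass structure follows the abstract.

```python
# Minimal, illustrative sketch of a stochastic-progressive-photon-mapping-style
# loop with an adaptive glossy BSDF. Scene, BSDF model and constants are toys.
import math
import random

ALPHA = 0.7  # standard PPM radius-reduction parameter (assumed value)

class HitPoint:
    def __init__(self, pos, radius, glossiness):
        self.pos = pos                # stored during the eye pass
        self.radius = radius          # current gathering radius
        self.glossiness = glossiness  # lobe exponent, raised pass after pass
        self.n_photons = 0.0          # effective photon count so far
        self.flux = 0.0               # accumulated BSDF-weighted energy

def eye_pass(num_pixels, r0, g0):
    """Trace one ray per pixel and store the first hit (toy 1-D 'scene')."""
    return [HitPoint(pos=i / num_pixels, radius=r0, glossiness=g0)
            for i in range(num_pixels)]

def photon_pass(hit_points, num_photons):
    """Splat photon energy onto nearby hit points, weighted by the BSDF lobe."""
    for _ in range(num_photons):
        p = random.random()           # toy photon landing position
        energy = 1.0 / num_photons
        for hp in hit_points:
            d = abs(p - hp.pos)
            if d < hp.radius:
                # Glossy lobe: the weight sharpens as glossiness grows,
                # approaching a specular response in the limit.
                w = math.exp(-hp.glossiness * (d / hp.radius) ** 2)
                hp.flux += energy * w
                hp.n_photons += 1

def reduce_radius_and_sharpen(hp, m_new, sharpen=1.5):
    """PPM-style radius/flux reduction plus the glossiness increase."""
    n_old = hp.n_photons - m_new
    if n_old + m_new > 0:
        shrink = (n_old + ALPHA * m_new) / (n_old + m_new)
        hp.radius *= math.sqrt(shrink)        # r^2 scales by 'shrink'
        hp.flux *= shrink                     # keep the density estimate consistent
        hp.n_photons = n_old + ALPHA * m_new  # effective photon count
    hp.glossiness *= sharpen                  # march toward a specular BSDF

def render(num_passes=8, num_pixels=64, photons_per_pass=10_000):
    hit_points = eye_pass(num_pixels, r0=0.05, g0=1.0)
    for _ in range(num_passes):
        before = [hp.n_photons for hp in hit_points]
        photon_pass(hit_points, photons_per_pass)
        for hp, n0 in zip(hit_points, before):
            reduce_radius_and_sharpen(hp, hp.n_photons - n0)
    # PPM-style radiance estimate: accumulated flux over the gathering area.
    return [hp.flux / (math.pi * hp.radius ** 2) for hp in hit_points]

if __name__ == "__main__":
    image = render()
    print(f"{len(image)} pixels, mean estimate {sum(image) / len(image):.3f}")
```

In this sketch the radius and flux update follow the usual progressive photon mapping rule r²ᵢ₊₁ = r²ᵢ (N + αM)/(N + M), while the glossiness increase stands in for the paper's idea of starting from a near-diffuse glossy BSDF and converging toward a specular one as more photons arrive.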