Image-Based Methods for Photo-Realistic Rendering
Contact: Jonas Unger
Photo-realistic computer graphics images are today a fundamental goal in many application areas ranging from product visualization and virtual reality to special effects in movies and computer games. In this project we use High Dynamic Range (HDR) imaging to capture and model radiometric and geometric properties of real world scenes.
A prototype HDR-video camera in a setup where the scene is imaged through the reflection in a mirror sphere. The sphere is used to capture a near 360 degree panoramic image.
For this purpose, the project has developed HDR-video imaging systems capable of capturing high-quality video at high speed, where each frame covers a dynamic range of up to 10,000,000:1. The image below displays five virtual 8-bit exposures generated from an HDR image, visualizing the dynamic range of the scene in which it was captured. Using such data as input, the project develops algorithms and methods for scene reconstruction and photo-realistic rendering.
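To illustrate what "virtual exposures" means, the sketch below generates a set of 8-bit images from a linear HDR radiance map by simulating different exposure settings. This is a minimal illustrative implementation, not the project's actual pipeline; the function name, the choice of stops, and the simple gamma curve are assumptions for the example.

```python
import numpy as np

def virtual_exposures(hdr, stops=(-4, -2, 0, 2, 4), gamma=2.2):
    """Generate 8-bit LDR 'virtual exposures' from a linear HDR image.

    hdr: float array of linear radiance values (H x W or H x W x 3).
    Each exposure scales the radiance by 2**stop (simulating a change
    in exposure time), clips to the sensor's range, applies a gamma
    curve, and quantizes to 8 bits.
    """
    ldr_images = []
    for stop in stops:
        scaled = hdr * (2.0 ** stop)          # simulated exposure change
        clipped = np.clip(scaled, 0.0, 1.0)   # sensor saturation / noise floor
        ldr = (clipped ** (1.0 / gamma) * 255.0).astype(np.uint8)
        ldr_images.append(ldr)
    return ldr_images
```

Dark exposures reveal detail in bright regions (e.g. light sources), while bright exposures reveal detail in shadows; together they span the full dynamic range stored in the HDR frame.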
A set of 8-bit exposures generated from an HDR image. The images display the dynamic range of the real scene.
Key research challenges investigated within the project relate to processing and display of HDR-video, novel algorithms for reconstruction of scene geometry, measurement and modeling of material properties (reflectance, color etc.) for efficient rendering, as well as data structures for storing geometric and radiometric scene information and algorithms for efficient image synthesis. The rendering algorithms enable virtual objects to be placed into captured real scenes and appear as if they were actually there.
A scene is captured by reconstructing a model that describes its geometric and radiometric properties. The model enables virtual objects to be seamlessly placed into the captured scene.
Simulating light transport in complex scenes poses several challenges. To handle general scenes with a wide range of surface geometries, reflection models and lighting effects, we use Monte Carlo methods, which randomly sample light paths connecting the light sources in the scene to the image sensor. Unbiased methods such as path tracing, bidirectional path tracing and Metropolis light transport sample complete paths between a light source and the sensor. Biased methods such as irradiance caching and photon mapping instead use an intermediate step in which partial light transport paths are stored and reused across several viewing rays. This reuse makes biased methods efficient, but it introduces a small blurriness into the results (a non-zero expected error). Unbiased methods, on the other hand, struggle with certain types of light paths: paths where emitted light is transported via a glossy surface to a diffuse surface and then via another glossy surface to the image sensor are notoriously difficult to sample.
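The core idea behind these estimators can be shown in miniature. The sketch below is a toy Monte Carlo estimator of the irradiance at a surface point, E = ∫ L(ω) cosθ dω over the hemisphere, using cosine-weighted importance sampling; it is a hedged, self-contained illustration of unbiased Monte Carlo integration, not any of the full path-tracing algorithms named above.

```python
import math
import random

def sample_cosine_hemisphere():
    """Cosine-weighted random direction on the unit hemisphere (z up).

    Uses Malley's method: sample a disk uniformly, project up.
    The resulting pdf is cos(theta) / pi.
    """
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(radiance, n_samples=100_000):
    """Unbiased Monte Carlo estimate of E = integral of L(w) cos(theta) dw.

    radiance(w): incident radiance from direction w.
    With cosine-weighted sampling, the cos(theta)/pdf factor is exactly
    pi, so each sample contributes L(w) * pi.
    """
    total = 0.0
    for _ in range(n_samples):
        w = sample_cosine_hemisphere()
        total += radiance(w) * math.pi
    return total / n_samples
```

For a constant environment L = 1, the estimator returns exactly π (the analytic irradiance), since the importance sampling cancels all variance; for varying radiance the estimate converges to the true integral as the sample count grows, with no systematic bias.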
While previous methods have often treated the frames of an animated sequence as independent light transport solutions, we investigate both biased and unbiased methods that reuse information from previous (correlated) frames to obtain more efficient Monte Carlo estimators. For this purpose, we will investigate the use of Sequential Monte Carlo methods in light transport simulation. Such methods are useful in a number of scenarios: a moving camera in a static scene, interactive updates to material properties (with direct applications to transfer function exploration in medical visualization), and, most generally, fully dynamic scenes.
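The reuse idea can be sketched with a toy one-dimensional analogue: samples drawn for one frame's target distribution are importance-reweighted for later, slightly different frames instead of being redrawn from scratch. Everything here (the Gaussian targets, the drifting mean, the function names) is an assumption made for illustration; it only mirrors the structure of sample reuse between correlated frames, not the project's actual light transport estimators.

```python
import math
import random

def gaussian_pdf(x, mu, sigma=1.0):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def smc_mean_estimates(frame_means, n_samples=50_000, seed=1):
    """Estimate E[x] for a sequence of 'frames' with drifting targets.

    Samples are drawn once from the first frame's target N(mu_0, 1) and
    then importance-reweighted for each later frame's target N(mu_t, 1),
    using the self-normalized estimator sum(w_i x_i) / sum(w_i) -- a toy
    analogue of reusing light paths between correlated frames rather
    than re-rendering each frame independently.
    """
    rng = random.Random(seed)
    mu0 = frame_means[0]
    xs = [rng.gauss(mu0, 1.0) for _ in range(n_samples)]
    estimates = []
    for mu in frame_means:
        # Importance weight: current frame's target over the proposal.
        ws = [gaussian_pdf(x, mu) / gaussian_pdf(x, mu0) for x in xs]
        total = sum(ws)
        estimates.append(sum(w * x for w, x in zip(ws, xs)) / total)
    return estimates
```

As long as successive targets overlap (small frame-to-frame change), the reweighted estimates stay accurate at a fraction of the sampling cost; when frames differ too much, the weights degenerate, which is exactly why full Sequential Monte Carlo methods add resampling and rejuvenation steps.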