High Quality Volume Rendering
Contact: Anders Ynnerman
In medical volume visualization, volume rendered images are typically produced using a static mapping from voxel amplitudes to color quadruples (RGBA). In this project we explore the application of state-of-the-art signal processing techniques to volumetric visualization, making the extraction of information from data a dynamic process that exploits prior knowledge about the content of the dataset. One possible approach is to use spatially varying interpolation and reconstruction filters during rendering to account for the expected behavior of known materials and their transitional regions.
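To make the baseline concrete, the static mapping mentioned above can be sketched as a simple lookup table from normalized voxel amplitudes to RGBA values. This is an illustrative sketch only; the bin count, color ramp, and opacity thresholds are invented for the example and do not come from the project.

```python
import numpy as np

def make_transfer_function(n_bins: int = 256) -> np.ndarray:
    """Build a toy RGBA lookup table over normalized amplitudes in [0, 1].

    The ramp below (transparent low values, reddish mid-range, opaque
    bright values) is purely illustrative, not a clinical mapping.
    """
    lut = np.zeros((n_bins, 4))
    x = np.linspace(0.0, 1.0, n_bins)
    lut[:, 0] = np.clip(2.0 * x, 0.0, 1.0)        # red channel
    lut[:, 1] = np.clip(2.0 * x - 1.0, 0.0, 1.0)  # green channel
    lut[:, 2] = np.clip(2.0 * x - 1.0, 0.0, 1.0)  # blue channel
    lut[:, 3] = np.where(x < 0.2, 0.0, x)         # opacity (alpha)
    return lut

def classify(volume: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Statically map each normalized voxel amplitude to an RGBA quadruple."""
    idx = np.clip((volume * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

# Toy normalized volume standing in for scanner data
volume = np.random.default_rng(0).random((8, 8, 8))
rgba = classify(volume, make_transfer_function())
```

The key limitation this project targets is visible in the sketch: the mapping depends only on the amplitude value, never on where in the volume the voxel lies or which material it is likely to belong to.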
The derived filtering techniques can be used to improve ray casting, either directly, by improving classification and feature separation, or indirectly, by enabling multi-dimensional transfer functions, probabilistic models, or uncertainty visualization. One important advantage is the possibility of integrating prior information about body tissues, including their respective value ranges and probability distributions. As a result, the commonly applied assumption of complete data continuity can be relaxed, reducing the impact of interpolation-related artifacts. In addition, the derived techniques can be used to formulate a broader feature-based approach to volume visualization, opening up new possibilities and challenges for both classification and illumination of medical data.
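One way the prior tissue information above might enter the pipeline is through a probabilistic classification step: each tissue is modeled by a prior and a Gaussian over voxel amplitude, and Bayes' rule yields per-voxel membership probabilities that could feed a probabilistic or uncertainty-aware renderer. The tissue names, priors, means, and standard deviations below are hypothetical placeholders, not values from the project.

```python
import numpy as np

# Hypothetical tissue models: name -> (prior, mean, std) over normalized
# amplitude. All numbers are invented for illustration.
TISSUES = {
    "air":  (0.4, 0.05, 0.03),
    "soft": (0.4, 0.45, 0.10),
    "bone": (0.2, 0.85, 0.08),
}

def gaussian(x, mu, sigma):
    """Gaussian probability density function."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def tissue_posteriors(volume: np.ndarray) -> dict:
    """Per-voxel posterior probability for each tissue class (Bayes' rule)."""
    weighted = {name: prior * gaussian(volume, mu, sigma)
                for name, (prior, mu, sigma) in TISSUES.items()}
    total = sum(weighted.values())
    return {name: w / total for name, w in weighted.items()}

volume = np.random.default_rng(1).random((4, 4, 4))
posteriors = tissue_posteriors(volume)
```

Such a soft classification is one route to the relaxed continuity assumption mentioned above: near a transition, a voxel carries graded memberships in both tissues rather than being forced through a single continuous amplitude mapping.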