M2R 2019-2020 project:
« Revisiting volumetric ray-tracing to make it well-posed »



Advisor

Fabrice NEYRET   - Maverick team, LJK, at INRIA-Montbonnot (Grenoble)



Context

Volumetric ray-tracing in voxel grids is attractive for rendering very complex and detailed scenes, for both theoretical and practical reasons:
- it is not limited to surfaces, and even for solid objects it is a more reasonable representation when details are numerous and smaller than pixels (e.g., distant foliage, SVOs);
- it relates to the more general physics of light transport;
- voxels form an ordered set along a ray, so the traversal of complex content can stop as soon as the ray becomes opaque;
- it relies on direct access to locations and neighborhoods, which opens the door to signal-processing formalism and (pre/post) filtering for adaptive resolution: as with MIP-mapping, we could choose a voxel resolution matching the pixel size and let the voxels represent all subscale content.
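The ordered traversal with early termination mentioned above can be sketched as a minimal front-to-back ray-marching loop (the function name and the `tau_min` threshold are illustrative assumptions, not part of the project):

```python
import numpy as np

def march_ray(densities, step, tau_min=1e-3):
    """Front-to-back compositing of one ray through a voxel grid.

    densities : extinction values at the samples along the ray (near to far)
    step      : distance between successive samples
    Returns the accumulated opacity; stops early once the ray is almost opaque.
    """
    transmittance = 1.0  # fraction of light still reaching the eye
    for sigma in densities:
        alpha = 1.0 - np.exp(-sigma * step)  # opacity of this sample
        transmittance *= (1.0 - alpha)       # transparencies multiply along the ray
        if transmittance < tau_min:          # early exit: content behind is hidden
            break
    return 1.0 - transmittance
```

This is only a sketch of the classical emission-absorption traversal; the early exit is what makes visiting arbitrarily complex content affordable.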
Alas, things are trickier than they seem: along a ray, transparencies multiply, but in the screen-plane directions it is opacity that should be interpolated, so the reconstruction and blending of the local density is ambiguous and ill-posed as soon as the density field varies, especially if it varies fast (e.g., at object borders). Moreover, when the voxel content represents subscale geometry, its effective opacity is view-dependent (e.g., a sheet seen from the side vs. from the front), and is often correlated with that of neighboring voxels: on large objects, a silhouette covering half a voxel (thus making it semi-transparent) should nevertheless totally hide the voxels just behind it.
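The ambiguity can be seen on a hypothetical two-voxel example (the step length and density values below are arbitrary): converting an interpolated density to opacity does not give the same result as interpolating the opacities themselves, because the exponential mapping density → opacity is nonlinear.

```python
import numpy as np

step = 1.0
# Two neighboring voxels: empty space next to a dense border (sharp silhouette).
sigma_a, sigma_b = 0.0, 4.0

def opacity(sigma):
    return 1.0 - np.exp(-sigma * step)

# Option 1: interpolate the density field, then convert to opacity.
alpha_from_density = opacity(0.5 * (sigma_a + sigma_b))
# Option 2: convert each voxel to opacity, then interpolate in the screen plane.
alpha_from_opacity = 0.5 * (opacity(sigma_a) + opacity(sigma_b))

# The two disagree whenever the density varies (convexity of exp / Jensen):
print(alpha_from_density, alpha_from_opacity)  # ~0.865 vs ~0.491
```

The two blending orders agree only where the density is constant, which is why the reconstruction is ill-posed precisely at fast variations such as object borders.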


Description of the subject

We have already done some work on internal correlations on surfaces and in volumes. In this project, we want to explore a more direct representation of view-dependent voxels, so as to rely only on well-posed blending along the ray, and to encode some of the parallax of the voxel content in order to account for correlation when interpolating in the screen-plane directions.
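As a toy illustration of the view-dependence at stake (a deliberately naive geometric model, not the representation the project will develop): the effective coverage of a subscale sheet inside a voxel shrinks with the cosine between the view direction and the sheet normal, so a single stored opacity value cannot be correct for all directions.

```python
def sheet_opacity(coverage_front, cos_theta):
    """Effective opacity of a subscale sheet inside a voxel, versus view angle.

    coverage_front : opacity when the sheet is seen face-on (hypothetical parameter)
    cos_theta      : cosine between the view direction and the sheet normal
    Seen face-on the sheet blocks coverage_front of the rays; seen edge-on it
    covers almost nothing: the projected area scales with |cos(theta)|.
    """
    return coverage_front * abs(cos_theta)
```

A view-dependent voxel representation would have to encode (at least) this kind of directional variation, plus its correlation with neighboring voxels.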



Prerequisites