Efficiently and accurately representing ultra-large, detailed scenes using smart voxels

Team: Maverick / INRIA-LJK

Voxel volumes have been shown to be a promising alternative [] to polygon meshes for representing and rendering massive scenes that are both very large and detailed: their total order allows direct access to only the visible data, neighborhoods are immediately at hand, and signal-processing tools apply when pre-filtering to prevent aliasing. Swarms of details can naturally be represented as fuzzy data. This makes same-appearance hierarchical levels of detail well posed, and thus integration along conic rays very efficient.
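As a minimal sketch of the hierarchy idea above (not the project's actual pipeline): pre-filtering a density grid by 2x2x2 averaging yields a mipmap-style pyramid, and a conic ray can pick the level whose voxel size matches its footprint so one sample covers the whole cross-section. The function names and the log2 level-selection rule are illustrative assumptions.

```python
import numpy as np

def build_mip_pyramid(density):
    """Pre-filter a cubic density grid into hierarchical levels of
    detail by 2x2x2 averaging (the simplest linear pre-filter)."""
    levels = [density]
    while levels[-1].shape[0] > 1:
        d = levels[-1]
        n = d.shape[0] // 2
        # Average each 2x2x2 block into one coarser voxel.
        d = d.reshape(n, 2, n, 2, n, 2).mean(axis=(1, 3, 5))
        levels.append(d)
    return levels

def lod_for_cone(footprint, voxel_size):
    """Pick the pyramid level whose voxel size matches the cone's
    footprint, so a single sample integrates the cross-section."""
    return max(0, int(np.log2(max(footprint / voxel_size, 1.0))))

grid = np.random.default_rng(0).random((8, 8, 8))
pyramid = build_mip_pyramid(grid)
print(len(pyramid))            # 4 levels: 8^3, 4^3, 2^3, 1^3
print(lod_for_cone(4.0, 1.0))  # footprint of 4 voxels -> level 2
```

As the cone widens with distance, the selected level coarsens, which is what makes anti-aliased integration along the ray cheap.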

Still, since each voxel is in fact a proxy for unresolved geometry, all of its appearance parameters are potentially view- and light-dependent [], and its visibility is correlated with its neighborhood (in particular, but not only, on silhouettes) []. This requires designing compact, continuous, interpolatable models to account for all of these effects. Moreover, screen-wise (linear) interpolation should differ from depth-wise (opacity-dependent) interpolation, since what is filtered is appearance, which is a non-linear function of density. Obtaining a truly faithful alternative representation therefore requires less naive signal-processing operators than those currently in use.
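The non-linearity can be seen on a two-voxel example (the numbers are made up for illustration): transmittance through a medium is T = exp(-sigma * d), so filtering the density first and then mapping to appearance gives a different answer than filtering the appearance itself.

```python
import numpy as np

# Transmittance through a voxel of density sigma and thickness d is
# T = exp(-sigma * d): appearance is a non-linear function of density,
# so the order of filtering and exponentiation matters.
d = 1.0
sigmas = np.array([0.0, 4.0])  # two fine voxels merged into one coarse voxel

T_of_mean_density = np.exp(-sigmas.mean() * d)  # filter density, then map
mean_T = np.exp(-sigmas * d).mean()             # filter appearance directly

print(T_of_mean_density)  # exp(-2)  ~ 0.135
print(mean_T)             # (1 + exp(-4)) / 2 ~ 0.509
```

A naive linear average of densities makes the coarse voxel far more opaque than the correct screen-wise average of appearances, which is why depth-wise and screen-wise interpolation cannot use the same operator.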

Another aspect is that, since memory is limited (especially on a GPU) compared to the potentially infinite amount of data in our target scenes, data must be generated on the fly (including loading, decompression, and amplification) on demand, during the rendering of flyovers. The performance of previous approaches was limited by the capabilities of the graphics hardware of the time. To build a seamless real-time rendering scheme on this basis, recent years have brought many new GPU tools (threads able to launch threads without CPU synchronization, RT cores for managing bounding volume hierarchies and intersections, Tensor Cores for accelerated linear algebra, etc.) that suggest new research directions.
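The on-demand scheme can be sketched as a fixed-budget brick cache (a simplification of what a GPU implementation would do; the class and callback names are hypothetical): a brick of voxels is generated only when the renderer first requests it, and the least recently used brick is evicted when the budget is full.

```python
from collections import OrderedDict

class BrickCache:
    """On-demand brick cache: bricks are generated (loaded, decompressed
    or amplified) only when first requested, and the least recently used
    brick is evicted once the memory budget is exhausted."""

    def __init__(self, capacity, generate):
        self.capacity = capacity
        self.generate = generate   # callback producing a brick's voxel data
        self.bricks = OrderedDict()

    def fetch(self, key):
        if key in self.bricks:
            self.bricks.move_to_end(key)          # mark as recently used
        else:
            if len(self.bricks) >= self.capacity:
                self.bricks.popitem(last=False)   # evict the LRU brick
            self.bricks[key] = self.generate(key)
        return self.bricks[key]

generated = []
cache = BrickCache(2, lambda k: generated.append(k) or f"brick{k}")
cache.fetch(0); cache.fetch(1); cache.fetch(0); cache.fetch(2)
print(generated)  # [0, 1, 2]: brick 0 was reused; brick 1 got evicted
cache.fetch(1)
print(generated)  # [0, 1, 2, 1]: evicted brick 1 had to be regenerated
```

On actual hardware the same policy would manage a pool of GPU bricks, with generation triggered by ray traversal misses rather than explicit fetch calls.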