I joined the EVASION team in September 2006 to work on the real-time rendering of natural landscapes as a whole. I am interested in the animation and realistic rendering of terrain, atmosphere, ocean, vegetation, rivers, clouds, etc. I am looking for real-time, scalable algorithms that allow users to navigate freely anywhere in very large landscapes (up to whole planets), from ground to space, without visible transitions. I left INRIA in 2011 and am no longer doing research on this topic.
We present a new algorithm for the real-time realistic rendering and lighting of forests. Our method can render very large forest scenes in real time, with realistic lighting at all scales, and without popping or aliasing. It is based on two new forest representations, called z-fields and shader-maps, with a seamless transition between them. Our first model builds on light fields and height fields to represent and render the nearest trees individually, accounting for all lighting effects. Our second model is a location-, view- and light-dependent shader mapped on the terrain, accounting for the cumulative subpixel effects. Qualitative comparisons with photographs show that our method produces realistic results.
We present a new algorithm for the modelling, animation, illumination and rendering of the ocean in real time, at all scales and for all viewing distances. Our algorithm is based on a hierarchical representation combining geometry, normals and BRDF. For each viewing distance, we compute a simplified version of the geometry, and encode the missing details into the normals and the BRDF, depending on the level of detail required. We then use this hierarchical representation for illumination and rendering. Our algorithm runs in real time and produces highly realistic pictures and animations.
Extension: an improved version using an FFT method to synthesize the surface.
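As an illustration of the general idea behind such FFT methods, here is a minimal spectral synthesis sketch in the spirit of Tessendorf's ocean waves: white noise is shaped by a Phillips-like spectrum, animated with the deep-water dispersion relation, and inverse-transformed into a height field. All names and constants are illustrative, not those of the actual implementation.

```python
import numpy as np

def ocean_heightfield(n=64, length=100.0, wind=(10.0, 0.0), t=0.0, g=9.81, seed=0):
    """Synthesize an ocean height field by filtering white noise with a
    Phillips-like spectrum and inverting the FFT (Tessendorf-style sketch)."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    kx, ky = np.meshgrid(k, k)
    kk = np.hypot(kx, ky)
    kk[0, 0] = 1e-6  # avoid division by zero at the DC term
    wind_speed = np.hypot(*wind)
    L = wind_speed**2 / g  # largest wave arising from the wind speed
    wdir = np.array(wind) / wind_speed
    cos_f = (kx * wdir[0] + ky * wdir[1]) / kk  # alignment with the wind
    phillips = np.exp(-1.0 / (kk * L) ** 2) / kk**4 * cos_f**2
    h0 = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(phillips / 2)
    omega = np.sqrt(g * kk)  # deep-water dispersion relation
    h = h0 * np.exp(1j * omega * t)
    # take the real part (a full implementation would enforce Hermitian symmetry)
    return np.real(np.fft.ifft2(h))
```

Animating `t` regenerates the whole surface in a single inverse FFT per frame, which is what makes the spectral approach attractive for real-time use.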
We present a new and accurate method to render the atmosphere in real time from any viewpoint, from ground level to outer space, while taking Rayleigh and Mie multiple scattering into account. Our method reproduces many effects of the scattering of light, such as the daylight and twilight sky colors and aerial perspective for all view and light directions, or the Earth and mountain shadows (light shafts) inside the atmosphere. Our method is based on a formulation of the light transport equation that is precomputable for all viewpoints, view directions and sun directions. We show how to store this data compactly and propose a GPU-compliant algorithm to precompute it in a few seconds. This precomputed data allows us to evaluate the light transport equation at runtime in constant time, without any sampling, while taking into account the ground for shadows and light shafts.
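As a sketch of the kind of quantity such a precomputation tabulates, the following computes the transmittance along a ray to the top of the atmosphere by numerically integrating the Rayleigh optical depth. This is a simplified, CPU-side illustration; the constants are typical Earth-like values, not necessarily those used in the paper.

```python
import numpy as np

# Illustrative constants for an Earth-like atmosphere.
Rg = 6360e3   # ground radius (m)
Rt = 6420e3   # top-of-atmosphere radius (m)
H_R = 8000.0  # Rayleigh scale height (m)
beta_R = np.array([5.8e-6, 13.5e-6, 33.1e-6])  # RGB scattering coefficients (1/m)

def transmittance(r, mu, samples=64):
    """Transmittance along a ray starting at radius r with view-zenith cosine mu,
    up to the top of the atmosphere, by trapezoidal integration of the
    exponentially decreasing air density."""
    # distance to the top of the atmosphere along the ray
    d = -r * mu + np.sqrt(r * r * (mu * mu - 1.0) + Rt * Rt)
    x = np.linspace(0.0, d, samples)
    # altitude above ground at each sample point along the ray
    h = np.sqrt(r * r + x * x + 2.0 * r * x * mu) - Rg
    density = np.exp(-h / H_R)
    optical_depth = np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(x))
    return np.exp(-beta_R * optical_depth)
```

Tabulating this function over (r, mu) gives a small 2D texture; the paper's full precomputation extends the same idea to single and multiple scattering over all view and sun directions.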
We present a method to populate very large terrains with very detailed features such as roads, rivers, lakes and fields. These features can be interactively edited, and the landscape can be explored in real time at any altitude, from flight view to car view. We use vector descriptions of linear and areal features, with associated shaders to specify their appearance (terrain color and material), their footprint (effect on terrain shape), and their associated objects (bridges, hedges, etc.). In order to encompass both very large terrains and very fine details we rely on a view-dependent quadtree refinement scheme. New quads are generated when needed and cached on the GPU. For each quad we produce on the GPU an appearance texture, a footprint texture, and some object meshes, based on the features' vector descriptions and their associated shaders. Adaptive refinement, procedural vector features and a mipmap pyramid provide three LOD mechanisms for small, medium and large scale quads. Our results and the attached video show high performance with high visual quality.
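The view-dependent refinement can be sketched as follows: a quad is subdivided while its size exceeds some factor of its distance to the viewer. This is a deliberately simplified 2D criterion with illustrative names; the actual scheme also manages GPU caching and per-quad texture production.

```python
import math

class Quad:
    """A node of a view-dependent terrain quadtree (illustrative sketch)."""
    def __init__(self, x, y, size, level=0):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []

    def refine(self, eye, k=1.0, max_level=6):
        """Subdivide while the quad looks large from the eye position, i.e.
        while its size exceeds k times its distance to the viewer."""
        cx, cy = self.x + self.size / 2, self.y + self.size / 2
        dist = math.hypot(cx - eye[0], cy - eye[1])
        if self.level < max_level and self.size > k * dist:
            half = self.size / 2
            self.children = [
                Quad(self.x + dx, self.y + dy, half, self.level + 1)
                for dx in (0, half) for dy in (0, half)
            ]
            for child in self.children:
                child.refine(eye, k, max_level)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Quads near the eye end up small (fine details), while distant quads stay large, which is what bounds the total number of quads, and hence textures and meshes, needed per frame.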
The goal of this work is to render in real time planet-sized terrains populated with plants and trees. Since it is not possible to precompute and store the position of each plant (there are billions of them), we generate them on the fly. For this we generate candidate positions with a pseudo-random generator, and we test each candidate against a land cover classification (LCC) map in order to reject all positions that fall outside vegetation areas (our Earth LCC map is quite coarse, at 1 km per pixel, so we amplify it on the fly with procedural noise to add small-scale variations). We then pack the validated positions using a GPU stream reduction algorithm, and we use this packed structure to draw many (> 100,000) plant instances with appropriate LOD using hardware instancing.
This work was done by Yacine Amara as part of his PhD at the Ecole Militaire Polytechnique d'Alger, during a five-month visit to the EVASION team that I supervised, building on previous work done in collaboration with Xavier Marsault in 2007.
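The candidate generation and rejection step described above can be sketched as follows. This is a CPU-side simplification with illustrative names and parameters; the real system runs per terrain quad on the GPU, with stream reduction and procedural amplification of the coarse LCC map.

```python
import numpy as np

def plant_positions(quad_min, quad_max, lcc_map, cell_size, density=0.01, seed=42):
    """Generate plant instance positions inside a terrain quad: draw
    pseudo-random candidates, then reject those falling outside vegetation
    cells of a land cover classification (LCC) map."""
    rng = np.random.default_rng(seed)
    area = (quad_max[0] - quad_min[0]) * (quad_max[1] - quad_min[1])
    n = int(area * density)
    candidates = rng.uniform(quad_min, quad_max, size=(n, 2))
    # look up the LCC cell of each candidate; True marks a vegetation cell
    ij = (candidates // cell_size).astype(int)
    keep = lcc_map[ij[:, 0] % lcc_map.shape[0], ij[:, 1] % lcc_map.shape[1]]
    # "stream reduction": pack the accepted positions contiguously
    return candidates[keep]
```

Using the same seed per quad makes the placement deterministic, so plants reappear at the same positions when the viewer returns, without storing anything.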
We present an algorithm for the interactive simulation of realistic flowing fluids in large virtual worlds. Our method relies on two key contributions: the local computation of the velocity field of a steady flow given boundary conditions, and the advection of small-scale details on the fluid, following the velocity field and uniformly sampled in screen space.
This work was done by Qizhi Yu, a former EVASION PhD student co-directed by Fabrice Neyret and me.
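The advection of detail samples along a steady velocity field can be illustrated with a minimal explicit Euler integrator. This is only a sketch with illustrative names; the actual method additionally keeps the samples uniformly distributed in screen space, which this omits.

```python
import numpy as np

def advect(points, velocity, dt, steps=1):
    """Advect 2D sample points through a steady velocity field with
    explicit Euler steps (illustrative sketch)."""
    for _ in range(steps):
        points = points + dt * velocity(points)
    return points

# Example steady flow: rigid rotation about the origin.
def rotation(p):
    return np.stack([-p[:, 1], p[:, 0]], axis=1)
```

For instance, a point at (1, 0) advected through this rotation field for a quarter turn ends up near (0, 1), with a small outward drift due to the explicit Euler scheme.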
We propose an algorithm for the real-time realistic simulation of multiple anisotropic scattering of light in a volume. Contrary to previous real-time methods, we account for all kinds of light paths through the medium and preserve their anisotropic behavior. Our approach consists of estimating the energy transport from the illuminated cloud surface to the rendered cloud pixel for each separate order of multiple scattering. Rendering is done efficiently in a shader on the GPU, relying on a cloud surface mesh augmented with a Hypertexture to enrich the shape and silhouette. We demonstrate our model with the interactive rendering of detailed animated cumulus clouds and cloudy skies at 2-10 frames per second.
This work was done by Antoine Bouthors, a former EVASION PhD student with whom I worked during his PhD. I then directed two Master's students on this topic, Vincent Vidal and Laurent Belcour.
After my PhD thesis, done at INRIA in the SIRAC team (now named SARDES), defended in 2001 and whose title was "A framework for the adaptation of the non-functional properties of distributed applications", I worked for five years at France Telecom R&D as a research engineer, on component-oriented programming. This work resulted in several publications:
and in two open-source software packages distributed by the ObjectWeb consortium:
In parallel with my professional activities, I tried to model Rama, the huge spacecraft described by A. C. Clarke in "Rendezvous with Rama":
This led to a movie (YouTube) that was accepted at the SIGGRAPH 2006 computer animation festival. And this is how I became interested in working on the above theme!