Part 1 – Nvidia GVDB
All path tracing and ray tracing applications rely on data stored in memory in some layout. For textures, many memory layouts are available, but for volumetric data the natural choice is to store it as voxels. You can think of a voxel as a pixel in a 2D texture, but with an extra dimension.
Traversing the 3D space inside a volumetric data set's bounding box is straightforward, but a lot of memory is wasted if the data is sparse or mostly empty. And that is exactly the kind of volumetric data we deal with every day in VFX and medical imaging.
For this very problem, Dr. Ken Museth developed VDB in 2012. Unlike uniform grids, which store voxels by 3D index (or position), OpenVDB stores voxels in a shallow hierarchical tree, which brings significant advantages in both ray traversal and memory consumption.
OpenVDB on the GPU
If you want to store volumetric data on the GPU, the data must be flattened into a 3D texture. For directly converting and storing very large sparse data on the GPU, Dr. Rama Karl Hoetzlein (NVIDIA) developed the GVDB Voxels framework, and this is the library I've chosen to integrate into my path tracer to render VDB files directly on the GPU.
Integrating GVDB into a path tracer is easy and requires very little code maintenance, but bootstrapping is a little non-trivial. To start the integration, one needs to build the OpenEXR and OpenVDB libraries, then build CUDPP and GVDB. To cut down the complex dependency chain, I've pre-built the Windows x64 binaries and libraries with MSVC 14.16 and placed them alongside the source files. Linux and Mac users will need to build these libraries themselves and make some changes to the CMake file.
Besides storing very large sparse data in GPU atlases, GVDB also comes with many housekeeping modules that maintain the scene structure and parameters. For example, the camera in my path tracer uses the gvdb.scene module to update the scene whenever the window size or the focal length changes.
GVDB Voxels, however, comes with one shortcoming: there is currently no way to store volumetric data in nodes other than leaf nodes. This causes transmittance and shadow problems that are especially visible when the inside of a cloud is uniform and is represented by internal nodes rather than active leaf voxels.
To quickly fix this problem, I filled the insides of the clouds with active voxels and exported them from Houdini as new VDB files. The Houdini file is also provided in the GitHub repository, in the “assets” folder.
Part 2 – Feature Plans & Bugs
Possible feature plans
Currently, Volumetric Path Tracer supports three types of lights: a procedural sky map, HDRI textures, and a distant light. Future plans include support for area lights, spot lights, and point lights. Equiangular sampling for these lights could also be added.
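For point and spot lights in particular, equiangular sampling (Kulla & Fajardo) places distance samples along the ray in proportion to the inverse squared distance to the light, taming the 1/r² spike near the light. A standalone sketch of that standard technique, not code from VPT:

```cpp
#include <cmath>

struct EquiangularSample { float t; float pdf; };

// Sample a distance t in [tMin, tMax] along the ray o + t*d (d normalized),
// distributed proportionally to 1 / (squared distance to lightPos).
// u is a uniform random number in [0, 1).
EquiangularSample sampleEquiangular(const float o[3], const float d[3],
                                    const float lightPos[3],
                                    float tMin, float tMax, float u) {
    // Project the light onto the ray to find the closest point on the ray.
    float w[3] = {lightPos[0] - o[0], lightPos[1] - o[1], lightPos[2] - o[2]};
    float delta = w[0] * d[0] + w[1] * d[1] + w[2] * d[2];
    // D: perpendicular distance from the light to the ray.
    float px = o[0] + delta * d[0] - lightPos[0];
    float py = o[1] + delta * d[1] - lightPos[1];
    float pz = o[2] + delta * d[2] - lightPos[2];
    float D = std::sqrt(px * px + py * py + pz * pz);
    if (D < 1e-6f) D = 1e-6f;                 // avoid the degenerate case
    float thetaA = std::atan((tMin - delta) / D);
    float thetaB = std::atan((tMax - delta) / D);
    float theta  = thetaA + u * (thetaB - thetaA);
    float t      = D * std::tan(theta);       // offset from the closest point
    EquiangularSample s;
    s.t   = delta + t;
    s.pdf = D / ((thetaB - thetaA) * (D * D + t * t));
    return s;
}
```

Because the pdf peaks where the ray passes closest to the light, the high-energy region contributes most of the samples instead of being hit by luck.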
Rendering a single cloud VDB file is a struggle by itself, but to create stunning cloudscapes one has to think about optimizing the current structure. Since GPU memory is currently limited to a few gigabytes, fitting a cloudscape of a couple hundred gigabytes is simply not possible. One solution is instancing: reusing the same few VDB files with random rotations and transforms. Currently VPT traverses a single GVDB instance, but extending this might be a good path forward.
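The usual shape of that extension is to store one grid plus a list of per-instance transforms, and to move each ray into an instance's local space before traversal so the same grid data serves every copy. A translation-only sketch of the idea (hypothetical names, not the GVDB API; a full version would carry 4x4 matrices and transform the direction too):

```cpp
#include <algorithm>
#include <cfloat>

// Translation-only instance of one shared grid.
struct Instance { float offset[3]; };

// Slab test against the shared grid's local-space bounds [0, size]^3.
bool intersectLocalBox(const float o[3], const float d[3], float size,
                       float& tNear) {
    float t0 = 0.0f, t1 = FLT_MAX;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];
        float ta = (0.0f - o[i]) * inv, tb = (size - o[i]) * inv;
        if (ta > tb) std::swap(ta, tb);
        t0 = std::max(t0, ta);
        t1 = std::min(t1, tb);
    }
    tNear = t0;
    return t0 <= t1;
}

// Traverse every instance: shift the ray into each instance's local space
// and keep the nearest hit. One grid in memory, many copies in the scene.
int nearestInstance(const Instance* inst, int count,
                    const float o[3], const float d[3]) {
    int best = -1;
    float bestT = FLT_MAX;
    for (int i = 0; i < count; ++i) {
        float oL[3] = {o[0] - inst[i].offset[0],
                       o[1] - inst[i].offset[1],
                       o[2] - inst[i].offset[2]};
        float t;
        if (intersectLocalBox(oL, d, 1.0f, t) && t < bestT) {
            bestT = t;
            best  = i;
        }
    }
    return best;
}
```

A real renderer would put the instance bounds into a BVH rather than looping linearly, but the local-space trick is the core of the memory saving.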
Another option for cloudscapes is procedural cloud textures. This approach was thoroughly explored during the R&D for Horizon Zero Dawn and offers many ideas that could be reused, for example by implementing them with CUDA arrays.
Thin parts of a volume are very hard to sample, and most rays miss these parts entirely. A recent paper from Pixar, “Efficient Unbiased Rendering of Thin Participating Media”, addresses this issue, and implementing it might be a good challenge.
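The difficulty is easy to quantify for a homogeneous slab: classic free-flight sampling draws t = -ln(1 - u) / σ, so the chance that a ray scatters anywhere inside a slab of length L is 1 - exp(-σL), which collapses as the medium gets thinner. A small illustration (not VPT code):

```cpp
#include <cmath>

// Probability that a free-flight sample t = -ln(1 - u) / sigma lands inside
// a homogeneous slab of length L, i.e. produces a scattering event at all.
double hitProbability(double sigma, double L) {
    return 1.0 - std::exp(-sigma * L);
}

// Dense medium,  sigma = 5:    hitProbability(5.0, 1.0)  ~ 0.993
// Thin medium,   sigma = 0.01: hitProbability(0.01, 1.0) ~ 0.010
// In the thin case ~99% of samples pass straight through without an event,
// which is exactly the variance problem the Pixar paper attacks.
```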
Multiple scattering in dense clouds needs many samples to capture the features specific to clouds. For this, I've started to implement the 2015 paper by Magnus Wrenninge, “Art-Directable Multiple Volumetric Scattering”, but more work is needed.
Volumetric path tracing can also be extended beyond volumetric data; for example, SDF data could be rendered with PBR materials and lighting.
As there are bugs in any repository, mine is no exception: some visible and not-so-visible bugs infest VPT at the moment. One of them is the black pixels that occur when rendering with the procedural sky light. This is most probably a ray returning a NaN value and is possibly connected to the sampling scheme.
Another bug is the fireflies that show up when rendering with the sun light. This may be due to transmittance and can easily be dealt with by clamping, but clamping introduces bias.
The camera in VPT also requires some work, and there is a memory allocation error when the window size is changed and the image is saved by pressing the “S” key.
Part 3 – GitHub Repo
When I started working on this project, I kept the GitHub repo private until I considered it mature enough, at least by my standards. Now the repo is public to help anyone who prefers examining actual code over reading articles and books. The repo is located at:
If you dive into the core of the renderer (the “render_kernel.cu” file), you will notice that there are now three pathways for rendering volumetric objects. The first is the “volume_integrator”, which uses the PBRT implementation. The “direct_integrator” uses the legacy rendering path that came with Ray Tracing Gems, Chapter 28, and is the structural base of the Volumetric Path Tracer; this integrator is much faster and simpler than the PBRT one. The last is the work-in-progress “art_directable_integrator”, in which I will be testing the ideas from the 2015 paper.
Installation instructions are located in the README file in the repo. Please contact me or file an issue on GitHub if you have any questions.
Thank you for stopping by. See you in the next posts.