All contents, including the /*jupiter jazz*/ name and logotype are © 2000-2004 Paolo Berto, all rights reserved. Reproduction prohibited.
Maya is a trademark of Alias Systems; mental ray is a trademark of mental images GmbH; all other trademarks are property of their respective owners.


created by: paolo berto
software engineer, special projects, mental images - paolo@mental.com - www.mental.com
principal, /*jupiter jazz*/ - pberto@jupiter-jazz.com - www.jupiter-jazz.com

INTRO  |  ADVANCED FINAL GATHERING  |  SUBSURFACE SCATTERING  |  HARDWARE RENDERING  |  MEMORY OPTIMIZATIONS

Chapter #1 - Advanced Final Gathering
  • What is FG and how does it work
    • FG in origin
    • FG for artists
    • the FG process:
      • phase 1: precomputation
      • phase 2: rendering
    • shading writing notes

  • Illumination techniques with FG
    • things to know in Maya
    • ambient occlusion pass with Final Gathering
      • via camera environment (Maya 5.0.1 and 6.0)
      • via IBL node (Maya 6.0)
    • bouncecard objects
    • Image Based Lighting
      • 1. IBL from geometry sphere (Maya 5.0.1)
      • 2. IBL from mental ray environment shader (Maya 5.0.1)
      • 3. IBL from IBL node (Maya 6.0)
    • compositing

  • Advanced settings and optimizations
    • global FG view (dynamic)
    • FG diagnostics
      • MapViz node
      • diagnostic mode (hidden)
    • FG rebuild freeze (dynamic)
    • FG falloff
      • mi_ray_falloff (for shader writer)
    • mi_ray_offset (dynamic)
    • FG file
    • FG filter
    • FG trace depth
    • -rebuild i (standalone only)
    • RCFG_FORCE_FACTOR = 0...1 (standalone only)
    • RCFG_RASTER_FACTOR = ? (standalone only)
    • -finalgather_display B (standalone only)

  • Guidelines:

    • Reduce low-frequency flickering when using FG in animations
    • FG and photon tracing (with FG fewer photons are needed and less GI accuracy is required)
    • FG and baking

files on: ../masterclass_files/chapter1_FG/

What is final gathering and how does it work

This is a detailed explanation of what FG is, its purpose and how it works.


FG in origin


FG was initially conceived as a performance-oriented caching technique to improve and 'complete' global illumination (GI) simulations. After photon tracing, FG gathers an approximation of the local irradiance coming from the first illumination bounce (though multi-bounce diffuse and specular effects are now supported in mr3.3) and uses that information at render time for further interpolation. The obvious advantage is that a less accurate, therefore faster, GI simulation is required while the result is still a physically correct simulation. In addition, a skilled use of both GI and FG (which are both cacheable) reduces the 'dark corner effect' and low-frequency noise (and its related animation flickering) when compared to a GI-only solution.

The FG process happens in a 'final' phase, hence the name, which comes after the photon tracing phase but before the rendering phase (though the process is more complex: FG splits into precomputation during the FG phase and extra computation with interpolation at render time; in addition, either sub-phase can be skipped):
  • photon tracing phase
  • final gathering phase
    • precomputation of FG points
  • rendering phase
    • computation of extra FG points & interpolation of all FG points

FG stores this local irradiance information in a 3D data structure called the FG map, in the same way as photons are stored in photon maps. These maps are technically kd-trees (note that a kd-tree API is now provided with MR 3.3), and they can be held temporarily in memory or kept resident on disk, depending on the user's needs.


FG for artists


On the other hand, artists who don't require physical correctness (such as games artists, but not only!) can take advantage of an FG-only computation (no GI). Since FG still calculates an equivalent of the first bounce of global illumination photon tracing (note that diffuse and specular multi-bounce FG is now supported in mr 3.3), effects such as color bleeding and ambient occlusion are possible, even without any light in the scene.


How does it work


Final gathering is a two-step process:
  1. Precomputation

    A hexagonal (triangular) grid is built in raster space. For each point of the grid an eye ray is sent from the camera into the scene; if there is an intersection with geometry whose shading needs to evaluate irradiance, a number of FG rays is sent hemispherically around the normal at the intersection point.

    The MapViz node in Maya can display this hexagonal/triangular grid of FG points generated in the precomputation phase. It does NOT display points coming from the second phase (rendering/interpolation).



    model courtesy of Olivier Renouard


    Rays travel until they hit something in the surroundings (unless you specify a falloff range, which fades the result to the environment color: a sort of FG ray lifespan that can have a large effect on performance, because FG stops pulling in almost irrelevant distant geometry) and then return color information.

    This is a simplified diagram of the precomputation phase:



    All this information is cached in a 3D FG map file (temporarily held in memory or resident on disk) for that specific frame or for a bundle of frames.

    These precomputed FG points are represented as green dots when using the 'diagnoseFinalg' diagnostic mode.

    diagnoseFinalg is activated through MEL:

    setAttr miDefaultOptions.diagnoseFinalg 1;

    see advanced settings & optimizations, or through the standalone flag:

    -diagnose finalgather

    Green dots are the precomputation FG points.
    FG radius is set to max = 10, min = 1

    Feedback of the first phase at the 'info' message verbosity level:

    RCFG 0.2  info : scheduling early final gather jobs
    RCI   0.2  info : using scanline algorithm for eye rays
    RCI   0.2  info : using BSP algorithm for secondary rays
    ----------
    RCFG 0.2  info : depth  #finalgather points
    RCFG 0.2  info :   0                   1345
    RCFG 0.2  info : ray type                  number
    RCFG 0.2  info :   eye                       1628
    RCFG 0.2  info :   finalgather             134500
    ----------
    RCFG 0.2  info : wallclock  0:00:01.29 for computing finalgather points
    ----------
    RCFG 0.2  info : optimizing final gather points access (1345 points) (stored)


  2. Extra computation & interpolation

    When the rendering phase starts, the cached information in the FG map is reused for all irradiance shading calls. The FG points calculated during presampling were expensive, so nearby FG points are
    reused: depending on the two radius settings, interpolation is applied to nearby points; the radius defines the influence zone within which nearby irradiance values are integrated without spawning more rays. If this influence zone is exceeded, more FG rays are dynamically shot at render time; logically, this happens more often on geometry edges.

    The following diagram represents the interpolation phase:



    Radius values are expressed by default in world units, but via the 'finalGatherView' attribute they can be expressed in raster space (pixels).

    These render-time FG points are represented as red dots when using the diagnoseFinalg diagnostic mode:

    The FG diagnostic mode also shows the render-time FG points as red dots, usually present on edges.

    This image also contains more green FG dots, to show the effect of reducing the FG radius to max = 1, min = 0.1




    If new FG points are calculated at render time, they are automatically appended to the 3D FG map in memory or on disk, letting the map grow (unless you 'freeze' the map).

    Feedback of the second phase at the 'info' message verbosity level:

    RC   0.2  info : rendering statistics
    RC   0.2  info :   type                       number   per eye ray
    RC   0.2  info :   eye rays                    55569          1.00
    RC   0.2  info :   finalgather rays (added)  1352200         24.33

FG is controlled by accuracy, the number of rays to cast per FG point (default = 1000). The max radius limits which existing FG points can be reused and interpolated: if there are too few FG points within the max radius, a new FG point is computed. The min radius forces FG points into the interpolation, and is normally 1/10th of the max radius.

Please note that the radii are used for interpolating FG points; there is no relation between the radii and the length of an FG ray's path, which is actually controlled by another couple of parameters called FG falloffs.
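
As an illustrative sketch only, these three controls can be set from MEL on the mental ray options node. The attribute names below are my assumption for Maya 6.0, so verify them first in your version:

```mel
// Assumed attribute names -- check with: listAttr -st "finalGather*" miDefaultOptions;
setAttr miDefaultOptions.finalGatherRays 1000;     // accuracy: rays cast per FG point
setAttr miDefaultOptions.finalGatherMaxRadius 10;  // interpolation/reuse limit
setAttr miDefaultOptions.finalGatherMinRadius 1;   // forced interpolation, ~1/10 of max
```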


Shading writing notes


FG is evaluated only for front-side faces (meaning that by default you don't compute irradiance for back faces, no matter if you make the surface semi-transparent!). This is indeed an advantage: if new parts of an object become visible during an animation, the relative extra FG points are computed and eventually appended to an FG map on disk, without recomputing all points.

At a shading writing level, the key lookup function for FG is mi_compute_irradiance:

miColor color;
mi_compute_irradiance(&color, state);
result->r += color.r * diffuse->r / M_PI;
result->g += color.g * diffuse->g / M_PI;
result->b += color.b * diffuse->b / M_PI;


As you can see, the RGB results are divided by PI. Shaders in Maya 6.0 already take this into account; this is not true in Maya 5.0.1, where you may experience too-bright indirect illumination when using FG. What you need to do is divide the Maya shader's irradiance color by PI, i.e. set a value of 1/3.14 = 0.318 for the RGB components.
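
For instance, a hedged MEL sketch of this Maya 5.0.1 workaround; the attribute name below is a guess, so locate the real irradiance color attribute of your shader before using it:

```mel
// 1/PI = 1/3.14 ~= 0.318. 'irradianceColor' is an assumed attribute name --
// find the actual one with: listAttr -st "*rradiance*" lambert1;
setAttr lambert1.irradianceColor -type double3 0.318 0.318 0.318;
```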


There are cases where calculating all points is useful (for example when doing lightmap rendering). If you are a shader writer comfortable with the mental ray API, you can use the mi_compute_irradiance_backside function to evaluate irradiance on backside faces:

miBoolean mi_compute_irradiance_backside(
        miColor   *result,
        miState   *state)


Finally, mi_finalgather_store allows adding explicit final gather points. This is mostly useful for light mapping, because the final gather precomputation pass doesn't see the back side of lightmapped objects.

miBoolean mi_finalgather_store(
        miColor   *color,
        miState   *state,
        int       mode)



Illumination techniques with FG

Things to know in Maya

There are various parameters which affect FG computations, so you should be aware of where they are and how they affect illumination.


FG is a raytracing process which requires mental ray's raytracing mode to be enabled. When raytracing is set to off, FG is not active.



FG shoots 'FG rays', which are considered trace rays; therefore objects flagged as 'trace = no' won't contribute to FG. By default the 'derive from Maya' flag is set to on, but you can tweak this behavior manually.

The per-object trace flags are important. FG is a subset of raytracing, hence if you turn the trace flag off, that surface won't take part in raytracing or FG computations.

You find these flags in the shape nodes.



Every shader in Maya has irradiance attributes which control how indirect illumination will affect shading.

RGB results are divided by PI. Shaders in Maya 6.0 already take this into account; this is not true in Maya 5.0.1, where you may experience too-bright indirect illumination when using FG. What you need to do is divide the Maya shader's irradiance color by PI, i.e. set a value of 1/3.14 = 0.318 for the RGB components.

This is an important one: the camera environment color, when different from black, is considered the color of the environment your scene sits in, and it greatly affects FG. It is useful to use all possible ranges, from dark grey to colored to superwhite, depending on what you are doing.

It is not mappable with a texture, if you were wondering.

It's easy to forget that Maya's Default Light is on by default. When using FG it is a better idea to turn it off, otherwise you'll get unexpected (and uncontrollable) extra illumination. Turn it off in:

RG settings window> common> render options
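
The same switch can be flipped from MEL; 'enableDefaultLight' is the attribute name I'd expect on the render globals node, so treat this as a sketch and verify it in your version:

```mel
// Disable Maya's default light (assumed attribute name):
setAttr defaultRenderGlobals.enableDefaultLight 0;
```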

This is valid in general: colors in Maya are just numbers. You are not limited to the (clamped) zero-to-one range; you can use a higher dynamic range for every color. Try for example setting your camera environment color to 2 (RGB: r,g,b = 2; HSV: v = 2).

Many times this is useful when using emissive geometry.

Same applies to the irradiance color fields and to any other color field.
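
For example, assuming your rendering camera shape is 'perspShape' and that the environment color is exposed as 'backgroundColor', a superwhite environment can be set like this:

```mel
// Push the camera environment color into superwhite territory (r,g,b = 2):
setAttr perspShape.backgroundColor -type double3 2 2 2;
```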

It's a good idea to turn off the "casts shadows" flag in the per-object render stats.


These are some of the parameters which affect FG computation. We can now kick in with illumination techniques based on FG; these are the most common ways to use FG as an approximation of indirect illumination. I won't cover the use of FG and Global Illumination together in this section, but you'll find a guideline at the end of the page on how to use FG and GI together.



Ambient occlusion with Final Gathering


Ambient occlusion is simply a ratio of how much environment (ambient) light a surface point would be likely to receive. The following image and diagram should give you an idea of what happens under the hood (keep in mind FG rays are sent only for points visible from the camera's point of view):



You can use Final Gathering to get an ambient occlusion pass (or an additional pass, to use later on in comp). Note that Final Gathering can provide more information about indirect illumination, since it supports multi-bounce diffuse and specular effects; ambient occlusion is just a subset of what final gathering can do.

There are two ways to use FG in an 'ambient occlusion' mode:
  • Using the new IBL node (Maya 6.0 only)

    RenderGlobals> MentalRay> Image Based Lighting (IBL)



    In the IBL AE (we'll see later what the IBL node exactly is) you need to set:

    • Type: Texture
    • Texture color: non-black (50% grey)
    • Visible in Final Gather flag 'on' (other flags are optional)
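
As a sketch, the same three settings via MEL; the node name, the attribute names and the 'type' enum value are all assumptions, to be verified on your IBL shape:

```mel
// Assumed names -- check with: listAttr mentalrayIblShape1;
setAttr mentalrayIblShape1.type 1;                           // assumed: 1 = Texture
setAttr mentalrayIblShape1.color -type double3 0.5 0.5 0.5;  // 50% grey
setAttr mentalrayIblShape1.visibleInFinalGather 1;
```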



  • The other way doesn't use the IBL node: simply set a flat 'environment color' in the camera AE (works in both Maya 6.0 and 5.0.1)

    Camera> Environment



    Final Gathering rays by default look up the camera environment color.

Once you've decided which way you want to do your ambient occlusion pass, be sure to:
  • turn off Maya's default light (don't create any other lights, or set all light intensities to 0)
  • create/import geometry
  • if you need to catch occlusion on a floor create a ground plane
  • assign diffuse shaders (lambert)
  • use pixel-accuracy radii via the per-object or per-scene FG view flags (see FG advanced settings and optimizations) to easily catch fine details by specifying radii in raster space:



  • Since the Maya "Use Background" shader cannot *yet* mask the occlusion-shadowing effect which FG produces, you can render multiple passes and masks to use in a later compositing stage. Use a matte shader on the model in order to get the plane irradiance; then render a pass without the ground floor and without FG, just to get an alpha. Now you have all you need to do the magic in compositing.


Emissive objects (bounce cards)


Emissive geometry is very useful because it illuminates your scene when FG is activated; you can think of such objects as area lights. Even non-emissive geometry can still be seen in specular reflections/refractions; you can also set an object's primary visibility to off if it sits inside your viewing frustum.

This is what photographers in the real world call 'bounce cards'.


Some people I don't know holding bounce cards (image courtesy: google)


In the CG world things are more flexible and easy. You can turn objects into area lights through the 'object' area light and the 'user' area light mental ray standalone options, though using area lights is computationally expensive.

FG can be used to have an object act as a "light emitter" even if in reality it is not emitting light: it is just bright in the irradiance lookup.

In Maya, in order to turn an object into a "light emitter" with FG, there are two common ways:
  • set an incandescence color (an attribute present in all Maya standard shading models) different from black
  • use a Surface Shader and set its outColor different from black
I personally prefer to use Surface Shaders because they're simpler, hence lighter, than other shading models.
  • add emissive/bounce card geometry



  • apply an emissive shader to the bounce cards: a Maya Surface Shader (here you can push color values to superwhites)
  • temporarily turn off all flags in the IBL node, or set the camera environment color to black
  • be sure your FG falloff range (see FG advanced settings & optimizations) reaches the bounce cards
  • use specular shaders (blinn) to get reflections of the bounce cards on your model



    Note that bounce cards contribute both to specular reflections and to the general illumination.

  • you can then apply more complex shaders




  • and use the ambient occlusion setup with CG light(s) and raytraced shadows to add realism

Note that you can render all passes separately and flexibly weight them in compositing.


IBL Image Based Lighting



It was a common workflow for users to enclose their scene in a geometry sphere with a high dynamic range texture mapped onto it.

The following texture formats are commonly used for such purposes:
  • floating point .TIF (type: fp RGBA)
  • .HDR (type: RGBE)
  • .EXR (type: RGBE, HALF) (Maya 6.0 only)
The HDR format is optimized to contain the same dynamic range of colors a floating-point .TIF can contain, but it packs this information more efficiently in a different datatype (RGBE, ~1/4 of the disk space).

ILM's EXR format provides slightly more dynamic range stored in even less disk space, making it the best format currently available for such purposes.

Mental ray 3.3, and therefore Maya 6.0, supports the OpenEXR format through the mi_openexr.{DSO} library, which is loaded by default.

So, the color information enclosed in the texture is picked up by final gathering rays and incorporated into the surface illumination (it affects reflections, refractions and irradiance). This is what we call Image Based Lighting, or IBL.

We are going to see three techniques for IBL, starting from the Maya 5.0.1 implementations, to understand how their limitations were fixed and what the real advantages of the new IBL node are.
  1. IBL from geometry sphere (Maya 5.0.1)
  2. IBL from mental ray environment shader (Maya 5.0.1)
  3. IBL with IBL node (Maya 6.0)
1. IBL from geometry sphere (Maya 5.0.1)

The first technique we are going to see wraps a geometry sphere around your scene and textures it with the HDR texture. This technique is NOT efficient (performance-wise) for several reasons, hence a second, more efficient technique is proposed (though it introduces non-performance limitations). Both techniques should be used only with Maya 5.0.1, since Maya 6.0 provides a new, better workflow which you'll find later on.

IBL with Geometry sphere
  • real shader highlights need real lights, hence you should have a light in your scene, perhaps a light which emits 'specular only'
  • we need a blurred texture for the environment. It should be blurred to avoid speckle artifacts caused by a low number of FG rays. If a sharp texture is used, more FG rays must be shot. I suggest in any case blurring by a certain amount.



  • be sure your FG falloff range reaches the texture
  • wrap your scene in a sphere, apply to it a Maya Surface Shader with an HDR file texture as color, and render:



  • Although this workflow is appropriate if the distance to the environment is required, there are several drawbacks:


    • the sphere is actual geometry, hence mental ray will tessellate it and include it in the BSP tree, the default and generally fastest method for building the raytracing acceleration structure. This slows down the rendering process significantly, due to the encompassing nature of the sphere.

    • The interactive Maya workflow can be hindered if such a large sphere is present in the scene.

    • If not hidden, the geometry sphere is considered in the alpha channel.

    • If hidden (primary visibility set to off), the geometry sphere will still be visible through transparent geometry.

2. IBL from mental ray environment shader (Maya 5.0.1)

This is a more efficient technique for IBL than the previous one. In fact, to work around the previous drawbacks we can use mental ray environment shaders, which do not involve any real geometry, but just sit passively and get hit by rays only when necessary.

So, to implement this better technique for IBL the shader we need to use is a custom shader called:
  • mib_lookup_spherical
which you need to attach to the camera's mental ray Environment Shader port. A limitation is that only custom shaders will reflect this environment correctly.

Be sure to specify in the "tex" field of mib_lookup_spherical the name of a custom texture (declared with a color texture declaration), since Maya standard file textures won't work.

So, the guidelines to keep in mind are:
  • mib_lookup_spherical, used as the camera environment shader, creates an infinite *virtual* sphere wrapped around the scene
  • On this spherical environment an HDR texture (blurred, for instance in HDRShop) is applied
  • FG is activated, so FG rays are spawned to look up color information from the environment texture and return illumination information to the shader
  • The environment texture appears in the rendered frame buffer but can be cut out, since the alpha is not affected
  • When specular highlights are needed, real lights and bounce card objects can be used



Limitations of this technique:
  • controlling the placement of the HDR texture is awkward (you need to set up a mental ray texture network with mib_texture_vector, mib_texture_remap, etc.)

  • highlights which seem fine actually reflect what is behind the geometry; this is complex to explain, so let's just say reflections are not computed correctly.

3. IBL with the IBL node (Maya 6.0)

Now, if you have Maya 6.0, all these problems are solved and you should always use the IBL node, which we previously used for the "ambient occlusion" setup.

RenderGlobals> MentalRay> Image Based Lighting (IBL)



The IBL node creates a "virtual", "infinite" sphere around your scene. This sphere is in reality an environment shader attached to the currently rendered camera; it is basically the mib_lookup_spherical of technique "2.", but enhanced in its functions. In Maya's OpenGL view it looks like a yellow wireframe sphere with the texture mapped at 50% hardware alpha.

You should always use a diffuse texture, possibly slightly blurred.



The IBL is "virtual" in the sense that it is not real geometry, therefore it doesn't affect the BSP tree at all (while technique "1." suffers from this limitation); in addition it fully supports shader reflections and refractions correctly (which technique "2." can't). This makes the IBL node the best way to set up your Image Based Lighting scenes.

Since the IBL is at "infinity", it does not matter if you add something to infinity (translation) or multiply it by anything other than zero (scale): the result will always be infinite. Therefore, even if you were able to translate or scale the IBL, there would be no effect.
  • rotation: allowed
    rotate the IBL sphere as you prefer; this is important if your texture is visible and if the texture mapped onto it is non-uniform in its color range (e.g. if there are localized white spots, which indeed affect illumination strongly)
  • translation: no change
  • scale: no change
So let's see how to set up IBL with the IBL node. In this case we need to get the environment illumination, so we won't use a flat color texture anymore, as we did for ambient occlusion, but a real environment texture (spherical and angular formats are supported, for the moment).

So, let's create an IBL node and set the attributes for our initial purpose:

  • Mapping: spherical
  • Type: Image file           
  • Image name: texture path  
  • Visible in Reflections: on
  • Visible in Refractions: on
  • Visible in FG rays: on
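
A hedged MEL sketch of the same setup (node name, attribute names, enum values and the texture path are all assumptions for illustration; list the IBL shape's attributes to find the real names):

```mel
// Assumed names -- check with: listAttr mentalrayIblShape1;
setAttr mentalrayIblShape1.mapping 0;   // assumed: 0 = Spherical
setAttr mentalrayIblShape1.type 0;      // assumed: 0 = Image File
setAttr mentalrayIblShape1.texture -type "string" "/textures/env_spherical.hdr";  // example path
setAttr mentalrayIblShape1.visibleInReflections 1;
setAttr mentalrayIblShape1.visibleInRefractions 1;
setAttr mentalrayIblShape1.visibleInFinalGather 1;
```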

Once you've set up the IBL node you can render out some tests:



This is the result if we use a fully diffuse shader (lambert).
Of course changing the texture implies an illumination change, both in tonality and intensity, depending also on how "dynamic" the "range" is.





And this is the result if we use a chrome shader, just to show that the environment is all around and correctly reflected.
Finally, we can apply our final shaders to the model.


Now it's time to look at the images. Since there are no CG lights in the scene we miss:
  • the flexibility of "3-point lighting":
    meaning we don't have forms of control such as those provided by 'key lights' or 'rim lights', while we do use the environmental indirect illumination as a 'fill light'

  • highlights

  • direct shadows

    Hence, to make this scene more realistic we need to add light(s), and we have two methods to do that:

    1. create standard Maya CG lights
    2. use IBL in "light emission" mode
We could also consider mixing the two, using CG lights together with a "light emitter" IBL. Anyway, the pros and cons of each method are:

Method 1: create standard Maya lights and set up their shadows, while highlights are handled at the shader level.
  Pros:
  • flexibility: the light(s) are not linked to our environment and we can freely & artistically set up a lighting rig (though it's best to use your environment's brightest spot as the main light origin/direction)
  • speed: it renders pretty fast
  Cons:
  • it takes a bit of time to set up
  • if you want lights positioned coherently with the environment, with intensities proportional to its dynamic range, it doesn't provide an automatic solution

Method 2: use the IBL node not only for FG but also as a light emitter.
  Pros:
  • setup takes a jiffy
  Cons:
  • since it automatically (through texture simplification & analysis) creates and positions an invisible array of CG lights, each casting shadows (and even emitting photons, if required), which is basically comparable to a huge area light, it takes a lot of time to render and requires skill to reduce low-frequency noise

1. Create standard Maya CG lights

Method 1 is straightforward: just create, position and set the parameters of your lights. As an example I have added a directional, shadow-casting light with 'emit diffuse' and 'emit specular' set to on. If you want more flexible control over your lighting you can create one light for speculars (specular only), another for shadows (diffuse only), and eventually extra key and fill lights.




This image features only directional lighting, no IBL lighting from FG. Notice we now have highlights and shadows.
Here FG is activated, and indirect illumination from the IBL environment is present together with the directional light effects.

A full-resolution image of the last rendering, with a glossy floor, would be:



And you can still add bouncecards for custom reflections.
I leave that as an exercise.



2. Use IBL in "light emission" mode

In this case we will use the built in light emission function of the IBL node:




The IBL node is basically the sum of three mental ray shaders in one single node, as explained in the Maya documentation:

Environment Shader
Along with Final Gathering this shader implements classic style image-based lighting. The color of the environment is picked up by final gather rays and incorporated into surface illumination. An environment shader is passive. It doesn't actively contribute to the scene's lighting; instead, it gets sampled only as needed. Best results are achieved if the IBL texture is diffuse. A specific case would be a texture consisting of a single color; this results in ambient occlusion computation.


Photon Emission Shader
Photons are emitted from the IBL environment sphere. These photons pick up their energies (or colors) from the IBL texture. A photon emission shader emits all its photons once per frame. It is more active than an environment shader in this sense. Photons work best with mostly diffuse IBL textures.


Light Shader
A low-resolution control texture is computed (from the file or procedural IBL texture) and mapped to the IBL environment sphere. Whenever direct lights are sampled, the light shader is invoked. In this sense, the light shader approach is the most active one, and the most expensive. The IBL environment can be seen as one big area light. This approach works best (also due to importance sampling) if the IBL texture contains sharp features, and preferably contains many more black than non-black pixels.

So, what happens under the hood when "light emission" is activated is that an intermediate control texture of QualityU x QualityV tiles is created from a color analysis of the original environment texture; then for each of these tiles a directional light is created, emitting light and tracing shadows. If we use our texture with a UV quality of 32 x 16:
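
To experiment with this from MEL rather than the AE, a sketch; 'emitLight' and the U/V quality attribute names are guesses on my part, so list the IBL shape's attributes to find the real ones:

```mel
// Assumed attribute names -- check with: listAttr mentalrayIblShape1;
setAttr mentalrayIblShape1.emitLight 1;   // turn on light emission
setAttr mentalrayIblShape1.qualityU 32;   // control-texture tiles in U (assumed name)
setAttr mentalrayIblShape1.qualityV 16;   // control-texture tiles in V (assumed name)
```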



We can see that the resulting control texture is very bright (this control texture is simulated in a compositing package): we basically don't have any black tiles, hence we will have many (32x16) directional lights emitting light and tracing shadows. This is, computationally speaking, very expensive.

Anyway the resulting image is:



Note that here we have more direct light all around the car, and the shadow has a different direction: this is due to the nature of the texture we are using, which is very diffuse and has a slightly brighter area (a cloudy sun) that affects the shadow direction.

Controlling the lighting is therefore tricky, because we need to work on the texture. For instance, if we create a custom texture with a white spot over a black background, we can test the IBL further:
A more logical effect is achieved by taking the average color of our original texture and adding on top a bright white spot with a higher dynamic range:


But the best approach would be using two IBL nodes: one as "environment shader" with a bright texture, and the other as a "light shader" with the same texture but reduced by a few f-stops of dynamic range. This is currently not supported.

I can think of two workarounds, which I will leave as a personal exercise:
  • either "1.a" bake the environment indirect illumination from the IBL "environment shader" using the bright HDRI texture, or "1.b" use mib_lookup_spherical

  • then "2" use the IBL as a "light shader" only, with a lower f-stop texture over the baked maps.


Compositing


Just a couple of quick words about compositing, since it is useful to render a separate FG "ambient occlusion" pass: we can in fact weight the amount of ambient occlusion we need in a later compositing stage, avoiding the need to re-render everything every time.

As a matter of fact it is useful to render out some more passes:
  • ambient occlusion pass
  • color pass
  • diffuse pass (from lights and/or environment texture)
  • specular pass (from lights and/or environment texture)
  • shadow pass (from lights and/or environment texture)
  • alpha/matte (for FG and BG)
  • z depth
  • bent normal pass
This is not a compositing class, so I won't go into detail on them, but I will mention that the "bent normal" pass can be used together with the ambient occlusion pass and your diffuse and specular lighting passes from CG lights to control the "directionality" of the lighting, by "driving" your lighting passes with the information stored in the single color channels of the bent normal pass.

Also, another possible use of a "bent normal" pass is baking the occlusion and bent normals, and merely indexing them into an environment map consisting of the diffuse light from the environment.

Please check the following "Chapter#2 - SubSurface Scattering" for the misss_simple_occlusion_shader, which handles various ambient occlusion passes at the shading level.


Here's a useful Shake script written by Damian Allen, as he explains:

"...the script is useful for doing Surface Normal Lighting using a horizontal normal in the R channel and vertical in the G. The input values expected are between 0 and 1, where 0 is, (for the red/horizontal channel) 90 degrees left of camera facing angle (for the red) and 1 is 90 degrees to the right of facing angle. Same is true for the vertical, top to bottom."

You can find the script and read the whole discussion at Highend2D.
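As a side note, the remap Damian describes can be sketched in code; this is an illustrative helper written for this class, not part of his Shake script:

```c
/* Remap a normalized surface-normal channel value (0..1) to an angle in
 * degrees relative to the camera-facing direction, as Damian describes:
 * 0 maps to 90 degrees left (-90), 0.5 to camera-facing (0),
 * and 1 to 90 degrees right (+90). The same mapping applies to the
 * vertical (green) channel, top to bottom. */
double channel_to_angle(double c)
{
    return (c - 0.5) * 180.0;
}
```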



Advanced settings and optimizations

In this section you'll find various (useful) FG related settings which are hidden or non-obvious in Maya.
Some of them need to be dynamically created, some of them are just hidden in the UI; some non-hidden attributes are explained as well.
For precision's sake I have quoted the definitions of most of the covered parameters from the online mental ray reference manual; quoted text is in italic font.

There are more parameters in the manual, and I strongly suggest you spend some time with it.


At the end you will find a Maya 5.0.1 script with more UI attributes, and its corresponding 6.0 version.


Global FG view | dynamic | to use both in Maya 6.0 and 5.0.1 |

"FG Radius values are in world space units unless view is specified, in which case the values are in pixels."

Final gathering accuracy (min and max radius) can be specified in 'pixel' units rather than in world space, by providing 'view' accuracy. This can be enabled by creating a dynamic attribute on the mental ray options node (miDefaultOptions) with the name 'finalGatherView' (boolean).

FG view is very useful because it allows you to set FG radii in pixel values instead of world units (the Maya default, for instance, is centimeters). Hence it is easy to get the level of detail you need: just decide over how many pixels you want FG points to be interpolated.

In Maya 6.0, all shape nodes expose some nice overrides of the FG values, including the one we are discussing: FG view.



Yet this works on a per-shape, and therefore per-object, basis; so what happens if we have zillions of objects? Or if we want to turn FG view on for the whole scene?

Well, there is a dynamic attribute ready for that:


select miDefaultOptions;
addAttr -ln finalGatherView -at bool  miDefaultOptions;



Some detail is not revealed by FG (left); notice the amount of detail revealed (right).



Diagnose FG

Mental ray accelerates its high-profile FG technique by means of a three-dimensional map called "final gather map" which contains the distribution of final gathering points with stored irradiance.

It is very helpful to visualize this map: it provides visual feedback that is often useful for tuning parameters to get the desired look, and it helps non-expert users get a visual feeling for the otherwise abstract nature of the FG map.


Normally, in Maya 6.0 you should always use the MapViz node for diagnosing your FG-based renderings, because it is fast, intuitive and provides a three-dimensional visualization. The Map Visualizer, though, takes into account and displays only FG points generated in the "precomputation phase", so if you also need information on the "rendering phase" (or if you're still on Maya 5.0.1), the 2D FG diagnostic mode is the answer.

3D Map Visualizer node | Maya 6.0 | only FG precomputation points

The MapViz node in Maya can display precomputation FG points (as well as photons from caustics, GI and volumetric effects). It does NOT display points coming from the second phase (rendering/interpolation).

It is straightforward to visualize this information: just enable the Map Visualizer node in the Render Globals FG settings and render.
The OpenGL visualization of the points can then be tweaked in the MapViz node's Attribute Editor. You can easily select the MapViz node from the Outliner or via MEL.

The following set of images shows four OpenGL views in Maya with the MapViz on and everything else hidden.
Note that FG points are evaluated ONLY for visible front faces, from the point of view of the rendering camera.






If you are curious, here's the rendered image:




2D FG diagnostic mode | hidden | both Maya 6.0 and 5.0.1 | both FG precomputation and rendering points

Besides the Maya 6.0 MapViz node, there is another option: a 2D diagnostic mode called "diagnoseFinalg", which prints information about the FG points in the 3D map onto the frame buffer, as green dots for initial (raster-space) FG points and red dots for render-time final gathering points.

select miDefaultOptions;
listAttr miDefaultOptions;

you will find the output of the command in the Script Editor; among other attributes you'll find:

diagnoseFinalg

this means you can just set this attribute manually:

setAttr miDefaultOptions.diagnoseFinalg 1;

or you can use my modified version of the Render Globals UI.

Anyway, here's visual feedback of diagnoseFinalg in action. The general rule to keep in mind is that you shouldn't have too many green dots all over, and red dots should appear only where there is geometry detail: edges, displaced objects...

This simple scene contains vivid edges, concave and convex geometry, lambertian, specular and semitransparent materials:

You can see very few FG rendering points here, just some at the corners of the cube. The image does not look too noisy, but the approximation of the scene irradiance is over-smoothed by the high radius values.


Here we are using almost correct radius values: we have some red points, only at the edges, where they are truly necessary. The only thing left to do is increase the number of FG rays to get a smoother result.


Here we are using way too small radius values; as you can see there are too many green dots, and also red dots where they are not necessary.


Once you've understood the radius range, you can play with it and with the number of FG rays.
In general, if you use a small radius you will need lots of rays to keep noise down; with a high radius you can use fewer rays, and the irradiance will be less noisy but also less accurate.





 
FG rebuild freeze (dynamic)
"This is equivalent to finalgather rebuild off, except that the final gather map, once created by reading it from a file or building it for the first frame, will never be modified (unless the finalgather file filename or the finalgather accuracy is changed). Extra finalgather points created during rendering will not be appended, and the finalgather file on disk will not be modified. The user is responsible to make sure that the finalgather map matches the scene and viewpoint in an animation. This is useful if multiple concurrent renderers share the map."

Final gathering supports a 'rebuild' mode called 'freeze' (.mi syntax). It prevents final gathering from touching the cache (final gather file) at all in subsequent renderings of the same scene. This is more restrictive than 'rebuild off', which permits to append new final gather information to the cache. To enable the 'freeze' mode a new attribute needs to be created on the mental ray options node that is in effect (miDefaultOptions), named 'finalGatherFreeze' (boolean). It is respected if the final gather rebuild mode has been turned to 'off' in the final gather section of the render globals.

So, how to add the dynamic attribute:

   
select miDefaultOptions;
addAttr -ln finalGatherFreeze -at bool  miDefaultOptions;

In this scene there is a model and an emissive cylinder just around it. The cylinder is an emissive Surface Shader mapped with an inverted ramp texture. Environment color is black, no default light in the scene and of course FG activated.


Here we have rebuild set to ON and a FG map name specified. When rendering we can see the FG map size on disk is ~151 KB.

You can find the FG map on disk; when a project is set, the directory is:


..\maya\projects\your project\finalgMap



Here we rotate the view. We know from the first chapter that FG is a process for visible surfaces only, so new FG points will be calculated for the new visible surfaces (the back of the car in this case). We don't want to rebuild the whole map so we turn rebuild to OFF and keep the same map name.

The result is that we do calculate new FG points, and these points are appended to the map, which grows to 262 KB.


We do the same one more time to complete the map: now we have points for all views of the car (except the bottom) in a 363 KB map.



After step two there is another thing we can do: freeze the map. Turning freeze to ON prevents new FG points from being calculated, so nothing will be appended to the FG map, which keeps the same disk size as in step 2.

Since we don't have irradiance information for the newly visible points, lots of artifacts will be present in the image.

Freezing the map is useful when multiple concurrent renders share the map.




FG falloff

"Limits the length of final gather rays to a distance of stop in world space. If no object is found within a distance of stop, the ray defaults to the environment color. Objects farther away than stop from the illuminated point will not cast light. Effectively this limits the reach of indirect light for final gathering (but not photons). The start parameter defines the beginning of a linear falloff range; objects at a distance between start and stop will fade towards the environment color. This option is useful for keeping final gather rays from pulling remote parts of the scene, which may not affect illumination very much, into the geometry cache. This allows mental ray to render with a much smaller memory footprint."

So, the falloff makes FG rays fizzle out after a specified distance, with a specified falloff range. This can have a large effect on memory use and performance: final gathering stops pulling in almost irrelevant distant geometry (outside the falloff range), which can therefore be discarded from the computation. You should always specify a good falloff range to optimize your FG-based renderings.

This is a simple setup to visually explain what falloff does; again, remember that FG rays start at the geometry: the cylinder is an emissive surface shader, the sphere at the center is just a reference object.






A distance of 24 is specified: FG rays will travel for 24 units, then pick up and fade to the environment color (black).

Notice that when a distance of 3 is specified, FG rays stop much sooner.


A function useful for shader writers is:

miBoolean mi_ray_falloff(
        miState   *state,
        double    *start,
        double    *stop)

"All rays and photons cast from this point on will reach a maximum distance of *stop. Objects farther away will not be hit; instead, the environment will be returned (unless cleared in the state by the shader). The *start parameter defines a falloff distance; objects at a distance between *start and *stop will fade from the object color to the environment color (or black if none), to avoid hard edges in animations. This function is useful to limit the reach of rays, especially finalgather rays, to keep computation local and not fill the geometry cache with distant objects that do not matter much anyway. The old start and stop values are returned, and should be restored before the shader returns. This function takes precedence over falloffs in the options and objects."
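The linear fade between *start and *stop described above can be modeled with a simple blend weight; this is an illustrative sketch of the math, not mental ray code:

```c
/* Weight of the object color vs. the environment color for a final gather
 * ray that hits an object at distance d, given a falloff range [start, stop]:
 * 1.0 at or before start, 0.0 at or beyond stop, linear fade in between.
 * The shaded result would then be w * object_color + (1 - w) * env_color. */
double falloff_weight(double d, double start, double stop)
{
    if (d <= start) return 1.0;
    if (d >= stop)  return 0.0;
    return (stop - d) / (stop - start);
}
```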


mi_ray_offset | Maya 6.0 | dynamic |

mi_ray_offset specifies that rays should start some distance away from the point from which they are cast. Closer intersections are ignored. This avoids interference between high-resolution visible geometry and low-resolution trace geometry (remember: FG rays are a subset of 'trace' rays!).

    Set the per-object 'ray offset' value by adding the following
    dynamic attribute to the shape node: 'miRayOffset' (float).



finalgather file

"Tells mental ray to use the file filename for loading and saving final gather points. If the finalgather file does not exist, it is created and the final gather points are saved. If it exists, it is loaded, and the points stored in it become available for irradiance lookups. If mental ray creates extra final gather points, they are appended to the file. This means that the file may grow without bounds."

mental ray 3.2: finalgather file " name "
mental ray 3.3: finalgather file [ " name ", " name ",  ... ]"

mental ray 3.3 and later allow attaching a list of finalgather file names instead of a single file name. All files are read and merged. Newly computed finalgather points are appended to the first finalgather map file (not to the other files). This allows you to use several FG maps as "helpers"; for the first FG map the same rebuild rules apply as if there were no "helpers".


FG filter

"Final gathering uses a speckle elimination filter that prevents samples with extreme brightness from skewing the overall energy stored in a finalgather hemisphere. This is done by filtering neighboring samples such that extreme values are discarded in the filter size. By default, the filter size is 1. Setting this to 0 disables speckle elimination, which can add speckles but will better converge towards the correct total image brightness for extremely low accuracy settings. Size values greater than 1 eliminate more speckles and soften sample contrasts. Sizes greater than 4 or so are not normally useful."


FG trace depth

"This option is similar to -trace_depth but applies only to finalgather rays. The defaults are all 0, which prevents finalgather rays from spawning subrays. This means that indirect illumination computed by final gathering cannot pass through glass or mirrors, for example. A depth of 1 (where the sum must not be less than the other two) would allow a single refraction or reflection. It is not normally necessary to choose any depth greater than 2. This is not compatible with mental ray 3.1 and earlier, which used the trace depth (which defaults to 2 2 4) for final gathering."

So, normally FG rays stop at the first intersection; with this parameter, subrays are spawned. It is important to understand that multi-level FG is useful for specular effects, not diffuse ones, for which GI is strongly suggested instead.

The Maya 5.0.1 UI shows only the finalGatherTraceDepth attribute; you can however set the others via MEL, or modify the UI script:

select miDefaultOptions;
listAttr miDefaultOptions;

you will find the output of the command in the Script Editor; among other attributes you'll find:

finalGatherTraceDepth (already in UI)
finalGatherTraceReflection
finalGatherTraceRefraction

this means you can just set these last two attributes manually:

setAttr miDefaultOptions.finalGatherTraceReflection 2;
setAttr miDefaultOptions.finalGatherTraceRefraction 2;

or you can use the modified version of the renderglobals UI.

Anyway, here's an explanatory couple of images. The scene is a simple setup with an emissive plane and some geometry; in between there's a refractive framed window.
Keep in mind that FG rays start from the geometry and not from the light (and that the white emissive plane is not a light).

FG rays starting from the dark side of the image hit the glass window (lambertian, IOR 1.5) but can't refract, because a trace refraction depth of zero does not allow it; hence they can't record the light produced by the emissive plane.




When the trace refraction depth is increased, FG rays can hit the window and refract once; hence they will reach the emissive plane and its irradiance will be recorded.




A shader writer can also adjust the number of levels in the shader state.
The code necessary in the shader for a finalgather trace depth of 'n' is:

if (state->type == miRAY_FINALGATHER) {
    state->reflection_level -= n-1;
    state->refraction_level -= n-1;
}


-rebuild i | hidden | mental ray 3.3 |

This flag works with the mental ray 3.3 standalone only; it skips the FG presampling phase, starting the rendering phase immediately.


MI_RCFG_RASTER_FACTOR

It controls the density of FG points in raster space. If set to 2, roughly 2 (X axis) × 2 (Y axis) = 4 times fewer FG points are created; if set to 0.5, there are 4 times more FG points. Values close to 0 make rendering extremely expensive; big values reduce the precomputing stage at the price of quality.

Note: this is not a supported feature.
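The density relationship can be sketched as follows (an illustrative approximation, since the exact internal behavior is undocumented and unsupported):

```c
/* Approximate scaling of the number of precomputed FG points for a given
 * raster factor: the factor applies to both raster axes, so the point
 * count scales with 1 / (factor * factor). A factor of 2 gives roughly
 * 4 times fewer points; a factor of 0.5 gives roughly 4 times more. */
double fg_point_scale(double raster_factor)
{
    return 1.0 / (raster_factor * raster_factor);
}
```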

-finalgather_display

In the mental ray standalone, this flag previews FG precomputation points in real time during the precomputing stage.

This is a very useful option, since you can already get an idea of your lighting setup during that phase; hence you can quickly abort your rendering if you realize you need to refine some values.

Maya 6.0 provides a built-in preview of FG precomputation points through the Render Globals > Final Gathering "Preview Final Gather Tiles" option. The image will appear in the Render View.


An example of the FG preview in the Maya Render View.


 
Modified renderglobals script for Maya 5.0.1 and 6.0



With Maya 5.0.1 new attributes can be used to tune FG.

Below I will go through attributes which are already present, dynamically added, or present but not in the UI; these last ones can be set and visualized via MEL.

Some of them will be visualized in the (mental ray) renderGlobalsWindow, others in the miDefaultOptions node. Here are a couple of screenshots of what you can get, in addition, in the Attribute Editors of the nodes just mentioned:




I have included here the original and the modified (and unsupported) version of createMentalRayGlobalsTab.mel (which you will have after installing Maya 5.0.1).

Just make a backup of your file and put the modified version in the folder:

../AliasWavefront/Maya5.0/scripts/others/




Guidelines


Reduce low-frequency flickering when using FG in animations


This is a two-step approach which requires the mental ray standalone, since Maya 6.0 does not support the 'FG only' rendering mode used in step 1. 'FG only' just creates FGmaps on disk; no images are rendered when this mode is used. These maps will contain only FG precomputation points.

So, you are going to render an animation, say from frame 1 to frame 100, and the trick is to pre-render, in a first step, one single FGmap or multiple FGmaps every n-th frame.

Notice that multiple FGmaps are useful in two situations:
  1. when single FG map size is too big
  2. when there are many hosts (more than mental ray can utilize effectively for multihosted rendering), so multiple FG maps are required to parallelize step 1 over many hosts.
If none of these apply, rendering with multiple FG maps is roughly as effective as rendering with one big finalgather map.

Whether you have one or multiple FGmaps, they will act as helper(s) in the following step 2, where we will use such FGmap(s) without evaluating any FG precomputation points (rebuild freeze); only eventual rendering points will be computed.

Step 1 - Rendering FG only

The first thing to do is decide whether you want one single FGmap or n FGmaps, one every n-th frame:

1.a - Rendering one single FGmap file
  • set finalgather 'only'
  • set FG rebuild to 'off'
  • specify one single FGmap file name
  • render each n-th frame (n depends on the camera movement speed and the scene).
You will end up with one single FGmap file which contains irradiance information sampled every n-th frame over the whole time range. Since rebuild is off, we are sure we didn't recompute any point: only new points were computed, and those were appended to the single FGmap.

1.b - Rendering multiple FGmap files (say we render every 10 frames; keep in mind the gap depends on the camera movement speed and the scene)
  • set finalgather 'only'
  • set FG rebuild to 'off'
  • render the 10th frame on host A with the FG map name ["map010"]
  • when this is ready, render the 20th frame on host A with ["map020", "map010"]
  • and *simultaneously* render the 30th frame on host B with ["map030", "map010"]
  • and so on...
With multihosted rendering, the idea is to render at least one frame on a single host (or a few hosts with multihosting), and when rendering further frames to use all maps which are already finished.

If possible, it is a good idea not to interleave subsequent frames across hosts, but make host A render maps for each 10th frame in the range 0 to 100, host B render each 10th frame in the range 101 to 200, etc. This reduces the amount of redundancy.

You will end up with n FGmap files which contain irradiance information sampled every n-th frame. Since rebuild is set to off, each FGmap will contain only the new FG points compared to the previous FGmap, without recomputing any point. Obviously the first map will contain a complete set of points for its first frame.
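The host allocation recommended above (contiguous frame ranges per host rather than interleaving) can be sketched like this; the function and its parameters are hypothetical, just to illustrate the scheduling idea:

```c
/* Assign a frame to one of n_hosts by splitting the frame range
 * [first, last] into contiguous chunks, so each host renders FG maps
 * for consecutive n-th frames instead of interleaved ones. For example,
 * with frames 0..200 on 2 hosts, host 0 gets 0..100 and host 1 gets
 * 101..200, matching the suggestion above. */
int host_for_frame(int frame, int first, int last, int n_hosts)
{
    int span  = last - first + 1;
    int chunk = (span + n_hosts - 1) / n_hosts;  /* ceil(span / n_hosts) */
    int idx   = (frame - first) / chunk;
    return idx < n_hosts ? idx : n_hosts - 1;
}
```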


IMPORTANT: In this phase no images are rendered: only FG map(s) are generated.

Step 2 - Rendering with rebuild freeze

In this phase you are not calculating any precomputation FG points, only rendering FG points, and those points won't be added to the FG maps on disk. This logic should guarantee a more coherent set of cached irradiance points through time, where extra points are gathered only when requested at render time, therefore producing less flickering.
  • finalgather 'on' (or finalgather 'fastlookup' if you are also using GI)
  • set FG rebuild to 'freeze'
  • specify either the single FGmap name from step 1.a, or all the FG maps created in step 1.b (note that using as many FG maps as possible is recommended).






FG with photon tracing


This is a simple guideline on how you should proceed when rendering FG and GI together:
  • FG on, GI off: try to get a good image quality with decent accuracy

  • FG off, GI on: try to get a good result with not so much low-frequency noise

  • reduce GI quality:
    • reduce the GI accuracy (number of GI photons to look up) by up to 50%
    • reduce the number of GI photons emitted from the light by up to 50%
    • double GI radius

  • reduce FG quality:
    • you may slightly reduce FG accuracy (number of FG points to look up) by 10%
    • do not change FG radius!

  • if you are rendering an animation, or if the number of FG rays is several times higher than the number of GI photons, then turn on fastlookup mode:

    The fastlookup mode alters the global illumination photon tracing stage by computing the irradiance at every photon location and storing it with the photon. This means that the photons carry a good estimate of the local irradiance. Photon tracing takes longer than before and requires slightly more memory, but rendering becomes faster. Actually, the number of FG points stays roughly the same; the difference is that when a finalgather ray hits an object, just the pre-estimated irradiance is looked up: one only needs to search for the single closest photon, not many of them. This reduces the cost of each FG ray, and thus the time spent in final gathering.

    This is a diagram of what happens when FG and GI are both active: if fastlookup mode is activated, when FG sends a ray to a photon location (the pink circle), it doesn't need to compute irradiance, since the photon site already carries that calculation.




  • render with both GI and FG with reduced settings
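The reductions listed above can be summarized in a small helper; the struct, its field names and the example values are hypothetical, just a sketch of the guideline:

```c
/* Apply the guideline reductions for combining FG with GI:
 * GI accuracy and emitted photons cut by up to 50%, GI radius doubled,
 * FG accuracy slightly reduced (~10%), FG radius left untouched. */
typedef struct {
    int    gi_accuracy;  /* GI photons to look up             */
    int    gi_photons;   /* GI photons emitted from the light */
    double gi_radius;    /* GI lookup radius                  */
    int    fg_accuracy;  /* FG points to look up              */
    double fg_radius;    /* FG radius: do not change!         */
} RenderSettings;

RenderSettings reduce_for_combined_fg_gi(RenderSettings s)
{
    s.gi_accuracy = s.gi_accuracy / 2;          /* up to 50% fewer lookups */
    s.gi_photons  = s.gi_photons / 2;           /* up to 50% fewer photons */
    s.gi_radius  *= 2.0;                        /* double the GI radius    */
    s.fg_accuracy = (int)(s.fg_accuracy * 0.9); /* ~10% fewer FG lookups   */
    /* s.fg_radius is intentionally left unchanged */
    return s;
}
```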


FG and lightmap rendering (baking)


The following is a list of information useful when baking FG in Maya 6.0 and 5.0.1.

Above all, remember that there is no precomputation phase with lightmap rendering, so when baking FG reduce the max radius to obtain results comparable to a regular (non-lightmap) FG rendering!
  • Things fixed in 6.0 (hence not fixed in 5.0.1) are:

    • quality improvement for FG
    • light linking does not change after texture baking anymore
    • 3d textures mapped to bump channel will not bake
    • mental ray baked bump appears inverted
    • baking subtract color mode is inverted
    • subD texture baking supported
    • not possible to bake unless objects had unique names
  • Current limitations in Maya 6.0

    • incoming illumination is slower than other modes
    • incoming illumination gives different result if different objects are selected
    • FG + reflective object produces fragmented lightmap
    • lightmap always contains alpha
    • baking transparency should pick up color beyond the transparent material (instead it doesn't)
  • Improve FG baking quality
FG baking quality in Maya 6.0 is greatly improved (aside from the limitation that FG + reflective objects produce a fragmented lightmap), thanks to the new "Final Gather Quality" option in baking sets, introduced in 6.0.

The Maya slider in this case goes from 0 to 1, which is misleading because you may think the maximum quality is 1. Set it to 2 or more, although 2 should be fine.

  • Bumps don't appear
This is actually unrelated to FG but worth mentioning: 2D file texture bumps bake fine in both 5.0.1 and 6.0, but 2D procedural bumps bake in neither.
This is because view-dependent filtering is disabled during lightmap rendering! A workaround is to set "Bump Filter" to 0 and "Bump Filter Offset" to a small value (like 0.001). These attributes exist on the bump2d node.

Special thanks to:

Steve, ThomasS, Juri @ mental images
Joohi @ Alias
Xavier @ Electronic farm

Damian Allen
Olivier Renouard


This document was last updated in July 2004; it was originally edited with, and is therefore best viewed in, Mozilla (www.mozilla.org).
Please direct your inquiries to the principal of /*jupiter jazz*/, Paolo Berto - mailto:pberto@jupiter-jazz.com