Smooth Geometry in Real-Time Rendering

And the frequent lack thereof.

Recent advances in shading, lighting, and lens effects in real-time rendering have been incredible, and those advances have greatly increased the fidelity of otherwise extremely coarse geometry. I am in awe of this work, but as these domains continue to whittle away at feature film parity, I can't help but feel an important aspect of visual fidelity has taken a back seat.

At the SIGGRAPH 2018 Advances in Real-Time Rendering course, an impressive talk was given by Guillaume Abadie on improving the visual fidelity of real-time depth of field (DoF) in Unreal Engine. The talk focused on the handrail from the Infiltrator demo. It’s a challenging composition for screen-space DoF, because the near/focus/far geometric segmentation requires a highly complex blur.

SIGGRAPH 2018: Life of a Bokeh, Infiltrator Demo

While watching the presentation, I couldn’t help but momentarily set aside DoF and focus on the appearance of the rail. The straight portion of the railing (where a hand would rest) appears completely smooth, despite its low polygon modeling. Clearly this section of the railing owes its visual fidelity to advances in shading and lighting. However, the smooth appearance is also greatly supported by the absence of curvature in the silhouette.

Now let’s take a look at the curved, supporting rail with the depth of field technique enabled. The horribly faceted geometry is now clearly exposed, marring an otherwise beautiful render.

Silhouette curvature exposes the undersampled surface.

But I’m picking on a background prop in a tech demo, which really isn’t fair, is it? My claim is that this type of obvious surface undersampling is actually rampant in games and breaks immersion. So instead, let’s take a look at a high fidelity game released in 2018, Detroit: Become Human.

Everything about this game is reaching for feature film quality, and much of it lands beautifully. So let's look at a hero asset that is intended to be close to the camera. Surely we won't find such artifacts here.

The following shots are taken from a scene where the player is closely inspecting items on a desk.

Detroit: Become Human, desk investigation.

The player must inspect the coffee mug to advance. So far, so good. In the next screenshot, the camera is focused on the coffee mug. This is the actual in-game zoom; notice the contrast in silhouette curvature between the rendered cup and the reference hologram floating next to it.

Camera focused on coffee mug.

Now zooming in a bit more, this is how I perceived the cup while playing the game. The undersampled surface of the mug broke the immersion for me; suddenly the story was gone and I was thinking about polygons and technical details of the game.

Curvature exposes undersampled geometry in a hero prop.

How did this happen? I suspect someone at Quantic Dream cringes every time they see this scene. My guess is that most objects have multiple levels of detail (LODs) and someone probably noticed this and reported it, but it wasn’t high enough priority to fix.

Challenges and Solutions

In games there are many solutions to this problem, but the common approach is to generate multiple levels of detail for each object and smoothly transition between them as needed. Film also uses LODs; however, another common solution is to leverage specialized surface representations to preserve detail. Both approaches come with drawbacks.

Levels of Detail

While good LOD tooling does exist, generating, maintaining, and transitioning LODs can be a significant production effort. Even in feature film, where geometric complexity has exploded, the solution is often to brute-force render complex geometry without LODs because of the overhead that LOD pipelines introduce.

It should also be noted that level of detail is typically a local, per-asset optimization. While it can be expanded to a non-local optimization (for example, in an open-world city, each city block may have a distant LOD with aggregated distant geometry), this is expensive to compute, maintain, invalidate, and store.
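Setting the aggregation question aside, the per-asset selection step itself is conceptually simple. As a point of reference, here is a minimal sketch of distance-based LOD selection with a cross-fade factor; the function name, the screen-coverage metric, and the thresholds are all illustrative assumptions, not any particular engine's behavior.

```python
import math

def select_lod(bounding_radius, distance, fov_y, thresholds=(0.20, 0.05, 0.01)):
    """Return (lod_index, fade), where LOD 0 is full detail.  `fade` in [0, 1]
    falls toward 0 as the object approaches the next coarser LOD and could
    drive a dither or cross-fade transition."""
    # Rough fraction of the vertical field of view covered by the bounding sphere.
    coverage = bounding_radius / (distance * math.tan(fov_y * 0.5))
    for lod, t in enumerate(thresholds):
        if coverage >= t:
            next_t = thresholds[lod + 1] if lod + 1 < len(thresholds) else 0.0
            fade = min((coverage - next_t) / (t - next_t), 1.0)
            return lod, fade
    return len(thresholds), 0.0

fov = math.radians(60.0)
print(select_lod(bounding_radius=0.1, distance=0.5, fov_y=fov))   # close-up prop -> LOD 0
print(select_lod(bounding_radius=0.1, distance=25.0, fov_y=fov))  # distant prop -> coarsest LOD
```

The hard part in production is rarely this selection logic; it is authoring, validating, and transitioning the LOD meshes it switches between.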

Subdivision Surfaces

Subdivision surfaces allow artists to work naturally at multiple resolutions while preserving the underlying surface signal at high fidelity. Creases, holes, and sharpness controls, previously restricted by patents but now freely available, further increase the power of subdivision surfaces.
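To make the crease idea concrete, here is a toy sketch of the curve analogue of Catmull-Clark refinement with a per-vertex semi-sharp weight. This is not the OpenSubdiv API; the function names, the 1D setting, and the simple linear blend between the smooth and sharp rules are illustrative assumptions.

```python
import numpy as np

def refine_curve(points, sharpness):
    """One refinement step of an open cubic B-spline curve.
    sharpness[i] in [0, 1] blends vertex i between the smooth rule (0)
    and the fully sharp, position-preserving rule (1)."""
    n = len(points)
    refined = []
    for i in range(n):
        if 0 < i < n - 1:
            smooth = (points[i - 1] + 6.0 * points[i] + points[i + 1]) / 8.0
        else:
            smooth = points[i]              # keep open-curve endpoints in place
        sharp = points[i]                   # crease/corner rule: interpolate the vertex
        s = sharpness[i]
        refined.append((1.0 - s) * smooth + s * sharp)          # semi-sharp blend
        if i < n - 1:
            refined.append(0.5 * (points[i] + points[i + 1]))   # inserted edge midpoint
    return np.array(refined)

cage = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
smooth_pass = refine_curve(cage, sharpness=[0.0, 0.0, 0.0, 0.0])   # fully smooth
creased_pass = refine_curve(cage, sharpness=[0.0, 1.0, 0.0, 0.0])  # hard kink kept at vertex 1
print(creased_pass[2])  # vertex 1 survives refinement unchanged: [1. 2.]
```

The authoring appeal is that a single scalar moves a feature continuously between rounded and knife-edge, rather than requiring extra support geometry to be modeled.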

While open source technology such as OpenSubdiv is readily available, it is difficult to integrate with game engines because of the render pipeline complexity that dynamic tessellation introduces.

Static tessellation can be more easily leveraged to generate LODs, but the assets must then be stored in a subdivision representation, which most game engines don't support. Just as with LODs, this is a local optimization and carries the same caveats around global optimization and aggregation.

It should be noted that real-time subdivision surfaces are used extensively in digital content creation tools and have been used in some games, most notably the Call of Duty series. The tech used in CoD was based on research pioneered by Wade Brainerd, Tim Foley, et al.

Reverse Subdivision

A subdivision surface adds detail to the lowest level of refinement; typical mesh reduction schemes work in the opposite direction, generating coarser LODs from the highest level of refinement. While working on OpenSubdiv, two things occurred to me:

  1. Most assets get their density from manual subdivision (while modeling) and hence already have subdiv-compatible topology.
  2. The process of subdivision can be run in reverse to reduce the poly count below level zero.

Combining these observations with learned fractional crease weights, it seems possible to recover the original surface after reverse subdivision. To be clear, this is speculation.
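As a toy illustration of why this seems plausible (and explicitly not a production scheme): for a fixed scheme and fixed crease weights, one level of refinement is a linear map S from coarse points to fine points, so the coarse cage that best explains a given fine mesh can be recovered by least squares. The sketch below does this for simple midpoint insertion on an open polyline; the names are illustrative, and the same machinery would apply to the crease-weighted rules sketched earlier.

```python
import numpy as np

def midpoint_subdivision_matrix(n_coarse):
    """Linear map S from n_coarse points to 2*n_coarse - 1 points:
    original vertices are kept and a midpoint is inserted on every edge."""
    n_fine = 2 * n_coarse - 1
    S = np.zeros((n_fine, n_coarse))
    for i in range(n_coarse):
        S[2 * i, i] = 1.0                 # keep original vertex
    for i in range(n_coarse - 1):
        S[2 * i + 1, i] = 0.5             # inserted edge midpoint
        S[2 * i + 1, i + 1] = 0.5
    return S

cage = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 1.5], [3.0, 3.0]])
S = midpoint_subdivision_matrix(len(cage))

fine = S @ cage                                        # forward: coarse -> fine
recovered, *_ = np.linalg.lstsq(S, fine, rcond=None)   # reverse: least-squares cage fit
print(np.allclose(recovered, cage))                    # True: the cage is recovered
```

In this toy case the fine curve lies exactly in the range of S, so the cage is recovered exactly; the speculative part, as noted above, is whether fitting crease weights alongside the cage keeps the error acceptable on real game meshes.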

Still, in my opinion, reverse subdivision is an underexplored solution. While there was some research during the heyday of subdivision surfaces, I don't know of any examples where it has been used in production. [update: Tom Forsyth reported using a similar technique (though without crease weights) on Blade 2, circa 2002]

Conclusion

In a time when visuals in real-time rendering are approaching feature film quality and incremental improvements require a careful eye with A-B comparisons, geometry feels like it has been somewhat left behind. It’s not that no one cares, but rather that even the most popular solutions are cumbersome for production. So an ideal solution will need to be artist-friendly, locally and globally applicable, automatic, and robust.
