**Implementing diffuse BRDF**

In the [first post](../part-1/index.html) we prepared an environment and traced our first triangle using Metal Performance Shaders. Here I will describe how to load geometry from an .OBJ file using [tinyobjloader](https://github.com/syoyo/tinyobjloader) and implement a simple diffuse BRDF with next event estimation. The [source code](http://github.com/sergeyreznik/metal-ray-tracer/tree/part-2) for this post is also available on my GitHub page.

Loading .OBJ file into the ray-tracer
======================================================================

Once we can trace a single triangle, we are not actually limited in the number of triangles: we just need to provide proper vertices and indices to the ray-tracer. So let's load the classic Cornell Box scene and trace it.

In the first post our vertices held positions only; let's now extend them with normals and texture coordinates. We will be using `packed_float3` for positions and normals and `packed_float2` for texture coordinates.

Also, let's add a simple class called `GeometryProvider` which will load files and create the vertex and index buffers. Inside it we will use `tinyobjloader` to actually load the file, or return the placeholder triangle from the first post in case something goes wrong and the file cannot be loaded. `tinyobjloader` loads an .obj file as a vector of `tinyobj::shape_t` values for geometry and a vector of `tinyobj::material_t` for materials. Let's just go through these vectors and extract the data into a linear array of our vertices. In the end we will have a vector of `Vertex` structures and an index buffer containing the indices (0, 1, 2, ..., N - 1), where `N` equals `3 x (number of triangles)`.

So now we are ready to load the .obj file and fill the vertex/index buffers with its contents. For now we will not bother with staging buffers (to put the contents directly into GPU memory) and will just use the [MTLResourceStorageModeManaged](https://developer.apple.com/documentation/metal/mtlresourceoptions/mtlresourcestoragemodemanaged) option for our buffer contents.

Everything is set up for tracing the Cornell box, but we need to modify our ray generation shader: in the first post we used a simple orthographic projection, and now we want some kind of pinhole camera looking at our model. So instead of generating parallel rays uniformly over [0..1] space, we will send them from a single point. For now, let's not bother with a physically-based camera or even FOV settings; we will cast rays from one constant position and compute their directions from the pixel coordinates.

Having the same intersection kernel as in the first post (which outputs color-encoded barycentric coordinates), we will get the left image. Visualizing the primitive index of the intersection point gives us the right image:

![Visualizing barycentric coordinates](images/pic-1.png)![Visualizing primitive index](images/pic-2.png)

Adding materials
======================================================================

Now it is time to exploit the primitive index property of the intersection. Let's make the array of materials loaded from the .obj file available in the intersection shader and read the material's diffuse color. We will start by declaring a `Material` structure, which will be extended later; for now it holds diffuse and emissive colors plus a material type (either `MATERIAL_DIFFUSE` or `MATERIAL_LIGHT`).
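A rough sketch of what such a structure could look like in a header shared between the CPU code and the kernels (field names and exact types here are my own illustration, not necessarily the ones used in the repository):

```cpp
// Material types; more will be added in later parts.
#define MATERIAL_DIFFUSE 0
#define MATERIAL_LIGHT   1

// One entry per material loaded from the .obj file.
struct Material
{
    packed_float3 diffuse;  // diffuse reflectance (albedo)
    packed_float3 emissive; // non-zero only for "glowing" triangles
    unsigned int type;      // MATERIAL_DIFFUSE or MATERIAL_LIGHT
};
```

On the CPU side `packed_float3` would need a matching definition (for example a simple struct of three floats), so that the layout agrees on both sides.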
We need the emissive color and the `MATERIAL_LIGHT` type for light sources because we are going to implement "glowing" triangles rather than analytical light sources (point/sphere/area/etc.). We will also use another array of `Triangle` structures, which for now holds only a material index; later it will be extended with area and probability properties used in next event estimation.

It is time to update our intersection shader to fetch the material index and the material properties:

![](images/pic-3.png)

The light source on the ceiling is black because it has no diffuse component, only an emissive one. We also no longer need the test compute shader that fills the screen with a colored pattern, so let's remove it.

Adding some noise
======================================================================

In order to perform Monte Carlo integration we will need random values; we are going to use them for light sampling, anti-aliasing, etc. The usual way to obtain random values in a shader is a tiled noise texture, but instead we will use a buffer of `float4` values filled with random numbers. The advantage of this approach is that a buffer is more flexible and can easily be extended to contain more values, or even more dimensions. Later we will replace the random values with a low-discrepancy sequence, but for now let's just fill the buffer with random values.

We will be doing this every frame, and since we are using [triple buffering](https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/MTLBestPracticesGuide/TripleBuffering.html) by default, we will need three noise buffers, each updated in its own frame. So let's declare three buffers and a new variable holding the frame index, and then update the buffers each frame.

Accumulating image
======================================================================

The first use of the noise buffer is anti-aliasing: we will slightly jitter each ray's direction every frame in the ray generation shader. Here `NOISE_BLOCK_SIZE` defines the size of the noise buffer in one dimension (i.e. the total buffer size is `NOISE_BLOCK_SIZE * NOISE_BLOCK_SIZE`).

Now we need to average the images over time in order to get smooth, anti-aliased edges. For this purpose we add a new stage to our ray-tracing pipeline, which reads the value stored in the output image and updates it with the new frame. This shader also needs to know the current frame index: if this is the very first frame, it should simply store the obtained results in the output image rather than averaging them. So there are three major changes now:

1. a new kernel for image accumulation;
2. a modification to our ray structure in order to store the radiance along the ray;
3. removal of any direct writes to the output texture from the ray-tracing kernel(s).

Let's add a radiance value to our ray structure. Rather than writing the diffuse color of the material to the output image in the `handleIntersections` kernel, we will now store this value in the ray's radiance field.

Now let's add a new compute shader which will accumulate the image. But before that we need to add a new buffer which stores the frame index for now and could later be extended with application-specific data. Similar to the noise buffers, we will have three of them, one per in-flight frame, and update them together with the noise.
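As a sketch, such a buffer could be backed by a tiny structure like the one below (the name `ApplicationData` and its exact contents are an assumption for illustration):

```cpp
// Per-frame data shared with the kernels; triple-buffered like the noise,
// so the GPU always sees the value belonging to the frame being rendered.
struct ApplicationData
{
    unsigned int frameIndex; // number of frames accumulated so far
};
```

In the accumulation kernel this `frameIndex` is what drives the `t = frameIndex / (frameIndex + 1)` blend described below.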
On the CPU side we will also have a frame counter. Now everything is prepared for an anti-aliased image: we have the noise buffers and all required information, and the rays now store a radiance value, so we can process them after the actual ray-tracing routines. We just need to add a new kernel which reads the data from the output image, linearly interpolates it with the current frame and stores the updated value. Since every frame should have equal weight in the output image, we compute the weight of the stored data as:

`t = frameIndex / (frameIndex + 1)`

which gives us 0 in the first frame (the stored data does not affect the output), 1/2 in the second frame (the new frame has equal weight), 2/3 in the third frame (the new frame has 1/3 weight), and so on. Running this kernel after the ray tracing produces an anti-aliased image:

![](images/pic-4.png)

Adding shadows
======================================================================

Now that most of the helpers and support methods are implemented, we can start playing with lighting and shadows. Let's start by casting shadow rays in the direction of a light source (remember, we only have glowing triangles); if we hit one, we assume the origin of the ray to be lit, otherwise it is in shadow. In order to do this, we will need to:

- collect all glowing triangles;
- sample one of them in the main intersection handler kernel;
- sample a random point inside the selected emitter triangle;
- fill a new buffer with shadow rays;
- intersect the shadow rays with the geometry we already have;
- process the intersections and determine whether the point from which we cast each shadow ray is lit.

Let's start by collecting all glowing triangles. It is pretty easy to do, because we are already processing the loaded geometry; we will just put triangles with an emissive material into a separate buffer. But before we do this, let's declare our `EmitterTriangle` structure. Its vertices will be used for sampling a point inside the triangle. We will use `globalIndex` (i.e. the index of the glowing triangle among all of the scene's triangles) to determine whether we really hit it in the shadow intersection handler. The probability distribution function (pdf) and cumulative distribution function (cdf) will be used to randomly sample one of the emitter triangles, and the area will be used later for sampling light sources.

Let's now fill a buffer with these structures on the CPU side. First, for each triangle we check whether its material is emissive and, if so, add it to the list of emitter triangles. Then we sort the triangles by their area and compute the probability distribution function and cumulative distribution function values for each. And now let's add one dummy triangle with a cdf equal to one to the back of the list; it will be used when randomly sampling a triangle. We just need its cdf to be one, no other fields are required, since this fake triangle will never be selected in the shader.

Now we need to sample one triangle from this array. Here I will provide a brief description and code for the sampling; for details please refer to section **13.3** (SAMPLING RANDOM VARIABLES) in the **PBRT book**. We have an array of triangles, each having a cdf value larger than the previous one. So, given a random value ξ, we can sample one of these triangles according to its probability (which is nothing but the area of the triangle divided by the total area of all triangles). It means that with a uniform distribution of the random values we will more likely select larger triangles than smaller ones. In order to sample a triangle, we iterate over them and find the one whose cdf value is not larger than our random value ξ, as in the sketch below.
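Here is a minimal sketch of such a selection, assuming that the cdf stored in each `EmitterTriangle` is the cumulative probability of all preceding emitters (so the dummy entry at the end holds 1.0); the actual code in the repository may differ in details:

```cpp
#include <metal_stdlib>
using namespace metal;

struct EmitterTriangle // simplified; the real structure also holds vertices, area, pdf, emissive
{
    float cdf;        // assumed: cumulative probability of all *preceding* emitters
    uint globalIndex; // index of this triangle among all triangles in the scene
};

// Picks the index of an emitter triangle proportionally to its area.
// The dummy entry with cdf == 1.0 at the end of the array guarantees the loop
// terminates without an explicit bounds check.
uint sampleEmitterTriangleIndex(device const EmitterTriangle* triangles, float xi)
{
    uint index = 0;
    while (triangles[index + 1].cdf < xi)
    {
        ++index;
    }
    return index;
}
```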
Notice the `triangles[index + 1].cdf` access here. This is why we added a dummy triangle with a cdf equal to one to the end of the list: if ξ is equal to one, the last real triangle in the array will be selected.

Now, having one sampled triangle, we can choose a random point within it. We will need another two random values (luckily, we have 4-component floats in our noise array). In order to select a random point inside a triangle we generate random barycentric coordinates and interpolate the triangle's vertices to obtain a point. Converting two uniform random values to barycentric coordinates uses the standard mapping \(u = 1 - \sqrt{ξ_1}\), \(v = ξ_2 \sqrt{ξ_1}\), with the third coordinate being \(1 - u - v\).

Everything seems to be prepared for generating shadow rays. In the main intersection handler we will determine the exact position of the intersection and cast a ray from this position to the randomly selected point within the randomly selected triangle. Of course, we need to add more buffers as parameters to our `handleIntersections` kernel: we require access to the vertex and index buffers, as well as to the emitter triangles and the light sampling rays buffer, which makes a pretty impressive list of parameters. Inside the kernel we first determine the exact position of the intersection, then sample a triangle and a random point inside it, and finally generate the shadow ray.

On the CPU side we will need to add another ray intersector object and encode it into the command buffer, finding intersections of the shadow rays with the existing geometry (using the existing acceleration structure). Also, we need to add a new kernel which handles the intersections of the shadow rays with the geometry. For now it is pretty straightforward: determine whether we really hit the triangle we were supposed to, and if so, just add one to the original ray's radiance value. Putting everything together and launching our ray-tracer will give us a nice image with shadows (not taking materials into account for now):

![](images/pic-5.png)

Sampling light sources
======================================================================

The final step for this post is to implement correct light sampling. Please refer to section **14.2** (SAMPLING LIGHT SOURCES) of the **PBRT book** for more theory. Here we will be sampling area lights (each emitter triangle is considered a separate light source in our case). Luckily, most of the changes will be on the GPU side only (in kernels), but we need to add an emissive value to the `EmitterTriangle` structure, so we actually know how much light it emits. Also, we will add a `throughput` value to the light sampling ray structure and, in case of a successful intersection, add it to the main ray's radiance.

Basically, we already have light sampling; we just need to do it more carefully and in a physically based way. In order to do this we have to determine the probability of sampling a specific point inside a specific triangle. Also, we need to take the surface's BSDF into account. For now we will assume that all surfaces are perfectly diffuse and have a Lambertian BSDF (`1/π`). Since we have already selected a point on the emitter triangle, we can easily determine the BSDF value, as in the sketch below.
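Something along these lines, with the cosine term at the shaded point folded into the same value (a sketch under assumed names, not necessarily the exact code from the repository):

```cpp
#include <metal_stdlib>
using namespace metal;

// Lambertian BRDF (albedo / π) multiplied by the cosine term at the shaded point.
// `normal` is the surface normal at the intersection, `directionToLight` points
// towards the sampled point on the emitter triangle.
float3 evaluateDiffuseBsdf(float3 diffuseColor, float3 normal, float3 directionToLight)
{
    float cosTheta = saturate(dot(normal, directionToLight));
    return diffuseColor * (1.0f / M_PI_F) * cosTheta;
}
```

Whether the cosine term lives here or in the throughput computation is just bookkeeping; the important part is that it appears exactly once.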
Now we need to compute the probability of selecting a point inside a specific triangle. Fortunately, we already know the probability of selecting one triangle in our array (which is `triangle_area / total_area`), so we can just use it directly. The probability of selecting a point uniformly on the triangle's surface equals `1.0 / triangle_area`, but we need to convert it to the probability of selecting a direction on the unit sphere. There is a simple relationship between these values:

\(pdf(direction) = d^2 / (cosθ * triangleArea)\)

where \(d\) is the distance between the current point and the selected point on the triangle, and \(θ\) is the angle between the direction to the selected point and the normal of the light's surface at the selected point. For more theory please refer to section **14.2.2** (SAMPLING SHAPES) of the **PBRT book** or **Chapter 7** (Sampling Lights Directly) in **Ray Tracing: The Rest of Your Life** by Peter Shirley.

In the code we combine these two probabilities into the pdf of the light sample. Taking into account the material's diffuse color and the emissive property of the light, the throughput of the light sampling ray is essentially the BSDF value multiplied by the light's emissive color and divided by this pdf. Putting everything together and running our small ray-tracer will give us an image:

![](images/pic-6.png)

Wait, why is it so dark? What did we miss? Of course, we forgot to enable sRGB conversion :) Let's do this in the GameViewController by setting the view's `colorPixelFormat` property. Now it is much better:

![](images/pic-7.png)

And actually, this is very close to the image produced by [Mitsuba](https://www.mitsuba-renderer.org) (I've included a test scene for Mitsuba in the `media/reference` folder).

**That's it!**

[Return to the index](../index.html)