## Real depth in OpenGL / GLSL

Many places will give you clues about how to get linear depth from the OpenGL depth buffer, or how to visualise it, or other things. This, however, is what I believe to be the definitive answer:

This link http://www.songho.ca/opengl/gl_projectionmatrix.html gives a good run-down of the projection matrix, and the link between eye-space Z (`z_e` below) and normalised device coordinates (NDC) Z (`z_n` below). From there, we have

```glsl
A   = -(zFar + zNear) / (zFar - zNear);
B   = -2*zFar*zNear / (zFar - zNear);
z_n = -(A*z_e + B) / z_e;                 // z_n in [-1, 1]
```
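As a sanity check on these equations, here is the same arithmetic in plain Python (zNear = 0.1 and zFar = 100.0 are just illustrative values, not from the post): the mapping should send z_e = -zNear to z_n = -1 and z_e = -zFar to z_n = +1.

```python
# Sanity check of the eye-space -> NDC depth mapping (illustrative zNear/zFar).
zNear, zFar = 0.1, 100.0

A = -(zFar + zNear) / (zFar - zNear)
B = -2.0 * zFar * zNear / (zFar - zNear)

def eye_to_ndc(z_e):
    """z_e is negative in front of the camera; returns z_n in [-1, 1]."""
    return -(A * z_e + B) / z_e

print(eye_to_ndc(-zNear))  # ≈ -1.0 (near plane)
print(eye_to_ndc(-zFar))   # ≈ +1.0 (far plane)
```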

Note that the value stored in the depth buffer is actually in the range [0, 1], so the depth buffer value `z_b` is:

```glsl
z_b = 0.5*z_n + 0.5;                      // z_b in [0, 1]
```

If we have rendered this depth buffer to a texture and wish to access the real depth in a later shader, we must undo the non-linear mapping above:

```glsl
z_e = 2*zFar*zNear / (zFar + zNear - (zFar - zNear)*(2*z_b - 1));
```
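Putting the three equations together, a plain-Python round trip (illustrative zNear/zFar and depth values) recovers the eye-space depth; note that, as discussed in the comments below, the value this formula returns is the positive distance `-z_e`.

```python
# Round trip: eye-space depth -> depth-buffer value -> recovered depth
# (zNear/zFar and z_e values are illustrative).
zNear, zFar = 0.1, 100.0
A = -(zFar + zNear) / (zFar - zNear)
B = -2.0 * zFar * zNear / (zFar - zNear)

z_e = -5.0                          # eye-space z (negative in front of the camera)
z_n = -(A * z_e + B) / z_e          # NDC depth in [-1, 1]
z_b = 0.5 * z_n + 0.5               # depth-buffer value in [0, 1]

recovered = 2.0 * zFar * zNear / (zFar + zNear - (zFar - zNear) * (2.0 * z_b - 1.0))
print(recovered)                    # ≈ 5.0 -- the positive distance -z_e
```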

This is similar to the example given here: http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-..., except that their example is divided through by `zFar`, which obviously gives a better value for visualisation. Their example also uses the depth buffer value `z_b` directly as `z_n`, without shifting back to [-1, 1], which, as you can see above, is wrong.

If you want to verify the equations for `z_n` and `z_b`, you can try this in your shader on the render-to-texture (RTT) pass for your scene:

```glsl
// == RTT vert shader ====================================================
varying float depth;
void main(void)
{
    gl_Position = ftransform(); // needed so the vertex is actually transformed
    depth       = -(gl_ModelViewMatrix * gl_Vertex).z;
}

// == RTT frag shader ====================================================
varying float depth;
void main(void)
{
    float A     = gl_ProjectionMatrix[2].z;
    float B     = gl_ProjectionMatrix[3].z;
    float zNear = - B / (1.0 - A);
    float zFar  =   B / (1.0 + A);

    // float depthFF = 0.5*(-A*depth + B) / depth + 0.5;
    // float depthFF = gl_FragCoord.z;
    // gl_FragDepth  = depthFF;
}
```
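To convince yourself that the commented-out `depthFF` expression really is the fixed-function value `0.5*z_n + 0.5`, here is the same arithmetic mirrored in plain Python (zNear/zFar and depth values are illustrative):

```python
# Check that the frag shader's depthFF expression reproduces the
# fixed-function depth-buffer value (illustrative values).
zNear, zFar = 0.1, 100.0
A = -(zFar + zNear) / (zFar - zNear)
B = -2.0 * zFar * zNear / (zFar - zNear)

depth = 5.0                                   # -z_e, as written by the vert shader
z_e = -depth
z_b = 0.5 * (-(A * z_e + B) / z_e) + 0.5      # 0.5*z_n + 0.5, the buffer value
depthFF = 0.5 * (-A * depth + B) / depth + 0.5

print(abs(depthFF - z_b))                     # ≈ 0.0: the two expressions agree
```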

If you play with uncommenting the lines in the frag shader, you can try writing to `gl_FragDepth` manually, using either the value `gl_FragCoord.z`, which is supposed to be identical to the fixed-functionality value, or alternatively the value calculated from `depth`, `A`, and `B`. In all cases the results should be identical.

If you want to access the depth buffer in a later shader and recover the true depths, you then need to pass the `zNear` and `zFar` values from the first camera's projection matrix as uniforms into the shader:

```glsl
// == Post-process frag shader ===========================================
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    float z_b = texture2D(depthBuffTex, vTexCoord).x;
    float z_n = 2.0 * z_b - 1.0;
    float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    gl_FragColor = vec4(vec3(z_e / zFar), 1.0); // divide by zFar to visualise
}
```

Alternatively, you can write your own depth value to a colour texture. If, like me, you don't have floating-point textures at your disposal, you can pack the depth value into 24 bits of a regular `RGBA` / `UNSIGNED_BYTE` colour texture as follows:

```glsl
// == RTT vert shader ====================================================
varying float depth;
void main(void)
{
    gl_Position = ftransform(); // needed so the vertex is actually transformed
    depth       = -(gl_ModelViewMatrix * gl_Vertex).z;
}

// == RTT frag shader ====================================================
varying float depth;
void main(void)
{
    const vec3 bitShift3 = vec3(65536.0, 256.0, 1.0);
    const vec3 bitMask3  = vec3(0.0, 1.0/256.0, 1.0/256.0);

    float A     = gl_ProjectionMatrix[2].z;
    float B     = gl_ProjectionMatrix[3].z;
    float zNear = - B / (1.0 - A);
    float zFar  =   B / (1.0 + A);

    float depthN = (depth - zNear) / (zFar - zNear); // scale to a value in [0, 1]

    vec3 depthNPack3 = fract(depthN * bitShift3);
    depthNPack3     -= depthNPack3.xxy * bitMask3;

    // gl_FragData[0] = colour rendering of your scene
    gl_FragData[1] = vec4(depthNPack3, 1.0); // alpha should equal 1.0 if GL_BLEND is enabled
}

// == Post-process frag shader ===========================================
uniform sampler2D myDepthTex;   // My packed depth texture
uniform sampler2D depthBuffTex; // Texture storing OpenGL depth buffer
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
    const vec3 bitUnshift3 = vec3(1.0/65536.0, 1.0/256.0, 1.0);

    float z_e_mine = dot(texture2D(myDepthTex, vTexCoord).xyz, bitUnshift3);
    z_e_mine = mix(zNear, zFar, z_e_mine); // scale from [0, 1] to [zNear, zFar]

    float z_e_ffun = texture2D(depthBuffTex, vTexCoord).x;
    z_e_ffun = 2.0 * z_e_ffun - 1.0;
    z_e_ffun = 2.0 * zNear * zFar / (zFar + zNear - z_e_ffun * (zFar - zNear));

    gl_FragColor = vec4(vec3(z_e_mine/zFar), 1.0); // divide by zFar to visualise
    // gl_FragColor = vec4(vec3(z_e_ffun/zFar), 1.0); // divide by zFar to visualise
}
```
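The pack/unpack pair above is an exact algebraic inverse, which you can check by mirroring it in plain Python (this sketch emulates GLSL's `fract()` and ignores the extra 8-bit quantisation each texture channel applies on the GPU):

```python
import math

def fract(x):
    """GLSL fract(): the fractional part of x."""
    return x - math.floor(x)

def pack24(d):
    """Mirror of the packing in the RTT frag shader (channel quantisation ignored)."""
    p = [fract(d * 65536.0), fract(d * 256.0), fract(d * 1.0)]  # fract(depthN*bitShift3)
    return (p[0],                    # - p.x * 0
            p[1] - p[0] / 256.0,     # - p.x * (1/256)
            p[2] - p[1] / 256.0)     # - p.y * (1/256)

def unpack24(rgb):
    """Mirror of dot(texture2D(...).xyz, bitUnshift3) in the post-process shader."""
    return rgb[0] / 65536.0 + rgb[1] / 256.0 + rgb[2]

err = max(abs(unpack24(pack24(d)) - d) for d in (0.0, 0.123456, 0.5, 0.999))
print(err)   # ≈ 0: the pack/unpack pair cancels exactly in real arithmetic
```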

## 8 Comments

You get

```glsl
z_e = 2*zFar*zNear / (zFar + zNear - (zFar - zNear)*z_n);
```

using the information in http://www.songho.ca/. Correct?

But when I try to do the derivation, I get

```glsl
z_e = 2*zFar*zNear / (-zFar - zNear + (zFar - zNear)*z_n);
```

What am I doing wrong?

Thank you

Indeed, you appear to be correct, and your answer is the negative of mine. I think I probably negated the answer for visualization, since the depths from your formula will all be negative. In the worst case, you can just try both and see which works for you.
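This is easy to check numerically; here is a plain-Python sketch (zNear, zFar, and z_n values are just illustrative):

```python
# Check that the two derivations differ only in sign (illustrative values).
zNear, zFar = 0.1, 100.0
z_n = 0.25   # any NDC depth in [-1, 1]

post      = 2.0 * zFar * zNear / (zFar + zNear - (zFar - zNear) * z_n)    # post's formula
commenter = 2.0 * zFar * zNear / (-zFar - zNear + (zFar - zNear) * z_n)   # commenter's

print(post)              # positive: usable directly as a distance
print(post + commenter)  # ≈ 0: one is the negative of the other
```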

Oliver

There is one part which I can't quite fathom in a similar equation used in the depth-of-field calculation (from GPU Gems) to calculate Z. The method is as follows:

```glsl
Z = Znear * Zfar / (-Zfar + Zb * (Zfar - Znear));
```

where Zb is the non-linear Z-buffer value.

Comparing this to the eye-space z, the differences are:

- You use a normalised value for Zb
- The first part of the equation has a multiplier of 2
- In the denominator, -Zfar - Znear is replaced with Zfar

Why the difference?

Thanks. I'm afraid I'm not able to verify this completely right now, but I think the difference is that the normalised value z_n is between -1 and 1, while the buffer value z_b is between 0 and 1. The equations above use z_n, except in the 3rd code block; your equation uses z_b. The relationship between z_b and z_n is in the 2nd code block, and the depth is written directly in terms of z_b in the 3rd. If you expand the 3rd block, do you get back to your equation?
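Doing that expansion, the factor of 2 cancels when you substitute z_n = 2*z_b - 1, and what remains is the GPU Gems form up to sign; a quick numerical check in plain Python (zNear, zFar, and z_b values are illustrative):

```python
# Expanding the post's formula in terms of z_b and comparing with GPU Gems
# (illustrative values).
zNear, zFar = 0.1, 100.0
z_b = 0.9   # any depth-buffer value in [0, 1]

post     = 2.0 * zFar * zNear / (zFar + zNear - (zFar - zNear) * (2.0 * z_b - 1.0))
expanded = zNear * zFar / (zFar - z_b * (zFar - zNear))    # the factor of 2 cancels
gpu_gems = zNear * zFar / (-zFar + z_b * (zFar - zNear))   # the GPU Gems form

print(abs(post - expanded))   # ≈ 0: same formula after substituting z_n = 2*z_b - 1
print(abs(post + gpu_gems))   # ≈ 0: the remaining difference is only the sign
```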

```
cameraToBuffer[ze_, near_, far_] := (far (ze + near))/(ze (far - near));

bufferToCamera[zb_, near_, far_] := far * near / (zb * (far - near) - far);
```

They correctly solve for the {0} <-> {-n} and {1} <-> {-f} transformation and invert perfectly.
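Transcribed into plain Python (the near/far values are illustrative), the two functions can indeed be checked to hit the endpoints and invert each other:

```python
# Plain-Python transcription of the two functions above (illustrative near/far).
near, far = 0.1, 100.0

def camera_to_buffer(ze):
    # (far (ze + near)) / (ze (far - near))
    return (far * (ze + near)) / (ze * (far - near))

def buffer_to_camera(zb):
    # far * near / (zb (far - near) - far)
    return far * near / (zb * (far - near) - far)

print(camera_to_buffer(-near))                    # ≈ 0.0
print(camera_to_buffer(-far))                     # ≈ 1.0
print(buffer_to_camera(camera_to_buffer(-5.0)))   # ≈ -5.0
```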

Anyway thanks for a nice treatment of those formulas. This page is very useful!

@Bohumír Zámečník

I disagree with your conclusion that formula is incorrect. I too made the calculations myself (in Maple) and found them to be correct. I also got a subresult similar to your bufferToCamera function which returns z_e values in the [-n, -f] range, but I never came across anything that contradicts Oliver's original post.

Here is my Maple document: http://tinyurl.com/c49pytt

Here is a PDF of the Maple document: http://tinyurl.com/c54k5kb

A million thanks to you, sir. This is a very helpful article.