Metal Shaders: Color Adjustments

The first few shaders I’ve covered in this series used an algorithm that was applied uniformly to the entire image. For many of the algorithms we use in image processing, however, we would like to control the degree of effect applied to the image. We don’t always want all or nothing. The purpose of this post is to cover several adjustment filters present in GPUImage 3:

  • Brightness
  • Contrast
  • Exposure
  • Gamma
  • Saturation
  • Red-Green-Blue Channel Adjustment

Before we jump into these shaders, I would like to briefly cover how you can receive user input to determine the percentage of effect you would like to apply to the image.

Encoding Parameters in Metal

There are two different ways to get parameters to the GPU:

  • Hard code them to constant memory space
  • Encode them to buffers to be accessed by the GPU

In the previous post about luminance we utilized the first method: the luminance algorithm never changes, and it is used by multiple shaders in the GPUImage library.

For our adjustments in this section, we have a slider value to pass from the UI to the GPU. This value is encoded into a buffer that can be accessed by the GPU. We are already encoding the image we are processing as a texture. You can read more about encoding here.
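
GPUImage handles this encoding for you through uniformSettings, but conceptually it boils down to something like the following sketch. This is not GPUImage’s actual encoding code; it assumes you already have a MTLRenderCommandEncoder in hand and uses setFragmentBytes to copy a single Float into buffer slot 1, matching the [[ buffer(1) ]] attribute in the shader signatures below:

import Metal

// A minimal sketch, not GPUImage's actual code: copy one Float into the
// fragment shader's buffer slot 1. For small, frequently changing values
// like a slider, setFragmentBytes avoids managing a separate MTLBuffer.
func encodeBrightness(_ brightness: Float, into encoder: MTLRenderCommandEncoder) {
    var value = brightness
    encoder.setFragmentBytes(&value, length: MemoryLayout<Float>.size, index: 1)
}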

Here is an example of one of our Swift classes defining a new shader operation with a single parameter to encode:

public class BrightnessAdjustment: BasicOperation {
    public var brightness:Float = 0.0 { didSet { uniformSettings[0] = brightness } }

    public init() {
        super.init(fragmentFunctionName:"brightnessFragment", numberOfInputs:1)

        uniformSettings.appendUniform(0.0)
    }
}

Pay close attention to this line:

uniformSettings.appendUniform(0.0)

This is where we are encoding a value into our uniform buffer. We want to start out with the value set to 0.0. We also want the uniform to update in response to user input, so we take advantage of Swift’s didSet property observer:

public var brightness:Float = 0.0 { didSet { uniformSettings[0] = brightness } }

Any time the brightness variable changes, the uniform setting is updated to the new value. Since we only have one value, it is stored at index [0] of the buffer.
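
Here is a hypothetical bit of UIKit wiring to show how this responds to user input; the view controller and slider are my own invention, not part of GPUImage:

import UIKit

// Hypothetical UI wiring: each slider change updates the brightness
// property, whose didSet observer writes into uniformSettings[0].
class FilterViewController: UIViewController {
    let brightnessFilter = BrightnessAdjustment()

    @IBAction func brightnessSliderChanged(_ sender: UISlider) {
        brightnessFilter.brightness = sender.value // triggers didSet
    }
}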

So we have a buffer with a value of 0.0 that can be accessed by the shader. But the shader doesn’t know what this value correlates to. In order to do that, we need to set up a custom data structure:

typedef struct
{
    float brightness;
} BrightnessUniform;

We use this in the function signature to help the shader “decode” what this value is used for in our fragment equations:

fragment half4 brightnessFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant BrightnessUniform& uniform [[ buffer(1) ]])

All of the shaders I detail in this blog post follow this pattern. The only real change between them is the name of the fragment function, the uniform structure, and the constant passed into the fragment function. Let’s look at the math that goes into these effects next.

Brightness

Brightness is the intensity of color within an image. Here is the brightness filter’s code:

fragment half4 brightnessFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant BrightnessUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    return half4(color.rgb + uniform.brightness, color.a);
}

The brightness shader augments the intensity of each color channel by the same amount. It is very similar to the luminance filter, except the adjustments are not weighted: the red, green, and blue values are each increased equally. This isn’t a particularly refined color adjustment algorithm, but it gets the job done. Changing the brightness does not fundamentally change the dynamic range of the image.
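
For reference, here is roughly how the filter slots into a GPUImage pipeline using the library’s --> chaining operator. This sketch assumes a RenderView named renderView is already set up in your UI, and the image name is a placeholder:

import GPUImage
import UIKit

// Load a picture, brighten it by 0.25, and render the result.
let picture = PictureInput(image: UIImage(named: "sample.jpg")!)
let brightness = BrightnessAdjustment()
brightness.brightness = 0.25

picture --> brightness --> renderView
picture.processImage()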

Brightness Filter

Contrast

While brightness represents the overall intensity of an image, contrast represents the difference between the lightest parts of an image and the darkest parts. The larger the difference between these values, the more contrast you have in an image. A black and white graphic novel page has incredible contrast because each point on the page is either all or nothing.

Here is our contrast filter:

fragment half4 contrastFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant ContrastUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    return half4(((color.rgb - half3(0.5)) * uniform.contrast + half3(0.5)), color.a);
}

While brightness was an additive operation, contrast is a multiplicative one. The formula pivots around the midpoint: 0.5 is subtracted from each channel, the difference is scaled by the contrast value, and 0.5 is added back. Values near the midpoint barely move, while values far from it are pushed further out, so the larger the contrast value, the wider the disparity between the darkest and lightest pixel values.
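
A quick CPU-side illustration of the shader’s formula makes this pivot behavior easy to see:

// The shader's contrast formula, evaluated on single channel values.
func applyContrast(_ value: Float, contrast: Float) -> Float {
    return (value - 0.5) * contrast + 0.5
}

applyContrast(0.2, contrast: 2.0) // -0.1, clamps to 0.0 on screen
applyContrast(0.5, contrast: 2.0) //  0.5, the midpoint never moves
applyContrast(0.8, contrast: 2.0) //  1.1, clamps to 1.0 on screen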

Contrast Filter

Exposure

In photography, exposure is the amount of light you allow through the lens. If you are in a low light situation, such as astrophotography, you want the exposure set very high. In full light situations like a mid-afternoon picnic, you need to tamp down the exposure to avoid having your image be blown out.

Here is the shader that we use to emulate exposure:

fragment half4 exposureFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant ExposureUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    return half4((color.rgb * pow(2.0, uniform.exposure)), color.a);
}

The shader takes the base color passed into it and multiplies it by two raised to the power of the exposure value. The exposure value is clamped between 0.0 and 1.0, and any nonzero number raised to the zero power is one, so the multiplier will always fall between 1.0 (no change) and 2.0 (every channel doubled).
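
Evaluating that multiplier on the CPU for a few slider positions makes the range concrete:

import Foundation

// The exposure multiplier from the shader: 2^exposure.
func exposureMultiplier(_ exposure: Double) -> Double {
    return pow(2.0, exposure)
}

exposureMultiplier(0.0) // 1.0, image unchanged
exposureMultiplier(0.5) // ~1.41
exposureMultiplier(1.0) // 2.0, every channel doubled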

Exposure Filter. Notice how easy it is to “blow out” the image.

Gamma

Light, like sound, is not perceived by humans linearly. Small increases in light at the dark end of the range are perceived as large jumps in brightness, while proportionally similar increases at the bright end of the range barely register.

Here is a good link to an article about what gamma is.

fragment half4 gammaFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant GammaUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    return half4(pow(color.rgb, half3(uniform.gamma)), color.a);
}

This formula utilizes a new math function: pow. The pow function takes two parameters:

  • The base value
  • The exponent to raise it to

So, for example, if you had

pow(2, 8);

the result would be 256 (two raised to the eighth power).

For our gamma correction, we take a value set by the user and raise each color channel’s value to that power.
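
Evaluating the formula on a midtone shows why gamma brightens or darkens an image without clipping it; 0.0 and 1.0 always map to themselves:

import Foundation

// Gamma applied to a 0.5 midtone value.
let midtone = 0.5
pow(midtone, 0.5) // ~0.71, gamma below 1.0 brightens midtones
pow(midtone, 1.0) //  0.5, unchanged
pow(midtone, 2.0) //  0.25, gamma above 1.0 darkens midtones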

Gamma Filter. Notice how much brighter this image is than with the brightness or exposure filters, without blowing out the image.

Saturation

Saturation is how much chrominance is present in an image. In our earlier post about luminance we discussed how to create a monochromatic image. We are going to take this a step further and allow the user to adjust the amount of color they want in their image. Here is the shader:

constant half3 luminanceWeighting = half3(0.2125, 0.7154, 0.0721);

fragment half4 saturationFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant SaturationUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    half luminance = dot(color.rgb, luminanceWeighting);

    return half4(mix(half3(luminance), color.rgb, half(uniform.saturation)), color.a);
}

First we declare a three-component vector containing the same luminance weighting we created back in our luminance shader. We will use these values to adjust the color saturation.

Back when we simply wanted the luminance, we uniformly applied this value to all three color channels. Now we need to use a portion of this value along with a portion of another value. For this, we turn to another Metal function that is new to this blog: mix.

mix takes three parameters:

  • The first value
  • The second value
  • The blend fraction: at 0.0 you get the first value, at 1.0 the second

The actual math behind mix looks like this:

T mix(T x, T y, T a)
x + (y - x) * a

The difference between the second value and the first is scaled by the blend fraction and added to the first value. In our case, the first value is the luminance and the second is the original color, so the saturation uniform controls how much of the original color is blended back in: at 0.0 the image is fully desaturated, and at 1.0 the original color is restored.
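
Re-implementing mix on the CPU for a single channel makes the blend behavior concrete; the values here are made up for illustration:

// mix re-implemented for one channel: x is the luminance, y the
// original color value, a the saturation amount.
func mix(_ x: Double, _ y: Double, _ a: Double) -> Double {
    return x + (y - x) * a
}

let luminance = 0.6
let originalRed = 0.9
mix(luminance, originalRed, 0.0) // 0.6, fully desaturated
mix(luminance, originalRed, 0.5) // 0.75, halfway
mix(luminance, originalRed, 1.0) // 0.9, original color restored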

This shader is a good example of how many shaders in GPUImage are composed of, and build upon, smaller and simpler shaders. One reason this series of blog posts started with very simple shaders is to show the reader how to intuitively build up more complex ones.

Saturation Filter

RGB

So far all of our shader functions have affected all color channels equally. One powerful aspect of having multiple color channels is that they can be adjusted independently.

In order to adjust each channel independently, we need more than one uniform setting:

public class RGBAdjustment: BasicOperation {
    public var red:Float = 1.0 { didSet { uniformSettings[0] = red } }
    public var green:Float = 1.0 { didSet { uniformSettings[1] = green } }
    public var blue:Float = 1.0 { didSet { uniformSettings[2] = blue } }

    public init() {
        super.init(fragmentFunctionName:"rgbAdjustmentFragment", numberOfInputs:1)

        uniformSettings.appendUniform(1.0)
        uniformSettings.appendUniform(1.0)
        uniformSettings.appendUniform(1.0)
    }
}

This could be slightly confusing, so I’ll break it down a little. We have one uniform variable for each color channel, and each is connected to a separate slider in the UI. Any time one of those sliders changes, it updates the value of the specific variable it is attached to, just as in our previous shaders. If you look at the initializer, we append three identical uniforms; the uniforms have to be explicitly set once upon launch, which is why those three identical lines appear in the Swift file. The public variables above then associate each channel with a “slot” in the uniform buffer: red is index 0, green is index 1, and blue is index 2, matching the field order of the Metal struct below. Once the uniforms are initially set, we don’t care anymore about the code in the initializer; each slider takes over responsibility for its own specific slot.
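
As a quick usage sketch (the values are arbitrary), setting the properties immediately rewrites the corresponding slots:

let rgbFilter = RGBAdjustment()
rgbFilter.red = 1.0   // writes uniformSettings[0]
rgbFilter.green = 0.5 // writes uniformSettings[1]
rgbFilter.blue = 0.0  // writes uniformSettings[2]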

In order to keep this straight on the GPU side, we create a data structure mirroring these public variables so the GPU can sort out how the data it’s being sent is laid out:

typedef struct
{
    float redAdjustment;
    float greenAdjustment;
    float blueAdjustment;
} RGBAdjustmentUniform;

This buffer of data is again passed into the shader as a parameter:

fragment half4 rgbAdjustmentFragment(
    SingleInputVertexIO fragmentInput [[stage_in]],
    texture2d<half> inputTexture [[texture(0)]],
    constant RGBAdjustmentUniform& uniform [[ buffer(1) ]])
{
    constexpr sampler quadSampler;
    half4 color = inputTexture.sample(quadSampler, fragmentInput.textureCoordinate);

    return half4(color.r * uniform.redAdjustment,
                 color.g * uniform.greenAdjustment,
                 color.b * uniform.blueAdjustment,
                 color.a);
}

Each color channel is multiplied by the percentage associated with the slider in the UI. We access each color channel by name using dot notation.
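
For example, with red at 1.0, green at 0.5, and blue at 0.0, a uniform gray pixel is split apart one channel at a time (the numbers are made up for illustration):

// The shader's per-channel scaling, evaluated on one pixel.
let pixel = (r: 0.8, g: 0.8, b: 0.8)
let adjusted = (r: pixel.r * 1.0,  // 0.8, unchanged
                g: pixel.g * 0.5,  // 0.4, halved
                b: pixel.b * 0.0)  // 0.0, channel removed entirely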

Maxed Out Red Channel
Maxed Out Green Channel
Maxed Out Blue Channel

Conclusions

Many of the most common image processing functions we take for granted in programs like Photoshop are surprisingly simple. From these simple building blocks we can construct many large and impressive effects. These posts might seem like humble beginnings, but big things grow from small ones.