There has been much contention recently that we're being unfair to high-megapixel cameras, which show higher levels of noise than lower-megapixel cameras. The almost universal argument is that "you can downsample the high megapixel image to reduce noise". This claim is usually made without evidence or examples, so this article is intended to provide some so that you can make up your own mind.

Firstly, let's be clear here (and not blind everyone with science): downsampling four pixels into one averages out noise - that makes sense even without the maths. But averaging four pixels into one also quarters the pixel count, halving linear resolution and effectively turning your twenty-megapixel camera into a five-megapixel one (or your G10 into a 3.7 megapixel camera), ignoring the improvement in per-pixel sharpness you should see.

In order to provide some samples I took our standard noise test shot in both JPEG and RAW from a Canon PowerShot G10 at ISO 800 (a sensitivity which is a stretch for almost all compact cameras). I downsampled these 14.6 MP images to three specifically chosen resolutions:

  • 10.0 MP (3648 x 2736)
  • 6.5 MP (2944 x 2208) *
  • 3.7 MP (2208 x 1656) **

* 1.5 x 1.5 input pixels for one output pixel
** 2.0 x 2.0 input pixels for one output pixel
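The starred ratios are per-axis scale factors, and you can check them from the pixel dimensions. A quick sketch in plain Python (assuming the G10's 4416 x 3312 native output, which gives the 14.6 MP figure):

```python
# Pixel dimensions: the G10's 4416 x 3312 native size plus the three targets
original = (4416, 3312)
targets = {
    "10.0 MP": (3648, 2736),
    "6.5 MP": (2944, 2208),
    "3.7 MP": (2208, 1656),
}

for label, (w, h) in targets.items():
    mp = w * h / 1e6                  # pixel count in megapixels
    scale = original[0] / w           # input pixels per output pixel, per axis
    print(f"{label}: {w} x {h} = {mp:.1f} MP, "
          f"{scale:.2f} x {scale:.2f} input pixels per output pixel")
```

The 6.5 MP and 3.7 MP targets come out at exactly 1.5 and 2.0 input pixels per output pixel on each axis, which is why those two sizes were chosen.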

To be sure I've got a fair cross-section of what the average user (not the average fanboy) would actually do, I chose five different downsampling methods:

  • JPEG: Photoshop Bicubic
  • JPEG: Photoshop Bicubic Sharper
  • JPEG: Photoshop Bilinear
  • JPEG: Canon Digital Photo Professional
  • RAW: Canon Digital Photo Professional
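Photoshop's and DPP's resamplers are of course more sophisticated, but to make "four pixels into one" concrete, here's a minimal 2 x 2 box average in plain Python - a far simpler kernel than bicubic, for illustration only:

```python
def box_downsample_2x(pixels):
    """Average each 2x2 block of a grayscale image (list of rows) into one pixel."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total / 4.0)
        out.append(row)
    return out

# 4x4 toy image -> 2x2 result
img = [[0, 4, 8, 8],
       [4, 8, 8, 8],
       [2, 2, 6, 6],
       [2, 2, 6, 6]]
print(box_downsample_2x(img))  # -> [[4.0, 8.0], [2.0, 6.0]]
```

Bicubic and bilinear filters weight a larger neighborhood of input pixels, but the averaging principle - and hence the noise-reduction argument - is the same.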

So let's see what effect this has on noise levels (measured as the standard deviation of a middle-gray patch):


Let's start with the JPEG image (red / orange / green lines): all methods except Bicubic Sharper (no surprise there) reduce noise, but hardly significantly. Indeed, even at 3.7 MP (a four-pixels-into-one reduction) we're seeing very little reduction in measured noise. In RAW things are a bit stranger: with default noise reduction settings, DPP delivers a noisier image at full resolution, which suddenly dives to 'JPEG levels' at ten megapixels and then tracks as we'd expect down to 3.7 MP.

If we take the most commonly used downsampling method (Photoshop Bicubic) we get a 4% reduction in standard deviation at 10.0 MP, a 10% reduction at 6.5 MP and a 20% reduction at 3.7 MP. Twenty percent is a nice number and it sounds good, but don't forget you've now got an image that is a quarter of its original size.
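If noise really were independent from pixel to pixel, averaging four samples into one would cut the standard deviation by a factor of √4 - a 50% reduction, not 20%. A quick pure-Python simulation (synthetic Gaussian noise, not camera data) shows that theoretical limit:

```python
import random
import statistics

random.seed(42)
w = h = 256

# Middle-gray patch with independent Gaussian noise (std dev ~10)
noisy = [[128 + random.gauss(0, 10) for _ in range(w)] for _ in range(h)]

# "Four into one": average each 2x2 block into a single output pixel
small = [[(noisy[y][x] + noisy[y][x + 1] +
           noisy[y + 1][x] + noisy[y + 1][x + 1]) / 4
          for x in range(0, w, 2)]
         for y in range(0, h, 2)]

def patch_stdev(img):
    return statistics.stdev(v for row in img for v in row)

before, after = patch_stdev(noisy), patch_stdev(small)
print(f"std dev {before:.1f} -> {after:.1f}")  # roughly halved: a ~50% reduction
```

That the real camera images fall so far short of 50% is the first hint that their noise isn't independent per pixel, which is the subject of the grain-size section below.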

Enough graphs and figures; let's have a look at the images, as ultimately that's the most important thing (although some may classify debating as being more important). Below are the most common method (Photoshop Bicubic), the best performing (DPP JPEG) and RAW (DPP RAW).

Photoshop Bicubic downsampling

(left to right: 14.6 MP original, 10.0 MP downsampled, 6.5 MP downsampled, 3.7 MP downsampled)

Canon DPP downsampling

(left to right: 14.6 MP original, 10.0 MP downsampled, 6.5 MP downsampled, 3.7 MP downsampled)

Canon DPP RAW downsampling

(left to right: 14.6 MP original, 10.0 MP downsampled, 6.5 MP downsampled, 3.7 MP downsampled)

As with all of our reviews and articles, we provide you with the samples and let you draw your own conclusions. Mine is that you have to downsample a long way (four pixels into one) before you get any really noticeable gain, and even then noise is still visible and you've got a much smaller image. At the end of it all, downsampling is no substitute for larger sensors or larger photosites.

Why theory is great but grain size isn't

One of the reasons the theory that downsampling reduces noise doesn't appear to work in practice is that the theory assumes noise is random per pixel. Unfortunately, this isn't necessarily true. Noise at a single photosite affects adjacent pixels as part of the demosaicing process, so noise doesn't occur as individual pixels but as grain. The mathematical theory may tell you that downsampling works, but it won't if your noise grains are any larger than one pixel (and they nearly always are from a camera with a Bayer color filter array).

In the example below we have three noisy images: one with a grain size of 1 pixel (commonly used for non-real-world demonstrations), one with a grain size of 1.5 pixels and one with a grain size of 2 pixels. If we downsample each of these by 50% (using Photoshop Bicubic) we see noise drop substantially for the 1 pixel grain image but much less so for the 1.5 and 2.0 grain size images.
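The grain-size effect can also be reproduced synthetically: build one noise field whose values are independent per pixel (1-pixel grain) and one where each value is repeated over a 2 x 2 block (2-pixel grain), then apply identical 2 x 2 averaging to both. A sketch, assuming Gaussian noise and grain blocks aligned to the averaging grid (the worst case):

```python
import random
import statistics

random.seed(1)
n = 128  # each image is 2n x 2n pixels

def stdev(img):
    return statistics.stdev(v for row in img for v in row)

def avg_2x2(img):
    """Downsample by averaging each 2x2 block into one pixel."""
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

# Grain size 1: independent noise at every pixel
grain1 = [[random.gauss(0, 10) for _ in range(2 * n)] for _ in range(2 * n)]

# Grain size 2: each noise value covers a 2x2 block (noise correlated across pixels)
base = [[random.gauss(0, 10) for _ in range(n)] for _ in range(n)]
grain2 = [[base[y // 2][x // 2] for x in range(2 * n)] for y in range(2 * n)]

for name, img in (("1 px grain", grain1), ("2 px grain", grain2)):
    print(f"{name}: {stdev(img):.1f} -> {stdev(avg_2x2(img)):.1f} after 2x2 averaging")
```

With the grain aligned to the averaging grid, each output pixel averages four identical values, so the noise survives untouched. Real grain is neither perfectly square nor aligned, which is why the measured images below land somewhere in between.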

Original images (crops)

(1 pixel: 11.4 std dev; 1.5 pixels: 11.1 std dev; 2.0 pixels: 11.1 std dev)

After 50% Photoshop Bicubic downsampling (crops)

(1 pixel: 4.9 std dev; 1.5 pixels: 8.0 std dev; 2.0 pixels: 9.9 std dev)

