
PNG files are said to use lossless compression. However, whenever I am in an image editor, such as Gimp, and try to save an image as PNG file, it asks for the Compression parameter, which ranges between 0 and 9. If it has a compression parameter that affects the visual precision of the compressed image, how does it make PNG lossless? Can someone explain this please? Do I get lossless behavior only when I set the compression parameter to 9?

Most lossless compression algorithms have tunables (like dictionary size) which are generalized in a “how much effort should be made in minimizing the output size” slider. This is valid for ZIP, GZip, BZip2, LZMA, ... –  Daniel B Nov 27 at 10:12
The question could be stated differently. If no quality is lost from the compression, then why not always use the compression producing the smallest size? The answer then would be, because it requires more RAM and more CPU time to compress and decompress. Sometimes you want faster compression and don't care as much about compression ratio. –  kasperd Nov 27 at 12:12
PNG compression is almost identical to ZIPping files. You can compress them more or less but you get the exact file back when it decompresses -- that's what makes it lossless. –  mikebabcock Nov 27 at 14:31
Most compression software such as Zip and Rar lets you choose a "compression level", trading between smaller file <--> longer time. It does not mean the software discards data during compression. This setting (in GIMP, pngcrush, etc.) is similar. –  Salman A Nov 27 at 17:50
@SalmanA: I guess you meant to say "between smaller file <--> shorter time" (normally "smaller file" implies "longer time"). –  Andriy M Nov 29 at 2:09

5 Answers


PNG is lossless. GIMP is most likely just not using the best word in this case. Think of it as "quality of compression", or in other words, "level of compression". With lower compression, you get a bigger file, but it takes less time to produce, whereas with higher compression, you get a smaller file that takes longer to produce. Typically you get diminishing returns (i.e., not as much decrease in size compared to the increase in time it takes) when going up to the highest compression levels, but it's up to you.
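This size-versus-time tradeoff is easy to demonstrate with Python's zlib module, the same DEFLATE implementation PNG builds on. The sample buffer here is made up purely for illustration:

```python
import time
import zlib

# Made-up, highly repetitive sample standing in for raw image rows.
data = b"lossless example row of pixels " * 4000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    # Whatever the level, decompression restores the input exactly.
    assert zlib.decompress(compressed) == data
    print(f"level {level}: {len(compressed):6d} bytes, {elapsed:.4f}s")
```

Higher levels spend more time searching for longer matches, but the decompressed bytes are identical in every case.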

    
Makes sense. Thanks for explaining it! –  pkout Nov 26 at 18:38
Also, PNG compression actually has many tunable parameters where adjustments in either direction can shrink output size depending on the contents of the source - it's far more complex than a simple "better" and "worse" slider. For general purposes, it's not too important, but if you want the absolute smallest then use a tool like pngcrush that can compare many variations for the smallest possible. –  Bob Nov 27 at 3:32
@jjlin If you want to donate rep to LordNeckbeard, put a bounty on this question and then award it to him. –  Dan Neely Nov 27 at 14:10
@Dan Neely Good idea, thanks. –  jjlin Nov 27 at 18:11
@Nolonar Generally no; if anything a higher compression level usually decreases decompression time because there's less data for it to have to read and process. The longer compression time is due to doing a more thorough job of finding patterns to compress (oversimplifying). –  fluffy 2 days ago

PNG is compressed, but lossless

The compression level is a tradeoff between file size and encoding/decoding speed. To overly generalize, even non-image formats, such as FLAC, have similar concepts.

Different compression levels, same decoded output

Although the file sizes are different, due to the different compression levels, the actual decoded output will be identical.

You can compare the MD5 hashes of the decoded outputs with ffmpeg using the MD5 muxer.

This is best shown with some examples:

Create PNG files:

$ ffmpeg -i input -vframes 1 -compression_level 0 0.png
$ ffmpeg -i input -vframes 1 -compression_level 100 100.png
  • By default ffmpeg will use -compression_level 100 for PNG output.

  • A quick, sloppy test showed that 100 (highest compression) took roughly 3x longer to encode and 5x longer to decode than 0 (lowest compression) in this example.

Compare file size:

$ du -h *.png
  228K    0.png
  4.0K    100.png

Decode the PNG files and show MD5 hashes:

$ ffmpeg -loglevel error -i 0.png -f md5 -
3d3fbccf770a51f9d81725d4e0539f83

$ ffmpeg -loglevel error -i 100.png -f md5 -
3d3fbccf770a51f9d81725d4e0539f83

Since both hashes are the same you can be assured that the decoded outputs (the uncompressed, raw video) are exactly the same.
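The same check can be reproduced without ffmpeg. This sketch uses Python's zlib and hashlib on a made-up raw buffer to show that different compression levels produce differently sized payloads which nevertheless hash identically after decoding:

```python
import hashlib
import zlib

# Made-up stand-in for raw, uncompressed image data.
raw = bytes(range(256)) * 1000

fast = zlib.compress(raw, 1)    # low compression level
small = zlib.compress(raw, 9)   # high compression level

# The compressed payloads may differ in size and content...
print(len(fast), len(small))

# ...but both decode to byte-identical data, so the MD5 hashes match.
assert hashlib.md5(zlib.decompress(fast)).digest() == \
       hashlib.md5(zlib.decompress(small)).digest()
```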

+1 for giving concrete examples. –  njzk2 Nov 27 at 19:36
+1 did not know that ffmpeg could handle pngs. –  Lekensteyn Nov 27 at 21:49
@Lekensteyn It's great for making screenshots. Example to skip 30 seconds and take screenshot: ffmpeg -ss 30 -i input -vframes 1 output.png Also good for making videos out of images and vice versa. –  LordNeckbeard Nov 27 at 23:16
    
Does it mean that the PNG needs to be decompressed every time it has to be rendered? –  akshay2000 Nov 28 at 9:25
    
If you reread the file from disk or cache, yes, it has to be decompressed. Inside the same page the cache can probably reuse the decompressed version though. –  David Mårtensson Nov 28 at 9:56

PNG compression happens in two stages.

  1. Pre-compression re-arranges the image data so that it will be more compressible by a general purpose compression algorithm.
  2. The actual compression is done by DEFLATE, which searches for, and eliminates duplicate byte-sequences by replacing them with short tokens.

Since step 2 is a very time/resource intensive task, the underlying zlib library (an encapsulation of raw DEFLATE) takes a compression parameter: 1 = fastest compression, 9 = best compression, 0 = no compression. That's where the 0-9 range comes from, and GIMP simply passes that parameter down to zlib. Note that at level 0 your PNG will actually be slightly larger than the equivalent bitmap.
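That level-0 overhead is easy to verify with Python's zlib (a sketch using made-up data): level 0 emits raw "stored" blocks, and the header and checksum bytes make the output slightly larger than the input.

```python
import zlib

payload = bytes(range(256)) * 100   # made-up stand-in for bitmap bytes

stored = zlib.compress(payload, 0)  # level 0: raw "stored" blocks
best = zlib.compress(payload, 9)    # level 9: best effort

# Level 0 adds a zlib header, block headers, and a checksum,
# so the output is slightly LARGER than the input.
assert len(stored) > len(payload)
print(len(payload), len(stored), len(best))
```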

However, level 9 is only the "best" that zlib will attempt, and is still very much a compromise solution.
To really get a feel for this, if you're willing to spend 1000x more processing power on an exhaustive search, you can gain 3-8% higher data density using zopfli instead of zlib.
The compression is still lossless; it's just a more optimal DEFLATE representation of the data. This approaches the limit of zlib-compatible libraries, and is therefore the true "best" compression achievable with PNG.

Note: Decompression time is the same regardless of the compression level, or iteration count when using zopflipng. –  Adria Nov 28 at 10:05

A primary motivation for the PNG format was to create a replacement for GIF that was not only free but also an improvement over it in essentially all respects. As a result, PNG compression is completely lossless - that is, the original image data can be reconstructed exactly, bit for bit - just as in GIF and most forms of TIFF.

PNG uses a 2-stage compression process:

  1. Pre-compression: filtering (prediction)
  2. Compression: DEFLATE (see wikipedia)

The precompression step is called filtering, which is a method of reversibly transforming the image data so that the main compression engine can operate more efficiently.

As a simple example, consider a sequence of bytes increasing uniformly from 1 to 255:

1, 2, 3, 4, 5, .... 255

Since there is no repetition in the sequence, it compresses either very poorly or not at all. But a trivial modification of the sequence - namely, leaving the first byte alone but replacing each subsequent byte by the difference between it and its predecessor - transforms the sequence into an extremely compressible one:

1, 1, 1, 1, 1, .... 1

The above transformation is lossless, since no bytes were omitted, and is entirely reversible. The compressed size of this series will be much reduced, but the original series can still be perfectly reconstituted.
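The same transform can be sketched in Python with zlib: the raw ramp barely compresses, the delta-encoded form collapses to a handful of bytes, and the transform reverses exactly.

```python
import zlib

ramp = bytes(range(1, 256))  # 1, 2, 3, ..., 255

# Delta-encode: keep the first byte, then store successive differences.
delta = bytes([ramp[0]]) + bytes((b - a) & 0xFF for a, b in zip(ramp, ramp[1:]))
assert delta == b"\x01" * 255      # every difference is 1

# The monotone ramp barely compresses; the delta form shrinks dramatically.
print(len(zlib.compress(ramp, 9)), len(zlib.compress(delta, 9)))

# And the transform is fully reversible, so nothing is lost.
restored = bytearray([delta[0]])
for d in delta[1:]:
    restored.append((restored[-1] + d) & 0xFF)
assert bytes(restored) == ramp
```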

Actual image data is rarely that perfect, but filtering does improve compression in grayscale and truecolor images, and it can help on some palette images as well. PNG supports five types of filters, and an encoder may choose to use a different filter for each row of pixels in the image:

[image: the five PNG filter types: None, Sub, Up, Average, Paeth]

Filtering works on bytes, but for multi-byte pixels (e.g., 24-bit RGB or 64-bit RGBA) only corresponding bytes are compared, meaning the red components of the pixels are handled separately from the green and blue components.

To choose the best filter for each row, an encoder would need to test all possible combinations. This is clearly impractical: even a 20-row image would require testing over 95 trillion combinations, where "testing" would involve filtering and compressing the entire image.

Compression levels are normally defined as numbers between 0 (none) and 9 (best). These refer to tradeoffs between speed and size, and relate to how many combinations of row filters are tried. There are no standards regarding these compression levels, so every image editor may have its own algorithm for how many filters to try when optimizing the image size.

Compression level 0 means that filters are not used at all, which is fast but wasteful. Higher levels mean that more and more combinations are tried on image-rows and only the best ones are retained.

I would guess that the simplest approach to the best compression is to incrementally test-compress each row with each filter, save the smallest result, and repeat for the next row. This amounts to filtering and compressing the entire image five times, which may be a reasonable trade-off for an image that will be transmitted and decoded many times. Lower compression values will do less, at the discretion of the tool's developer.
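A cheaper heuristic than test-compressing every row is often used in practice: pick, per row, the filter whose output has the smallest sum of absolute signed values. The sketch below is purely illustrative (it is not any editor's actual code) and implements only the None and Sub filters from the spec:

```python
# Illustrative sketch of per-row filter selection; only the None and Sub
# filters from the PNG spec are implemented here.

def filter_none(row):
    return bytes(row)

def filter_sub(row):
    # Each byte becomes its difference from the byte to its left.
    return bytes((row[i] - (row[i - 1] if i else 0)) & 0xFF
                 for i in range(len(row)))

FILTERS = {0: filter_none, 1: filter_sub}

def choose_filter(row):
    # Score = sum of the filtered bytes read as signed values; the
    # smallest score is a good proxy for "compresses best".
    def score(data):
        return sum(b if b < 128 else 256 - b for b in data)
    ftype = min(FILTERS, key=lambda f: score(FILTERS[f](row)))
    return ftype, FILTERS[ftype](row)

row = bytes(range(10, 110))   # a smooth gradient row
ftype, filtered = choose_filter(row)
print(ftype)                  # 1: Sub wins easily on a gradient
```

On smooth gradients Sub turns the row into near-constant small values, so it scores far below None; on noisy rows None can win instead.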

In addition to filters, the compression level might also affect the zlib compression level which is a number between 0 (no Deflate) and 9 (maximum Deflate). How the specified 0-9 levels affect the usage of filters, which are the main optimization feature of PNG, is still dependent on the tool's developer.

The conclusion is that PNG has a compression parameter that can reduce the file-size very significantly, all without the loss of even a single pixel.

Sources:

Wikipedia Portable Network Graphics
libpng documentation Chapter 9 - Compression and Filtering

    
I don't think the compression level setting changes the use of filters. The level 1-9 setting probably just chooses the zlib compression level 1-9, and level 0 means that the deflate algorithm is not used at all. Most implementations probably do not change the filters per row, but just use the Paeth filter all the time. –  Pauli L yesterday
    
@PauliL: I don't agree, because in all comparisons of PNG compression software, there are very large differences between the sizes of the generated images. If all products used the same parameters for the same library, then all the sizes should have been the same, as well as the speed. –  harrymc yesterday
    
Do you have any links to such comparisons? –  Pauli L yesterday
    
@PauliL: A quick search came up with this comparison. –  harrymc yesterday
    
@PauliL: You are probably right that the zlib compression levels are affected by the compression levels of PNG. I have modified my answer accordingly, although no compression tool documents what they do exactly. Perhaps the explanation for the tools with the worst size results is that they use no filters at all, only zlib compression. –  harrymc 15 hours ago

OK, I am too late for the bounty, but here is my answer anyway.

PNG is always lossless. It uses the Deflate/Inflate algorithm, similar to those used in zip programs.

The Deflate algorithm searches for repeated sequences of bytes and replaces them with tags. The compression level setting specifies how much effort the program spends finding the optimal combination of byte sequences, and how much memory is reserved for that. It is a compromise between time and memory usage versus compressed file size. However, modern computers are so fast and have so much memory that there is rarely a need to use anything other than the highest compression setting.

Many PNG implementations use the zlib library for compression. Zlib has nine compression levels, 1-9. I don't know the internals of Gimp, but since it has compression level settings 0-9 (0 = no compression), I would assume this setting simply selects the compression level of zlib.

The Deflate algorithm is a general-purpose compression algorithm; it was not designed for compressing pictures. However, unlike most other lossless image file formats, PNG is not limited to plain Deflate: its compression takes advantage of the knowledge that we are compressing a 2D image. This is achieved by so-called filters.

(Filter is actually a bit of a misleading term here. It does not change the image contents, it just codes them differently. A more accurate name would be delta encoder.)

The PNG specification defines 5 different filters (including 0 = none). A filter replaces absolute pixel values with the difference from a neighbouring pixel: the one to the left, the one above, the diagonal, or a combination of those. This may significantly improve the compression ratio. Each scan line in the image can use a different filter, so the encoder can optimize the compression by choosing the best filter for each line.
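For example, the Paeth filter (one of the five filters in the PNG specification) predicts each byte from its left, upper, and upper-left neighbours and stores only the difference from that prediction. This is a direct transcription of the predictor function from the spec:

```python
def paeth_predictor(a, b, c):
    """a = byte to the left, b = byte above, c = byte above-left."""
    p = a + b - c                      # initial estimate
    pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
    if pa <= pb and pa <= pc:          # ties break left, then above
        return a
    if pb <= pc:
        return b
    return c

# On smooth image data the prediction is usually exact, so the filter
# stores mostly zeros, which Deflate compresses extremely well.
print(paeth_predictor(10, 10, 9))      # prints 10 (left neighbour is nearest)
```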

For details of PNG file format, see PNG Specification.

Since there is a virtually infinite number of combinations, it is not possible to try them all. Therefore, different kinds of strategies have been developed for finding an effective combination. Most image editors probably do not even try to optimize the filters line by line but instead just use a fixed filter (most likely Paeth).

A command line program pngcrush tries several strategies to find the best result. It can significantly reduce the size of PNG file created by other programs, but it may take quite a bit of time on larger images. See Source Forge - pngcrush.

