How does the compression compare on JAR and JavaScript files?
How does the decompression rate compare to others?
That is, the fluff up rate. :D
I think in most cases it's better to compress less in exchange for speed.
I mean, the paper shows that gzip -9 produces 36,445,248 bytes and Zopfli 34,995,756, but Zopfli is 81 times slower for a difference of 1,449,492 bytes.
It is 3.5 times slower than 7z for a difference of just 107,220 bytes.
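For perspective, here is a quick back-of-the-envelope calculation of those figures (a minimal sketch; the sizes are the ones quoted above, and the 7z output size is reconstructed from the stated difference):

```python
# Relative savings implied by the sizes quoted above.
gzip9 = 36445248   # gzip -9 output size in bytes
zopfli = 34995756  # Zopfli output size in bytes
sevenz = zopfli + 107220  # 7z gzip output, from the stated difference

print(f"vs gzip -9: {100 * (gzip9 - zopfli) / gzip9:.2f}% smaller")   # ~3.98%
print(f"vs 7z:      {100 * (sevenz - zopfli) / sevenz:.2f}% smaller") # ~0.31%
```

So the gain over gzip -9 is about 4%, while the gain over 7z's gzip output is about 0.3%.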
In my opinion, the most useful application for this compression algorithm could be backup storage or something similar, where you don't care how much time compression takes but how much space the result uses. For example: pcap network traffic captures (sizes of 1 TB, 770 GB, etc.).
But decompression speed is quite good, and that suits my examples.
I'm sure you will improve compression speed over time. Good work!
You are right, in most cases. But not in all cases. This is useful when it's useful but not when it's not.
Like the article says, this is best used for compress-once, distribute-many situations like static web content.
Squeezing a few extra KB from all the PNGs on your site could make them load noticeably faster.
An even better example is the many JavaScript libraries that Google hosts for probably millions of websites. ( http://goo.gl/iKFHs ) Pre-compress those babies using this and BAM: 3-8% less bandwidth used (read: terabytes).
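To make the compress-once, distribute-many idea concrete, here is a minimal sketch of batch-precompressing static assets so a server can send the .gz files as-is with Content-Encoding: gzip. The paths are hypothetical, and the stdlib gzip module stands in where Zopfli would be substituted for the extra savings:

```python
import gzip
from pathlib import Path

# Precompress static assets once, ahead of time; the server then serves
# file.ext.gz directly instead of compressing on every request.
for path in Path("static").rglob("*"):
    if path.suffix in {".js", ".css", ".html", ".svg"}:
        compressed = gzip.compress(path.read_bytes(), compresslevel=9)
        path.with_name(path.name + ".gz").write_bytes(compressed)
```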
Thank you very much, my friend Lode.
Very interesting indeed. 3-8% less bandwidth could help people in third-world countries with dialup or low-speed connections see pictures of their loved ones more quickly, help health professionals see diagrams of human anatomy more quickly, and help prehospital providers (in first-, second-, or third-world countries) gain those valuable seconds needed to save someone's life.
Just to put it in perspective: if I am in a third-world country, need to know what to do or give someone in an emergency, and only have 12 kb/s of bandwidth, that 8% bandwidth difference (or possibly one split second) could be the difference between a successful conversion of a life-threatening cardiac arrhythmia and a failed conversion resulting in death.
Many other examples come to mind just in the world of health care, where every millisecond counts.
Another example: I need to quickly review a medicine I am giving. If I review 200 medicines in a day, each second saved on downloading content gives me one more second of face time with my patients. (It may not seem like much, but it can add up to the 200 seconds they need me most... think: a more rapid response to a changing patient status because I am not stuck at the computer.)
Each time I log in to a computerized charting system, certain aspects never change (icons that are static, etc.); 8% less bandwidth could save 3-5 seconds of loading time.
If the health system's bandwidth pipeline is bogged down with many users, it could allow more users on the network without having to upgrade, saving costs and passing the savings on to patients.
As I said, in health care alone this technology could be paramount. 8% more users per pipeline: in a facility that has 1000 users, 80 more users could use the system at one time.
So while some may think it is useless, further compression of static content just may save your life!
If only one did not have to download the same things over and over again when looking at various pages of the same website.
Someone should invent that. Oh wait, there's an app for that! Browser caches, cache servers, and OPTIMIZED WEBPAGES.
Then you can tell me what an amazing difference 8% can make, once you make sure the webpages you visit don't mess up your entire pipeline by, say, putting CSS files at the end of the HTML... that nearly doubles the page loading time.
Every step is a step forward, my friend; for something at the scale of Google, CloudFlare, or Amazon S3, this could potentially reduce bandwidth by a significant amount.
Browser caches are a completely different thing, as are cache servers; I feel you may be missing the point of these things.
Yes, some web developers are undereducated about small optimizations.
Hello,
A few years ago, Ben Jos Walbeehm's DeflOpt was known to improve zip DEFLATE compression, even on kzip's output.
Do you know if the tricks it used could still apply to Zopfli's gzip DEFLATE output and improve the compression ratio a bit more?
AFAIK, its algorithms are still unpublished; they've been discussed here: http://encode.ru/threads/455-Ben-Jos-Walbeehm-s-DeflOpt-what-does-it-actually-do
Best regards
Very nice work! Any ideas how it compares to bzip2?
I'm Swiss myself, but that picture is no Zöpfli :-) You never put that little bread into a square baking form... look at: http://de.wikipedia.org/wiki/Zopf_(Brot)
Question: when will Google use this on their hosted repositories? Instant faster web (for non-cached files).
Compared to bzip2, this is a completely different animal. Zopfli is useful when you want better compression but the other side supports nothing other than gzip. Overall, the compression is still worse than with today's state-of-the-art compressors.
Also, bzip2 has been superseded by LZMA/XZ, which beats it in both performance and compression ratio almost universally.
Wow, you beat kzip! Very impressive!
7-Zip can produce gzip archives within 1% of Zopfli's size in a fraction of the time by using a 32 KB dictionary and a word size of 128.
Perhaps that can be explored to improve Zopfli's performance, if compression has reached its theoretical maximum for the DEFLATE algorithm.
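For reference, here is a hypothetical way to reproduce those 7-Zip settings, driving the 7z CLI from Python; -mfb corresponds to the GUI's "word size" option, and DEFLATE's dictionary is fixed at 32 KB regardless. Treat the exact flags as an assumption to verify against your 7-Zip version:

```python
import subprocess

# Assumed flags: -tgzip selects the gzip container, -mx=9 maximum effort,
# -mfb=128 the word size mentioned above. File names are placeholders.
subprocess.run(
    ["7z", "a", "-tgzip", "-mx=9", "-mfb=128", "corpus.bin.gz", "corpus.bin"],
    check=True,
)
```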
Rudimentary Python bindings can be found here: https://github.com/wnyc/py-zopfli
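As a rough illustration of what using those bindings might look like, assuming they expose a compress() helper under zopfli.gzip (check the repository for the actual API):

```python
from zopfli.gzip import compress  # assumed module layout; see the repo

# Zopfli-compress a file into a gzip-compatible blob.
with open("app.js", "rb") as f:   # hypothetical input file
    blob = compress(f.read())
with open("app.js.gz", "wb") as f:
    f.write(blob)
```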
I wonder how Zopfli compares with plzip, lbzip2, and pbzip2 in terms of compression speed and compression ratios: http://vbtechsupport.com/1614/
So, where's a good tool for implementing this? I would like to test it on some CSS and JS libraries out on CloudFront.
For anyone interested in seeing it shrink images, I hacked it into my open-source PNG optimizer, available here: https://github.com/depsypher/pngtastic
Nice work!