Can reading zipped files be faster than uncompressed? - compression

Is there any chance that packing a large file with some simple algorithm enables me to read the data faster than from an uncompressed file (due to the hard drive being slower than uncompressing)?
What kind of compression rate would I need to have? Can any fast compression algorithm do that?

Yes. That is often the case with deflate compression, used by zip, gzip, and zlib, when reading from hard drives with a typical compression factor of, say, four.
From SSDs, you may need to go to something with faster decompression. One you could try is lz4.
Your mileage may vary.
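If you want to check this on your own hardware, here is a minimal sketch (in Python, with placeholder file names) that times a plain read against a gzip read of the same data. For a fair test you also need to drop the OS page cache between runs, or use a file larger than RAM.

    # Rough benchmark: is reading + decompressing a .gz file faster than
    # reading the uncompressed original? File names are placeholders.
    import gzip
    import time

    def read_plain(path):
        with open(path, "rb") as f:
            return len(f.read())

    def read_gzipped(path):
        with gzip.open(path, "rb") as f:   # decompresses while reading
            return len(f.read())

    for label, read, path in [("plain", read_plain, "data.bin"),
                              ("gzip ", read_gzipped, "data.bin.gz")]:
        start = time.perf_counter()
        n = read(path)
        elapsed = time.perf_counter() - start
        print(f"{label}: {n / 1e6:.1f} MB of uncompressed data "
              f"in {elapsed:.3f} s ({n / elapsed / 1e6:.1f} MB/s)")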

You could also try Density; its command-line client "sharc" is benchmarked here.

Related

Which decompression algorithms are safe to use on attacker-supplied buffers?

I want to save network bandwidth by using compression, such as bzip2 or gzip.
Attackers, as well as normal users, may send compressed messages.
Are there sequences of bytes which will cause some decompression functions to become stuck in an infinite loop, or to use vast amounts of memory?
If so, is this a fundamental property of those algorithms, or just an implementation bug?
I can only speak for zlib's inflate. There is no input that would result in an infinite loop or uncontrolled memory consumption.
Since the maximum compression ratio of deflate is a bit less than 1032:1, inflate, when working normally, can expand its input by up to almost 1032:1. You just need to be able to handle that possibility.
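If you need to guard against that expansion, a minimal sketch using zlib's decompressobj interface bounds how much output an untrusted buffer can produce (the 64 MiB cap is an arbitrary illustration, not a recommendation):

    # Decompress an untrusted zlib/deflate buffer with a hard cap on output,
    # so a ~1032:1 expansion cannot exhaust memory.
    import zlib

    MAX_OUTPUT = 64 * 1024 * 1024   # example limit only

    def safe_decompress(data: bytes) -> bytes:
        d = zlib.decompressobj()
        out = d.decompress(data, MAX_OUTPUT)   # max_length bounds the output size
        if d.unconsumed_tail:                  # more output would exceed the cap
            raise ValueError("decompressed data exceeds size limit")
        return out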

gzip - break common compression levels

I know that gzip supports 9 compression levels, from fast to strong.
The decompression algorithm does not care about the compression level at all.
Is it possible to reach a "higher" level than 9 with a tool other than the common gzip application?
I mean, someone could have created a modified gzip compressor which is more effective than gzip level 9.
The background is that I have a webserver which hosts compressed gz files. It would be nice to reduce the sizes of those files and I do not care how long my server has to work in order to reduce those files even by 1 byte at the end. It is a one-time task, so it does not matter.
Is there something like a hacked version of gzip supporting higher levels or offering higher compression?
Yes. It's called zopfli. It is painfully slow, but will compress about 5% better than zlib level 9. zopfli is built in to pigz, which is a gzip equivalent that makes use of multiple processors and cores. Compression level 11 in pigz invokes the zopfli compressor. (pigz goes up to 11. Get it?) Using multiple cores on large inputs helps mitigate the slowness of zopfli.
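If you want to see the difference on one of your own files, a rough sketch follows. It assumes the zopfli command-line tool is installed and, with default options, writes its output next to the input as <file>.gz; "asset.js" is a placeholder input.

    # Compare ordinary level-9 gzip output with zopfli for the same file.
    import gzip
    import os
    import subprocess

    src = "asset.js"
    data = open(src, "rb").read()

    gzip9_size = len(gzip.compress(data, compresslevel=9))
    subprocess.run(["zopfli", src], check=True)      # should produce asset.js.gz
    zopfli_size = os.path.getsize(src + ".gz")

    print(f"gzip -9 : {gzip9_size} bytes")
    print(f"zopfli  : {zopfli_size} bytes "
          f"({100 * (1 - zopfli_size / gzip9_size):.1f}% smaller)")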

Does GZIP Compression Level Have Any Impact On Decompression

I understand that GZIP is a combination of LZ77 and Huffman coding and can be configured with a level between 1 and 9, where 1 indicates the fastest (but least) compression and 9 indicates the slowest (but best) compression.
My question is, does the choice of level only impact the compression process or is there an additional cost also incurred in decompression depending on the level used to compress?
I ask because typically many web servers will GZIP responses on the fly if the client supports it, e.g. Accept-Encoding: gzip. I appreciate that when doing this on the fly, a level such as 6 might be a good choice for the average case, since it gives a good balance between speed and compression.
However, if I have a bunch of static assets that I can GZIP just once ahead of time - and never need to do this again - would there be any downside to using the highest but slowest compression level? I.e. is there now an additional overhead for the client that would not have been incurred had a lower compression level been used.
Great question, and an underexposed issue. Your intuition is solid – for some compression algorithms, choosing the max level of compression can require more work from the decompressor when it's unpacked.
Luckily, that's not true for gzip – there's no extra overhead for the client/browser to decompress more heavily compressed gzip files (e.g. choosing 9 for compression instead of 6, assuming the standard zlib codebase that most servers use). The best measure for this is decompression rate, which for present purposes is in units of MB/sec, while also monitoring overhead like memory and CPU. Simply going by decompression time is no good because the file is smaller at higher compression settings, and we're not controlling for that factor if we're only using a stopwatch.
gzip decompression quickly gets asymptotic in terms of both time-to-decompress and memory usage once you get past level 6 compressed content. The time-to-decompress flatlines for levels 7, 8, and 9 in the test results linked by Marcus Müller, though that's coarse-grained data given in whole seconds.
You'll also notice in those results that the memory requirements for decompression are flat for all levels of compression at 0.1 MiB. That's almost unbelievable, just a degree of excellence in software that we rarely see. Mark Adler and colleagues deserve massive props for what they achieved. gzip is a very nice format.
The memory use gets at your question about overhead. There really is none. You don't gain much with level 9 in terms of browser decompression speed, but you don't lose anything.
Now, check out these test results for a bit more texture. You'll see how the gzip decompression rate is slightly faster with level 9 compressed content than with lower levels (at level 9, decomp rate is about 0.9% faster than at level 6, for example). That is interesting and surprising. I wouldn't expect the rate to increase. That was just one set of test results – it may not hold for other scenarios (and the difference is quite small in any case).
Parting note: Precompressing static files is a good idea, but I don't recommend gzip at level 9. You'll get smaller files than gzip-9 by instead using zopfli or libdeflate. Zopfli is a well-established gzip compressor from Google. libdeflate is new but quite excellent. In my testing it consistently beats gzip-9, but still trails zopfli. You can also use 7-Zip to create gzip files, and it will consistently beat gzip-9. (In the foregoing, gzip-9 refers to using the canonical gzip or zlib application that Apache and nginx use).
No, there is no downside on the decompression side when using the maximum compression level. In fact, there is a slight upside, in that better-compressed data decompresses faster. The reason is simply fewer compressed bits that the decompressor has to process.
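A quick way to convince yourself of this is to compress the same payload at a few levels and time only the decompression, reporting the rate in uncompressed MB/s so the differing compressed sizes don't skew the comparison. A rough sketch, with a placeholder input file:

    # Compress the same payload at levels 1, 6 and 9, then time decompression only.
    import time
    import zlib

    payload = open("large_text_file.txt", "rb").read()

    for level in (1, 6, 9):
        compressed = zlib.compress(payload, level)
        start = time.perf_counter()
        for _ in range(20):                 # repeat to reduce timer noise
            out = zlib.decompress(compressed)
        elapsed = time.perf_counter() - start
        rate = 20 * len(out) / elapsed / 1e6
        print(f"level {level}: {len(compressed):>10} compressed bytes, "
              f"decompresses at {rate:.0f} MB/s")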
Actually, in real-world measurements a higher compression level yields lower decompression times (probably caused primarily by the fact that there is less data to fetch from permanent storage and less RAM to access).
Since most of the things that happen to the data on the client are rather expensive compared to gunzipping, you shouldn't really worry about this at all.
Also be advised that for static assets that are images, Huffman/zlib coding is usually already applied (PNG simply uses zlib!), and you won't gain much by gzipping them. In fact, small images (for example, icons) often fit into a single TCP packet (ignoring the HTTP header, which is sometimes bigger than the image itself), so you don't get any speed gain, though you do save money on transfer volume -- if you deliver terabytes of small images. May I presume you're not Google itself...
Also, I'd like to point you to higher-level optimizations, like tools that can transform your JavaScript code into a more compact shape (e.g. removing whitespace, renaming private variables from my_mother_really_likes_this_number_of_unicorns to m1); also, things like jQuery come in a "precompressed" (minified) form. The same exists for HTML. It doesn't make things easier to debug, but since you seem to be interested in the ultimate space saving...

Predicting time or compression ratio for lossless compression of a file?

How would one be able to predict execution time and/or the resulting compression ratio when compressing a file with a given lossless compression algorithm? I am especially concerned with local compression, since if you know the time and compression ratio for local compression, you can easily calculate the time for network compression based on the currently available network throughput.
Let's say you have some information about the file, such as size, redundancy, and type (we can say text to keep it simple). Maybe we have some statistical data from actual prior measurements. What else would be needed to predict execution time and/or compression ratio (even very roughly)?
For just local compression, the size of the file would have an effect, since actually reading and writing the data to/from the storage medium (SD card, hard drive) would take the dominant portion of the total execution time.
The actual compression portion will probably depend on redundancy/type, since most compression algorithms work by compressing small blocks of data (100 KB or so). For example, larger HTML/JavaScript files compress better since they have higher redundancy.
I guess there is also a problem of scheduling, but this could probably be ignored for rough estimation.
This is a question that has been in my head for quite some time. I have been wondering whether some low-overhead code (say, on the server) could predict how long it would take to compress a file before performing the actual compression.
Sample the file by taking 10-100 small pieces from random locations. Compress them individually. This should give you a lower bound on compression ratio.
This only returns meaningful results if the chunks are not too small. The compression algorithm must be able to make use of a certain size of history to predict the next bytes.
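A minimal sketch of that sampling approach, using zlib as the stand-in compressor (the chunk size and sample count are arbitrary choices, and "big_input.dat" is a placeholder):

    # Estimate a file's deflate compression ratio by compressing a handful of
    # chunks read from random offsets. The chunk size must be large enough
    # (64 KiB here) for the compressor to build up a useful history.
    import os
    import random
    import zlib

    CHUNK_SIZE = 64 * 1024
    NUM_SAMPLES = 20

    def estimate_ratio(path: str, level: int = 6) -> float:
        size = os.path.getsize(path)
        raw = comp = 0
        with open(path, "rb") as f:
            for _ in range(NUM_SAMPLES):
                f.seek(random.randrange(max(1, size - CHUNK_SIZE)))
                chunk = f.read(CHUNK_SIZE)
                raw += len(chunk)
                comp += len(zlib.compress(chunk, level))
        return raw / comp        # e.g. 3.0 means roughly 3:1

    print(f"estimated ratio: {estimate_ratio('big_input.dat'):.2f}:1")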
It depends on the data, but with images you can take small samples. Downsampling would change the result. Here is an example: PHP - Compress Image to Meet File Size Limit.
The compression ratio can be calculated with the standard formulas: compression ratio = uncompressed size / compressed size, and space savings = 1 - (compressed size / uncompressed size).
And the performance benchmarking can be done using V8 or Sunspider.
You can also use algorithms like DEFLATE or LZMA for the compression itself. PPM (Prediction by Partial Matching) can be used for prediction.
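For concreteness, a tiny worked example of those two standard measures, with made-up sizes:

    # Worked example of compression ratio and space savings.
    uncompressed_size = 1_000_000
    compressed_size = 250_000

    ratio = uncompressed_size / compressed_size            # 4.0, i.e. 4:1
    savings = 1 - compressed_size / uncompressed_size      # 0.75, i.e. 75%
    print(f"ratio {ratio:.1f}:1, space savings {savings:.0%}")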

Read sequential file - Compressed file vs Uncompressed

I am looking for the fastest way to read a sequential file from disk.
I read in some posts that if I compressed the file using, for example, lz4, I could achieve better performance than reading the flat file, because I would minimize the I/O operations.
But when I try this approach, scanning an lz4-compressed file gives me poorer performance than scanning the flat file. I didn't try the lz4demo linked below, but looking at it, my code is very similar.
I have found these benchmarks:
http://skipperkongen.dk/2012/02/28/uncompressed-versus-compressed-read/
http://code.google.com/p/lz4/source/browse/trunk/lz4demo.c?r=75
Is it really possible to improve performance reading a compressed sequential file over an uncompressed one? What am I doing wrong?
Yes, it is possible to improve disk read performance by using compression.
This effect is most likely to happen if you use a multi-threaded reader: while one thread reads compressed data from disk, the other one decodes the previous compressed block in memory.
Considering the speed of LZ4, the decoding operation is likely to finish before the other thread completes reading the next block. This way, you'll achieve a bandwidth improvement proportional to the compression ratio of the tested file.
Obviously, there are other effects to consider when benchmarking. For example, the seek times of an HDD are several orders of magnitude larger than those of an SSD, and under bad circumstances they can become the dominant part of the timing, reducing any bandwidth advantage to zero.
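A minimal sketch of that overlapped reader/decoder arrangement, using zlib from the standard library instead of LZ4 so it stays self-contained (zlib releases the GIL while decompressing large buffers, so the read and the decode genuinely overlap; "data.zlib" is a placeholder file name):

    # One thread reads compressed blocks from disk while the main thread
    # decompresses the previous block.
    import queue
    import threading
    import zlib

    BLOCK = 1 << 20          # read 1 MiB of compressed data per iteration

    def reader(path, q):
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                q.put(chunk)
        q.put(None)          # end-of-stream sentinel

    def read_compressed(path):
        q = queue.Queue(maxsize=4)            # bounded queue limits memory use
        threading.Thread(target=reader, args=(path, q), daemon=True).start()
        d = zlib.decompressobj()
        total = 0
        while (chunk := q.get()) is not None:
            total += len(d.decompress(chunk)) # overlaps with the next disk read
        total += len(d.flush())
        return total

    print(f"{read_compressed('data.zlib')} uncompressed bytes")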
It depends on the speed of the disk vs. the speed and space savings of decompression. I'm sure you can put this into a formula.
Is it really possible to improve performance reading a compressed sequential file over an uncompressed one? What am I doing wrong?
Yes, it is possible (example: a 1kb zip file could contain 1GB of data - it would most likely be faster to read and decompress the ZIP).
Benchmark different algorithms and their decompression speeds. There are compression benchmark websites for that. There are also special-purpose high-speed compression algorithms.
You could also try to change the data format itself. Maybe switch to protobuf, which might be faster and smaller than CSV.
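Coming back to the benchmarking suggestion above, a rough sketch that compares compressed size and decompression rate for the codecs shipped with Python (zlib, bz2, lzma) on your own data; LZ4 or zstd would need third-party bindings, and "sample.csv" is a placeholder input:

    # Compare compressed size and decompression rate for stdlib codecs.
    import bz2
    import lzma
    import time
    import zlib

    data = open("sample.csv", "rb").read()

    for name, compress, decompress in [("zlib", zlib.compress, zlib.decompress),
                                       ("bz2",  bz2.compress,  bz2.decompress),
                                       ("lzma", lzma.compress, lzma.decompress)]:
        blob = compress(data)
        start = time.perf_counter()
        out = decompress(blob)
        elapsed = time.perf_counter() - start
        print(f"{name:4s}: {len(blob):>10} bytes, "
              f"decompresses at {len(out) / elapsed / 1e6:.0f} MB/s")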