Choosing symbols in an efficient way in the arithmetic coding algorithm, C++

I'm trying to implement the arithmetic coding algorithm to compress binary images (JPG images converted to binary form using OpenCV). The problem is that I have to save, in the compressed file, the encoded string together with the symbols I used to generate it and their frequencies, so that I can decode it later. The symbols take a lot of space even when I convert them to ASCII, and if I try to use fewer characters per symbol, the encoded string gets bigger. So I wonder whether there is an efficient way to store the symbols in the compressed file with the minimum possible size, and I also want to know the most efficient way to choose the symbols from the original file.
Thanks in advance :)

325,592,005 bytes is 310 megabytes. You managed to compress this image into 2.8 + 6.1 = 8.9 megabytes, so you decreased the size by about 97%. That is a good result and I wouldn't worry here. Besides, 6.1 megabytes of 64-bit symbols means that you have around 800K of them, which is far less than the number of possible 64-bit symbols, i.e. 2^64. That is again a good result.
As to your question about using multiple compression algorithms: firstly, you have to know that in the case of lossless compression the optimal number of bits per symbol is equal to the entropy of the source. Arithmetic coding comes close to this optimum (see this, this or this). It means there is not much sense in using more than one algorithm one after another if one of them is arithmetic coding.
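As a rough sanity check, that entropy bound can be estimated directly from the symbol frequencies of the input. A minimal C++ sketch, assuming the data is already loaded into a byte buffer:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Order-0 entropy in bits per byte. Multiplying by the buffer size
    // gives a rough lower bound on what an entropy coder alone can reach.
    double entropy_bits_per_byte(const std::vector<unsigned char>& data) {
        std::uint64_t counts[256] = {0};
        for (unsigned char b : data) ++counts[b];

        double h = 0.0;
        for (std::uint64_t c : counts) {
            if (c == 0) continue;
            double p = static_cast<double>(c) / data.size();
            h -= p * std::log2(p);   // sum of -p * log2(p) over all symbols
        }
        return h;
    }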
As to arithmetic coding vs. Huffman codes: the latter is actually a special case of the former, and as far as I know arithmetic coding is always at least as good as Huffman codes.
It is also worth adding one thing. If you can consider lossy compression, there is actually no limit on the compression ratio. In other words, you can compress data as much as you want as long as the quality loss is still acceptable to you. However, even in this case using multiple compression algorithms is not required; one is enough.

Related

Which type of files can be compressed with Huffman coding?

I know that we use Huffman coding to compress .txt files. What I want to know is which other extensions can be compressed using Huffman coding. For example, can we compress .pdf, .xls, .jpg, .gif, or .mp4 files using Huffman coding?
In principle, you can compress any type of file with Huffman coding. Huffman coding works on the assumption that the input is a stream of symbols of some sort, and all files are represented as individual bytes, so any file is a valid input to a Huffman coder.
In practice, though, Huffman coding likely wouldn't work well for many other formats, for a number of reasons. First, many file formats (PDF, MP4, JPG, etc.) already employ some compression method to reduce their space usage, so hitting them with a secondary compressor isn't likely to do anything. Second, Huffman coding is based on the assumption that each symbol seen is sampled from some fixed probability distribution independently of any other symbol, and therefore doesn't do well when there are correlations between which symbols appear where. For example, a raw bitmap image is likely to have correlations between pixel colors and their neighboring pixels, but Huffman encoding can't take advantage of this.
That being said, Huffman coding is often used as one of many steps in various encoding algorithms. For example, if memory serves me correctly, bzip2 works by breaking the input into blocks, using the Burrows-Wheeler transform on each block, then using move-to-front coding, then run-length encoding, and then finally using Huffman encoding at the very end.
Hope this helps!

Why combine Huffman and LZ77?

I'm doing some reverse engineering on a Game Boy Advance game, and I noticed that the original developers wrote code that makes two system calls to decompress a level using Huffman and LZ77 (in this order).
But why use Huffman + LZ77? What's the advantage of this approach?
Using available libraries
It's possible that the developers are using DEFLATE (or some similar algorithm), simply to be able to re-use tested and debugged software rather than writing something new from scratch (and taking who-knows-how-long to test and fix all the quirky edge cases).
Why both Huffman and LZ77?
But why do DEFLATE, Zstandard, LZHAM, LZHUF, LZH, etc. use both Huffman and LZ77?
Because these 2 algorithms detect and remove 2 different kinds of redundancy common to many data files (video game levels, English and other natural-language text, etc.), and they can be combined to get better net compression than either one alone.
(Unfortunately, most data compression algorithms cannot be combined like this).
Details
In English, the 2 most common letters are (usually) 'e' and then 't'.
So what is the most common pair? You might guess "ee", "et", or "te" -- nope, it's "th".
LZ77 is good at detecting and compressing these kinds of common words and syllables that occur far more often than you might guess from the letter frequencies alone.
Letter-oriented Huffman is good at detecting and compressing files using the letter frequencies alone, but it cannot detect correlations between consecutive letters (common words and syllables).
LZ77 compresses an original file into an intermediate sequence of literal letters and "copy items".
Then Huffman further compresses that intermediate sequence.
Often those "copy items" are already much shorter than the original substring would have been if we had skipped the LZ77 step and simply Huffman compressed the original file.
And Huffman does just as well compressing the literal letters in the intermediate sequence as it would have done compressing those same letters in the original file.
So already this 2-step process creates smaller files than either algorithm alone.
As a bonus, typically the copy items are also Huffman compressed for more savings in storage space.
In general, most data compression software is made up of these 2 parts.
First they run the original data through a "transformation" or multiple transformations, also called "decorrelators", typically highly tuned to the particular kind of redundancy in the particular kind of data being compressed (JPEG's DCT transform, MPEG's motion-compensation, etc.) or tuned to the limitations of human perception (MP3's auditory masking, etc.).
Next they run the intermediate data through a single "entropy coder" (arithmetic coding, or Huffman coding, or asymmetric numeral system coding) that's pretty much the same for every kind of data.
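To make that two-part structure concrete, here is a heavily simplified C++ sketch of the first part: an LZ77-style pass that turns the input into the intermediate sequence of literal letters and copy items. The token layout and the brute-force match search are simplifications for illustration, not how DEFLATE actually does it; the resulting token stream is what the entropy coder would then compress.

    #include <cstddef>
    #include <string>
    #include <vector>

    // A token is either a literal byte or a "copy item"
    // (an offset/length reference into already-seen data).
    struct Token {
        bool is_copy;
        unsigned char literal;   // valid if !is_copy
        std::size_t offset;      // valid if is_copy
        std::size_t length;      // valid if is_copy
    };

    // Naive LZ77 tokenizer: at each position, look for the longest match
    // within the previous `window` bytes (brute force, quadratic time).
    std::vector<Token> lz77_tokens(const std::string& in,
                                   std::size_t window = 4096,
                                   std::size_t min_match = 3) {
        std::vector<Token> out;
        std::size_t i = 0;
        while (i < in.size()) {
            std::size_t best_len = 0, best_off = 0;
            std::size_t start = (i > window) ? i - window : 0;
            for (std::size_t j = start; j < i; ++j) {
                std::size_t len = 0;
                while (i + len < in.size() && in[j + len] == in[i + len]) ++len;
                if (len > best_len) { best_len = len; best_off = i - j; }
            }
            if (best_len >= min_match) {
                out.push_back({true, 0, best_off, best_len});                      // copy item
                i += best_len;
            } else {
                out.push_back({false, static_cast<unsigned char>(in[i]), 0, 0});   // literal
                ++i;
            }
        }
        return out;   // feed this stream to Huffman/arithmetic coding
    }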

Is there an algorithm for "perfect" compression?

Let me clarify: I'm not talking about perfect compression in the sense of an algorithm that is able to compress any given source material; I realize that is impossible. What I'm trying to get at is an algorithm that is able to encode any source string of bits to its absolute maximum compressed state, as determined by its Shannon entropy.
I believe I have heard that Huffman coding is in some sense optimal, so I suspect such an encoding scheme might be based on it, but here is my issue:
Consider the bit-strings: a = "101010101010", b = "110100011010".
Using plain Shannon entropy, these bit strings should have exactly the same entropy when we consider them simply as streams of 0's and 1's, but this approach is flawed, because we can intuitively see that bitstring a has less entropy than bitstring b, since it is simply a pattern of repeated 10's. With this in mind, we could get a better idea of the actual entropy of the source by calculating the Shannon entropy for the composite symbols 00, 10, 01, and 11.
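To make this concrete, here is a small C++ sketch that computes the order-0 entropy of a bit string when it is read as 1-bit symbols versus 2-bit symbols (using the strings a and b above):

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>

    // Order-0 entropy in bits per symbol, where each symbol is
    // k consecutive characters of the bit string.
    double entropy(const std::string& bits, std::size_t k) {
        std::map<std::string, std::size_t> counts;
        std::size_t n = 0;
        for (std::size_t i = 0; i + k <= bits.size(); i += k, ++n)
            ++counts[bits.substr(i, k)];
        double h = 0.0;
        for (const auto& kv : counts) {
            double p = static_cast<double>(kv.second) / n;
            h -= p * std::log2(p);
        }
        return h;
    }

    int main() {
        std::string a = "101010101010", b = "110100011010";
        std::cout << entropy(a, 1) << "\n";   // 1.0 (six 0's, six 1's)
        std::cout << entropy(a, 2) << "\n";   // 0.0 (only the symbol "10" ever occurs)
        std::cout << entropy(b, 1) << "\n";   // 1.0
        std::cout << entropy(b, 2) << "\n";   // ~1.92 (four distinct 2-bit symbols)
    }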
This is just my understanding, and I could be totally off base, but from what I understand, for an ergodic source of length n to be truly random, the statistical probabilities of all n-length groups of symbols must be equal.
To be more specific than the question in the title, I suppose I have three main questions:
1) Does Huffman encoding using single bits as symbols compress a bitstring like a optimally, even with the obvious pattern that appears when we analyze the string at the level of 2-bit symbols?
2) If not, could one optimally compress a source by cycling through different "levels" (sorry if I'm butchering the terminology here) of Huffman coding until the best compression rate is found?
3) Could going through different "rounds" of Huffman coding further increase the compression rate in some instances? E.g., first go through Huffman coding with symbols that are 5 bits long, then go through Huffman coding with symbols that are 4 bits long: huff_4bits(huff_5bits(bitstring))?
As stated by Mark, the general answer is "no", due to Kolmogorov complexity. Let me expand a bit on that.
Compression is basically two steps:
1) Model
2) Entropy
The role of the model is to "guess" the next bytes or fields to come.
A model can have any form, and there is no limit to its effectiveness.
A trivial example is a pseudo-random number generator: from an external perspective its output looks like noise, and therefore cannot be compressed. But if you know the generator function, an arbitrarily long sequence can be compressed into a small piece of code: the generator function itself.
That's why there is "no limit", and Kolmogorov complexity states exactly that: you can never guarantee that there is no better way to "model" the data.
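As a toy illustration of that point in C++: if the "model" (here an arbitrary xorshift generator, chosen purely for the example) is known to both sides, an arbitrarily long pseudo-random sequence can be stored as nothing more than a seed and a length, even though its output looks like incompressible noise to any entropy coder.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // The "compressed" representation: just the seed and the length.
    struct SeedAndLength {
        std::uint64_t seed;    // must be non-zero for xorshift
        std::size_t   length;
    };

    // A simple xorshift generator stands in for "the model".
    static std::uint64_t next(std::uint64_t& s) {
        s ^= s << 13; s ^= s >> 7; s ^= s << 17;
        return s;
    }

    // Regenerates the full byte sequence from the ~16-byte description.
    std::vector<unsigned char> decompress(const SeedAndLength& c) {
        std::vector<unsigned char> out;
        out.reserve(c.length);
        std::uint64_t s = c.seed;
        for (std::size_t i = 0; i < c.length; ++i)
            out.push_back(static_cast<unsigned char>(next(s) & 0xFF));
        return out;
    }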
The second part is computable: entropy is where you find the "Shannon limit".
Given a set of symbols (typically, the output symbols from the model), which are part of an alphabet, you can compute the optimal cost, and find a way to reach the proven ultimate compression limit, which is the Shannon limit.
Huffman is optimal with regard to the Shannon limit if you accept the restriction that each symbol must be encoded using an integer number of bits. This is a close but imperfect approximation. Better compression can be achieved by using fractional bits, which is what arithmetic coders offer, as does the more recent ANS-based Finite State Entropy coder. Both get much closer to the Shannon limit.
The Shannon limit only applies if you treat a set of symbols "individually". As soon as you try to "combine them", or find any correlations between the symbols, you are "modeling". And this is the territory of Kolmogorov Complexity, which is not computable.
No. It can be proven that there is not even an algorithm to determine how well a perfect compressor will do. See Kolmogorov Complexity.
Huffman coding (or arithmetic coding) by itself does not get close to the best compression. Other techniques need to be used to take advantage of higher order redundancies in the data.

Measure compression of Huffman Algorithm

I'm revamping my programming skills and implemented the Huffman algorithm. For now, I'm just considering [a-z] with no special characters. The probability values for a-z have been taken from Wikipedia.
When I run it, I get roughly 2x compression for random paragraphs.
But for this calculation I assume original letters require 8 bits each (ASCII).
But if I think about it, to represent 26 items I just need 5 bits. If I calculate based on this fact, then the compression factor drops to almost 1.1.
So my question is, how is the compression factor determined in real world applications?
Second question: if I write an encoder/decoder which uses 5 bits for representing a-z (say a=0, b=1, etc.), is this also considered a valid "compression" algorithm?
You have essentially the right answer, which is that you can't expect a lot of compression if all that you're working with is the letter frequencies of the English language.
The correct way to calculate the gain resulting from knowledge of the letter frequencies is to compare the entropy of a 26-symbol alphabet with equal probabilities to the entropy of the actual letter frequencies in English.
(I wish stackoverflow allowed TeX equations like math.stackexchange.com does. Then I could write decent equations here. Oh well.)
The key formula is -p log(p), where p is the probability of that symbol and the log is in base 2 to get the answer in bits. You calculate this for each symbol and then sum over all symbols.
Then, in an ideal arithmetic coding scheme, an equiprobable set of 26 symbols would be coded at 4.70 bits per symbol. For the distribution in English (using the probabilities from the Wikipedia article), we get 4.18 bits per symbol. A reduction of only about 11%.
So that's all the frequency bias by itself can buy you. (It buys you a lot more in Scrabble scores, but I digress.)
We can also look at the same thing in the approximate space of Huffman coding, where each code is an integral number of bits. In this case you would not assume five bits per letter (with six codes wasted). Applying Huffman coding to 26 symbols of equal probability gives six codes that are four bits in length and 20 codes that are five bits in length. This results in 4.77 bits per letter on average. Huffman coding using the letter frequencies occurring in English gives an average of 4.21 bits per letter. A reduction of 12%, which is about the same as the entropy calculation.
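Those code lengths are easy to reproduce. Here is a minimal C++ sketch that derives Huffman code lengths (not the codes themselves) from a set of weights; feeding it 26 equal weights gives the six 4-bit and twenty 5-bit codes mentioned above:

    #include <iostream>
    #include <queue>
    #include <vector>

    // Returns the Huffman code length of each symbol, given its weight.
    std::vector<int> huffman_lengths(const std::vector<double>& weights) {
        struct Node { double w; std::vector<int> leaves; };
        auto cmp = [](const Node& a, const Node& b) { return a.w > b.w; };
        std::priority_queue<Node, std::vector<Node>, decltype(cmp)> pq(cmp);

        for (int i = 0; i < static_cast<int>(weights.size()); ++i)
            pq.push({weights[i], {i}});

        std::vector<int> len(weights.size(), 0);
        while (pq.size() > 1) {
            Node a = pq.top(); pq.pop();
            Node b = pq.top(); pq.pop();
            Node merged{a.w + b.w, {}};
            // Every leaf under the two merged nodes moves one level deeper.
            for (int s : a.leaves) { ++len[s]; merged.leaves.push_back(s); }
            for (int s : b.leaves) { ++len[s]; merged.leaves.push_back(s); }
            pq.push(merged);
        }
        return len;
    }

    int main() {
        std::vector<int> len = huffman_lengths(std::vector<double>(26, 1.0));
        int total = 0;
        for (int l : len) total += l;
        std::cout << static_cast<double>(total) / 26 << " bits per letter\n";  // ~4.77
    }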
There are many ways that real compressors do much better than this.
First, they code what is actually in the file, using the frequencies of what's there instead of what they are across the English language. This makes it language independent, optimizes for the actual contents, and doesn't even code symbols that are not present.
Second, you can break up the input into pieces and make a new code for each. If the pieces are big enough, then the overhead of transmitting a new code is small, and the gain is usually larger to optimize on a smaller chunk.
Third, you can look for higher order effects. Instead of the frequency of single letters, you can take into account the previous letter and look at the probability of the next letter given its predecessor. Now you have 26^2 probabilities (for just letters) to track. These can also be generated dynamically for the actual data at hand, but now you need more data to get a gain, more memory, and more time. You can go to third order, fourth order, etc. for even greater compression performance at the cost of memory and time.
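For the third point, an order-1 model is just a table of conditional counts. A minimal C++ sketch of collecting those 26^2 counts, assuming the input has already been reduced to the letters a-z:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <string>

    // counts[prev][next] = how often `next` followed `prev`.
    // Normalising each row gives P(next | prev), an order-1 model that an
    // entropy coder can use instead of the single-letter frequencies.
    std::array<std::array<std::uint64_t, 26>, 26>
    order1_counts(const std::string& text) {
        std::array<std::array<std::uint64_t, 26>, 26> counts{};  // zero-initialised
        for (std::size_t i = 1; i < text.size(); ++i)
            ++counts[text[i - 1] - 'a'][text[i] - 'a'];
        return counts;
    }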
There are other schemes to pre-process the data by, for example, doing run-length encoding, looking for matching strings, applying block transforms, tokenizing XML, delta-coding audio or images, etc., etc. to further expose redundancies for an entropy coder to then take advantage of. I alluded to arithmetic coding, which can be used instead of Huffman to code very probable symbols in less than a bit and all symbols to fractional bit accuracy for better performance in the entropy step.
Back to your question of what constitutes compression: you can begin with any starting point you like, e.g. one eight-bit byte per letter, make assertions about your input, e.g. all lower case letters (accepting that if the assertion is false, the scheme fails), and then assess the compression effectiveness, so long as you use the same assumptions when comparing two different compression schemes. You must be careful that anything that is data dependent must also be considered part of the compressed data. E.g. a custom Huffman code derived from a block of data must be sent with that block of data.
If you ran an unrestricted Huffman-coding compression on the same text you'd get the same result, so I think it's reasonable to say that you're getting 2x compression over an ASCII encoding of the same text. I would be more inclined to say that your program is getting the expected compression, but currently has a limitation that it can't handle arbitrary input, and that other, simpler compression schemes would get compression over ASCII as well if that limitation is in place.
Why not extend your algorithm to handle arbitrary byte values? That way it's easier to make a true heads-up comparison.
It's not 5 bits for 26 characters, it's log(26) / log(2) = 4.7 bits. This is the maximum entropy, but you need to know the specific entropy. For the German language it's 4.0629. When you know that, you can use the formula R = Hmax - H. Look here: http://de.wikipedia.org/wiki/Entropie_(Informationstheorie)
http://en.wikipedia.org/wiki/Information_theory#Entropy

What's the best entropy encoding scheme to compress symbols with a known probability distribution?

I'm looking to encode user_ids in a long list of call records. The parts of these records that take up the most space are the symbols for the caller and receiver. I will create a map that assigns the most active callers shorter symbols; this will help keep the overall size of the files (and therefore the I/O time) down.
I know in advance how many times each symbol will be used; in other words, I know the relative probability distribution. Furthermore, it is not important that the codes produced be "prefix free", as Huffman codes are. So what's the best encoding scheme, i.e., the one that will deliver the most compression and for which a quick implementation exists?
An answer should not only point to a compression scheme, it should also point to an implementation of that encoding scheme.
For general-purpose lossless encoding with a known probability distribution, aside from Huffman coding, the other "textbook" answer is arithmetic coding.
In practice, there are a variety of implementations. See these general-purpose coders. Each has different properties. Without further information, we can't give you a more precise answer.
@conradlee: re "In what cases is arithmetic coding better than Huffman coding?" In terms of compression, nearly always. If you have a symbol S with probability Ps, then the ideal number of bits to code it with, bs, is -log(Ps)/log(2). For example, if Ps is 1/3 then bs is ~1.585 bits. With Huffman you have to round up or down to the nearest whole number of bits (so the compression ratio will decrease). Arithmetic encoding will store it with a fractional number of bits.
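As a concrete instance of that rounding loss, take an alphabet of three equiprobable symbols (each with Ps = 1/3, as above); a quick comparison of the two averages:

    #include <cmath>
    #include <iostream>

    int main() {
        // Three equiprobable symbols, each with probability 1/3.
        double ideal   = -std::log2(1.0 / 3.0);   // ~1.585 bits/symbol, the arithmetic-coding target
        double huffman = (1 + 2 + 2) / 3.0;       // Huffman assigns code lengths {1, 2, 2}
        std::cout << ideal << " vs " << huffman << " bits per symbol\n";  // 1.58496 vs 1.66667
    }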