RLE Encoding bit sequence, not bytes - compression

I need to implement a compression algorithm for binary data that needs to work on constrained embedded devices (256 KB ROM, 48 KB RAM).
I'm thinking of the RLE compression algorithm. Rather than implementing it from scratch, I've looked at a lot of C implementations (for example: http://sourceforge.net/projects/bcl/?source=typ_redirect ), but they apply the RLE algorithm over the byte sequence (the dictionary words are 1 to 255, that is, 8-bit encoding).
I'm looking for an implementation that, starting from a sequence of bytes, applies RLE encoding over the corresponding bit sequence (0s and 1s). Note that another algorithm could also work (I need a compression ratio < 0.9, so I think almost any algorithm can achieve it), but the implementation needs to work on a bit basis, not bytes.
Can anyone help me? Thank you!

I think you can encode bytes such as 0, 1, 2, 3, 255, etc. (where there are lots of consecutive 0s and 1s).
Let's encode this bit sequence:
000000011111110
1. Shift bits in and increment a counter while the current bit equals the previous bit.
2. When the run ends, write control bit 1 followed by the run length (e.g., 111 for a run of seven) and the repeated bit to the output buffer.
3. If bits cannot be packed into a run, write control bit 0 followed by the literal bit.
Output:
111101111100 (that is, 1 111 0 for seven 0s, 1 111 1 for seven 1s, and 0 0 for the final literal 0)
To decompress, simply read the first control bit:
If 0, copy the next bit to the output buffer.
If 1, read 3 bits (the length field can be wider) and convert them to a number, then read the next bit, which tells you which bit to repeat, and loop to reproduce the original run.
But this compression method will only work well on files that have lots of 0 and 255 bytes (00000000 and 11111111 in binary), such as BMP files with a black or white background.
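A minimal C sketch of this scheme (the helper names are mine; runs are capped at 7 because the count field is 3 bits, and note a run of exactly 2 costs 5 bits versus 4 bits as two literals):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t *buf;      /* buffer indexed bit by bit, MSB first */
    size_t   bitpos;   /* next bit index to write */
} bitstream;

static void put_bit(bitstream *s, int b) {
    uint8_t mask = 0x80 >> (s->bitpos & 7);
    if (b) s->buf[s->bitpos >> 3] |= mask;
    else   s->buf[s->bitpos >> 3] &= ~mask;
    s->bitpos++;
}

static int get_bit(const uint8_t *buf, size_t i) {
    return (buf[i >> 3] >> (7 - (i & 7))) & 1;
}

/* Encode nbits bits from in[] into out[]; returns the encoded length in bits. */
size_t rle_encode_bits(const uint8_t *in, size_t nbits, uint8_t *out) {
    bitstream s = { out, 0 };
    size_t i = 0;
    while (i < nbits) {
        int bit = get_bit(in, i);
        size_t run = 1;
        while (i + run < nbits && run < 7 && get_bit(in, i + run) == bit)
            run++;
        if (run > 1) {                      /* control 1 + 3-bit count + repeated bit */
            put_bit(&s, 1);
            put_bit(&s, (run >> 2) & 1);
            put_bit(&s, (run >> 1) & 1);
            put_bit(&s, run & 1);
            put_bit(&s, bit);
        } else {                            /* control 0 + literal bit */
            put_bit(&s, 0);
            put_bit(&s, bit);
        }
        i += run;
    }
    return s.bitpos;
}

/* Decode nbits encoded bits from in[] into out[]; returns the decoded length in bits. */
size_t rle_decode_bits(const uint8_t *in, size_t nbits, uint8_t *out) {
    bitstream s = { out, 0 };
    size_t i = 0;
    while (i + 1 < nbits) {
        if (get_bit(in, i++)) {             /* run: 3-bit count, then the bit to repeat */
            int run = (get_bit(in, i) << 2) | (get_bit(in, i + 1) << 1) | get_bit(in, i + 2);
            i += 3;
            int bit = get_bit(in, i++);
            while (run--) put_bit(&s, bit);
        } else {                            /* literal bit */
            put_bit(&s, get_bit(in, i++));
        }
    }
    return s.bitpos;
}

Feeding it the 15-bit example above produces exactly the 12-bit output shown.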
Hope I helped you!

Related

Microsoft WASAPI - Do different audio formats need different data in the buffer, for the same wave

Let's say I want to play a sine wave using WASAPI.
Will the data I enter into the AudioClient buffer always just be samples between -1 and 1, or will it differ between the PCM and IEEE_Float formats, and other formats for that matter?
Thanks.
Right now I'm just using 1 to -1, but I want to know whether or not I need to write my buffer input code differently for each format.
MEDIASUBTYPE_IEEE_FLOAT / WAVE_FORMAT_IEEE_FLOAT audio types operate with floating-point values in the [-1, +1] range.
MEDIASUBTYPE_PCM / WAVE_FORMAT_PCM has integer values: 8-bit samples are stored as unsigned bytes, ranging from 0 to 255, and 16-bit samples are stored as two's-complement signed integers, ranging from -32768 to 32767.
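So the same [-1, 1] float sample has to be rescaled per format. A small illustrative sketch (the function names are mine, and this only shows the sample layouts, not any WASAPI calls):

#include <stdint.h>

/* float in [-1, 1] -> 16-bit signed PCM */
int16_t float_to_pcm16(float f) {
    if (f >  1.0f) f =  1.0f;       /* clamp to the valid range */
    if (f < -1.0f) f = -1.0f;
    return (int16_t)(f * 32767.0f);
}

/* float in [-1, 1] -> 8-bit PCM, which is unsigned and centered on 128 */
uint8_t float_to_pcm8(float f) {
    if (f >  1.0f) f =  1.0f;
    if (f < -1.0f) f = -1.0f;
    return (uint8_t)((f * 0.5f + 0.5f) * 255.0f);
}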
You will also find good references here: How to handle asymmetry of WAV data?.

Ultrafast 2x lossy audio/image compression algorithm?

I'm looking for an audio or image compression algorithm that can compress a torrent of 16-bit samples
- by a fairly predictable amount (2-3x)
- at very high speed (say, 60 cycles per sample at most: >100 MB/s)
- with lossiness being acceptable but, of course, undesirable
My data has characteristics of images and audio (2-dimensional, correlated in both dimensions and audiolike in one dimension) so algorithms for audio or images might both be appropriate.
An obvious thing to try would be this one-dimensional algorithm:
1. Break up the data into segments of 64 samples.
2. Measure the range of values among those samples (as an example, the samples might be between 3101 and 9779 in one segment, a difference of 6678).
3. Use 2 to 4 additional bytes to encode the range.
4. Linearly downsample each 16-bit sample in that segment to 8 bits.
For example, I could store 3101 in 16 bits and a scaling factor ceil(6678/256) = 27 in 8 bits, then convert each 16-bit sample to 8 bits as s8 = (s16 - base) / scale, where base = 3101 + (27 >> 1) and scale = 27, with the obvious decompression "algorithm" of s16 = s8 * 27 + 3101. Compression ratio: 128/67 = 1.91.
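As a sketch, that per-segment quantizer might look like this in C (folding the half-scale rounding offset into the quantization step, per the base adjustment above):

#include <stdint.h>

/* Quantize one 64-sample segment: 16-bit base + 8-bit scale + 64 bytes out. */
void compress_block(const int16_t in[64], uint8_t out[64],
                    int16_t *base, uint8_t *scale) {
    int16_t lo = in[0], hi = in[0];
    for (int i = 1; i < 64; i++) {          /* find the segment's range */
        if (in[i] < lo) lo = in[i];
        if (in[i] > hi) hi = in[i];
    }
    int range = hi - lo;
    int s = range / 256 + 1;                /* 27 for the example above; never 0 */
    *base = lo;
    *scale = (uint8_t)s;
    for (int i = 0; i < 64; i++) {          /* round to the nearest step */
        int q = (in[i] - lo + (s >> 1)) / s;
        if (q > 255) q = 255;               /* guard the top edge case */
        out[i] = (uint8_t)q;
    }
}

void decompress_block(const uint8_t in[64], int16_t out[64],
                      int16_t base, uint8_t scale) {
    for (int i = 0; i < 64; i++)
        out[i] = (int16_t)(in[i] * scale + base);
}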
I've played with some ideas to avoid the division operation, but hasn't someone by now invented a superfast algorithm that could preserve fidelity better than this one?
Note: this page says that FLAC compresses at 22 million samples per second (44 MB/s) at -q6, which is pretty darn good (assuming its implementation is still single-threaded), if not quite enough for my application. Another page says FLAC's performance (40 MB/s on a 3.4 GHz i3-3240 at -q5) is similar to that of 3 other codecs, depending on quality level.
Take a look at the PNG filters for examples of how to tease out your correlations. The most obvious filter is "sub", which simply subtracts successive samples. The differences should be more clustered around zero. You can then run that through a fast compressor like lz4. Other filter choices may result in even better clustering around zero, if they can find advantage in the correlations in your other dimension.
For lossy compression, you can decimate the differences before compressing them, dropping a few low bits until you get the compression you want, and still retain the character of the data that you would like to preserve.
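A sketch of that idea for 16-bit samples: a "sub" filter combined with dropping low bits, done closed-loop so the quantization error does not accumulate during decoding (the names are mine; the filtered output would then be handed to lz4 or similar):

#include <stdint.h>
#include <stddef.h>

/* Delta-filter and decimate: differences cluster near zero, which compresses
 * well; drop_bits controls the lossiness (0 = lossless sub filter).
 * Assumes the shifted differences fit in 16 bits. */
void sub_filter_decimate(const int16_t *in, int16_t *out,
                         size_t n, int drop_bits) {
    int16_t recon = 0;                          /* the decoder's view of the last sample */
    for (size_t i = 0; i < n; i++) {
        int16_t diff = (int16_t)((in[i] - recon) >> drop_bits);
        out[i] = diff;
        recon += (int16_t)(diff << drop_bits);  /* track what the decoder reconstructs */
    }
}

void sub_filter_undo(const int16_t *in, int16_t *out,
                     size_t n, int drop_bits) {
    int16_t recon = 0;
    for (size_t i = 0; i < n; i++) {
        recon += (int16_t)(in[i] << drop_bits); /* accumulate the differences */
        out[i] = recon;
    }
}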

Advanced Data Compression

My problem is related to data compression: I have 17 bytes of data, and each byte has 5 bits of data and 3 unused bits. I can't figure out how to pack this data so that the unused 3 bits are filled starting from byte number 0, leaving a few of the bytes towards the end unused and hence reducing space. Any ideas on how to proceed? Please note: a small chunk of code would also be helpful.
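A sketch in C, assuming the 5 useful bits sit in the low bits of each input byte: 17 x 5 = 85 bits, which packs into 11 bytes instead of 17.

#include <stdint.h>
#include <string.h>

void pack5(const uint8_t in[17], uint8_t out[11]) {
    memset(out, 0, 11);
    size_t bitpos = 0;
    for (int i = 0; i < 17; i++) {
        uint8_t v = in[i] & 0x1F;            /* keep the 5 data bits */
        for (int b = 4; b >= 0; b--) {       /* append them MSB-first */
            if ((v >> b) & 1)
                out[bitpos >> 3] |= 0x80 >> (bitpos & 7);
            bitpos++;
        }
    }
}

void unpack5(const uint8_t in[11], uint8_t out[17]) {
    size_t bitpos = 0;
    for (int i = 0; i < 17; i++) {
        uint8_t v = 0;
        for (int b = 0; b < 5; b++) {        /* read back 5 bits per value */
            v = (uint8_t)((v << 1) | ((in[bitpos >> 3] >> (7 - (bitpos & 7))) & 1));
            bitpos++;
        }
        out[i] = v;
    }
}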

Deinterleaving PCM (*.wav) stereo audio data

I understand that PCM data is stored as [left][right][left][right].... I'm trying to convert stereo PCM to mono Vorbis (*.ogg), which I understand is achievable by averaging the left and right channels ((left+right)*0.5). I have actually achieved this by amending the encoder example in the libvorbis SDK like this,
#define READ 1024
signed char readbuffer[READ*4];
and the PCM data is read thus
fread(readbuffer, 1, READ*4, stdin)
I then averaged the two channels,
buffer[0][i] = ((((readbuffer[i*4+1]<<8) | (0x00ff&(int)readbuffer[i*4]))/32768.f) + (((readbuffer[i*4+3]<<8) | (0x00ff&(int)readbuffer[i*4+2]))/32768.f)) * 0.5f;
It worked perfectly, but I don't understand how the left and right channels are deinterleaved from the PCM data (i.e., all the bit shifting, ANDing, and ORing).
A .wav file typically stores its PCM data in little endian format, with 16 bits per sample per channel. For the usual signed 16-bit PCM file, this means that the data is physically stored as
[LEFT LSB] [LEFT MSB] [RIGHT LSB] [RIGHT MSB] ...
so that every group of 4 bytes makes up a single stereo PCM sample. Hence, you can find sample i by looking at bytes 4*i through 4*i+3, inclusive.
To decode a single 16-bit value from two bytes, you do this:
(MSB << 8) | LSB
Because your read buffer values are stored as signed chars, you have to be a bit careful because both MSB and LSB will be sign-extended. This is undesirable for the LSB; therefore, the code uses
0xff & (int)LSB
to obtain the unsigned version of the low byte (technically, this works by upcasting to an int, and selecting the low 8 bits; an alternate formulation would be to just write (uint8_t)LSB).
Note that the MSBs are at indices 1 and 3, and the LSBs are at indices 0 and 2. So,
((readbuffer[i*4+1]<<8) | (0x00ff&(int)readbuffer[i*4]))
and
((readbuffer[i*4+3]<<8) | (0x00ff&(int)readbuffer[i*4+2]))
are just obtaining the values of the left and right channels as 16-bit signed values by using some bit manipulation to assemble the bytes into numbers.
Then, each of these values is divided by 32768.0. Note that a signed 16-bit value has a range of [-32768, 32767]. Thus, dividing by 32768 gives a range of approximately [-1, 1]. The two divided values are added to give a number in the range [-2, 2], and then the whole thing is multiplied by 0.5 to obtain the average (a floating-point value in the range [-1, 1]).
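Unpacking that one-liner into named steps may make the flow clearer (these helper names are mine, not part of the libvorbis example):

#include <stdint.h>

/* Assemble one little-endian signed 16-bit sample from two bytes. */
static int16_t le16(const signed char *p) {
    /* p[0] is the LSB (masked to avoid sign extension), p[1] the MSB */
    return (int16_t)((p[1] << 8) | (0xff & (int)p[0]));
}

/* One 4-byte stereo frame -> one mono float sample in [-1, 1]. */
float frame_to_mono(const signed char *frame) {
    float left  = le16(frame)     / 32768.f;   /* bytes 0-1 */
    float right = le16(frame + 2) / 32768.f;   /* bytes 2-3 */
    return (left + right) * 0.5f;              /* average the channels */
}

With this, the loop body becomes buffer[0][i] = frame_to_mono(&readbuffer[i*4]);, which computes the same value as the original expression.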

compact representation and delivery of point data

I have an array of point data; the points are represented as an x coordinate and a y coordinate.
There could be in the range of 500 up to 2000 points or more.
The data represents a motion path which can range from the simple to the very complex and can also have cusps in it.
Can I represent this data as one spline, a collection of splines, or some other format with very tight compression?
I have tried representing them as a collection of Béziers, but at best I am getting a saving of 40%.
For instance, if I have an array of 500 points, that gives me 500 x values and 500 y values, so I have 1000 pieces of data.
I get around 100 quadratic Béziers from this. Each Bézier is represented as controlx, controly, anchorx, anchory,
which gives me 100 x 4 = 400 pieces of data.
So input = 1000 pieces, output = 400 pieces.
I would like to tighten this further; any suggestions?
By its nature, a spline is an approximation. You can reduce the number of splines you use to reach a higher compression ratio.
You can also achieve lossless compression by using some kind of encoding scheme. I am just making this up as I am typing, using the range example from another answer (1000 for x and 400 for y).
Each point only needs 19 bits (10 for x, 9 for y). You can use 3 bytes to represent a coordinate.
Use 2 bytes to represent a displacement up to +/- 63.
Use 1 byte to represent a short displacement up to +/- 7 for x and +/- 3 for y.
To decode the sequence properly, you would need some prefix to identify the type of encoding. Let's say we use 110 for full point, 10 for displacement and 0 for short displacement.
The bit layout will look like this,
Coordinates: 110xxxxxxxxxxxyyyyyyyyyy
Displacement: 10xxxxxxxyyyyyyy
Short Displacement: 0xxxxyyy
Unless your sequence is totally random, you can easily achieve high compression ratio with this scheme.
Let's see how it works using a short example.
3 points: A(500, 400), B(550, 380), C(545, 381)
Let's say you were using 2 bytes for each value. It would take 12 bytes to encode this without compression.
To encode the sequence using the compression scheme,
A is first point so full coordinate will be used. 3 bytes.
B's displacement from A is (50, -20) and can be encoded as displacement. 2 bytes.
C's displacement from B is (-5, 1) and it fits the range of short displacement 1 byte.
So you save 6 bytes out of 12. The real compression ratio depends entirely on the data pattern. It works best on points forming a moving path. If the points are random, only a 25% saving can be achieved.
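A sketch of an encoder for this layout in C (bit widths per the diagram above; put_bits is my own helper and assumes a zero-initialized output buffer):

#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t *buf; size_t bitpos; } bitwriter;

/* Append the low n bits of v, most significant first. */
static void put_bits(bitwriter *w, uint32_t v, int n) {
    while (n--) {
        if ((v >> n) & 1)
            w->buf[w->bitpos >> 3] |= 0x80 >> (w->bitpos & 7);
        w->bitpos++;
    }
}

void encode_points(const int *x, const int *y, size_t n, bitwriter *w) {
    int px = 0, py = 0;
    for (size_t i = 0; i < n; i++) {
        int dx = x[i] - px, dy = y[i] - py;
        if (i > 0 && dx >= -8 && dx <= 7 && dy >= -4 && dy <= 3) {
            put_bits(w, 0, 1);                    /* prefix 0: short displacement */
            put_bits(w, (uint32_t)dx & 0x0F, 4);  /* 4-bit two's complement */
            put_bits(w, (uint32_t)dy & 0x07, 3);  /* 3-bit two's complement */
        } else if (i > 0 && dx >= -64 && dx <= 63 && dy >= -64 && dy <= 63) {
            put_bits(w, 2, 2);                    /* prefix 10: displacement */
            put_bits(w, (uint32_t)dx & 0x7F, 7);
            put_bits(w, (uint32_t)dy & 0x7F, 7);
        } else {
            put_bits(w, 6, 3);                    /* prefix 110: full coordinate */
            put_bits(w, (uint32_t)x[i], 11);      /* widths padded to fill 3 bytes */
            put_bits(w, (uint32_t)y[i], 10);
        }
        px = x[i]; py = y[i];
    }
}

Running it on the three points above emits exactly the 3 + 2 + 1 = 6 bytes worked out in the example.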
If, for example, you use 32-bit integers for point coordinates and there is a range limit, like x: 0..1000, y: 0..400, you can pack (x, y) into a single 32-bit variable.
That way you achieve another 50% compression.
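For instance (a 16/16 split for simplicity, which is plenty for the 0..1000 and 0..400 ranges; the names are mine):

#include <stdint.h>

uint32_t pack_xy(uint16_t x, uint16_t y)  { return ((uint32_t)x << 16) | y; }

void unpack_xy(uint32_t p, uint16_t *x, uint16_t *y) {
    *x = (uint16_t)(p >> 16);
    *y = (uint16_t)(p & 0xFFFF);
}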
You could do a frequency analysis of the numbers you are trying to encode and use varying bit lengths to represent them; of course, here I am vaguely describing Huffman coding.
Firstly, keep only as many decimal places in your data as you actually need. Removing them reduces your accuracy, but it's a calculated loss. To do that, try converting your number to a string, locating the dot's position, and cutting off that many characters from the end; that could process faster than math, IMO. Lastly, you can convert it back to a number.
150.234636746 -> "150.234636746" -> "150.23" -> 150.23
Secondly, try storing your data relative to the previous number ("relative values"): basically, subtract the previous number from the current one. Later, to "decompress" it, you can keep an accumulator variable and add the values back up.
 A    A    A        A   R   R
150, 200, 250  ->  150, 50, 50
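A sketch of both halves of that delta scheme:

#include <stddef.h>

/* Delta-encode in place: keep the first value, store differences after it. */
void delta_encode(int *v, size_t n) {
    if (n < 2) return;
    for (size_t i = n - 1; i > 0; i--)
        v[i] -= v[i - 1];
}

/* Decode in place by accumulating the differences back up. */
void delta_decode(int *v, size_t n) {
    for (size_t i = 1; i < n; i++)
        v[i] += v[i - 1];
}

delta_encode on {150, 200, 250} yields {150, 50, 50}, matching the example above.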