I used the RLE algorithm for my binary data compression project, but it only gives me a compression ratio of 16%. How can I improve this ratio by improving the RLE algorithm?
If anyone can recommend better algorithms for binary data compression, please help!
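For reference, a plain byte-oriented RLE looks roughly like the Python sketch below; all names here are illustrative, not taken from your project. One common improvement is an escape-based variant that emits literal stretches for non-repeating data, so incompressible input does not blow up.

    def rle_encode(data: bytes) -> bytes:
        """Minimal byte-oriented RLE emitting (count, value) pairs.
        Illustrative only: a better variant would emit literals for
        short runs instead of paying 2 bytes per single byte."""
        out = bytearray()
        i = 0
        while i < len(data):
            value = data[i]
            run = 1
            while i + run < len(data) and data[i + run] == value and run < 255:
                run += 1
            out += bytes([run, value])
            i += run
        return bytes(out)

    def rle_decode(encoded: bytes) -> bytes:
        out = bytearray()
        for i in range(0, len(encoded), 2):
            count, value = encoded[i], encoded[i + 1]
            out += bytes([value]) * count
        return bytes(out)

    assert rle_decode(rle_encode(b"aaaabbbcc")) == b"aaaabbbcc"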
I have a few questions on JPEG compression.
On my Windows system I have some image processing applications, for example Windows msPaint, which provides an option to convert a BMP image to JPEG format.
Can anybody please tell me what JPEG compression msPaint is using here: is it lossy or lossless?
And if somebody refers to "JPEG Standard compression", which compression is it using internally: lossy or lossless?
Thanks in advance.
Alvin
JPEG is a family of related compression techniques. There is a lossless JPEG, but it is generally relegated to 12-bit, medical applications.
Any JPEG that you are likely to use creates loss. This occurs at several steps.
The transformation from RGB to YCbCr. The two color spaces intersect but do not have the same gamut of colors; RGB colors outside of YCbCr get clamped into range. Also, the transformation from RGB to YCbCr is a floating-point operation that produces integer values, so there are rounding errors.
The Discrete Cosine Transform is usually performed on the data using scaled integers. This introduces small rounding errors. Even if you do it in floating point there will be some small errors, and the values have to be rounded to integers for the final output.
Quantization is the big one. This divides the DCT output by integer values. You can eliminate rounding at this step by making all the quantization values 1.
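To make the quantization loss concrete, here is a hedged Python sketch of a single 8x8 block round trip, assuming numpy and scipy are available (scipy's DCT stands in for a real JPEG codec). The table is the example luminance quantization table from Annex K of the JPEG standard; setting every entry to 1 reduces the error to pure rounding noise, as noted above.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(block):
        return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    # Example luminance quantization table from Annex K of the JPEG standard.
    Q = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    pixels = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
    coeffs = dct2(pixels - 128)                 # level shift, then DCT
    quantized = np.round(coeffs / Q)            # this rounding is the lossy step
    reconstructed = idct2(quantized * Q) + 128  # dequantize and invert

    print("max pixel error:", np.abs(reconstructed - pixels).max())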
JPEG compression is considered a lossy compression because it is not possible to rebuild the exact original binary data from the compressed file through decompression.
Even at the highest quality setting, JPEG works by discarding data. You control the quality to trade off what you consider an acceptable loss while still having a fair representation of your image. Although data is lost, the result may still look identical to the untrained eye, and that is the point. It is the same thing MiniDisc used to do for audio.
The intent of JPEG is to make photographic images smaller in file size for internet transmission. You get to decide how small, but if you want absolute quality, a format like TIFF is better suited.
Incidentally, TIFF offers lossless compression, but the file sizes are still massive!
One more thing: if you take a 300 x 500 bitmap, convert it to JPEG, and then convert it back, the file size will still be the same, because a bitmap stores a fixed number of bits per pixel. But the contents of the file will be quite different. In this regard it might naively be viewed as lossless, but in practical terms it is far from it.
I want to compress an image using run-length coding and Huffman coding.
How do I draw the appropriate Huffman coding diagram to get the new codes for the image compression?
What compression ratio can be achieved with these techniques? Assume that the 16 gray levels are coded into 4 bits.
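For what it's worth, here is a hedged Python sketch of how the Huffman codes themselves can be built; the gray-level histogram is made up purely for illustration, and the compression ratio then follows from comparing the average code length against the fixed 4 bits per pixel.

    import heapq

    def huffman_codes(frequencies):
        """Build prefix codes (symbol -> bit string) from a {symbol: count} map."""
        # Heap entries: (weight, tie_breaker, [(symbol, code), ...])
        heap = [(w, i, [(sym, "")]) for i, (sym, w) in enumerate(frequencies.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            w1, _, left = heapq.heappop(heap)
            w2, _, right = heapq.heappop(heap)
            merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
            heapq.heappush(heap, (w1 + w2, counter, merged))
            counter += 1
        return dict(heap[0][2])

    # Hypothetical histogram over 16 gray levels (counts are made up).
    counts = [40, 30, 20, 20, 10, 8, 8, 6, 5, 4, 3, 2, 2, 1, 1, 1]
    hist = {gray: n for gray, n in enumerate(counts)}
    codes = huffman_codes(hist)

    total = sum(hist.values())
    avg_bits = sum(hist[g] * len(codes[g]) for g in hist) / total
    print(f"average code length: {avg_bits:.2f} bits vs. 4 bits fixed")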
I am trying to implement a steganographic algorithm where the hidden message can survive JPEG compression.
The typical scenario is the following:
Hide data in the image
Compress the image using JPEG
The hidden data is not destroyed by the JPEG compression and can be restored
I have tried various published algorithms, but with no success. For example, I tried a simple repetition code, but the JPEG compression destroyed the hidden data. I also tried to implement the algorithms described in the following articles:
http://nas.takming.edu.tw/chkao/lncs2001.pdf
http://www.securiteinfo.com/ebooks/palm/irvine-stega-jpg.pdf
Do you know of any algorithm that can actually survive JPEG compression?
You can hide the data in the frequency domain. JPEG stores information using the DCT (Discrete Cosine Transform) of every 8x8 pixel block, and the information that best survives compression is carried by the low-frequency coefficients of that block. The lossy step happens when the small high-frequency coefficients are rounded to 0 after quantization of the block; those zeros end up in the lower-right part of the coefficient matrix, which is why the compression works and why information is lost.
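As a rough illustration of that idea (not a robust scheme by itself), the Python sketch below embeds one bit per 8x8 block by forcing the parity of a quantized low-frequency DCT coefficient. scipy's DCT stands in for a real JPEG codec, and the coefficient position and quantization step are assumptions made up for this example.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):  return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
    def idct2(b): return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

    QSTEP = 16    # assumed quantization step for the carrier coefficient
    POS = (2, 1)  # assumed low-frequency coefficient used as carrier

    def embed_bit(block, bit):
        """Force the parity of one quantized DCT coefficient to carry `bit`."""
        c = dct2(block.astype(float) - 128)
        q = int(round(c[POS] / QSTEP))
        if q % 2 != bit:
            q += 1 if q >= 0 else -1  # nudge to the wanted parity
        c[POS] = q * QSTEP
        return np.clip(idct2(c) + 128, 0, 255).round()

    def extract_bit(block):
        c = dct2(block.astype(float) - 128)
        return int(round(c[POS] / QSTEP)) % 2

    # Mid-range pixels so clipping plays no role in this toy example. The bit
    # survives as long as later quantization of this coefficient is no coarser
    # than QSTEP.
    block = np.random.default_rng(1).integers(16, 240, (8, 8))
    print(extract_bit(embed_bit(block, 1)))  # prints 1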
Quite a few applications seem to implement Steganography on JPEG, so it's feasible:
http://www.jjtc.com/Steganography/toolmatrix.htm
Here's an article regarding a relevant algorithm (PM1) to get you started:
http://link.springer.com/article/10.1007%2Fs00500-008-0327-7#page-1
Perhaps this answer is late, but...
You can do it with compressed-domain steganography. Read the image as a binary file and analyze it with a library such as a JPEG parser. Based on the algorithm you have chosen, find the locations of the relevant values, compute their new values, and replace the corresponding bits in the file data. Finally, write the file back with the same extension as the input.
I hope this helps.
What you're looking for is called watermarking.
A little warning: Watermarking algorithms use insane amounts of redundancy to ensure high robustness of the information being embedded. That means the amount of data you'll be able to hide in an image will be orders of magnitude lower compared to standard steganographic algorithms.
I want to compress a string of bits and afterwards decompress it. Can anybody recommend a fast lossless compression and decompression technique and, if possible, point me to a programming implementation?
If you're looking for speed, then considering a fast compression algorithm like LZ4 makes sense.
Such an algorithm is an order of magnitude faster than zlib/gzip (roughly 10x faster).
http://code.google.com/p/lz4/
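A minimal round trip, assuming the third-party python-lz4 bindings and their lz4.frame API (pip install lz4), might look like:

    import lz4.frame  # third-party LZ4 bindings (pip install lz4)

    data = b"a string of bits packed into bytes" * 1000
    compressed = lz4.frame.compress(data)
    restored = lz4.frame.decompress(compressed)

    assert restored == data
    print(len(data), "->", len(compressed), "bytes")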
What about the evergreens, gzip or bzip2? They already come as libraries, ready to use.
gzip?
Algorithm can be found here:
http://www.gzip.org/algorithm.txt
Bonus: compatibility with pretty much everything.
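For example, with Python's standard-library zlib (the same DEFLATE algorithm gzip uses), a round trip is only a few lines; the bit string below is just an illustrative placeholder:

    import zlib

    # Pack the bit string into bytes before compressing; compressing the
    # ASCII '0'/'1' characters directly wastes 8x the space up front.
    bits = "0110100101101000" * 500
    payload = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")

    compressed = zlib.compress(payload, level=6)
    restored = zlib.decompress(compressed)

    assert restored == payload
    print(len(payload), "->", len(compressed), "bytes")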
According to Matt Mahoney's Large Text Compression Benchmark (http://mattmahoney.net/dc/text.html) there are several very fast decompressors with good compression ratio:
lzturbo 1.1 (-49 -b1000 -p0) 9 ns/byte decompression
lzham alpha 3 x64 (-m4 -d29) 9 ns/byte decompression
4x4 / tornado - 9-13 ns/byte decompression
libzling 20140430-bugfix (e4) 40 ns/byte compression and 10 ns/byte decompression
crush 1.00 (cx) 13-15 ns/byte decompression
I need a simple and fast video codec with alpha support as an alternative to Quicktime Animation, which has horrible compression rates for regular video.
Since I haven't found any good open-source encoder/decoder with alpha support, I have been trying to write my own (with inspiration from huff-yuv).
My strategy is the following:
1. Convert to YUVA420.
2. Subtract the current frame from the previous one (no need for key-frames).
3. Huffman-encode the result of the previous step: split each frame into 64x64 blocks, create a new Huffman table for each block, and encode it.
With this strategy I achieve a decent compression rate of 60-80%. I could probably improve it by splitting each frame into blocks after step 1 and adding motion vectors to reduce the data output from step 2. However, a compression rate better than 60% is lower priority than performance.
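For reference, the frame-subtraction step (step 2) amounts to a wrapped per-plane subtraction; a minimal numpy sketch, with the per-block Huffman stage omitted, might look like:

    import numpy as np

    def delta_encode(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
        """Wrapped byte-wise difference of two planes (e.g. the Y plane of YUVA420).
        Wrapping keeps the residual in uint8, and for mostly static content the
        residual clusters around 0, which is what the entropy coder exploits."""
        return (current.astype(np.int16) - previous.astype(np.int16)).astype(np.uint8)

    def delta_decode(residual: np.ndarray, previous: np.ndarray) -> np.ndarray:
        return (previous.astype(np.int16) + residual.astype(np.int16)).astype(np.uint8)

    prev = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    curr = prev.copy()
    curr[10:20, 10:20] += 3  # small change between frames (wraps modulo 256)
    assert np.array_equal(delta_decode(delta_encode(curr, prev), prev), curr)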
The compression speed is acceptable on a quad-core CPU: 60 ms/frame.
However, the decoding speed suffers: 40 ms/frame (barely real-time with full CPU usage).
My question is whether there is a way to compress video with much faster decoding, while still achieving decent compression rate?
Decoding Huffman-coded symbols seems rather slow. I have not tried table lookups yet; I am not sure whether table lookups are a good idea, since I have a new Huffman table for each block and building the lookup table is quite expensive. As far as I have been able to figure out, it's not possible to make use of any SIMD or GPU features. Is there any alternative? Note that it doesn't have to be lossless.
You want to try a Golomb code instead of a Huffman code. A Golomb code is IMO faster to decode than a Huffman code. If it doesn't have to be lossless, you want to use a Hilbert curve and a DCT and then a Golomb code. You want to subdivide the frames with a space-filling curve. IMO a continuous subdivision of a frame with an SFC, and likewise the decode, is very fast.
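For concreteness, here is a hedged Python sketch of a Rice code (the power-of-two special case of Golomb coding, which is what makes the fast decode possible) for non-negative residuals; in practice signed residuals would first be zigzag-mapped to non-negative values, and the bits would be packed rather than kept as a character string.

    def rice_encode(values, k):
        """Rice code: quotient in unary, remainder in k bits.
        Returns a '0'/'1' character string for readability."""
        bits = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            bits.append("1" * q + "0")            # unary quotient
            if k:
                bits.append(format(r, f"0{k}b"))  # k-bit remainder
        return "".join(bits)

    def rice_decode(bits, k, count):
        out, i = [], 0
        for _ in range(count):
            q = 0
            while bits[i] == "1":                 # read unary part
                q += 1
                i += 1
            i += 1                                # skip terminating '0'
            r = int(bits[i:i + k], 2) if k else 0
            i += k
            out.append((q << k) | r)
        return out

    values = [0, 3, 7, 2, 12, 1]
    assert rice_decode(rice_encode(values, k=2), k=2, count=len(values)) == values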