Is there a faster lossy compression than JPEG? - c++

Is there a compression algorithm that is faster than JPEG yet well supported? I know about JPEG 2000, but from what I've heard it's not really that much faster.
Edit: for compressing.
Edit 2: It should run on 32-bit Linux, and ideally it should be in C or C++.

JPEG encoding and decoding should be extremely fast. You'll have a hard time finding a faster algorithm. If it's slow, your problem is probably not the format but a bad implementation of the encoder. Try the encoder from libavcodec in the FFmpeg project.
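If you go the libavcodec route, the rough shape of the code is sketched below. This is only a sketch against the modern send/receive API (older FFmpeg versions differ), with error handling trimmed; the frame is assumed to already be in YUVJ420P.

    // Sketch only: encode one frame as JPEG via libavcodec's MJPEG encoder.
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    bool encode_jpeg(const AVFrame *frame, const char *out_path) {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
        if (!codec) return false;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width     = frame->width;
        ctx->height    = frame->height;
        ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;   // full-range YUV 4:2:0 for JPEG
        ctx->time_base = AVRational{1, 25};     // required by the encoder; value is arbitrary here
        if (avcodec_open2(ctx, codec, nullptr) < 0) return false;

        AVPacket *pkt = av_packet_alloc();
        bool ok = avcodec_send_frame(ctx, frame) >= 0 &&
                  avcodec_receive_packet(ctx, pkt) >= 0;
        if (ok) {
            FILE *f = std::fopen(out_path, "wb");
            std::fwrite(pkt->data, 1, pkt->size, f);
            std::fclose(f);
        }
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return ok;
    }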

Do you have MMX/SSE2 instructions available on your target architecture? If so, you might try libjpeg-turbo. Alternatively, can you compress the images with something like zlib and then offload the actual reduction to another machine? Is it imperative that actual lossy compression of the images take place on the embedded device itself?
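For reference, compressing with libjpeg-turbo's TurboJPEG API is only a few calls. A minimal sketch, assuming you already have a packed RGB buffer in memory (rgbPixels, width, height are placeholders) and skipping error checks:

    #include <turbojpeg.h>

    void compress_rgb(unsigned char *rgbPixels, int width, int height) {
        unsigned char *jpegBuf = nullptr;   // tjCompress2 allocates this for us
        unsigned long jpegSize = 0;

        tjhandle tj = tjInitCompress();
        tjCompress2(tj, rgbPixels, width, 0 /*pitch = packed*/, height, TJPF_RGB,
                    &jpegBuf, &jpegSize, TJSAMP_420, 75 /*quality*/, TJFLAG_FASTDCT);
        // ... write jpegBuf/jpegSize to disk or a socket ...
        tjFree(jpegBuf);
        tjDestroy(tj);
    }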

In what context? On a PC or a portable device?
From my experience you've got JPEG, JPEG2000, PNG, and ... uh, that's about it for "well-supported" image types in a broad context (lossy or not!)
(Hooray that GIF is on its way out.)

JPEG2000 isn't faster at all. Is it encoding or decoding that's not fast enough with JPEG? You could probably go a lot faster by doing only a 4x4 FDCT and IDCT on the JPEG data.
It's hard to find documentation on IJG libjpeg, but if you use it, try lowering the quality setting; it might make it faster, and there is also a fast FDCT option.
Someone mentioned libjpeg-turbo, which uses SIMD instructions and is compatible with the regular libjpeg. If that's an option for you, I think you should try it.
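For what it's worth, the relevant libjpeg knobs are the quality setting and the DCT method. A hedged sketch, assuming cinfo is a jpeg_compress_struct that has already been initialized with jpeg_create_compress() and given an output destination and image dimensions:

    #include <cstdio>
    #include <jpeglib.h>

    void configure_fast(jpeg_compress_struct &cinfo) {
        jpeg_set_defaults(&cinfo);
        jpeg_set_quality(&cinfo, 60, TRUE);   // lower quality => less data to entropy-code
        cinfo.dct_method = JDCT_FASTEST;      // fast (less accurate) integer DCT
        cinfo.optimize_coding = FALSE;        // skip the extra Huffman-optimization pass
    }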

I think wavelet-based compression algorithms are in general slower than the ones using DCT. Maybe you should take a look at the JPEG XR and WebP formats.
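If WebP is acceptable, libwebp's simple one-shot API is easy to try. A sketch, assuming a packed RGB buffer already in memory (rgbPixels, width, height are placeholders) and an arbitrary lossy quality of 75:

    #include <webp/encode.h>

    void encode_webp(const uint8_t *rgbPixels, int width, int height) {
        uint8_t *out = nullptr;
        size_t outSize = WebPEncodeRGB(rgbPixels, width, height,
                                       width * 3 /*stride*/, 75.0f /*quality*/, &out);
        // ... use out/outSize ...
        WebPFree(out);   // plain free() on older libwebp versions
    }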

You could simply resize the image to a smaller one if you don't require the full image fidelity. Averaging every 2x2 block into a single pixel will reduce the size to 1/4 very quickly.
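For example, a naive 2x2 box filter on an 8-bit grayscale buffer is just a few lines (for RGB you'd do the same per channel); even width and height are assumed for brevity:

    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> downsample2x2(const uint8_t *src, int w, int h) {
        std::vector<uint8_t> dst((w / 2) * (h / 2));
        for (int y = 0; y < h; y += 2) {
            for (int x = 0; x < w; x += 2) {
                int sum = src[y * w + x]       + src[y * w + x + 1] +
                          src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
                dst[(y / 2) * (w / 2) + (x / 2)] = static_cast<uint8_t>(sum / 4);
            }
        }
        return dst;
    }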

Related

JPEG-Compression, Time Complexity and Performance

I have a few questions regarding JPEG compression:
What is the typical time complexity of a good implementation of a JPEG compression algorithm? I've tried reading up on the process itself, but I find it quite hard to pinpoint exactly which steps need to be done - I'm still at a pretty basic level in my algorithm knowledge. :-)
I also wonder (I guess this can be derived from the first question) how demanding JPEG compression is on the CPU compared to other compression algorithms, e.g. GIF - say, if I needed to compress 1000 photos.
If you mean as a function of the size of the image, it's linear. The compression and decompression times are O(n), where n is the number of pixels.
JPEG and GIF are two different solutions to two different problems. JPEG is lossy and generally used for natural, photographic images, whereas GIF is lossless and generally used for simple graphics and icons. You would not use GIF for photographs.
Also GIF is obsolete, having been replaced with PNG, except for simple animated GIFs, of cats mostly. (There are better methods for lossless image compression than what PNG uses, but none seem to have caught on. The compression methods in PNG should be obsolete, but they aren't.)

Cropping large jpeg

The task is to write a program that crops JPEG files. The problem is that some of the JPEG files are large - hundreds of megabytes. So the question: is it possible to crop a JPEG file without loading the whole file into RAM, using something like fseek() and decoding only the parts that are needed?
Is that possible? If yes, are there libraries that already do this?
Update: all of this will be used for deep-zoom technology, so when the deep-zoom viewer asks for a file, this program must serve it in real time.
There are two ways to accomplish this.
The first is lossless cropping, where you don't decode the file all the way but work with the 8x8 DCT blocks directly. You'll need to use a library that has this capability, and it places some restrictions on the cropping: you can't crop to a boundary that isn't aligned with the DCT blocks, which limits you to multiples of 8 or 16 depending on the subsampling in the file.
The second way is to use a library that allows you to read and write one line at a time. I know that the IJG library can do this, and probably others as well. This is the easy way, but the downside is that the image goes through a decompression/recompression pass and will lose quality and/or be larger.
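For the first way, recent versions of the jpegtran tool that ships with libjpeg can do the lossless crop from the command line (jpegtran -crop WxH+X+Y). For the second way, a rough sketch with the IJG scanline API is shown below: decode one scanline at a time, keep only the crop window, and feed it straight to a compressor, so memory use stays at one decoded row. Source/destination manager setup, jpeg_read_header(), and error handling are assumed to have been done already, and crop_recompress() is just an illustrative name.

    #include <cstdio>
    #include <jpeglib.h>
    #include <vector>

    void crop_recompress(jpeg_decompress_struct &in, jpeg_compress_struct &out,
                         JDIMENSION cx, JDIMENSION cy, JDIMENSION cw, JDIMENSION ch) {
        jpeg_start_decompress(&in);

        out.image_width      = cw;                    // output is the crop window
        out.image_height     = ch;
        out.input_components = in.output_components;
        out.in_color_space   = in.out_color_space;
        jpeg_set_defaults(&out);
        jpeg_start_compress(&out, TRUE);

        std::vector<JSAMPLE> row(in.output_width * in.output_components);
        JSAMPROW rp = row.data();
        while (in.output_scanline < in.output_height) {
            jpeg_read_scanlines(&in, &rp, 1);         // one source row in memory at a time
            JDIMENSION y = in.output_scanline - 1;
            if (y >= cy && y < cy + ch) {
                JSAMPROW cropped = rp + cx * in.output_components;
                jpeg_write_scanlines(&out, &cropped, 1);
            }
        }
        jpeg_finish_compress(&out);
        jpeg_finish_decompress(&in);
    }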

Simple and Fast Video Encoding/Decoding

I need a simple and fast video codec with alpha support as an alternative to QuickTime Animation, which has horrible compression rates for regular video.
Since I haven't found any good open-source encoder/decoder with alpha support, I have been trying to write my own (with inspiration from HuffYUV).
My strategy is the following:
Convert to YUVA420
Subtract current frame from previous (no need for key-frames).
Huffman-encode the result from the previous step: split each frame into 64x64 blocks, create a new Huffman table for each block, and encode it.
With this strategy I achieve a decent compression rate of 60-80%. I could probably improve it by splitting each frame into blocks after step 1 and adding motion vectors to reduce the data output from step 2. However, improving the compression rate beyond 60% is a lower priority than performance.
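For illustration, step 2 (frame differencing) amounts to a byte-wise modulo-256 subtraction against the previous frame, roughly as in the toy sketch below; it keeps the residual small wherever the picture is static.

    #include <cstddef>
    #include <cstdint>

    void delta_frame(const uint8_t *prev, const uint8_t *cur,
                     uint8_t *residual, size_t n) {
        for (size_t i = 0; i < n; ++i)
            residual[i] = static_cast<uint8_t>(cur[i] - prev[i]);   // mod-256 difference
    }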
Compression speed on a quad-core CPU is acceptable: 60 ms/frame.
However, the decoding speed suffers: 40 ms/frame (barely real-time, with full CPU usage).
My question is: is there a way to compress video with much faster decoding, while still achieving a decent compression rate?
Decoding Huffman-coded symbols seems rather slow. I have not tried table lookups yet; I'm not sure they are a good idea, since I have a new Huffman table for each block and building the lookup table is quite expensive. As far as I have been able to figure out, it's not possible to make use of any SIMD or GPU features. Is there any alternative? Note that it doesn't have to be lossless.
You could try a Golomb code instead of a Huffman code. A Golomb code is IMO faster to decode than a Huffman code. If it doesn't have to be lossless, you could use a Hilbert curve and a DCT and then a Golomb code, subdividing the frames with a space-filling curve. IMO continuous subdivision of a frame with a space-filling curve, and the corresponding decode, is very fast.
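For reference, a Rice code (a Golomb code whose parameter is a power of two, 2^k) decodes with only shifts and a loop, and needs no per-block table. A minimal sketch; BitReader is a hypothetical helper (any MSB-first bit reader will do), and signed residuals would be zigzag-mapped (0, -1, 1, -2, ...) to unsigned values before coding:

    #include <cstddef>
    #include <cstdint>

    struct BitReader {
        const uint8_t *data;
        size_t bitpos = 0;
        int bit() { int b = (data[bitpos >> 3] >> (7 - (bitpos & 7))) & 1; ++bitpos; return b; }
        uint32_t bits(int n) { uint32_t v = 0; while (n--) v = (v << 1) | bit(); return v; }
    };

    // Rice decode with parameter k: unary-coded quotient (1s terminated by a 0),
    // followed by a k-bit remainder. value = q * 2^k + remainder.
    uint32_t rice_decode(BitReader &br, int k) {
        uint32_t q = 0;
        while (br.bit() == 1) ++q;
        return (q << k) | br.bits(k);
    }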

Which is the fastest decoder for jpeg full-scale decoding?

Which is the fastest decoder for jpeg full-scale decoding ?
I want to accelerate my app's JPEG decoding speed; how can I do this?
I am using libjpeg now and it is a bit slow. Is there anything faster than libjpeg?
I do not need partial decoding.
Many thanks!
I don't know which is the fastest, but these should be faster than IJG's libjpeg:
[free] libjpeg-turbo
[cost] Intel Performance Primitives (IPP) library
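With libjpeg-turbo you can either use it as a drop-in libjpeg replacement or call its TurboJPEG API directly. A minimal decoding sketch, assuming the compressed file has already been read into memory as jpegBuf/jpegSize and skipping error checks:

    #include <turbojpeg.h>
    #include <vector>

    std::vector<unsigned char> decode_rgb(unsigned char *jpegBuf, unsigned long jpegSize) {
        tjhandle tj = tjInitDecompress();
        int width = 0, height = 0, subsamp = 0;
        tjDecompressHeader2(tj, jpegBuf, jpegSize, &width, &height, &subsamp);

        std::vector<unsigned char> pixels(size_t(width) * height * 3);
        tjDecompress2(tj, jpegBuf, jpegSize, pixels.data(),
                      width, 0 /*pitch = packed*/, height, TJPF_RGB, TJFLAG_FASTDCT);
        tjDestroy(tj);
        return pixels;
    }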

efficient TIFF tile extraction C++

I am working with 1 GB TIFF images of around 20000 x 20000 pixels. I need to extract several tiles (of about 300x300 pixels) from the images, at random positions.
I tried the following solutions:
Libtiff (the only low-level library I could find) offers TIFFReadScanline(), but that means reading in around 19700 unnecessary pixels.
I implemented my own TIFF reader which extracts a tile out of the image without reading in unnecessary pixels. I expected it to be faster, but doing a seekg for every line of the tile makes it very slow. I also tried reading all the lines of the file that include my tile into a buffer and then extracting the tile from the buffer, but the results are more or less the same.
I'd like to receive suggestions that would improve my tile extraction tool!
Everything is welcome, maybe you can propose a more efficient library I could use, some tips about C/C++ I/O, some higher level strategy for my needs, etc.
Regards,
Juan
[Major edit 14 Jan 10]
I was a bit confused by your mention of tiles, when the TIFF is not tiled.
I do use tiled/pyramidal TIFF images. I've created those with VIPS:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,pyramid
I think you can do this with:
vips im_vips2tiff source_image output_image.tif:none,tile:256x256,flat
You may want to experiment with the tile size. Then you can read tiles using TIFFReadEncodedTile().
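A minimal sketch of reading one tile with libtiff; tx and ty are placeholder pixel coordinates of the tile you want, and error checks are omitted:

    #include <cstdint>
    #include <tiffio.h>
    #include <vector>

    void read_one_tile(const char *path, uint32_t tx, uint32_t ty) {
        TIFF *tif = TIFFOpen(path, "r");
        uint32_t tileWidth = 0, tileHeight = 0;           // handy for copying out sub-regions
        TIFFGetField(tif, TIFFTAG_TILEWIDTH, &tileWidth);
        TIFFGetField(tif, TIFFTAG_TILELENGTH, &tileHeight);

        std::vector<char> buf(TIFFTileSize(tif));          // room for one decoded tile
        ttile_t tile = TIFFComputeTile(tif, tx, ty, 0, 0); // tile covering (tx, ty)
        TIFFReadEncodedTile(tif, tile, buf.data(), buf.size());
        TIFFClose(tif);
    }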
Multi-resolution storage using pyramidal TIFFs is much faster if you need to zoom in and out. You may also want to use it to show a coarse image almost immediately, followed by the detailed picture.
After switching to (appropriately sized) tiled storage (which will bring you MASSIVE performance improvements for random access!), your bottleneck will be disk I/O. File reads are much faster when done in sequence. Here, mmapping may be the solution.
Some useful links:
VIPS
IIPImage
LibTiff.NET stackoverflow
VIPS is an image-handling library which can do much more than just read/write. It has its own, very efficient internal format, and good documentation on its algorithms. For one, it decouples processing from the filesystem, thereby allowing tiles to be cached.
IIPImage is a multi-zoom webserver/browser library. I found its documentation a very good source of information on multi-resolution imaging (like Google Maps).
The other solution on this page, using mmap, is efficient only for 'small' files. I've hit the 32-bit boundaries often: generally, allocating a 1 GByte chunk of memory will fail on a 32-bit OS (with 4 GBytes of RAM installed) because even virtual memory gets fragmented after one or two application runs. Still, there is enough memory to cache parts or all of the image. More memory = more performance.
Just mmap your file.
http://www.kernel.org/doc/man-pages/online/pages/man2/mmap.2.html
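A minimal read-only sketch for Linux (huge_image.tif is a placeholder name, error checks omitted):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        int fd = open("huge_image.tif", O_RDONLY);
        struct stat st;
        fstat(fd, &st);
        void *base = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        // read pixel data directly through (char*)base + offset;
        // the kernel pages the file in on demand
        munmap(base, st.st_size);
        close(fd);
        return 0;
    }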
Thanks everyone for the replies.
Actually, a change in the way the tiles were required allowed me to extract them from the files on disk sequentially rather than randomly. This allowed me to load a part of the file into RAM and extract the tiles from there.
The efficiency gain was huge. Otherwise, if you need random access to a file, mmap is a good deal.
Regards,
Juan
I did something similar to this to handle an arbitrarily large TARGA(TGA) format file.
The thing that made it simple for that kind of file is that the image is not compressed: you can calculate the position of any arbitrary pixel within the image and reach it with a simple seek. You might consider the TARGA format if you have the option to specify the image encoding.
If not, there are many varieties of the TIFF format. You probably want to use a library, since its authors have already gone through the pain of supporting all the different variants.
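For the uncompressed TARGA case described above, the offset arithmetic is trivial. A toy sketch, assuming a plain 18-byte header (no ID field, no color map) and ignoring TGA's default bottom-up row order:

    #include <cstdio>

    long tga_pixel_offset(int x, int y, int width, int bytesPerPixel) {
        const long headerSize = 18;                       // fixed TGA header
        return headerSize + (long(y) * width + x) * bytesPerPixel;
    }

    // Usage: fseek(f, tga_pixel_offset(x, y, imageWidth, 3), SEEK_SET);
    // then fread() one pixel or one row.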
Did you get a specific error message? Depending on how you used that command line, you could have been stepping on your own file.
If that wasn't the issue, try using imagemagick instead of vips if it's an option.