Which is the fastest decoder for full-scale JPEG decoding?
I want to accelerate my app's JPEG decoding speed. How can I do this?
I am using libjpeg now, and it is a bit slow. Is there anything faster than libjpeg?
I do not need partial decoding.
Many thanks!
I don't know which is the fastest, but these should be faster than IJG's libjpeg:
[free] libjpeg-turbo
[cost] Intel Integrated Performance Primitives (IPP) library
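As a sketch of what the libjpeg-turbo route looks like, here is a minimal in-memory decode through its TurboJPEG API. This assumes libjpeg-turbo and its turbojpeg.h header are installed; the function and buffer names are placeholders:

```cpp
#include <turbojpeg.h>
#include <vector>
#include <cstdio>

// Decompress a JPEG held in memory into a tightly packed RGB buffer.
// jpegData/jpegSize are assumed to hold a complete JPEG file.
std::vector<unsigned char> decodeJpeg(const unsigned char *jpegData,
                                      unsigned long jpegSize)
{
    tjhandle handle = tjInitDecompress();
    int width = 0, height = 0, subsamp = 0, colorspace = 0;
    if (tjDecompressHeader3(handle, jpegData, jpegSize,
                            &width, &height, &subsamp, &colorspace) != 0) {
        std::fprintf(stderr, "header: %s\n", tjGetErrorStr());
        tjDestroy(handle);
        return {};
    }
    std::vector<unsigned char> rgb(static_cast<size_t>(width) * height * 3);
    // pitch = 0 means "width * bytes per pixel", i.e. no row padding
    if (tjDecompress2(handle, jpegData, jpegSize, rgb.data(),
                      width, 0 /*pitch*/, height, TJPF_RGB,
                      TJFLAG_FASTDCT) != 0) {
        std::fprintf(stderr, "decode: %s\n", tjGetErrorStr());
        rgb.clear();
    }
    tjDestroy(handle);
    return rgb;
}
```

TJFLAG_FASTDCT trades a small amount of accuracy for speed, which is usually what you want if decoding throughput is the bottleneck.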
I am currently streaming my OpenGL-rendered images over a WebSocket. I use zlib compression to compress the RGB data on the server side. On the client side I simply decompress and show the images.
My compression steps :
S3TC Texture compression from OpenGL
zlib compression of the result of step 1, using the Qt framework
How can I compress even further? Is MPEG-4 encoding of a single image an option, or even possible? How can I reduce the image size even further?
S3TC is lossy, so if you want more compression, use another lossy approach, like JPEG, and crank up the compression until you don't like the result. Then back off.
If images are similar to each other, use some standard one-pass video compression algorithm. If images are distinct, why wouldn't you just use JPEG or some other (more modern) image compression algorithm? In either case it should be quite easy to find suitable libraries for server and client side, no need to invent and develop your own codecs and formats.
I am searching for a way to create a video from a series of frames I have rendered with OpenGL and transferred to RAM as an int array using the glGetTexImage function. Is it possible to do this directly in RAM (~10 s of video), or do I have to save each frame to disk and encode the video afterwards?
I found this sample http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/api-example_8c-source.html in another SO question, but is this still the best way to do it?
Today I got a hint to use this example http://git.videolan.org/?p=ffmpeg.git;a=blob;f=doc/examples/decoding_encoding.c;h=cb63294b142f007cf78436c5d81c08a49f3124be;hb=HEAD to see how H.264 encoding could be done. The problem when you want to encode frames rendered by OpenGL is that they are in RGB(A), and most codecs require YUV. For the conversion you can use swscale (ffmpeg); an example of how to use it can be found here http://web.me.com/dhoerl/Home/Tech_Blog/Entries/2009/1/22_Revised_avcodec_sample.c.html
As ananthonline stated, direct encoding of the frames is very CPU-intensive, but you can also write your frames with ffmpeg in the rawvideo format, which supports the rgb24 pixel format, and convert them offline with ffmpeg's command-line tools.
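The offline conversion can look something like this; the resolution, frame rate, and file names here are placeholder values for illustration:

```shell
# frames.raw: concatenated raw rgb24 frames, 640x480 at 30 fps (all assumed values)
ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -r 30 -i frames.raw \
       -c:v libx264 -pix_fmt yuv420p output.mp4
```

ffmpeg handles the RGB-to-YUV conversion itself here, so the render loop only has to dump raw bytes.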
If you want a cross-platform way of doing it, you're probably going to have to use ffmpeg/libavcodec, but on Windows you can write an AVI quite simply using the resources here.
I need a simple and fast video codec with alpha support as an alternative to Quicktime Animation which has horrible compression rates for regular video.
Since I haven't found any good open-source encoder/decoder with alpha support, I have been trying to write my own (taking inspiration from HuffYUV).
My strategy is the following:
Convert to YUVA420
Subtract the current frame from the previous one (no need for key frames).
Huffman-encode the result of the previous step: split each frame into 64x64 blocks, create a new Huffman table for each block, and encode it.
With this strategy I achieve a decent compression rate of 60-80%. I could probably improve the compression rate by splitting each frame into blocks after step 1 and adding motion vectors to reduce the data output from step 2. However, a compression rate better than 60% is lower priority than performance.
Compression speed is acceptable on a quad-core CPU: 60 ms/frame.
However, decoding speed suffers at 40 ms/frame (barely real-time, with full CPU usage).
My question is whether there is a way to compress video with much faster decoding, while still achieving decent compression rate?
Decoding Huffman-coded symbols seems rather slow. I have not tried table lookups yet; I'm not sure whether they are a good idea, since I have a new Huffman table for each block, and building the lookup table is quite expensive. As far as I have been able to figure out, it's not possible to make use of any SIMD or GPU features. Is there any alternative? Note that it doesn't have to be lossless.
You could try a Golomb code instead of a Huffman code. A Golomb code is, IMO, faster to decode than a Huffman code. If it doesn't have to be lossless, you could use a Hilbert curve and a DCT and then a Golomb code, subdividing the frames with a space-filling curve. IMO, continuous subdivision of a frame with an SFC, and decoding it, is very fast.
Is there a compression algorithm that is faster than JPEG yet still well supported? I know about JPEG 2000, but from what I've heard it's not really much faster.
Edit: for compressing.
Edit2: It should run on Linux 32 bit and ideally it should be in C or C++.
JPEG encoding and decoding should be extremely fast. You'll have a hard time finding a faster algorithm. If it's slow, your problem is probably not the format but a bad implementation of the encoder. Try the encoder from libavcodec in the ffmpeg project.
Do you have MMX/SSE2 instructions available on your target architecture? If so, you might try libjpeg-turbo. Alternatively, can you compress the images with something like zlib and then offload the actual reduction to another machine? Is it imperative that actual lossy compression of the images take place on the embedded device itself?
In what context? On a PC or a portable device?
From my experience you've got JPEG, JPEG2000, PNG, and ... uh, that's about it for "well-supported" image types in a broad context (lossy or not!)
(Hooray that GIF is on its way out.)
JPEG 2000 isn't faster at all. Is it encoding or decoding that's not fast enough with JPEG? You could probably be a lot faster by doing only a 4x4 FDCT and IDCT on the JPEG.
It's hard to find any documentation on IJG's libjpeg, but if you use it, try lowering the quality setting; it might make it faster. There also seems to be a fast-FDCT option.
Someone mentioned libjpeg-turbo that uses SIMD instructions and is compatible with the regular libjpeg. If that's an option for you, I think you should try it.
I think wavelet-based compression algorithms are in general slower than the ones using DCT. Maybe you should take a look at the JPEG XR and WebP formats.
You could simply resize the image to a smaller one if you don't require the full image fidelity. Averaging every 2x2 block into a single pixel will reduce the size to 1/4 very quickly.
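That 2x2 averaging can be sketched as follows; this assumes an 8-bit grayscale image with even dimensions (for RGB you would do the same per channel):

```cpp
#include <cstdint>
#include <vector>

// Average each 2x2 block of an 8-bit grayscale image into one pixel,
// producing a (width/2) x (height/2) image. Dimensions assumed even.
std::vector<uint8_t> downsample2x2(const std::vector<uint8_t> &src,
                                   int width, int height)
{
    std::vector<uint8_t> dst((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x] + src[y * width + x + 1]
                    + src[(y + 1) * width + x] + src[(y + 1) * width + x + 1];
            dst[(y / 2) * (width / 2) + x / 2] =
                static_cast<uint8_t>((sum + 2) / 4);  // rounded average
        }
    }
    return dst;
}
```

This is a single pass over the pixels with no transform step, so it is far cheaper than any real codec, at the cost of resolution rather than just fidelity.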
I am using OpenCV to compress binary images from a camera:
vector<int> p;
p.push_back(CV_IMWRITE_JPEG_QUALITY);
p.push_back(75); // JPEG quality
vector<unsigned char> jpegBuf;
cv::Mat img = cv::cvarrToMat(fIplImageHeader); // imencode needs a cv::Mat, not an IplImage*
cv::imencode(".jpg", img, jpegBuf, p);
The code above compresses a raw RGB image stored in fIplImageHeader to a JPEG image. For a 640x480 image, this code takes about 0.25 seconds to execute.
Is there any way I could make it faster? I really need to repeat the compression more than 4 times a second.
Try using libjpeg-turbo instead of libjpeg; it has MMX and SSE optimizations.
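For comparison with the OpenCV call above, the equivalent encode through libjpeg-turbo's TurboJPEG API looks roughly like this. It assumes libjpeg-turbo is installed, and the function name and packed-RGB input are illustrative; quality 75 matches the question:

```cpp
#include <turbojpeg.h>
#include <cstdio>

// Compress a tightly packed RGB buffer to JPEG with the TurboJPEG API.
// On success, returns a buffer the caller must free with tjFree().
unsigned char *encodeJpeg(const unsigned char *rgb, int width, int height,
                          unsigned long *jpegSize)
{
    tjhandle handle = tjInitCompress();
    unsigned char *jpegBuf = nullptr;  // tjCompress2 allocates it
    if (tjCompress2(handle, rgb, width, 0 /*pitch*/, height, TJPF_RGB,
                    &jpegBuf, jpegSize, TJSAMP_420, 75 /*quality*/,
                    TJFLAG_FASTDCT) != 0) {
        std::fprintf(stderr, "encode: %s\n", tjGetErrorStr());
        jpegBuf = nullptr;
    }
    tjDestroy(handle);
    return jpegBuf;
}
```

Reusing one tjhandle across frames and hoisting the allocations out of the per-frame path also helps when you are compressing several images per second.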
If you don't mind spending money, consider Intel Integrated Performance Primitives (IPP): it is blazing fast. AMD has Framewave, which is supposed to be API-compatible, but I haven't tried it.
BTW, check this link: Fast JPEG encoding library