How to make this jpeg compression faster - c++

I am using OpenCV to compress binary images from a camera:
vector<int> p;
p.push_back(CV_IMWRITE_JPEG_QUALITY);
p.push_back(75); // JPG quality
vector<unsigned char> jpegBuf;
cv::imencode(".jpg", fIplImageHeader, jpegBuf, p);
The code above compresses a binary RGB image stored in fIplImageHeader to a JPEG image. For a 640*480 image it takes about 0.25 seconds to execute the five lines above.
Is there any way I could make it faster? I really need to repeat the compression more than 4 times a second.

Try using libjpeg-turbo instead of libjpeg; it has MMX and SSE optimizations.
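For example, a minimal sketch of compressing one raw BGR frame with the TurboJPEG API (buffer names and sizes are placeholders; check the turbojpeg.h shipped with your version for the exact signatures):

#include <turbojpeg.h>
#include <vector>

// Compress a raw BGR buffer (width*height*3 bytes) to JPEG at quality 75.
std::vector<unsigned char> compressBGR(const unsigned char* pixels, int width, int height)
{
    tjhandle handle = tjInitCompress();
    unsigned char* jpegBuf = nullptr;   // allocated by tjCompress2
    unsigned long jpegSize = 0;
    tjCompress2(handle, pixels, width, 0 /* pitch: 0 means width*3 */, height,
                TJPF_BGR, &jpegBuf, &jpegSize,
                TJSAMP_420, 75, TJFLAG_FASTDCT);
    std::vector<unsigned char> out(jpegBuf, jpegBuf + jpegSize);
    tjFree(jpegBuf);
    tjDestroy(handle);
    return out;
}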

If you don't mind spending money, consider Intel Performance Primitives (IPP) - it is blazing fast. AMD has Framewave, which is supposed to be API-compatible, but I haven't tried it.
BTW, check this link: Fast JPEG encoding library

Related

FFMpeg vs. OpenCV for format conversion/simple transformation

I had to create a system that can process images in real time. I have implemented in C++ a pixel format conversion system that can also do some simple transformations (currently: rotation & mirroring).
The input/output frames of the system are in the following formats:
RGB (24, 32)
YUYV420, YUYV 422
JPEG
Raw greyscale
For instance, one operation can be:
YUYV422 -> rotation 90 -> flip Horiz -> RGB24
Greyscale -> rotation 270 -> flip Vert -> YUYV420
The goal of the system is to offer the best performance for rotation/mirroring and pixel format conversion. My current implementation relies on OpenCV, but I suffer from performance issues when processing data above 2K resolutions.
The current implementation uses cv::Mat and cv::transpose/cv::flip/cv::cvtColor, and I optimized the system to remove transitional buffers and copies as much as possible.
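For reference, a stripped-down sketch of one such chain (YUYV422 -> rotate 90 clockwise -> RGB24), roughly how I chain these calls (the cvtColor conversion code shown is only an example):

#include <opencv2/opencv.hpp>

void yuyvToRotatedRgb(const cv::Mat& yuyv, cv::Mat& rgbOut)
{
    cv::Mat rgb;
    cv::cvtColor(yuyv, rgb, cv::COLOR_YUV2RGB_YUY2); // packed YUYV 4:2:2 -> RGB24
    cv::transpose(rgb, rgbOut);                      // transpose...
    cv::flip(rgbOut, rgbOut, 1);                     // ...then mirror horizontally = rotate 90 CW
}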
Not eager to reinvent the wheel, I know that using swscale and some filters from FFMpeg it is possible to achieve the same result. My questions are:
The FFMpeg system is rather generic; do you think I might suffer from footprint/performance caveats with this solution?
Format conversion seems somewhat optimized in OpenCV, but I have no idea about the FFMpeg implementation... (note: I'm on an x86_64 Intel platform with SSE)
Do you know of any library that can handle this kind of simple transformation in real time?
Thank you
The OpenCV implementation is optimized for your configuration. Don't expect improvements from ffmpeg. Recently, OpenCV switched to libjpeg-turbo with SSE optimizations; this may improve JPEG conversions.

How can I compress jpeg image with compression rate 4 bpp or less?

I am trying to compress my .jpeg image in Photoshop.
What is the best way to do this?
I am currently calculating the bpp by taking the file size in kB and working out how many bits that is. Then I take the image dimensions (width x height) to get the number of pixels in the image. After that I divide bits by pixels to find how many bits per pixel the image has.
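For example (made-up numbers, just to illustrate the arithmetic): a 150 kB JPEG at 640x480 is 150 * 1024 * 8 = 1,228,800 bits over 640 * 480 = 307,200 pixels, which works out to exactly 4 bpp.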
But how can I change this number? My guess is to change how many kB the image is, but how do I do this?
Thanks for any help!!
Yes, you can achieve a higher compression ratio than 4 bits per pixel. Images with solid color can have a rate as low as 0.13 bpp.
In fact, 4 bpp is quite poor compression: it's the same as an uncompressed 16-color image, or half of a 256-color image, which even GIF can manage. JPEG can look decent at 1-2 bpp.
In general, you cannot "compress" a JPEG image. All you can do is reduce the image quality further in order to achieve a lower bpp value. JPEG streams are always compressed, and they use a lossy compression method: the original image can never be reconstructed from a JPEG image, and the smaller the file, the more information you have lost.
A specific "bpp value" is not, and should never be, your target, especially with lossy compression. You should always look at your current image and decide whether it is still good enough or not.
If you still have the original image, try a lossless compression format, like ZIP-compressed or LZW-compressed TIFF, or compressed PNG. I'm sure Photoshop can handle these formats as well. Other software, like IrfanView (https://www.irfanview.com/) or XnView MP (https://www.xnview.com/en/xnviewmp/), will convert your images too.
If you want manual (i.e. full) control over your images, you should use command-line utilities, like ImageMagick (https://imagemagick.org/) or NConvert (see the XnView MP link above).
If you have only the JPEG images, do not touch (edit & save) them. With every single save operation you lose another chunk of information. You should always work on file copies.
You should always keep your master image (the very picture you took with your phone or your camera).
Of course, these rules of thumb will not answer your original question.

Is there a faster lossy compression than JPEG?

Is there a compression algorithm that is faster than JPEG yet well supported? I know about jpeg2000 but from what I've heard it's not really that much faster.
Edit: for compressing.
Edit2: It should run on Linux 32 bit and ideally it should be in C or C++.
JPEG encoding and decoding should be extremely fast. You'll have a hard time finding a faster algorithm. If it's slow, your problem is probably not the format but a bad implementation of the encoder. Try the encoder from libavcodec in the ffmpeg project.
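Very roughly, encoding one frame as JPEG with libavcodec's MJPEG encoder looks like this (a sketch using the send/receive API; error handling and frame setup omitted):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Encode one YUV 4:2:0 frame as JPEG. Assumes frame->data[] is already filled.
AVPacket* encodeJpeg(AVFrame* frame, int width, int height)
{
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width = width;
    ctx->height = height;
    ctx->pix_fmt = AV_PIX_FMT_YUVJ420P;   // full-range YUV 4:2:0, what the JPEG encoder expects
    ctx->time_base = {1, 25};             // required field; the value doesn't matter here
    avcodec_open2(ctx, codec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    avcodec_send_frame(ctx, frame);
    avcodec_receive_packet(ctx, pkt);     // pkt->data / pkt->size now hold the JPEG bytes
    avcodec_free_context(&ctx);
    return pkt;
}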
Do you have MMX/SSE2 instructions available on your target architecture? If so, you might try libjpeg-turbo. Alternatively, can you compress the images with something like zlib and then offload the actual reduction to another machine? Is it imperative that actual lossy compression of the images take place on the embedded device itself?
In what context? On a PC or a portable device?
From my experience you've got JPEG, JPEG2000, PNG, and ... uh, that's about it for "well-supported" image types in a broad context (lossy or not!)
(Hooray that GIF is on its way out.)
JPEG2000 isn't faster at all. Is it encoding or decoding that's not fast enough with JPEG? You could probably be a lot faster by doing only 4x4 FDCT and IDCT on JPEG.
It's hard to find any documentation on IJG libjpeg, but if you use it, try lowering the quality setting; it might make it faster. There also seems to be a fast FDCT option.
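If it helps, setting those in plain libjpeg looks roughly like this (a sketch, grayscale input assumed):

#include <cstdio>
#include <jpeglib.h>

// Write a grayscale buffer to out.jpg using a lower quality and the fast integer DCT.
void writeFastJpeg(const unsigned char* gray, int width, int height)
{
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    FILE* f = std::fopen("out.jpg", "wb");
    jpeg_stdio_dest(&cinfo, f);

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 1;
    cinfo.in_color_space = JCS_GRAYSCALE;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 60, TRUE);   // lower quality -> less work, smaller file
    cinfo.dct_method = JDCT_IFAST;        // the fast (less accurate) integer DCT

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = const_cast<unsigned char*>(gray) + cinfo.next_scanline * width;
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    std::fclose(f);
}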
Someone mentioned libjpeg-turbo that uses SIMD instructions and is compatible with the regular libjpeg. If that's an option for you, I think you should try it.
I think wavelet-based compression algorithms are in general slower than the ones using DCT. Maybe you should take a look at the JPEG XR and WebP formats.
You could simply resize the image to a smaller one if you don't require the full image fidelity. Averaging every 2x2 block into a single pixel will reduce the size to 1/4 very quickly.
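The 2x2 averaging is only a few lines for a grayscale buffer; a rough sketch:

#include <vector>

// Halve a grayscale image by averaging each 2x2 block of pixels.
std::vector<unsigned char> halve(const std::vector<unsigned char>& src, int width, int height)
{
    std::vector<unsigned char> dst((width / 2) * (height / 2));
    for (int y = 0; y < height / 2; ++y) {
        for (int x = 0; x < width / 2; ++x) {
            int sum = src[(2 * y) * width + 2 * x]
                    + src[(2 * y) * width + 2 * x + 1]
                    + src[(2 * y + 1) * width + 2 * x]
                    + src[(2 * y + 1) * width + 2 * x + 1];
            dst[y * (width / 2) + x] = static_cast<unsigned char>(sum / 4);
        }
    }
    return dst;
}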

Low memory image resizing

I am looking for some advice on how to construct a very low memory image resizing program that will be run as a child process of my nodejs application in linux.
The solution I am looking for is a Linux executable that will take a base64 string image (uploaded from a client) via stdin, resize the photo to a specified size, and then pump the resulting image data back through stdout.
I've looked into ImageMagick and it might be what I end up using, but I figured I would ask and see if anyone had a suggestion.
Suggestions of libraries or examples of pre-compiled executables in C/C++ would be greatly appreciated. Also, a helpful answer would include general strategies for low-memory image resizing.
Thank you
Depending on the image formats you want to support, it's almost surely possible to perform incremental decoding and scaling by decoding only a few lines at a time and discarding the data once you write the output. However it may require writing your own code or adapting an existing decoder library to support this kind of operation.
It's also worth noting that downsizing giant jpegs can be performed efficiently by simply skipping the high-frequency coefficients and using a smaller IDCT. For example, to decode at half width and half height, discard all but the upper-left quadrant of the coefficients (horizontal and vertical frequency < 4) and use a 4x4 IDCT on them instead of the usual 8x8. Both the libjpeg decoder and the libavcodec decoder support this operation for power-of-2 scalings (1/2, 1/4, or 1/8). This type of approach might make incremental decoding/scaling unnecessary.
You can try it out with djpeg -scale 1/4 < src.jpg | cjpeg > dest.jpg. If you want a fixed output size, you'll probably first scale by whichever of 1/2, 1/4, or 1/8 puts you closest to the desired size without going too low, then perform interpolation to go the final step, e.g. djpeg -scale 1/4 < src.jpg | convert pnm:- -scale 640x480 dest.jpg.
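The same power-of-2 scaling is available from the libjpeg API itself by setting the scale fields before jpeg_start_decompress (a sketch of just that part):

#include <jpeglib.h>

// Call after jpeg_read_header() and before jpeg_start_decompress().
void requestQuarterSize(jpeg_decompress_struct* cinfo)
{
    cinfo->scale_num = 1;     // decode at 1/4 of the original dimensions,
    cinfo->scale_denom = 4;   // using a reduced IDCT instead of the full 8x8
    // jpeg_start_decompress() will then report the scaled
    // output_width / output_height.
}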
When working on very large images, such as 0.25 GPix and larger, ImageMagick uses ~2 GB of RAM, even when using djpeg to decode the JPEG image first.
This command chain will resize JPEG images of about any size using only ~3 MB ram:
djpeg my-large.jpg | pnmscale -xysize 16000 16000 | cjpeg > scaled-large.jpg
GraphicsMagick is generally a better version of ImageMagick; I'd take a look at that. If you really need something fast, you probably want to drop down to something like libjpeg - while you say you want something with non-blocking IO, the operation you want to do is relatively CPU-bound (i.e. decoding the image, then trying to resize it).
If anything, this is just a sample following what he described:
import sys
import binascii
import cStringIO
from PIL import Image

# first line of stdin: target width and height
x, y = sys.stdin.readline().strip().split(' ')
x, y = int(x), int(y)
# rest of stdin: base64-encoded image data
data = binascii.a2b_base64(sys.stdin.read())
img = Image.open(cStringIO.StringIO(data)).resize((x, y))
img.save(sys.stdout, format="PNG")
As that has to read the input, decode it, resize it, re-encode it, and write it out, there is no way to reduce the memory used to less than the size of the input image.
Nothing can beat Intel Integrated Performance Primitives in terms of performance. If you can afford it, I strongly recommend using it.
Otherwise just implement your own resizing routine. Lanczos gives quite good results, although it won't be tremendously fast.
Edit: I strongly suggest you NOT use ImageMagick or GraphicsMagick. They are both great libraries, but designed for a completely different purpose: handling many file formats, depths, pixel formats, etc. They sacrifice performance and memory efficiency for the things I've mentioned.
You might need this: https://github.com/zhangyuanwei/node-images
Cross-platform image decoder(png/jpeg/gif) and encoder(png/jpeg) for Nodejs

Which is the fastest decoder for jpeg full-scale decoding?

Which is the fastest decoder for jpeg full-scale decoding?
I want to accelerate my app's JPEG decoding speed; how can I do this?
I am using libjpeg now, and it is a bit slow. Is there anything faster than libjpeg?
I do not need partial decoding.
Many thanks!
I don't know which is the fastest, but these should be faster than IJG's libjpeg:
[free] libjpeg-turbo
[cost] Intel Performance Primitives (IPP) library
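With libjpeg-turbo's TurboJPEG wrapper, a full-frame decode looks roughly like this (a sketch; error handling omitted, names are placeholders):

#include <turbojpeg.h>
#include <vector>

// Decode an in-memory JPEG to a BGR pixel buffer.
std::vector<unsigned char> decodeJpeg(const unsigned char* jpegBuf, unsigned long jpegSize,
                                      int& width, int& height)
{
    tjhandle handle = tjInitDecompress();
    int subsamp = 0, colorspace = 0;
    tjDecompressHeader3(handle, jpegBuf, jpegSize, &width, &height, &subsamp, &colorspace);

    std::vector<unsigned char> pixels(width * height * 3);
    tjDecompress2(handle, jpegBuf, jpegSize, pixels.data(),
                  width, 0 /* pitch */, height, TJPF_BGR, TJFLAG_FASTDCT);
    tjDestroy(handle);
    return pixels;
}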