I would like to convert a JPEG image to PNG, and to do so I am using the code below:
#include <QImageReader>
#include <QImageWriter>

QImageReader reader;
reader.setFileName(imagePath);      // path to the source JPEG
QImage image = reader.read();       // decode to a QImage

QImageWriter writer;
writer.setFileName(newImagePath);   // the ".png" extension selects the PNG format
writer.write(image);
I thought the output image would be exactly the same as the input one, but the difference image is not null and I cannot figure out why. The difference image looks like noise, with values ranging from -5 to 6.
I tried to do the same thing with another library called VTK, but I don't have the same problem there: the images before and after conversion are exactly the same.
Any suggestion is welcome!
Different JPEG decoders can produce slightly different RGB values (more so if the JPEG contains an ICC profile); there is a lot of numerical rounding and conversion involved. (Decoders are supposed to differ by no more than one bit per pixel from the reference implementation, but I would not bet on that; see eg this answer and this one.)
I suggest you try to do the pixel-by-pixel comparison on the QImage itself.
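For instance, a minimal sketch (assuming both images loaded successfully and have the same size; the function name is mine):

#include <QImage>
#include <QDebug>

// Sketch: compare two images pixel by pixel and report the largest
// per-channel difference. Assumes both images have identical dimensions.
void compareImages(const QImage &a, const QImage &b)
{
    QImage ia = a.convertToFormat(QImage::Format_RGB32);
    QImage ib = b.convertToFormat(QImage::Format_RGB32);
    int maxDiff = 0;
    for (int y = 0; y < ia.height(); ++y) {
        for (int x = 0; x < ia.width(); ++x) {
            QRgb pa = ia.pixel(x, y);
            QRgb pb = ib.pixel(x, y);
            maxDiff = qMax(maxDiff, qAbs(qRed(pa)   - qRed(pb)));
            maxDiff = qMax(maxDiff, qAbs(qGreen(pa) - qGreen(pb)));
            maxDiff = qMax(maxDiff, qAbs(qBlue(pa)  - qBlue(pb)));
        }
    }
    qDebug() << "max per-channel difference:" << maxDiff;
}

That way the comparison happens on the decoded data itself, ruling out differences introduced by whatever tool computed the difference image.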
Related
I need to extract CbCr chroma data from JPEG images, for image analysis. (in C/C++)
As I understand it, the JPEG "raw data" is compressed YCbCr. Am I correct in this assumption? How can I verify this for a given image?
I am currently using the TurboJPEG library. The documentation of tjDecompressToYUV says that it:
Decompress a JPEG image to a YUV planar image. This function performs JPEG decompression but leaves out the color conversion step, so a planar YUV image is generated instead of an RGB image.
I am a bit confused as to the output of this function. I thought that YUV and YCbCr were slightly different color spaces. Does this mean that for UV chroma I'd need to manipulate the output, and that the output "UV" components are actually CbCr components?
The JPEG standard has no knowledge of color spaces. It simply compresses color components.
It is the specific file format (e.g. JFIF, EXIF, Adobe) that specifies the color encoding. In most cases it is YCbCr. In some cases it is not (some Adobe files).
This link may explain the confusion:
http://en.wikipedia.org/wiki/Yuv#Confusion_with_Y.27CbCr
YUV and YCbCr are similar, but different. If there is no color conversion, I have to believe that the documentation has confused YUV and YCbCr.
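In practice, if the JPEG really stores YCbCr, the "U" and "V" planes TurboJPEG hands back are the Cb and Cr data. A minimal sketch (using the TurboJPEG 1.4+ API; the plane-layout arithmetic assumes 4:2:0 subsampling and even dimensions, and error cleanup is omitted):

#include <turbojpeg.h>
#include <vector>

// Sketch: decompress a JPEG to one planar YUV buffer, then locate the
// chroma planes. Planes are stored back to back: Y, then U (Cb), then V (Cr).
bool extractChroma(unsigned char *jpegBuf, unsigned long jpegSize)
{
    tjhandle handle = tjInitDecompress();
    int width, height, subsamp;
    if (tjDecompressHeader2(handle, jpegBuf, jpegSize,
                            &width, &height, &subsamp) != 0)
        return false;

    const int pad = 1; // no row padding
    std::vector<unsigned char> yuv(tjBufSizeYUV2(width, pad, height, subsamp));
    if (tjDecompressToYUV2(handle, jpegBuf, jpegSize, yuv.data(),
                           width, pad, height, 0) != 0)
        return false;

    unsigned char *y  = yuv.data();
    unsigned char *cb = y  + width * height;             // "U" plane == Cb
    unsigned char *cr = cb + (width / 2) * (height / 2); // "V" plane == Cr (4:2:0)
    // ... analyze cb / cr here ...

    tjDestroy(handle);
    return true;
}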
I have an application (OpenCV, C++) that grabs an image from a webcam, encodes it as JPG and transmits it from a server to a client. The webcam is stereo, so actually I have two images, LEFT and RIGHT. In the client, when I receive the image I decode it and generate an anaglyph 3D effect.
To do this I use OpenCV...
Well, I encode the image in this way:
std::vector<int> params;
params.push_back(CV_IMWRITE_JPEG_QUALITY);
params.push_back(60); // image quality
cv::imshow("anaglyph", image); // here the anaglyph image is good!
std::vector<uchar> buffer;
cv::imencode(".jpg", image, buffer, params);
and decode in this way:
cv::Mat imageReceived = cv::imdecode(cv::Mat(v), CV_LOAD_IMAGE_COLOR); // v holds the received JPEG bytes
What I see is that this kind of encoding generates a "ghost effect" (artifact?) in the anaglyph image, so there is a bad effect along the edges of objects. If I look at a door, for example, there is a ghost effect along the edge of the door. I'm sure this depends on the encoding, because if I show the anaglyph image before the encode instruction it looks fine. I cannot use PNG because it generates too large an image, and this is a problem for the connection between the server and the client.
I looked at GIF but, if I understood correctly, it is not supported by the cv::imencode function.
So is there another way to encode a cv::Mat object as JPG without this bad effect and without increasing the size of the image too much?
If your server is only used as image storage, you can send the two original stereo images (compressed) to the server and just generate the anaglyph when you need it. I figure that if you fetch the image pair (JPEG) from the server and then generate the anaglyph client-side, it will have no ghosting. It might be that the compressed pair of images combined is smaller than the anaglyph PNG.
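A rough sketch of that client-side step (assuming a red-cyan anaglyph and two decoded BGR cv::Mat views of equal size; the function name is mine):

#include <opencv2/opencv.hpp>

// Sketch: build a red-cyan anaglyph from the two decoded views,
// so only the original JPEG pair travels over the wire.
cv::Mat makeAnaglyph(const cv::Mat &left, const cv::Mat &right)
{
    std::vector<cv::Mat> l, r, out(3);
    cv::split(left, l);   // OpenCV stores channels in BGR order
    cv::split(right, r);
    out[0] = r[0];        // blue from the right view
    out[1] = r[1];        // green from the right view
    out[2] = l[2];        // red from the left view
    cv::Mat anaglyph;
    cv::merge(out, anaglyph);
    return anaglyph;
}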
I assume the anaglyph encoding is using line interlacing to combine both sides into one image.
You are using JPEG to compress the image.
This algorithm is optimized to compress "photo-like" real-world images from cameras, and works very well on these.
The difference between "photo-like" and other images, as far as image compression is concerned, lies in the frequencies occurring in the image.
Roughly speaking, in "photo-like" images, the high frequency part is relatively small, and mostly not important for the image content.
So the high frequencies can be safely compressed.
If two frames are interlaced line by line, this creates an image with very strong high frequency part.
The JPEG algorithm discards much of that information as unimportant, but because it is actually important, that causes relatively strong artefacts.
JPEG basically just "does not work" on this kind of images.
If you can change the encoding of the anaglyph images to side by side, or alternating full images from left and right, JPEG compression should work just fine.
Is this an option for you?
If not, it will get much more complicated. One problem - if you need good compression - is that the algorithms that are great for compressing images with very high frequencies are really bad at compressing "photo-like" data, which is still the larger part of your image.
Therefore, please try hard to change the encoding away from line interlacing; that should be about an order of magnitude easier than the other options.
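For instance, a minimal sketch of side-by-side packing (assuming both views have identical size and type, and that the receiver knows to split the decoded frame back in half):

#include <opencv2/opencv.hpp>

// Sketch: pack the two views side by side instead of line-interlacing
// them, so JPEG sees two smooth "photo-like" halves.
std::vector<uchar> encodeSideBySide(const cv::Mat &left, const cv::Mat &right)
{
    cv::Mat combined;
    cv::hconcat(left, right, combined);   // left | right in one frame

    std::vector<int> params = { CV_IMWRITE_JPEG_QUALITY, 60 };
    std::vector<uchar> buffer;
    cv::imencode(".jpg", combined, buffer, params);
    return buffer;
}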
I'm trying to get a YUV420 planar or semi-planar (NV12) image out of a JPEG using libjpeg.
I see that there is an option to specify the output format as JCS_YCbCr, which would generally be a YUV format, but as far as I understand it would give me the data as arrays of 3 elements { Y, U, V }. So to get the image into the right format I would have to rearrange and subsample the pixels myself, and I want to avoid that for performance reasons.
So I was wondering, is there a way to configure libjpeg to output a YUV420p / NV12 buffer directly?
Just take a look at gst_jpeg_decode() in the gstreamer source tree. This function, along with the gst_jpeg_decode_direct() function, does exactly what you want to do.
Note that it gives YUV420 planar output, bypassing all color conversion done by libjpeg. (Note: this assumes that the input JPEG is encoded in the YUV420 color space (aka I420), which is true for almost all JPEGs out there.)
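The core of that approach is libjpeg's raw-data interface, which you can also use directly. A stripped-down sketch (my own simplification: it assumes 4:2:0 input, dimensions that are multiples of 16, caller-allocated planes, and omits error handling):

#include <cstdio>
#include <jpeglib.h>

// Sketch: decode a JPEG straight into I420 planes, skipping libjpeg's
// YCbCr-to-RGB color conversion by enabling raw data output.
void decodeToI420(FILE *infile, unsigned char *y, unsigned char *u, unsigned char *v)
{
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.out_color_space = JCS_YCbCr; // stay in YCbCr
    cinfo.raw_data_out = TRUE;         // hand back raw iMCU rows, no conversion
    jpeg_start_decompress(&cinfo);

    const int w = cinfo.output_width;
    JSAMPROW yrows[16], urows[8], vrows[8];
    JSAMPARRAY planes[3] = { yrows, urows, vrows };

    while (cinfo.output_scanline < cinfo.output_height) {
        const int line = cinfo.output_scanline;
        for (int i = 0; i < 16; ++i)            // 16 luma rows per iMCU row
            yrows[i] = y + (line + i) * w;
        for (int i = 0; i < 8; ++i) {           // 8 chroma rows per iMCU row
            urows[i] = u + (line / 2 + i) * (w / 2);
            vrows[i] = v + (line / 2 + i) * (w / 2);
        }
        jpeg_read_raw_data(&cinfo, planes, 16);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
}

For NV12 you would still need to interleave the U and V planes into a single chroma plane afterwards.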
I find myself doing a lot of convertTo() calls in my C++ OpenCV code. It's somewhat confusing, and I'm often not sure that I need to convert the bit depth of an image until I get an error message.
For example, I have a Mat representing an image that is 16U. I then try to call matchTemplate() and get an assertion error that it expects 8U or 32F. Why shouldn't template matching work at 16U? I have similar issues when displaying the image as well (although bit depth restrictions make more sense in the case of displaying images). I find myself fiddling with convertTo() and scaling factors trying to get images to show up properly with imshow(), and wish I could do this more elegantly (maybe I'm spoiled by MATLAB's imagesc function).
Am I missing something fundamental about what OpenCV expects of bit depths? How do I deal with the OpenCV library functions' bit depth requirements in a cleaner way?
Assuming you are using the C interface:
cvMatchTemplate(const CvArr* image, const CvArr* templ, CvArr* result, int method)
image – Image where the search is running; should be 8-bit or 32-bit floating-point
Most of the functions in OpenCV will use either 8U or 32F images.
The most common image type is 8U (for both color and grey). This is the preferred format of OpenCV.
Other formats are supported on a more function specific basis.
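As an illustrative sketch (the scale factor is an assumption: it maps the full 16-bit range onto 0..255, so adjust it if your sensor uses fewer bits):

#include <opencv2/opencv.hpp>

// Sketch: bring a 16-bit image into the 8U range expected by
// matchTemplate(), imshow(), and similar functions.
cv::Mat to8U(const cv::Mat &img16u)
{
    cv::Mat img8u;
    img16u.convertTo(img8u, CV_8U, 255.0 / 65535.0);
    return img8u;
}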
I am trying to compress my .jpeg image in Photoshop.
What is the best way to do this?
I am now calculating the bpp by taking the image size in kB and computing how many bits that is. Then I take the image dimensions (width times height) to get the number of pixels in the image. After that I divide bits by pixels to find how many bits per pixel the image has.
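For example (numbers made up just to illustrate the arithmetic): a 1920x1080 image stored as a 300 kB JPEG contains 300 * 1024 * 8 = 2,457,600 bits for 1920 * 1080 = 2,073,600 pixels, which is about 1.2 bpp.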
But how can I change this number? My guess is to change how many kB the image is, but how do I do this?
Thanks for any help!!
Yes, you can achieve a higher compression ratio than 4 bits per pixel. Images with solid colors can have a rate as low as 0.13 bpp.
In fact, 4 bpp is quite poor compression: it's the same as an uncompressed 16-color image, or half of a 256-color image, which even GIF can manage. JPEG can look decent at 1-2 bpp.
In general, you cannot "compress" a JPEG image. All you can do is reduce the image quality further in order to achieve a lower bpp value. JPEG streams are always compressed, and they use a lossy compression method: the original image can never be exactly reconstructed from a JPEG file. The smaller the file, the more information you have lost.
A specific bpp value is not, and should never be, your target, especially with lossy compression. You should always look at your current image and decide whether it is still good enough or not.
If you still have the original image, try a lossless compression format, like ZIP-compressed or LZW-compressed TIFF, or compressed PNG. I'm sure Photoshop can handle these formats as well. Other software, like IrfanView (https://www.irfanview.com/) or XnView MP (https://www.xnview.com/en/xnviewmp/), will convert your images too.
If you want manual (i.e. full) control over your images, you should use command-line utilities like ImageMagick (https://imagemagick.org/) or NConvert (see the XnView MP link above).
If you only have the JPEG images, do not touch (edit and save) them; with every single save operation you lose more information. You should always work on copies of the files.
You should always keep your master image (the very picture you took with your phone or camera).
Of course, these rules of thumb do not answer your original question.