Is there any difference between data compression and image compression when compressing JPEG images?
(to 'standardize' your question: I assume that by "data compression" you mean general data file compression, as in .zip and .rar)
There is a "big" difference:
Data compression is "lossless" compression - wherein you cannot afford to lose or alter even a single bit. Whereas...
Image compression (as in JPEG) is "lossy" compression - wherein you can afford to lose a certain amount of detail, depending on your requirement of data size versus quality (fidelity of reproduction). i.e. your compressed version will not be an exact copy of your original image file; but in data compression, the compressed version will recreate the input file exactly.
JPEG transforms the RGB color space to the YCbCr color space. Most of the perceptually important detail is in the Y (luma) channel, so it is compressed less aggressively to avoid visible loss of information. The Cb and Cr channels carry the color information, to which the eye is less sensitive and which tends to be repetitive, so they can be subsampled and compressed much more heavily.
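To make the transform concrete, here is a small sketch (not from the answer itself) of the RGB-to-YCbCr conversion used by JPEG/JFIF, with the standard BT.601 full-range coefficients; the function name and struct are purely illustrative:

```cpp
#include <algorithm>
#include <cstdint>

struct YCbCr { uint8_t y, cb, cr; };

// RGB -> YCbCr as used by JPEG/JFIF (BT.601, full range).
// Y carries the brightness detail; Cb/Cr carry color differences.
YCbCr rgbToYCbCr(uint8_t r, uint8_t g, uint8_t b) {
    auto clamp8 = [](double v) {
        return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v + 0.5)));
    };
    return {
        clamp8( 0.299  * r + 0.587  * g + 0.114  * b),           // Y  (luma)
        clamp8(-0.1687 * r - 0.3313 * g + 0.5    * b + 128.0),   // Cb (blue-difference)
        clamp8( 0.5    * r - 0.4187 * g - 0.0813 * b + 128.0)    // Cr (red-difference)
    };
}
```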
http://nboddula.blogspot.com/2013/05/image-compression-how-jpeg-works.html
I am currently streaming my OpenGL rendered images through a websocket. I use the ZLib compression to compress the RGB data on the server side. On the client side I simply decompress and show the images.
My compression steps:
S3TC Texture compression from OpenGL
ZLib compression of step 1 with Qt framework
How can I compress even further? Is MPEG-4 encoding of a simple image an option or even possible? How can I reduce the image size even further?
S3TC is lossy, so if you want more compression, use another lossy approach, like JPEG, and crank up the compression until you don't like the result. Then back off.
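As a rough sketch of that idea using the Qt classes already mentioned in the question (QImage/QBuffer): the helper below, its name, and the assumption of tightly packed RGB888 data are mine, not part of the original setup.

```cpp
#include <QImage>
#include <QBuffer>
#include <QByteArray>

// Encode one RGB frame as JPEG in memory; lower `quality` = smaller payload.
// Assumes tightly packed 8-bit RGB data (width * 3 bytes per row).
QByteArray encodeFrameAsJpeg(const uchar* rgbData, int width, int height, int quality)
{
    QImage img(rgbData, width, height, width * 3, QImage::Format_RGB888);

    QByteArray bytes;
    QBuffer buffer(&bytes);
    buffer.open(QIODevice::WriteOnly);
    img.save(&buffer, "JPG", quality);   // quality: 0 (smallest) .. 100 (best)
    return bytes;                        // send this over the websocket instead of zlib'd raw RGB
}
```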
If images are similar to each other, use some standard one-pass video compression algorithm. If images are distinct, why wouldn't you just use JPEG or some other (more modern) image compression algorithm? In either case it should be quite easy to find suitable libraries for server and client side, no need to invent and develop your own codecs and formats.
I want to write a script to extract the PixelData of a DICOM file using C or C++, and I don't want to use external libraries like dicomsdl... Can anyone help me write an algorithm to extract and display the image?
Just extracting the bytes under the PixelData element is not enough to interpret the DICOM image properly. You will also need other attributes from the DICOM file, such as Rows, Columns, Bits Allocated, Bits Stored, High Bit, Photometric Interpretation, Samples Per Pixel and Number of Frames, just to interpret the raw uncompressed image data. The stored image data can also be in Little Endian or Big Endian byte order. In addition, the image data can be encapsulated/compressed (e.g. using compression algorithms such as JPEG, JPEG 2000, JPEG-LS, RLE etc.), and compressed streams are stored differently than uncompressed image data. Even the PixelData element can exist in multiple locations in a single DICOM file (e.g. one under the Icon Image Sequence (thumbnail) and one at the top level (the actual image)).
It can get more complicated when you need to account for Palette Color (segmented vs. un-segmented), Modality LUT, VOI LUT, etc. My recommendation is to use an existing DICOM SDK; there are many open-source and commercial SDKs available for different platforms and programming environments.
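That said, if you do end up interpreting uncompressed pixel data yourself, here is a minimal sketch of the bookkeeping those attributes imply. The struct is illustrative only (not a parser); the standard DICOM tag numbers are shown in comments.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative holder for the attributes listed above; a real implementation
// would read these from the DICOM data set.
struct PixelDescription {
    uint16_t rows;             // Rows              (0028,0010)
    uint16_t columns;          // Columns           (0028,0011)
    uint16_t bitsAllocated;    // Bits Allocated    (0028,0100), e.g. 8 or 16
    uint16_t samplesPerPixel;  // Samples Per Pixel (0028,0002), 1 = grayscale, 3 = RGB
    uint32_t numberOfFrames;   // Number of Frames  (0028,0008), 1 if absent
};

// Expected byte count of ONE uncompressed frame under PixelData (7FE0,0010).
std::size_t frameSizeBytes(const PixelDescription& d) {
    return static_cast<std::size_t>(d.rows) * d.columns *
           d.samplesPerPixel * (d.bitsAllocated / 8);
}
```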
I'm trying to write a TCP client/server application that transmits objects containing an OpenCV Mat. I'd like to serialize these objects using JSON. I found some libraries that help me do that (rapidjson), but of course they do not take images into account as object members.
What would you suggest to serialize in a JSON object a cv::Mat variable? How can I use RapidJson, for example, to achieve that?
imencode can be used to encode a viewable image (with CV_8UC1 or CV_8UC3 pixel format) into a std::vector<uchar> (see the OpenCV documentation for imencode).
The vector<uchar> will contain the same bytes as if OpenCV had saved the image into one of the supported image file formats (such as JPEG or PNG) and the file's bytes had then been loaded back into a byte array.
imencode can be found in highgui module when using OpenCV 2.x, or imgcodecs module when using OpenCV 3.x.
With the compressed data in a vector<uchar>, you can use Base64 encoding to format it into a string, which can then be added as a JSON value inside a JSON object.
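A minimal end-to-end sketch of that approach, assuming OpenCV 3.x+, RapidJSON, a hypothetical input file "input.png", and a hand-rolled Base64 helper (any Base64 library would do just as well):

```cpp
#include <opencv2/opencv.hpp>
#include <rapidjson/document.h>
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>
#include <iostream>
#include <string>
#include <vector>

// Minimal Base64 encoder (no line breaks), enough for embedding bytes in JSON.
static std::string base64Encode(const std::vector<uchar>& data) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    out.reserve(((data.size() + 2) / 3) * 4);
    for (size_t i = 0; i < data.size(); i += 3) {
        unsigned n = data[i] << 16;
        if (i + 1 < data.size()) n |= data[i + 1] << 8;
        if (i + 2 < data.size()) n |= data[i + 2];
        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < data.size()) ? tbl[(n >> 6) & 63] : '=';
        out += (i + 2 < data.size()) ? tbl[n & 63] : '=';
    }
    return out;
}

int main() {
    cv::Mat img = cv::imread("input.png");        // hypothetical input image

    // Compress the Mat into PNG bytes in memory (lossless); ".jpg" would be lossy.
    std::vector<uchar> buf;
    cv::imencode(".png", img, buf);

    // Base64 so the compressed bytes fit inside a JSON string value.
    std::string b64 = base64Encode(buf);

    // Build the JSON object with RapidJSON.
    rapidjson::Document doc;
    doc.SetObject();
    auto& alloc = doc.GetAllocator();
    doc.AddMember("width", img.cols, alloc);
    doc.AddMember("height", img.rows, alloc);
    rapidjson::Value imageVal;
    imageVal.SetString(b64.c_str(), static_cast<rapidjson::SizeType>(b64.size()), alloc);
    doc.AddMember("image", imageVal, alloc);

    rapidjson::StringBuffer sb;
    rapidjson::Writer<rapidjson::StringBuffer> writer(sb);
    doc.Accept(writer);
    std::cout << sb.GetString() << std::endl;      // UTF-8 JSON, ready to send
}
```

On the receiving side you would Base64-decode the string back into a vector<uchar> and call cv::imdecode to reconstruct the cv::Mat.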
When using JSON to transmit large amounts of data, consider very carefully the character encoding that the JSON library is instructed to emit. Normally, if a large portion of the data is going to be Base64, you will want to make sure the JSON is emitted as UTF-8.
If you have the option of sending in binary (which requires an "out-of-band" design in the web service, something not always doable), it should be seriously considered.
When considering different serialization choices for images, these things should be taken into account:
Typical image sizes (total number of pixels)
Size efficiency is less of a concern if images are small.
Pixel format (number of channels and precision)
Most common image file formats will only allow 8-bit grayscale and 24-bit RGB pixel data. Trying to save higher-precision pixel data into these image formats will result in partial loss of precision.
Available transmission bandwidth (if it is scarce enough to be a concern). With less available bandwidth, compression becomes more important.
Compression options.
Typical (photographic or synthetic) images are highly compressible; intuitively, an image that is truly "dense" (noise-like, with little redundancy) would also be too hard for human eyes to comprehend.
Compression can be lossless or lossy.
Choice of compression may depend on the statistical characteristics of the pixel values (image content).
As mentioned above, if compression is performed by encoding into some image formats, you have to make sure the image format can satisfy the pixel value precision requirements of your application.
If no existing image format meets your requirements and you still want to perform lossless compression, consider using the zlib API that is integrated into the OpenCV Core module.
If you are good at image processing and data compression theory, you may be able to devise an application-specific compression method based on your own needs.
Remember that reducing the image resolution can be a powerful (and super-lossy) way of reducing the transmission file size. Consider carefully what minimum image resolution is actually needed for your application (see the sketch after this list).
Other considerations
Binary or text
Endianness
Availability of highgui, imgcodecs or an image decoder for the chosen image format on the receiving end.
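As referenced in the list above, here is a short OpenCV sketch of the main size-reduction knobs (assuming OpenCV 3.x or later and a hypothetical input "frame.png"); the scale factor and quality values are arbitrary examples:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("frame.png");    // hypothetical input image

    // Knob 1: reduce resolution (very lossy, but very effective).
    cv::Mat small;
    cv::resize(img, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);

    // Knob 2: lossy JPEG encoding with an explicit quality setting.
    std::vector<uchar> jpegBytes;
    std::vector<int> jpegParams = { cv::IMWRITE_JPEG_QUALITY, 70 };   // 0..100
    cv::imencode(".jpg", small, jpegBytes, jpegParams);

    // Knob 3: lossless PNG with maximum zlib effort (slower, still lossless).
    std::vector<uchar> pngBytes;
    std::vector<int> pngParams = { cv::IMWRITE_PNG_COMPRESSION, 9 };  // 0..9
    cv::imencode(".png", small, pngBytes, pngParams);
}
```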
Information source: just did this a few months ago.
I am trying to compress my .jpeg image in Photoshop.
What is the best way to do this?
I am currently calculating the bpp by taking the image size in kB and working out how many bits that is. Then I take the image dimensions (width * height) to get the number of pixels in the image. After that I divide bits by pixels to find how many bits per pixel the image has.
But how can I change this number? My guess is to change how many kB the image is, but how do I do this?
Thanks for any help!!
Yes, you can achieve a higher compression ratio than 4 bits per pixel. Images with solid color can have a rate as low as 0.13 bpp.
In fact 4 bpp is quite poor compression — it's the same as an uncompressed 16-color image, or half of a 256-color image, which even GIF can manage. JPEG can look decent at 1-2 bpp.
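As a rough worked example with hypothetical numbers, using the calculation from the question: a 3000x2000 JPEG saved at about 1.5 MB comes out to (1.5 * 1024 * 1024 * 8) / (3000 * 2000) ≈ 2.1 bpp, while the same image at 4 bpp would weigh roughly 2.9 MB.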
In general, you cannot "compress" a JPEG image further. All you can do is reduce the image quality in order to achieve a lower bpp value. JPEG streams are always compressed, and they use a lossy compression method; it means the original image can never be exactly reconstructed from a JPEG file. The smaller the file, the more information you have lost.
A specific "bpp value" is not, and should never be, your target, especially with lossy compression. You should always look at your current image and decide whether it is still good enough or not.
If you still have the original image, try a lossless compression format, like ZIP- or LZW-compressed TIFF, or compressed PNG. I'm sure Photoshop can handle these formats as well. Other software, such as IrfanView (https://www.irfanview.com/) or XnView MP (https://www.xnview.com/en/xnviewmp/), will convert your images too.
If you want manual (i.e. full) control over your images, you should use command-line utilities like ImageMagick (https://imagemagick.org/) or NConvert (see the XnView MP link above).
If you only have the JPEG images, do not touch (edit & save) them: with every single save operation you lose more information. You should always work on file copies.
You should always keep your master image (the very picture you took with your phone or your camera).
Of course, these rules of thumb do not answer your original question.
I'm saving a large number of small png files for use in a game on a phone, so space is at a premium.
I'm trying to figure out the logic behind the file sizes so I can save things most efficiently, but even after using pngcrush the sizes are totally inconsistent.
I saved a 1x1 image and it takes 3kb. I have another 23x21 image which takes only 2kb. I have two images which are almost the same size, but one takes 6kb and the other takes 13kb. I doubled the image height and copied one image into the empty space of the other and saved that. The combined image is only 11kb!
Why is a 1x1 image larger than a 23x21 image? Why can I combine a 13kb image and a 6kb image and get an 11kb image?
Here are the images I'm talking about (there's a 1x1-pixel image between the first and second images; it's difficult to see, so I'll just give the URL: http://g42.org/temp/png/1x1.png):
http://g42.org/temp/png/hat.png
http://g42.org/temp/png/1x1.png
http://g42.org/temp/png/helmet1.png
http://g42.org/temp/png/helmet2.png
http://g42.org/temp/png/helmet1_2.png
It's not a compression thing; the problem with the 1x1 image is that it has metadata (added by Photoshop, it seems): a color profile (iCCP chunk). If you look inside the binary, it's the data between the strings "iCCP" and "IDAT"; it could be removed and you'd get a 69-byte file.
If you reopen and save the file in most image viewers (e.g. XnView), or use pngcrush, you can strip that chunk. See it here: http://i.stack.imgur.com/fmOdA.png
And regarding the helmet images: besides other informational chunks (ImageReady adds some informational text, as you can see), the difference is due to different formats: the two-helmet file is a paletted image (8 bits per pixel), while the single helmet is RGB with alpha (32 bits per pixel).
PNG compression uses the same algorithm as zlib (deflate) and is highly sensitive to the data being compressed, so you won't see a consistent relationship between image size and file size. In the case of the combined image, it is still bigger than the smaller of the two originals, and given the similarity of the two halves, the compressor was probably able to reuse a lot of the Huffman tree. I don't know enough about the algorithm to say for certain how it ended up smaller than the other half.
As long as you are not seeing oddities like the 1x1 image, which you seem to have figured out in the comments, I don't think this will make a lot of sense without extensive study of image compression.
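As a tiny illustration of that sensitivity to the input data, here is a sketch using zlib's compress() directly (link with -lz; the buffer contents are arbitrary): the same number of input bytes produces wildly different output sizes depending on how redundant the data is.

```cpp
#include <zlib.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Compress a buffer with zlib and return the compressed size in bytes.
static uLongf deflatedSize(const std::vector<unsigned char>& src) {
    uLongf destLen = compressBound(static_cast<uLong>(src.size()));
    std::vector<unsigned char> dest(destLen);
    compress(dest.data(), &destLen, src.data(), static_cast<uLong>(src.size()));
    return destLen;
}

int main() {
    const size_t n = 100000;
    std::vector<unsigned char> flat(n, 0x7F);                      // one repeated value
    std::vector<unsigned char> noisy(n);
    for (size_t i = 0; i < n; ++i) noisy[i] = std::rand() & 0xFF;  // noise-like data

    // Same input size, very different compressed sizes: that is why
    // similar-looking PNGs can end up with very different file sizes.
    std::printf("flat : %zu -> %lu bytes\n", n, static_cast<unsigned long>(deflatedSize(flat)));
    std::printf("noisy: %zu -> %lu bytes\n", n, static_cast<unsigned long>(deflatedSize(noisy)));
}
```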
There is a great utility called pngcrush
http://pmt.sourceforge.net/pngcrush/
Compressing to PNG is a rather difficult task - there are lots of assumptions and strategies to try - do we create a palette, or are we better off without it?
pngcrush essentially brute-forces 100+ different compression strategies, while at the same time trimming useless tags and sections.
PNG has several sub-formats: 24-bit with or without alpha, 8-bit (which can include alpha), grayscale, etc., which use different numbers of bytes per pixel and have different "compressibility".
Plus PNG supports several compression tricks (filters and gzip settings) which affect how well image data is compressed.
On top of that PNG can contain metadata, which sometimes can be pretty large, like some embedded color profiles.
ImageAlpha converts images to the most space-efficient PNG8+alpha variant.
ImageOptim removes junk metadata and finds best compression parameters.
With a combination of those two, your images can be reduced by 30-50%.