Reduce image bit depth in C++ - c++

How can I reduce the number of bits from 24 to a number between 0 and 8, and distribute those bits among the three colors Red, Green and Blue?
Any ideas?

This is called "Color Quantization". You have 16.777.216 colors and you want to map them to a smaller number (2 to 256).
Step 1: choose the colors you want to use. First decide how many, then which colors. You also need to decide whether the colors are fixed for all images or chosen per image (in which case you will need to ship a palette of colors with every image).
Step 2: substitute the colors of your image with those in the selection.
If the colors are fixed and you want to stay very simple you can use 1 bit per channel (8 colors in total) or 2 bits per channel (64 colors in total).
Slightly more complex: use the values 0, 51, 102, 153, 204, 255 for each channel, in every possible combination, giving 216 different colors. Make a table which associates every color combination with an index; that index fits in 8 bits (with some to spare). This is called the "web safe palette" (this takes me back to the late 1990s). Now you are ready for substitution: for every pixel, each quantized channel value is x*6//256*51 (// is integer division), as in the sketch below.
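A minimal C++ sketch of that substitution, assuming 8-bit input channels; the function names are illustrative:

#include <cstdint>

// Quantize one 8-bit channel to the six web-safe levels {0, 51, 102, 153, 204, 255}.
// x * 6 / 256 maps 0..255 to an index 0..5; multiplying by 51 restores the level.
uint8_t quantizeChannel(uint8_t x)
{
    return static_cast<uint8_t>(x * 6 / 256 * 51);
}

// Combine the three per-channel indices (0..5 each) into one palette
// index 0..215, which fits comfortably in a single byte.
uint8_t webSafeIndex(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint8_t>((r * 6 / 256) * 36 + (g * 6 / 256) * 6 + (b * 6 / 256));
}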
If you want a better-looking palette, look up the median cut algorithm.

Keep only the most significant bits of the pixel's red channel. Do likewise for the green and blue channels. Then use C++'s bit manipulation operations to move those bit values into a single byte. There are multiple ways of doing so. For example, do an Internet search for "rgb332" for one example (where you keep 3 red bits, 3 green bits, and 2 blue bits).
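For instance, here is a minimal RGB332 packing sketch; the function names are illustrative:

#include <cstdint>

// Pack 8-bit R, G, B into one RGB332 byte: keep the top 3 bits of red,
// the top 3 bits of green, and the top 2 bits of blue.
uint8_t packRGB332(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint8_t>((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

// Approximate inverse: expand an RGB332 byte back to 8-bit channels.
void unpackRGB332(uint8_t p, uint8_t& r, uint8_t& g, uint8_t& b)
{
    r = p & 0xE0;         // bits 7..5
    g = (p & 0x1C) << 3;  // bits 4..2
    b = (p & 0x03) << 6;  // bits 1..0
}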

Related

Converting 12 bit color values to 8 bit color values C++

I'm attempting to convert 12-bit RGGB color values into 8-bit RGGB color values, but my current method gives strange results.
Logically, I thought that simply dividing the 12-bit values down to 8-bit values would work and be pretty simple:
// raw_color_array contains R,G1,G2,B in a Bayer pattern, with each element
// ranging from 0 to 4095
for (int i = 0; i < array_size; i++)
{
    raw_color_array[i] /= 16; // 4095 becomes 255 and so on
}
However, in practice this actually does not work. Given, for example, a small image with water and a piece of ice in it, you can see what actually happens in the conversion (rightmost image).
Why does this happen, and how can I get the same (or close to the same) image as on the left, but as 8-bit values instead? Thanks!
EDIT: going off of MSalters' answer, I get a better quality image but the colors are still drastically skewed. What resources can I look into for converting 12-bit data to 8-bit data without a steep loss in quality?
It appears that your raw 12-bit data isn't on a linear scale. That is quite common for images. For a non-linear scale, you can't use a linear transformation like dividing by 16.
A non-linear transform like sqrt(x*16) would also give you an 8-bit value. So would std::pow(x, 8.0/12.0).
A known problem with low-gradient images is that you get banding. If your image has an area where the original value varies from, say, 100 to 200, the 12-to-8 bit reduction will shrink that to fewer than 100 different values. You get rounding, and with naive (local) rounding you get bands. Linear or non-linear, there will then be some inputs x that all map to y, and some that map to y+1. This can be mitigated by doing the transformation in floating point, and then adding a random value between -1.0 and +1.0 before rounding. This effectively breaks up the band structure.
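A minimal sketch of that idea, assuming 12-bit input values held in ints; the 8.0/12.0 exponent is one plausible non-linear choice, not a calibrated curve:

#include <cmath>
#include <cstdint>
#include <random>

// Map a 12-bit value (0..4095) to 8 bits non-linearly, adding +/-1.0 of
// random noise before rounding to break up banding in low-gradient areas.
uint8_t to8BitDithered(int x12, std::mt19937& rng)
{
    std::uniform_real_distribution<double> noise(-1.0, 1.0);
    double y = std::pow(static_cast<double>(x12), 8.0 / 12.0); // roughly 0..256
    y += noise(rng);
    if (y < 0.0)   y = 0.0;
    if (y > 255.0) y = 255.0;
    return static_cast<uint8_t>(std::lround(y));
}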
After you clarified that this 12-bit data is only for one color, here is my simple answer:
Since you want to convert its value to an 8-bit equivalent, you necessarily lose some of the data (4 bits). That is why you are not getting the same output.
After clarification:
If you want to retain the actual colour values, apply de-mosaicking to the 12-bit image and then scale the resulting data to 8 bits. That way the colour loss due to de-mosaicking will be smaller than with the previous approach.
You say that your 12 bits represent 2^12 levels of one colour. That is incorrect: there are reds, greens and blues in your image. Look at the histogram. I made this with ImageMagick at the command line:
convert cells.jpg histogram:png:h.png
If you want 8 bits per pixel, rather than trying to blindly/statically apportion 3 bits to green, 2 bits to red and 3 bits to blue, you would probably be better off going with an 8-bit palette, so you can have 250+ colours of all variations rather than restricting yourself to just 8 blue shades, 4 reds and 8 greens. So, like this:
convert cells.jpg -colors 254 PNG8:result.png
Here is the result of that beside the original:
The process above is called "quantisation" and if you want to implement it in C/C++, there is a writeup here.

How can I store each pixel in an image as a 16 bit index into a colortable?

I have a 2D array of float values:
float values[1024][1024];
that I want to store as an image.
The values are in the range: [-range,+range].
I want to use a colortable that goes from red(-range) to white(0) to black(+range).
So far I have been storing each pixel as a 32 bit RGBA using the BMP file format. The total memory for storing my array is then 1024*1024*4 bytes = 4MB.
This seems very wasteful, knowing that my colortable is "1-dimensional" whereas 32-bit RGBA is "4-dimensional".
To see what I mean, let's assume that my colortable went from black(-range) to blue(+range).
In this case the only component that varies is clearly the B; all the others are fixed.
So I am only getting 8 bits of precision whereas I am "paying" for 32 :-(.
I am therefore looking for a "palette" based file format.
Ideally I would like each pixel to be a 16 bit index (unsigned short int) into a "palette" consisting of 2^16 RGBA values.
The total memory used for storing my array in this case would be: 1024*1024*2 bytes + 2^16*4bytes = 2.25 MB.
So I would get twice as good precision for almost half the "price"!
Which image formats support this?
At the moment I am using Qt's QImage to write the array to file as an image. QImage has an internal 8-bit indexed ("palette") format; I would like a 16-bit one. Also, I could not tell from Qt's documentation which file formats support the 8-bit indexed internal format.
Store it as a 16-bit greyscale PNG and apply the colour table yourself.
You don't say why your image can be decomposed into 2^16 colours, but using your knowledge of this special image you could arrange the palette so that indices that are near each other have similar colours and are therefore easier to compress.
"I want to use a colortable that goes from red(-range) to white(0) to black(+range)."
Okay, so you've got FF,00,00 (red) to FF,FF,FF (white) to 00,00,00 (black). In 24-bit RGB, that looks to me like 256 values from red to white and then another 256 from white to black. So you don't need a palette size of 2^16 (65,536); you need 2^9 (512).
If you're willing to compromise and use a palette size of 2^8 then the GIF format could work. That's still relatively fine resolution: 128 shades of red on the negative side, plus 128 shades of grey on the positive. Each of a GIF's 256 palette entries can be an RGB value.
PNG is another candidate for palette-based color. You have more flexibility with PNG, including RGBA if you need an alpha channel.
You mentioned RGBA in your question but the use of the alpha channel was not explained.
So, independent of file format, if you can use a 256-entry palette then you will have a very well compressed image. Back to your mapping requirement (i.e. mapping floats [-range -> 0.0 -> +range] to [red -> white -> black]), here is a 256-entry palette that covers the range red-white-black you wanted:
float    entry#   hex    color   RGB
------   ------   ---    -----   --------
-range      0      00    red     FF,00,00
            1      01            FF,02,02
            2      02            FF,04,04
          ...     ...
          127      7F            FF,FD,FD
 0.0      128      80    white   FF,FF,FF
          129      81            FD,FD,FD
          ...     ...
          253      FD            04,04,04
          254      FE            02,02,02
+range    255      FF    black   00,00,00
If you double the size of the color table to 9 bits (512 entries) then you can make the increments between RGB entries finer: steps of 1 instead of 2. Such a 9-bit palette would give you full single-channel resolution in RGB on both the negative and positive sides of the range. It's not clear that allocating 16 bits of palette would really store any more visual information, given the mapping you want to do. I hope I understand your question and maybe this is helpful.
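A sketch of how you might generate that 256-entry palette and map floats onto it; the struct and function names are illustrative:

#include <array>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Build the 256-entry palette: indices 0..128 interpolate red (FF,00,00)
// to white (FF,FF,FF); indices 128..255 interpolate white down to black.
std::array<RGB, 256> makePalette()
{
    std::array<RGB, 256> pal{};
    for (int i = 0; i <= 128; ++i) {
        uint8_t v = static_cast<uint8_t>(i * 255 / 128);
        pal[i] = {255, v, v};                 // red -> white
    }
    for (int i = 129; i < 256; ++i) {
        uint8_t v = static_cast<uint8_t>((255 - i) * 255 / 127);
        pal[i] = {v, v, v};                   // white -> black
    }
    return pal;
}

// Map a float in [-range, +range] to a palette index 0..255.
uint8_t toIndex(float x, float range)
{
    float t = (x + range) / (2.0f * range);   // normalize to 0.0..1.0
    int idx = static_cast<int>(t * 255.0f + 0.5f);
    if (idx < 0)   idx = 0;
    if (idx > 255) idx = 255;
    return static_cast<uint8_t>(idx);
}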
The PNG format supports paletted images up to 8 bits, but it also supports greyscale images up to 16 bits. However, 16-bit modes are less used, and software support may be lacking; you should test your tools first.
But you could also test with plain 24-bit RGB truecolor PNG images. They are compressed and should produce better results than BMP in any case.

How to transform rgb (three bytes) to one byte for bitmap format?

I have data for every pixel: one byte red, one byte green, one byte blue. I need to pack this into an 8-bit bitmap, so I have only one byte per pixel. How can I transform RGB (three bytes) into one byte for the bitmap format?
(I am using C++ and cannot use any external libraries.)
I think you misunderstood how a bitmap is structured. You do not need to somehow pack 3 bytes into one. That is not possible without throwing away information (as special image formats like GL_R3_G3_B2 do).
The BMP file format wiki page shows the detailed BMP format: a header, followed by data. Depending on what you set in the header, it is possible to form a BMP image containing RGB data components, where each component is one byte.
First you need to decide how many bits you want to allocate for each color:
3 bits per color would overflow a byte (9 bits);
2 bits per color underfills it (6 bits).
In a three-byte RGB bitmap you have one byte to represent each color's intensity, where 0 is the minimum and 255 the maximum intensity. When you convert it to a 1-byte bitmap (assuming you choose 2 bits per color), the transform for each channel is:
red_value / 64
i.e. you will get only 4 shades out of a spectrum of 256 shades per color. A sketch of the packing follows.
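A minimal sketch of that 2-bits-per-channel packing; the layout (00RRGGBB, top two bits unused) is one arbitrary choice:

#include <cstdint>

// Reduce each 8-bit channel to 2 bits (value / 64 gives 0..3) and pack
// the three results into one byte as 00RRGGBB.
uint8_t packRGB222(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint8_t>((r / 64) << 4 | (g / 64) << 2 | (b / 64));
}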
First you have to produce a 256-color palette that best fits your source image.
Then you need to dither the image using the palette you've generated.
Both problems have many well-known solutions. However, it's impossible to produce a high-quality result completely automatically: different approaches work best for different source images. For example, Photoshop exposes a UI for tuning the parameters of the process.

Pointers on solving this Image Processing challenge?

The 2nd problem in IOI 2013 states:
You have an Art History exam approaching, but you have been paying
more attention to informatics at school than to your art classes! You
will need to write a program to take the exam for you.
The exam will consist of several paintings. Each painting is an example of one of
four distinctive styles, numbered 1, 2, 3 and 4. Style 1 contains
neoplastic modern art. Style 2 contains impressionist landscapes.
Style 3 contains expressionist action paintings. Style 4 contains
colour field paintings.
Your task is, given a digital image of a painting, to determine which style the painting belongs to.
The image will be given as an H×W grid of pixels. The rows of
the image are numbered 0, …, (H − 1) from top to bottom, and the
columns are numbered 0, …, (W − 1) from left to right. The pixels are
described using two-dimensional arrays R, G and B, which give the
amount of red, green and blue respectively in each pixel of the image.
These amounts range from 0 (no red, green or blue) to 255 (the maximum
amount of red, green or blue).
Implementation
You should submit a file
that implements the function style(), as follows:
int style(int H, int W, int R[500][500], int G[500][500], int B[500][500]);
This function should determine the style of the image. Parameters are:
H: The number of rows of pixels in the image.
W: The number of columns of pixels in the image.
R: A two-dimensional array of size H×W, giving the amount of red in each pixel of the image.
G: A two-dimensional array of size H×W, giving the amount of green in each pixel of the image.
B: A two-dimensional array of size H×W, giving the amount of blue in each pixel of the image.
Example pictures are in the problem PDF
I do not want a ready-made program. A hint or two to get me started would be nice, as I am clueless about how this might be solved.
Since you are provided the image data in RGB format, first prepare a copy of the same image data in YUV. This is essential, as some of the image features are easily identified patterns in the luma (Y) and chroma (U,V) maps.
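A sketch of the luma conversion using the common BT.601 weights; treat the exact coefficients as a conventional choice rather than something the problem mandates:

// Compute a luma (Y) map from the R, G, B planes; U and V can be
// derived similarly if needed. Dimensions follow the problem statement.
void computeLuma(int H, int W,
                 int R[500][500], int G[500][500], int B[500][500],
                 double Y[500][500])
{
    for (int i = 0; i < H; ++i)
        for (int j = 0; j < W; ++j)
            Y[i][j] = 0.299 * R[i][j] + 0.587 * G[i][j] + 0.114 * B[i][j];
}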
Based on the samples provided, here are some of the salient features of each "style" of art :
Style1 - Neoplastic modern art
Zero graininess - Check for large areas with uniform Luma(Y)
Black pixels at edges of the areas(transition between different chroma).
Style2 - Impressionist landscapes
High graininess - Check for high entropy (salt-n-pepper-noise like) patterns in Luma(Y).
Predominantly green - high values in the green channel.
Greenavg >> Redavg
Greenavg >> Blueavg
Style3 - Expressionist action paintings
High graininess - Check for high entropy (salt-n-pepper-noise like) patterns in Luma(Y).
NOT green.
Style4 - Color field paintings
Zero graininess - Check for large areas with uniform Luma(Y)
NO black(or near black) pixels at the transition between different chroma.
As long as the input image belongs to one of these classes you should have no trouble in classification by running the image data through functions that are implemented to identify the above features.
Basically it boils down to the following code flow (a skeleton in code follows the outline):
Image has uniform luma?
  (If Yes) Image has black pixels at chroma transitions?
    (If Yes) Style1
    (If No)  Style4
  (If No) Image is green-ish?
    (If Yes) Style2
    (If No)  Style3
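A skeleton of that flow in the required style() signature; the three predicate helpers are hypothetical and would be implemented from the features listed above (luma variance, black transition pixels, channel averages):

// Hypothetical feature predicates, to be implemented from the features above.
bool hasUniformLuma(int H, int W, int R[500][500], int G[500][500], int B[500][500]);
bool hasBlackChromaTransitions(int H, int W, int R[500][500], int G[500][500], int B[500][500]);
bool isGreenish(int H, int W, int R[500][500], int G[500][500], int B[500][500]);

int style(int H, int W, int R[500][500], int G[500][500], int B[500][500])
{
    if (hasUniformLuma(H, W, R, G, B))
        return hasBlackChromaTransitions(H, W, R, G, B) ? 1 : 4;  // Style1 or Style4
    return isGreenish(H, W, R, G, B) ? 2 : 3;                     // Style2 or Style3
}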
Maybe you can do a first approach using colors and shapes... In neoplastic modern art it is likely that there will be only a small number of colors, occupying geometrical areas, as in the colour field paintings.
This might give you a way to differentiate styles 1 and 4 from styles 2 and 3.
In styles 1 and 4 you have large areas with the same color, but in style 4 the color is rarely a solid color: rather, brush strokes in shades of the color.
Anyway, you should look into the specialities of each style, the usual colors and methods, and then try to make your function "see" them.

Compressing BMP methods

I am working on a project to losslessly compress a specific style of BMP images that look like this
I have thought about doing pattern recognition to find repetitive blocks of N x N pixels, but I feel like it won't be fast enough in execution time.
Any suggestions?
EDIT: I have access to the dataset that created these images too, I just use the image to visualize my data.
Optical illusions make it hard to tell for sure, but are the colors only black/blue/red/green? If so, the most straightforward compression would be to simply make more efficient use of pixels. Pixels use a fixed amount of space regardless of what color they are, so chances are you are using 12x as many pixels as you really need, since a pixel can represent far more colors than just those four.
A simple way to do that would be to label the pixels with the following base-4 digits:
Black = 0
Red = 1
Green = 2
Blue = 3
Example:
The first four pixels of the image appear to be Blue-Green-Blue-Blue. This is 3233 in base 4, which is simply EF in base 16 or 239 in base 10. This is enough to define what the red channel of the new pixel should be. The next 4 pixels define the green channel and the final 4 define the blue channel, thus turning 12 pixels into a single pixel.
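A sketch of that packing, assuming the image has already been reduced to per-pixel indices 0..3 (Black=0, Red=1, Green=2, Blue=3); names are illustrative:

#include <cstdint>
#include <vector>

// Pack groups of four base-4 color indices into single bytes,
// 2 bits per source pixel, most significant digit first.
std::vector<uint8_t> packIndices(const std::vector<uint8_t>& indices)
{
    std::vector<uint8_t> packed;
    packed.reserve((indices.size() + 3) / 4);
    for (std::size_t i = 0; i < indices.size(); i += 4) {
        uint8_t byte = 0;
        for (std::size_t j = 0; j < 4 && i + j < indices.size(); ++j)
            byte = static_cast<uint8_t>((byte << 2) | (indices[i + j] & 0x3));
        packed.push_back(byte);
    }
    return packed;
}

// Example: {3,2,3,3} (Blue-Green-Blue-Blue) packs to 0xEF = 239.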
Beyond that, you'll probably want to look into more conventional compression software.