Why does converting RGB888 to RGB565 many times cause loss of colour? - c++

If I do the following using Qt:
Load a bitmap in QImage::Format_RGB32.
Convert its pixels to RGB565 (no Qt format for this so I must do it "by hand").
Create a new bitmap the same size as the one loaded in step 1.
Convert the RGB565 buffer pixels back to RGB888 into the image created in step 3.
The image created in step 4 looks like the image from step 1; however, they're not exactly the same if you compare the RGB values.
Repeating steps 2 to 4 results in the final image losing colour - it seems to become darker and darker.
Here are my conversion functions:
QRgb RGB565ToRGB888( unsigned short int aTextel )
{
unsigned char r = (((aTextel)&0x01F) <<3);
unsigned char g = (((aTextel)&0x03E0) >>2);
unsigned char b = (((aTextel)&0x7C00 )>>7);
return qRgba( r, g, b, 255 );
}
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
int red = ( aPixel >> 16) & 0xFF;
int green = ( aPixel >> 8 ) & 0xFF;
int blue = aPixel & 0xFF;
unsigned short B = (blue >> 3) & 0x001F;
unsigned short G = ((green >> 2) < 5) & 0x07E0;
unsigned short R = ((red >> 3) < 11) & 0xF800;
return (unsigned short int) (R | G | B);
}
An example from my test image which doesn't convert properly is 4278192128 (0xFF000800), which gets converted back from RGB565 to RGB888 as 4278190080 (0xFF000000).
Edit: I should also mention that the original source data is RGB565 (which my test RGB888 image was created from). I am only converting to RGB888 for display purposes but would like to convert back to RGB565 afterwards rather than keeping two copies of the data.

Beforehand I want to mention that the component order in your two conversion functions isn't the same. In the 565 -> 888 conversion, you assume that the red component occupies the low-order bits (0x001F), but when encoding the 5 bits of the red component, you put them into the high-order bits (0xF800). Assuming that you want a component order analogous to 0xAARRGGBB (the binary representation in RGB565 is then 0bRRRRRGGGGGGBBBBB), you need to swap the variable names in your RGB565ToRGB888 method. I fixed this in the code below.
Your RGB565 to RGB888 conversion is buggy. For the green channel, you extract only 5 bits instead of 6 (the mask 0x03E0 drops green's most significant bit), which gives you only 7 bits instead of 8 bits in the result. For the blue channel you then read the wrong bits, which is a knock-on error. This should fix it:
QRgb RGB565ToRGB888( unsigned short int aTextel )
{
// changed order of variable names
unsigned char b = (((aTextel)&0x001F) << 3);
unsigned char g = (((aTextel)&0x07E0) >> 3); // Fixed: shift >> 5 and << 2
unsigned char r = (((aTextel)&0xF800) >> 8); // shift >> 11 and << 3
return qRgba( r, g, b, 255 ); // qRgb() takes three arguments; qRgba() takes the alpha
}
In the other function, you accidentally wrote less-than operators instead of left-shift operators. This should fix it:
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
int red = ( aPixel >> 16) & 0xFF; // why not qRed(aPixel) etc. ?
int green = ( aPixel >> 8 ) & 0xFF;
int blue = aPixel & 0xFF;
unsigned short B = (blue >> 3) & 0x001F;
unsigned short G = ((green >> 2) << 5) & 0x07E0; // not <
unsigned short R = ((red >> 3) << 11) & 0xF800; // not <
return (unsigned short int) (R | G | B);
}
Note that you can use the already existing (inline) functions qRed, qGreen, qBlue for component extraction analogous to qRgb for color construction from components.
Also note that the final bit masks in RGB888ToRGB565 are optional, as the component values are in the 8-bit-range and you cropped them by first right-, then left-shifting the values.

Related

C++ Gdiplus Monochrome Pixel Values

I have a monochrome bitmap. I am using it for collision detection.
// creates the monochrome bitmap
bmpTest = new Bitmap(200, 200, PixelFormat1bppIndexed);
// color and get the pixel color at point (x, y)
Color color;
bmpTest->GetPixel(110,110,&color);
// the only method I know of that I can get a 0 or 1 from.
int b = color.GetB();
// b is 0 when the color is black and 1 when it is not black as desired
Is there a faster way of doing this? I can only read it through the GetA()/GetR()/GetG()/GetB() accessors. I am using GetB() because any of the ARGB values comes out as 0 or 1 correctly, but it seems messy to me.
Is there a way I can read a byte from a monochrome bitmap returning either a 0 or 1? (is the question)
You should use LockBits() method for faster access:
BitmapData bitmapData;
Gdiplus::Rect rect( 0, 0, pBitmap->GetWidth(), pBitmap->GetHeight() );
pBitmap->LockBits( &rect, ImageLockModeRead, PixelFormat32bppARGB, &bitmapData );
unsigned int *pRawBitmapOrig = (unsigned int*)bitmapData.Scan0; // for easy access and indexing
unsigned int curColor = pRawBitmapOrig[curY * bitmapData.Stride / 4 + curX];
int b = curColor & 0xff;
int g = (curColor & 0xff00) >> 8;
int r = (curColor & 0xff0000) >> 16;
int a = (curColor & 0xff000000) >> 24;
pBitmap->UnlockBits( &bitmapData ); // unlock when done

ARGB and kCGImageAlphaPremultipliedFirst format. Why are the pixel colors stored as (255-data)?

I create an image using
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
// more code - not relevant - removed for debugging
image = UIGraphicsGetImageFromCurrentImageContext(); // the image is now ARGB
UIGraphicsEndImageContext();
Then I try to find the color of a pixel (using the code by Minas Petterson from here: Get Pixel color of UIImage).
But since the image is now in ARGB format I had to modify the code to this:
alpha = data[pixelInfo];
red = data[(pixelInfo + 1)];
green = data[pixelInfo + 2];
blue = data[pixelInfo + 3];
However this did not work.
The problem is that (for example) a red pixel, that in RGBA would be represented as 1001 (actually 255 0 0 255, but for simplicity I use 0 to 1 values), in the image is represented as 0011 and not (as I thought) 1100.
Any ideas why? Am I doing something wrong?
PS. The code I have to use looks like it has to be this:
alpha = 255-data[pixelInfo];
red = 255-data[(pixelInfo + 1)];
green = 255-data[pixelInfo + 2];
blue = 255-data[pixelInfo + 3];
There are some problems that arise here:
"In some contexts, primarily OpenGL, the term "RGBA" actually means the colors are stored in memory such that R is at the lowest address, G after it, B after that, and A last. OpenGL describes the above format as "BGRA" on a little-endian machine and "ARGB" on a big-endian machine." (wiki)
Graphics hardware is backed by OpenGL on OS X/iOS, so I assume that we deal with little-endian data (Intel/ARM processors). So, when the format is kCGImageAlphaPremultipliedFirst (ARGB) on a little-endian machine, it's BGRA in memory. But don't worry, there is an easy way to fix that.
Assuming that it's ARGB, kCGImageAlphaPremultipliedFirst, 8 bits per component, 4 components per pixel (that's what UIGraphicsGetImageFromCurrentImageContext() returns), regardless of endianness:
- (void)parsePixelValuesFromPixel:(const uint8_t *)pixel
intoBuffer:(out uint8_t[4])buffer {
static NSInteger const kRedIndex = 0;
static NSInteger const kGreenIndex = 1;
static NSInteger const kBlueIndex = 2;
static NSInteger const kAlphaIndex = 3;
int32_t *wholePixel = (int32_t *)pixel;
int32_t value = OSSwapHostToBigConstInt32(*wholePixel);
// Now we have the value in big-endian byte order, regardless of our machine endianness (ARGB now).
buffer[kAlphaIndex] = value & 0xFF;
buffer[kRedIndex] = (value >> 8) & 0xFF;
buffer[kGreenIndex] = (value >> 16) & 0xFF;
buffer[kBlueIndex] = (value >> 24) & 0xFF;
}

Get RGB Channels From Pixel Value Without Any Library

I'm trying to get the RGB channels of each pixel that I read from an image.
I use getchar by reading each byte from the image.
So after a little search on the web I found that in a BMP, for example, the color data starts after byte 36. I know that each channel is 8 bits and the whole RGB is 8 bits of red, 8 bits of green and 8 bits of blue. My question is: how do I extract them from a pixel value? For example:
pixel = fgetc(image); // getchar() takes no argument; fgetc() reads one byte from a FILE*
What can I do to extract those channels? In addition, I saw this example in Java but don't know how to implement it in C++:
int rgb[] = new int[] {
(argb >> 16) & 0xff, //red
(argb >> 8) & 0xff, //green
(argb ) & 0xff //blue
};
I guess that argb is the "pixel" var I mentioned before.
Thanks.
Assuming that it's encoded as ABGR and you have one integer value per pixel, this should do the trick:
int r = color & 0xff;
int g = (color >> 8) & 0xff;
int b = (color >> 16) & 0xff;
int a = (color >> 24) & 0xff;
When reading single bytes it depends on the endianness of the format. Since there are two possible orders, formats are of course inconsistent about it, so I'll write both ways, with the reading done as a pseudo-function:
RGBA:
int r = readByte();
int g = readByte();
int b = readByte();
int a = readByte();
ABGR:
int a = readByte();
int b = readByte();
int g = readByte();
int r = readByte();
How it's encoded depends on how your file format is laid out. I've also seen BGRA and ARGB orders and planar RGB (each channel is a separate buffer of width x height bytes).
It looks like wikipedia has a pretty good overview on what BMP files look like:
http://en.wikipedia.org/wiki/BMP_file_format
Since it seems to be a bit more complicated I'd strongly suggest using a library for this instead of rolling your own.

binary file bit manipulation

I have a binary file of image data where each pixel is exactly 4 bits. The image data is laid out as follows:
There are N images, where the first image is 1x1, the second image is 2x2, the third is 4x4, and so on (they are mipmaps, if you care to know).
Given a pointer to the start of the data buffer, I want to skip to the biggest image.
Now I know how many bytes I want to skip, but there is this annoying 1x1 image at the start which is 4 bits. I am not aware of any way to increment a pointer by a bit.
How can I successfully retrieve the data without everything being off by 4 bits?
Assuming you can change your file format you can do either of the following:
Add padding to the 1x1 image
Store the images in reverse order (effectively the same as above, but not ideal for mip-maps because you don't necessarily know how many images you will have)
If you can't change your format, you have these choices:
Convert the data
Accept that the buffer is offset by half a byte and work with it accordingly
You said:
How can I successfully retrieve the data without everything being off
by 4 bits?
So that means you need to convert. When you calculate your offset in bytes, you will find that the first one contains half a byte of the previous image. So in a pinch you can shuffle them like this:
for( i = start; i < end; i++ ) {
p[i] = (p[i] << 4) | (p[i+1] >> 4);
}
That's assuming the first pixel is bits 4-7 and the second pixel is bits 0-3, and so on... If it's the other way around, just invert those two shifts.
// this assumes pixels points to bytes(unsigned chars)
index = ?;// your index to the pixel
byte_t b = pixels[index / 2];
if (index % 2) pixel = b >> 4;
else pixel = b & 15;
// Or you can use
byte_t b = pixels[index >> 1];
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
Either way just compute the logical index into the file. Dividing by two takes you to the start of the byte where the pixel is. And then just read the correct half of the byte.
So make a function
byte_t GetMyPixel(unsigned char* pixels, unsigned index) {
byte_t b = pixels[index >> 1];
byte_t pixel;
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
return pixel;
}
To read the first image and the start of the second:
Image1x1 = GetMyPixel(pixels,0);
Image2x2_1 = GetMyPixel(pixels,1);// Top left pixel of second image
Image2x2_2 = GetMyPixel(pixels,2);// Top Right pixel of second image
Image2x2_3 = GetMyPixel(pixels,3);// Bottom left pixel of second image
... etc
So that is one way to go about it. You might need to take into account the endian-ness you are using so if it seems wrong then switch the logic for the pixel read thusly...
byte_t GetMyPixel(unsigned char* pixels, unsigned index) {
byte_t b = pixels[index >> 1];
byte_t pixel;
#if OTHER_ENDIAN
if (index & 1) pixel = b >> 4;
else pixel = b & 15;
#else
if (index & 1) pixel = b & 15;
else pixel = b >> 4;
#endif
return pixel;
}

how to convert an RGB color value to a hexadecimal value in C++?

In my C++ application, I have a png image's color in terms of Red, Green, Blue values. I have stored these values in three integers.
How to convert RGB values into the equivalent hexadecimal value?
For example, in this format: 0x1906.
EDIT: I will save the format as GLuint.
Store the appropriate bits of each color into an unsigned integer of at least 24 bits (like a long):
unsigned long createRGB(int r, int g, int b)
{
return ((r & 0xff) << 16) + ((g & 0xff) << 8) + (b & 0xff);
}
Now instead of:
unsigned long rgb = 0xFA09CA;
you can do:
unsigned long rgb = createRGB(0xFA, 0x09, 0xCA);
Note that the above will not deal with the alpha channel. If you need to also encode alpha (RGBA), then you need this instead:
unsigned long createRGBA(int r, int g, int b, int a)
{
return ((r & 0xff) << 24) + ((g & 0xff) << 16) + ((b & 0xff) << 8)
+ (a & 0xff);
}
Replace unsigned long with GLuint if that's what you need.
If you want to build a string, you can probably use snprintf():
const unsigned red = 0, green = 0x19, blue = 0x06;
char hexcol[16];
snprintf(hexcol, sizeof hexcol, "%02x%02x%02x", red, green, blue);
This will build the string "001906" in hexcol, which is how I chose to interpret your example color (which is only four digits when it should be six).
You seem to be confused over the fact that the GL_ALPHA preprocessor symbol is defined to be 0x1906 in OpenGL's header files. This is not a color, it's a format specifier used with OpenGL API calls that deal with pixels, so they know what format to expect.
If you have a PNG image in memory, the GL_ALPHA format would correspond to only the alpha values in the image (if present), the above is something totally different since it builds a string. OpenGL won't need a string, it will need an in-memory buffer holding the data in the format required.
See the glTexImage2D() manual page for a discussion on how this works.