Extracting 'parts' of a hexadecimal number - c++

I want to write a function getColor() that lets me extract parts of a hexadecimal number entered as a long.
The details are as follows:
//prototype and declarations
enum Color { Red, Blue, Green };
int getColor(const long hexvalue, Color color);

//definition (pseudocode)
int getColor(const long hexvalue, Color color)
{
    switch (color)
    {
    case Red:
        ; //return the LEFTmost byte (i.e. return the int value 0xAB if input was 0xABCDEF)
        break;
    case Green:
        ; //return the 'middle' byte (i.e. return the int value 0xCD if input was 0xABCDEF)
        break;
    default: //assume Blue
        ; //return the RIGHTmost byte (i.e. return the int value 0xEF if input was 0xABCDEF)
        break;
    }
}
My 'bit twiddling' isn't what it used to be. I would appreciate some help on this.
[Edit]
I changed the ordering of the color constants in the switch statement - no doubt any designers or CSS enthusiasts out there will have noticed that colors are defined (on the RGB scale) in the order RGB ;)

Generally:
Shift first
Mask last
So, for instance:
case Red:
    return (hexvalue >> 16) & 0xff;
case Green:
    return (hexvalue >> 8) & 0xff;
default: //assume Blue
    return hexvalue & 0xff;
This ordering of the operations helps cut down on the size of the literal constants needed for the masks, which generally leads to smaller code.
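For comparison, here is a hypothetical mask-first variant (mine, not from the answer above); it computes the same results, but each case needs a wider mask constant:

// Hypothetical mask-first variant, for comparison only:
// same results, larger literal constants.
int getColorMaskFirst(long hexvalue, Color color)
{
    switch (color)
    {
    case Red:
        return (hexvalue & 0xff0000) >> 16;
    case Green:
        return (hexvalue & 0x00ff00) >> 8;
    default: //assume Blue
        return hexvalue & 0x0000ff;
    }
}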
I took DNNX's comment to heart, and switched the names of the components since the order is typically RGB (not RBG).
Furthermore, please note that these operations have nothing to do with the number being "hexadecimal" when you're doing operations on an integer type. Hexadecimal is a notation, a way of representing numbers in textual form. The number itself is not stored in hex; it's binary, like everything else in your computer.
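To make that concrete, here is a small illustration (mine, not the answerer's):

int n1 = 0xABCDEF;  // hexadecimal literal
int n2 = 11259375;  // the same number as a decimal literal
// n1 == n2 holds: both are the same bit pattern in memory,
// so the shifts and masks above behave identically on either.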

switch (color)
{
case Red:
    return (hexvalue >> 16) & 0xff;
case Green:
    return (hexvalue >> 8) & 0xff;
default: //assume Blue
    return hexvalue & 0xff;
}

Related

Bitwise operations aren't making any sense

My current understanding of bit shifts is that they move the binary representation of a number by a given count of positions, either dropping bits at the end (in the case of >>) or appending 0s at the end (in the case of <<).
So why is it that when I have an int32 storing the hex value
int32_t color = 0xFFFF99FF; (= 1111 1111 1111 1111 1001 1001 1111 1111)
shifting this int right by 24 should give the value FF, because we moved the first two hex digits (one byte) past the remaining bits (32 - 8 = 24),
but what actually happens is that I end up with the value -1 when I execute my code, and 0 in a calculator?
Note: shifting right by 18 yields the desired result.
I am using the SDL2 library and C++.
I am trying to store colors as their hex values and then extract the red, green and blue channels, ignoring the alpha one.
The code here is minimized, without any unnecessary details.
int32_t color; //hex value of the color yellow
//taking input and changing the value of color based on it
if (event->type == SDL_KEYDOWN) {
    switch (event->key.keysym.sym)
    {
    case SDLK_a:
        //SDL_SetRenderDrawColor(renderer, 255, 255, 153, 255); // sand
        color = 0xFFFF99FF; //hex code for sand color
        break;
    case SDLK_z:
        //SDL_SetRenderDrawColor(renderer, 0, 0, 255, 255); // water
        color = 0x0000FFFF; //hex code for blue color ...
        break;
    case SDLK_e:
        //SDL_SetRenderDrawColor(renderer, 139, 69, 19, 255); // dirt
        color = 0x8B4513FF;
        break;
    case SDLK_d:
        //SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); // delete button || air
        color = 0x000000FF;
        break;
    default:
        OutputDebugString("unhandled input.\n");
        break;
    }
}
//checking for mouse input and drawing a pixel with a specific color based on it
if (event->button.button == SDL_BUTTON_LEFT)
{
    SDL_SetRenderDrawColor(renderer, color >> 24, (color >> 16) - (color >> 24), (color >> 8) - ((color >> 16) - (color >> 24) + color >> 24));
    OutputDebugString(std::to_string(color >> 24).c_str());
    SDL_RenderDrawPoint(renderer, mouseX / 4, mouseY / 4);
    SDL_RenderPresent(renderer);
}
int32_t is signed, and >> on a signed type is an arithmetic shift that replicates the sign bit (bit 31), which is why 0xFFFF99FF >> 24 comes out as -1. You should use uint32_t. It is also more logical to use uint8_t for the color components.
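A minimal sketch of the fix (mine, not the answerer's exact code), assuming the 0xRRGGBBAA packing used in the question:

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t color = 0xFFFF99FF;      // sand: R=0xFF, G=0xFF, B=0x99, A=0xFF
    uint8_t r = (color >> 24) & 0xFF; // 255, not -1: unsigned >> shifts in zeros
    uint8_t g = (color >> 16) & 0xFF;
    uint8_t b = (color >> 8) & 0xFF;
    uint8_t a = color & 0xFF;
    std::printf("r=%u g=%u b=%u a=%u\n",
                (unsigned)r, (unsigned)g, (unsigned)b, (unsigned)a);
}

The extracted r, g, b, a can then be passed straight to SDL_SetRenderDrawColor(renderer, r, g, b, a) instead of the subtraction expressions in the question.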

Reading image data (R,G, & B pixel) into 2D arrays in C++

I'm able to read the header file of an image fine, but I'm having trouble putting the first data value for the red channel, for example 206, into a 2D array ppmImage.red[0][0]. A white space follows, and then the first value for the green channel, and so on.
Below is what I am currently doing; instead of 206 being in ppmImage.red[0][0], I have ppmImage.red[0][0] = 2, ppmImage.green[0][0] = 0, and ppmImage.blue[0][0] = 6. For reference, these will only be 8-bit values, and thus red, green, and blue are of type pixel, which is an unsigned char.
void readTextData(fstream &fin, struct ppmImage &image)
{
    int iii, jjj;
    for (iii = 0; iii < image.rows; iii++)
    {
        for (jjj = 0; jjj < image.cols; jjj++)
        {
            fin >> image.red[iii][jjj];
            fin >> image.green[iii][jjj];
            fin >> image.blue[iii][jjj];
        }
    }
    fin.close();
}
I thought fin >> would read until it hit white space, but I was mistaken. I also tried using fin.read((char *) &image.redgray[iii][jjj], sizeof(pixel)); but ended up with the same results.
The data could be in a form like:
1 2
3 4 5
6 7 8 9
and I'm not sure how I would deal with the '\n'.
I've searched for information and end up more confused than I already am. I'd appreciate a nudge in the right direction or someone pointing out my stupidity. Thanks in advance.
This sounds like a simple fix: you have to read integers, not unsigned chars. Since your image uses unsigned char, all you need to add is some temporary ints for the read.
int red, green, blue;
fin >> red >> green >> blue;
image.red[iii][jjj] = red;
image.green[iii][jjj] = green;
image.blue[iii][jjj] = blue;
I am assuming that your image file is a text file, that seems to be the case from your description.
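To see why the original code misbehaves, here is a small demonstration (mine, with made-up data): streaming into an unsigned char extracts a single character, not a whole number.

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream fin("206 13 99");
    unsigned char c;
    int n;
    fin >> c;     // reads the single character '2' (byte value 50)
    fin.seekg(0);
    fin >> n;     // reads the whole number 206
    std::cout << int(c) << " vs " << n << "\n"; // prints "50 vs 206"
}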
If at all possible, I'd change from a structure containing arrays, to an array of structures:
struct pixel {
    unsigned char red, green, blue;

    friend std::istream &operator>>(std::istream &is, pixel &p) {
        unsigned int red, green, blue;
        is >> red >> green >> blue;
        p.red = red;
        p.green = green;
        p.blue = blue;
        return is;
    }
};
// ...
for (i = 0; i < image.cols; i++)
    for (j = 0; j < image.rows; j++)
        if (!(infile >> pixels[i][j])) // parentheses needed: ! binds tighter than >>
            break; // read failed

Get RGB Channels From Pixel Value Without Any Library

I'm trying to get the RGB channels of each pixel that I read from an image.
I read each byte from the image with getchar.
After a little searching on the web, I found that in BMP, for example, the color data starts after byte 36. I know that each channel is 8 bits, and the whole RGB value is 8 bits of red, 8 bits of green and 8 bits of blue. My question is: how do I extract those channels from a pixel value? For example:
pixel = fgetc(image); // getchar() takes no arguments; fgetc() reads one byte from a FILE*
What can I do to extract those channels? In addition, I saw this example in Java but don't know how to implement it in C++:
int rgb[] = new int[] {
(argb >> 16) & 0xff, //red
(argb >> 8) & 0xff, //green
(argb ) & 0xff //blue
};
I guess that argb is the "pixel" var I mentioned before.
Thanks.
Assuming that it's encoded as ABGR and you have one integer value per pixel, this should do the trick:
int r = color & 0xff;
int g = (color >> 8) & 0xff;
int b = (color >> 16) & 0xff;
int a = (color >> 24) & 0xff;
When reading single bytes, it depends on the byte order of the format. Since there are two possible orders, formats are of course inconsistent about it, so I'll write both ways, with the reading done as a pseudo-function:
RGBA:
int r = readByte();
int g = readByte();
int b = readByte();
int a = readByte();
ABGR:
int a = readByte();
int b = readByte();
int g = readByte();
int r = readByte();
How it's encoded depends on how your file format is laid out. I've also seen BGRA and ARGB orders and planar RGB (each channel is a separate buffer of width x height bytes).
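As an illustration of the planar case (a sketch of mine, assuming one byte per channel and no row padding), the three channels live in three consecutive width*height planes:

// Hypothetical accessors for a planar RGB buffer of 3 * w * h bytes.
unsigned char planarR(const unsigned char *buf, int w, int h, int x, int y)
{
    return buf[y * w + x];             // red plane comes first
}
unsigned char planarG(const unsigned char *buf, int w, int h, int x, int y)
{
    return buf[w * h + y * w + x];     // green plane follows
}
unsigned char planarB(const unsigned char *buf, int w, int h, int x, int y)
{
    return buf[2 * w * h + y * w + x]; // blue plane last
}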
It looks like wikipedia has a pretty good overview on what BMP files look like:
http://en.wikipedia.org/wiki/BMP_file_format
Since it seems to be a bit more complicated I'd strongly suggest using a library for this instead of rolling your own.

How to convert an RGB color value to a hexadecimal value in C++?

In my C++ application, I have a PNG image's color in terms of red, green and blue values. I have stored these values in three integers.
How do I convert the RGB values into the equivalent hexadecimal value, in a format like 0x1906?
EDIT: I will store the result as a GLuint.
Store the appropriate bits of each color into an unsigned integer of at least 24 bits (like a long):
unsigned long createRGB(int r, int g, int b)
{
    return ((r & 0xff) << 16) + ((g & 0xff) << 8) + (b & 0xff);
}
Now instead of:
unsigned long rgb = 0xFA09CA;
you can do:
unsigned long rgb = createRGB(0xFA, 0x09, 0xCA);
Note that the above will not deal with the alpha channel. If you need to also encode alpha (RGBA), then you need this instead:
unsigned long createRGBA(int r, int g, int b, int a)
{
    // cast before shifting so the red byte isn't shifted into the sign bit of an int
    return ((unsigned long)(r & 0xff) << 24) + ((g & 0xff) << 16)
         + ((b & 0xff) << 8) + (a & 0xff);
}
Replace unsigned long with GLuint if that's what you need.
If you want to build a string, you can probably use snprintf():
const unsigned red = 0, green = 0x19, blue = 0x06;
char hexcol[16];
snprintf(hexcol, sizeof hexcol, "%02x%02x%02x", red, green, blue);
This will build the string "001906" in hexcol, which is how I chose to interpret your example color (which is only four digits when it should be six).
You seem to be confused over the fact that the GL_ALPHA preprocessor symbol is defined to be 0x1906 in OpenGL's header files. This is not a color, it's a format specifier used with OpenGL API calls that deal with pixels, so they know what format to expect.
If you have a PNG image in memory, the GL_ALPHA format would correspond to only the alpha values in the image (if present), the above is something totally different since it builds a string. OpenGL won't need a string, it will need an in-memory buffer holding the data in the format required.
See the glTexImage2D() manual page for a discussion on how this works.

Why does converting RGB888 to RGB565 many times cause loss of colour?

If I do the following using Qt:

1. Load a bitmap in QImage::Format_RGB32.
2. Convert its pixels to RGB565 (there is no Qt format for this, so I must do it "by hand").
3. Create a new bitmap the same size as the one loaded in step 1.
4. Convert the RGB565 buffer pixels back to RGB888 into the image created in step 3.

The image created in step 4 looks like the image from step 1; however, they're not exactly the same if you compare the RGB values.
Repeating steps 2 to 4 results in the final image losing colour - it seems to become darker and darker.
Here are my conversion functions:
QRgb RGB565ToRGB888( unsigned short int aTextel )
{
    unsigned char r = (((aTextel)&0x01F) <<3);
    unsigned char g = (((aTextel)&0x03E0) >>2);
    unsigned char b = (((aTextel)&0x7C00 )>>7);
    return qRgba( r, g, b, 255 );
}
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
    int red = ( aPixel >> 16) & 0xFF;
    int green = ( aPixel >> 8 ) & 0xFF;
    int blue = aPixel & 0xFF;

    unsigned short B = (blue >> 3) & 0x001F;
    unsigned short G = ((green >> 2) < 5) & 0x07E0;
    unsigned short R = ((red >> 3) < 11) & 0xF800;

    return (unsigned short int) (R | G | B);
}
An example I found from my test image which doesn't convert properly is 4278192128 which gets converted back from RGB565 to RGB888 as 4278190080.
Edit: I should also mention that the original source data is RGB565 (which my test RGB888 image was created from). I am only converting to RGB888 for display purposes but would like to convert back to RGB565 afterwards rather than keeping two copies of the data.
Beforehand, I want to mention that the component order in your two conversion functions isn't the same. In the 565 -> 888 conversion, you assume that the red component uses the low-order bits (0x001F), but when encoding the 5 bits of the red component, you put them at the high-order bits (0xF800). Assuming that you want a component order analogous to 0xAARRGGBB (the binary representation in RGB565 is then 0bRRRRRGGGGGGBBBBB), you need to change the variable names in your RGB565ToRGB888 method. I fixed this in the code below.
Your RGB565 to RGB888 conversion is buggy. For the green channel, you extract 5 bits, which gives you only 7 bits instead of 8 in the result. For the blue channel, you take the following bits, which is a consequential error. This should fix it:
QRgb RGB565ToRGB888( unsigned short int aTextel )
{
    // changed order of variable names
    unsigned char b = (((aTextel)&0x001F) << 3);
    unsigned char g = (((aTextel)&0x07E0) >> 3); // fixed: shift >> 5 and << 2
    unsigned char r = (((aTextel)&0xF800) >> 8); // shift >> 11 and << 3
    return qRgba( r, g, b, 255 );
}
In the other function, you accidentally wrote less-than operators instead of left-shift operators. This should fix it:
unsigned short int RGB888ToRGB565( QRgb aPixel )
{
    int red = ( aPixel >> 16) & 0xFF; // why not qRed(aPixel) etc.?
    int green = ( aPixel >> 8 ) & 0xFF;
    int blue = aPixel & 0xFF;

    unsigned short B = (blue >> 3) & 0x001F;
    unsigned short G = ((green >> 2) << 5) & 0x07E0; // << not <
    unsigned short R = ((red >> 3) << 11) & 0xF800; // << not <

    return (unsigned short int) (R | G | B);
}
Note that you can use the already existing (inline) functions qRed, qGreen, qBlue for component extraction analogous to qRgb for color construction from components.
Also note that the final bit masks in RGB888ToRGB565 are optional, as the component values are in the 8-bit-range and you cropped them by first right-, then left-shifting the values.
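A side note beyond the answer above: even with the fixed code, an 888 -> 565 -> 888 round trip can never restore the discarded low bits, and plain left-shifting on expansion maps a maximal 5-bit component (0x1F) to 0xF8 rather than 0xFF, which is one reason repeated conversions drift darker. A common remedy (a sketch of mine, not from the answer) is to replicate the high bits into the low bits when expanding:

#include <cstdint>

// Expand 5- and 6-bit components to 8 bits by bit replication,
// so 0x1F -> 0xFF and 0x3F -> 0xFF instead of 0xF8 and 0xFC.
static uint8_t expand5(uint16_t v) { return (uint8_t)((v << 3) | (v >> 2)); }
static uint8_t expand6(uint16_t v) { return (uint8_t)((v << 2) | (v >> 4)); }

void rgb565ToRgb888(uint16_t p, uint8_t &r, uint8_t &g, uint8_t &b)
{
    r = expand5((p >> 11) & 0x1F);
    g = expand6((p >> 5) & 0x3F);
    b = expand5(p & 0x1F);
}

With this expansion, converting 565 -> 888 -> 565 returns the original 565 value, so repeated round trips are stable after the first conversion.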