Unsigned Int and Colors - C++

I'm trying to alter the coloring of something using an unsigned int (0xFF998877 etc...) in the form 0xAABBGGRR where A is Alpha and B, G and R are Blue, Green and Red.
I'm wondering, however, what the best way would be to make the color I'm passing in slowly become darker and/or lighter.
As I don't have a lot of experience with unsigned ints used this way, my first attempt was simply to decrement the value by 1, but this had weird results. Is there a good method to alter only the RGB elements and keep the alpha constant? In my scant research I did find that I could multiply unsigned ints together to produce my final result, e.g. (0xFF * 0x99 * 0x99 * 0x99), but this still left the alpha value variable in the end result.
Any help is greatly appreciated!

What you need to do is convert your color to HSL (or HSV), change the lightness (or value) as you desire, and then convert it back to RGB.
There's a huge amount of code on the internet that you can use to do this conversion. If you had trouble with that, I could provide you with some code of my own.
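For the simpler case of uniform darkening or lightening (which amounts to scaling V in HSV), you can skip the full conversion and just scale each channel inside the packed int. A minimal sketch, assuming the 0xAABBGGRR layout from the question; the function name and the 8.8 fixed-point scale are my own choices:

#include <algorithm>
#include <cstdint>

// Scale the R, G and B channels of a packed 0xAABBGGRR color, leaving
// the alpha byte untouched. 'scale' is 8.8 fixed point: 256 = unchanged,
// 128 = half brightness (darker), 384 = 1.5x brightness (clamped at 255).
uint32_t scaleBrightness(uint32_t color, uint32_t scale)
{
    uint32_t r = std::min<uint32_t>((( color        & 0xFF) * scale) >> 8, 0xFF);
    uint32_t g = std::min<uint32_t>((((color >>  8) & 0xFF) * scale) >> 8, 0xFF);
    uint32_t b = std::min<uint32_t>((((color >> 16) & 0xFF) * scale) >> 8, 0xFF);
    return (color & 0xFF000000) | (b << 16) | (g << 8) | r;
}

Calling this repeatedly with, say, scale = 240 darkens the color one step at a time, and the alpha byte never changes, which is what decrementing the raw int couldn't guarantee.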

I know nothing about colour manipulation in C++, but could you not simply use a bitmask? Or write a class that represents a colour using four UINT8s (unsigned chars)? Then you could easily write all kinds of manipulations of the different channels.
It would make for a great API as well:
Colour c(128,50,50,0);
c.saturateRed(10);
c.darkerBySteps(3);
c.desaturateGreen(10);
UINT32 rawColVal = c.getRawValue();
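A minimal sketch of what such a class might look like; the constructor order and the method set above are just this answer's invention, so only a couple of representative methods are fleshed out, with the question's 0xAABBGGRR packing assumed for getRawValue():

#include <algorithm>
#include <cstdint>

class Colour {
public:
    Colour(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
        : r_(r), g_(g), b_(b), a_(a) {}

    // Darken all three channels by 'steps', clamping at 0.
    void darkerBySteps(int steps) {
        r_ = (uint8_t)std::max(0, r_ - steps);
        g_ = (uint8_t)std::max(0, g_ - steps);
        b_ = (uint8_t)std::max(0, b_ - steps);
    }

    // Repack as 0xAABBGGRR, matching the layout in the question.
    uint32_t getRawValue() const {
        return ((uint32_t)a_ << 24) | ((uint32_t)b_ << 16) |
               ((uint32_t)g_ << 8)  |  (uint32_t)r_;
    }

private:
    uint8_t r_, g_, b_, a_;
};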

Related

CImg pixel value - numerical

Is there a way to get the int value of a pixel returned with CImg? I'm in the process of building a basic ASCII art program that converts JPGs to character arrays. I have the entire utility built out, but I cannot find a way to get the unsigned chars converted into the range of ints I need (0-255, although the specifics don't matter so long as it's a predictable interval).
Does anyone have any idea how to get a numerical pixel value from a JPG? (library suggestions or anything else are completely welcome)
Here is the pixel output (the raw pixel data printed directly as chars):
\�_b��}�HaX�gNzԴ�����p��-�u�����lqu��Lߐ_"T������{�y�sricX[[TXgZ]`a~�t91960d�BpvJ0kY#uR!BpMWb\W?j"#���dCy2+4?ڽ�TT<Tght%P%y;mhͬ�����8#1�H��)����:4lu���CY|��u&<_��ī��������������ȿF�����LP:����N���-�Q�+�2;E3(�SdRO6��NI16j{#�0((
It's already been converted to black and white, so even accessing the numerical value of one color channel off the CImg would be fine. I just can't seem to get any kind of intelligible/manipulable output from the image, even though the image itself is exactly what I'm looking for.
Cast it to an int with (int)img(x,y) and ignore the extra channels.
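A short sketch of that, assuming an 8-bit grayscale CImg and a placeholder file name (CImg needs an external decoder such as ImageMagick or libjpeg to load JPGs):

#include <iostream>
#include "CImg.h"
using namespace cimg_library;

int main()
{
    CImg<unsigned char> img("input.jpg");
    for (int y = 0; y < img.height(); ++y) {
        for (int x = 0; x < img.width(); ++x) {
            int v = (int)img(x, y);   // channel 0 of pixel (x, y), 0-255
            std::cout << v << ' ';
        }
        std::cout << '\n';
    }
}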

How do I convert an RGB []byte slice to an image.Image in Go?

A C++ application running in another process passes a char[] array of three-byte pixels (red, green, blue) to a Go program. I've reconstructed this in Go as a []byte slice using cgo, but I'm unsure how to convert it to an image. I can pass the width and height as well, if that is needed (I would imagine it would be).
I'm aware of the image.RGBA type, but the documentation seems to imply that it isn't just one byte per color, and it assumes there is an alpha channel, which my very simplistic bitmap does not have. Would converting the 3-byte values I have into something that works with image.RGBA be a solution? If so, how should I do that?
Alternatively, I could do the conversion in C/C++ before sending the values, into a format that Go recognizes (JPEG, GIF, PNG). Either way works for my uses, but I don't know how to approach either.
The image package is based on interfaces, so just define a new type with those methods.
Your type's ColorModel would return color.RGBAModel; Bounds, your rectangle's borders; and At, the color at (x, y), which you can compute from the byte slice if you know the image's dimensions.
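A minimal sketch of such a type; the name rgbImage and its fields are made up here, and it assumes tightly packed, row-major RGB bytes:

package main

import (
    "image"
    "image/color"
    "image/png"
    "os"
)

// rgbImage presents a packed 24-bit RGB byte slice as an image.Image.
type rgbImage struct {
    pix  []byte // 3 bytes per pixel, row-major, no padding
    w, h int
}

func (m *rgbImage) ColorModel() color.Model { return color.RGBAModel }
func (m *rgbImage) Bounds() image.Rectangle { return image.Rect(0, 0, m.w, m.h) }
func (m *rgbImage) At(x, y int) color.Color {
    i := (y*m.w + x) * 3
    return color.RGBA{m.pix[i], m.pix[i+1], m.pix[i+2], 0xFF} // opaque alpha
}

func main() {
    img := &rgbImage{pix: make([]byte, 2*2*3), w: 2, h: 2} // stand-in for the cgo slice
    if err := png.Encode(os.Stdout, img); err != nil {     // any encoder accepts it now
        panic(err)
    }
}

Since every encoder in the standard library takes an image.Image, this also covers the alternative: png.Encode (or jpeg.Encode) gives you a format that Go recognizes without any extra work on the C++ side.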

RGB color to HSL bytes

I've seen some implementations for converting RGB to HSL. Most are accurate and work in both directions.
For me it's not important that it works in both directions (no need to convert back to RGB).
But I want code that returns values from 0 to 255 max, also for the Hue channel.
And I wouldn't like to do divisions like Hue/360*250; I'm searching for integer-based math with no DWORDs (it's for another system). Some kind of boolean logic (and/or/xor) would be nice.
It should not do any wide-integer or floating-point math; the goal is code working only with byte math.
Maybe someone has already found such math in C++, C#, or Python, which I would be able to translate to C++.
Check out the colorsys module; it has methods like:
colorsys.rgb_to_hls(r,g,b)
colorsys.hls_to_rgb(h,l,s)
The easyrgb site has many code snippets for color space conversion. Here's the rgb->hsl code.
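Neither of those meets the byte-math constraint directly (colorsys works in floats), so here is a rough integer-only sketch of my own: every output is 0-255, including hue, and the intermediates fit in 16-bit unsigned values, though it still needs small multiplies and divides rather than pure and/or/xor logic:

#include <algorithm>
#include <cstdint>

// RGB (0-255) to HSL with all three outputs scaled 0-255. The six hue
// sectors are ~43 units wide instead of 60 degrees.
void rgbToHsl255(uint8_t r, uint8_t g, uint8_t b,
                 uint8_t& h, uint8_t& s, uint8_t& l)
{
    int mx = std::max({(int)r, (int)g, (int)b});
    int mn = std::min({(int)r, (int)g, (int)b});
    int d  = mx - mn;

    l = (uint8_t)((mx + mn) / 2);
    if (d == 0) { h = 0; s = 0; return; }       // grey: no hue or saturation

    s = (uint8_t)(l < 128 ? 255 * d / (mx + mn)
                          : 255 * d / (510 - mx - mn));

    int hue;
    if (mx == r)      hue = 43 * (g - b) / d;          // red sector
    else if (mx == g) hue = 85 + 43 * (b - r) / d;     // green sector
    else              hue = 171 + 43 * (r - g) / d;    // blue sector
    if (hue < 0) hue += 256;                           // wrap negatives
    h = (uint8_t)hue;
}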

C++ RGB values from a pixel selected by the user, using seekg

I need to create a program that loads a .raw image (a generic 100x100 image), asks the user to select an (x, y) coordinate within the range, and displays the red, green, and blue values for that pixel using the seekg function. I'm at a loss as to how to get the RGB values from the pixel. I've gone through every chapter of the textbook that we've covered so far, and there is nothing about retrieving RGB values.
The code asking for the coordinates and giving an error message if they're outside the range is working fine. Only when I try to come up with the code for using seekg to get the RGB values do I run into trouble. I've looked at different questions on the site, and there is good information here, but I've not seen any answers using seekg to get the RGB values.
I'm not looking for anyone to produce the code for me, just looking for some guidance and a push in the right direction.
loc = (y * 100 + x) * 3; // code given by professor with 100 being the width of the image
imageRaw.seekg(loc, ios::beg);
And then I'm at a loss.
Any help would be greatly appreciated.
From there, you probably need to read three bytes, which will represent the red, green, and blue values. You haven't told us enough to be sure of the order; green is almost always in the middle, but RGB and BGR are both fairly common.
From a practical viewpoint, for a picture of this size you normally wouldn't want to use seekg at all, though. You'd read the entire image into memory and look up the values in the vector (or array, if you insist) that stores the data.
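Since the assignment requires seekg, a sketch of the direct continuation of those two lines; the file name, coordinates, and the RGB byte order are assumptions:

#include <fstream>
#include <iostream>
using namespace std;

int main()
{
    ifstream imageRaw("image.raw", ios::binary);
    int x = 42, y = 7;                      // user-selected, already range-checked
    streamoff loc = (y * 100 + x) * 3;      // 100 = image width, 3 bytes per pixel
    imageRaw.seekg(loc, ios::beg);

    unsigned char rgb[3];
    imageRaw.read(reinterpret_cast<char*>(rgb), 3);   // one pixel: R, G, B
    cout << "R=" << (int)rgb[0] << " G=" << (int)rgb[1]
         << " B=" << (int)rgb[2] << endl;
}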

OpenGL. Testing the colour of a pixel. Can someone explain glReadPixels to me?

Just getting started with OpenFrameworks and I'm trying to do something that should be simple : test the colour of the pixel at a particular point on the screen.
I find there's no nice way to do this in openFrameworks, but I can drop down into OpenGL and use glReadPixels. However, I'm having a lot of trouble with it.
Based on http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml I started off trying to do this:
glReadPixels(x,y, 1,1, GL_RGB, GL_INT, &an_int);
I figured that as I was checking the value of a single pixel (width and height are 1) and giving it GL_INT as type and GL_RGB as format, a single pixel should take up a single int (4 bytes). Hence I passed a pointer to an int as the data argument.
However, the first thing I noticed was that glReadPixels seemed to be clobbering some other local variables in my function, so I changed to passing an array of 10 ints, which stopped the weird side-effects. But I still have no idea how to interpret what it's returning.
So... what's the right combination of format and type arguments that I should be passing to safely get something that can easily be unpacked into its RGB values? (Note that I'm doing this through openFrameworks, so I'm not explicitly setting up OpenGL myself; I guess I'm just getting the openFrameworks/OpenGL defaults. The only bit of configuration I know I'm doing is NOT setting up alpha-blending, which I believe means that pixels are represented by 3 bytes: R, G, B, but no alpha.) I assume that GL_RGB is the format that corresponds to this.
With GL_INT as the type, each channel is returned as a full 4-byte int, so you would need three ints: one for R, one for G, one for B. That's 12 bytes per pixel, which is why your single int was being clobbered. I think you should use:
unsigned char rgb[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
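One extra gotcha, based on an assumption about your setup: openFrameworks draws with a top-left origin while glReadPixels counts rows from the bottom-left, so a mouse-style coordinate usually needs its y flipped first (windowHeight is a placeholder for your actual window height):

unsigned char rgb[3];
glReadPixels(x, windowHeight - 1 - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
// rgb[0], rgb[1], rgb[2] now hold the pixel's R, G and B in 0-255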