OpenCV HSV Parsing Issue (C++)

First of all, I realize there are existing questions about converting an RGB image to an HSV image out there; I used one of those questions to help me write my code. However, I am getting values for HSV that don't make sense to me.
What I know about HSV I have gotten from this website. From this colorpicker, I have inferred that H is a number ranging from 0-360 degrees, S is a number ranging from 0-100%, and V is a number ranging from 0-100%. Therefore, I had assumed that my code (as follows) would return an H value between 0 and 360, and S/V values between 0 and 100. However, this is not the case.
I plugged my program's output into the above color picker, clamping all S/V values down to 100 when they exceeded 100. As you can see, the output is close to what it should be, but is not accurate. I feel like this is because I am interpreting the HSV values incorrectly.
For context, I am going to establish a range for each color on the cube and from there look at the other faces and fill out the current setup of the cube in another program I have.
My code:
void get_color(Mat img, int x_offset, int y_offset)
{
    Rect outline(x_offset - 2, y_offset - 2, 5, 5);
    rectangle(img, outline, Scalar(255, 0, 0), 2);
    Rect sample(x_offset, y_offset, 1, 1);
    Mat rgb_image = img(sample);
    Mat hsv_image;
    cvtColor(rgb_image, hsv_image, CV_BGR2HSV);
    Vec3b hsv = hsv_image.at<Vec3b>(0, 0);
    int hue = hsv.val[0];
    int saturation = hsv.val[1];
    int value = hsv.val[2];
    printf("H: %d, S: %d, V: %d \n", hue, saturation, value);
}
Output of the program:
H: 21, S: 120, V: 191 // top left cubie
H: 1, S: 180, V: 159 // top center cubie
H: 150, S: 2, V: 142 // top right cubie
H: 86, S: 11, V: 159 // middle left cubie
H: 75, S: 12, V: 133 // middle center cubie
H: 5, S: 182, V: 233 // middle right cubie
H: 68, S: 7, V: 156 // bottom left cubie
H: 25, S: 102, V: 137 // bottom center cubie
H: 107, S: 155, V: 69 // bottom right cubie
Starting image (pixel being extracted # center of each blue square):
Resulting colors (as the above color picker gave):
As you can see, the red and white are fairly accurate, but the orange and yellow are not correct, and the blue is blatantly wrong; it is impossible for the pixel I looked at to actually be that color. What am I doing wrong? Any help would be greatly appreciated.

OpenCV has a funny way of representing its colors.
Hue - Represented as a number from 0-179 instead of 0-360. Therefore, multiply the H value by two before plugging it into a traditional color picker.
Saturation/Value - Represented as a number from 0-255. To get a percentage, divide the given value by 255 and multiply by 100.
Everything works much more sensibly now. See this website for more details on OpenCV and HSV.
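For example, a quick sketch of that conversion (not from the original post; it reuses the hue/saturation/value ints from the get_color function above):
// Sketch: convert OpenCV's 8-bit HSV encoding to conventional units
int hue_degrees = hue * 2;                           // 0-179 -> 0-358 degrees
int sat_percent = (int)(saturation / 255.0 * 100.0); // 0-255 -> 0-100 %
int val_percent = (int)(value / 255.0 * 100.0);      // 0-255 -> 0-100 %
printf("H: %d deg, S: %d%%, V: %d%%\n", hue_degrees, sat_percent, val_percent);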

Related

WICConvertBitmapSource BGR to Gray unexpected pixel format conversion

I am using the WICConvertBitmapSource function to convert the pixel format from BGR to Gray, and I'm getting unexpected pixel values.
...
pIDecoder->GetFrame( 0, &pIDecoderFrame );
pIDecoderFrame->GetPixelFormat( &pixelFormat ); // GUID_WICPixelFormat24bppBGR
IWICBitmapSource * dst;
WICConvertBitmapSource( GUID_WICPixelFormat8bppGray, pIDecoderFrame, &dst );
Example with a 4x3 image with the following BGR pixel values:
[ 0, 0, 255, 0, 255, 0, 255, 0, 0;
0, 255, 255, 255, 255, 0, 255, 0, 255;
0, 0, 0, 119, 119, 119, 255, 255, 255;
233, 178, 73, 233, 178, 73, 233, 178, 73]
Gray pixel values I am getting:
[127, 220, 76;
247, 230, 145;
0, 119, 255;
168, 168, 168]
Gray pixel values I expected to get (ITU-R BT.601 conversion):
[ 76, 149, 29;
225, 178, 105;
0, 119, 255;
152, 152, 152]
What kind of conversion is happening in the background, and is there a way to force conversion to my wanted behaviour?
Also worth mentioning, the conversions are working properly (as expected) for Gray -> BGR and BGRA -> BGR
As for the question "What kind of conversion is happening in the background": it seems like a different conversion algorithm is used. Using the WINE project to calculate the greyscale values seems to give the same results, so it gives us a pretty good approximation of what is happening. The formula used is R * 0.2126 + G * 0.7152 + B * 0.0722 (the ITU-R BT.709 luma coefficients, rather than BT.601).
copypixels_to_8bppGray (source):
float gray = (bgr[2] * 0.2126f + bgr[1] * 0.7152f + bgr[0] * 0.0722f) / 255.0f;
In addition to that, it is corrected for the sRGB color space.
copypixels_to_8bppGray (source):
gray = to_sRGB_component(gray) * 255.0f;
to_sRGB_component (source):
static inline float to_sRGB_component(float f)
{
if (f <= 0.0031308f) return 12.92f * f;
return 1.055f * powf(f, 1.0f/2.4f) - 0.055f;
}
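Putting the two steps together, here is a standalone sketch (my reconstruction from the WINE source above, not WIC's actual code; bgr_to_gray is a hypothetical helper name, and round-to-nearest on output is assumed) that reproduces the observed values:
#include <math.h>
#include <stdint.h>

static inline float to_sRGB_component(float f)
{
    if (f <= 0.0031308f) return 12.92f * f;
    return 1.055f * powf(f, 1.0f/2.4f) - 0.055f;
}

/* Sketch: BT.709 luma on the raw 8-bit values, then sRGB re-encoding */
static uint8_t bgr_to_gray(uint8_t b, uint8_t g, uint8_t r)
{
    float gray = (r * 0.2126f + g * 0.7152f + b * 0.0722f) / 255.0f;
    return (uint8_t)(to_sRGB_component(gray) * 255.0f + 0.5f); /* e.g. (0,0,255) -> 127 */
}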
Plugging in some values:
  B    G    R    WINE          You're getting
  0    0  255    127.1021805   127
  0  255    0    219.932749    220
255    0    0     75.96269736   76
  0  255  255    246.7295889   247
255  255    0    229.4984163   230
255    0  255    145.3857605   145
  0    0    0     12.92          0
As for the other question, I'm too unfamiliar with the framework to answer that, so I'll leave that open for others to answer.

Advanced rgb2hsv conversion Matlab to opencv/C++ access to pixel value

I am building a program in objective C/C++ and openCV. I am pretty skilled in Objective C but new to C++.
I am building a custom RGB2HSV algorithm. My algorithm is slightly different from the openCV library cvtColor(in, out, CV_RGB2HSV).
The one I am trying to translate from Matlab to openCV/C++ produces such a clean HSV image that no additional filtering is needed before further processing. The code is below; the Matlab code is self-explanatory.
I tried to make a C++/openCV function out of it, but I hit a wall trying to access pixel values of the image. I am new to C++.
I have read a lot about the ways to access a Mat structure, but I usually obtain either garbage characters in place of numbers, or something like "\202 k g". When I try to do any multiplication on, say, \202, the result has nothing to do with math.
Please help me to properly access the pixel values. Also, in the current version, using uchar won't work because some values fall outside the 0-255 range.
The algorithm is not mine. I cannot even point to the source, but it gives clearly better results than the stock RGB2HSV.
Also, the algorithm below is for one pixel. It needs to be applied to each pixel in the image, so the final version needs to wrap it in nested for loops.
I also wish to share this method with the community so everyone can benefit from it, saving on pre-filtering.
Please help me translate it to C++ / openCV, if possible with speed-wise best practices, or at least show me how to cleanly access the pixel values so they are workable with a range of mathematical equations. Thanks in advance.
function [H, S, V] = rgb2hsvPixel(R,G,B)
    % Algorithm:
    % In case of 8-bit and 16-bit images, `R`, `G`, and `B` are converted to the
    % floating-point format and scaled to fit the 0 to 1 range.
    %
    % V = max(R,G,B)
    % S = / (V - min(R,G,B)) / V   if V != 0
    %     \ 0                      otherwise
    %     /       60*(G-B) / (V - min(R,G,B))   if V=R
    % H = | 120 + 60*(B-R) / (V - min(R,G,B))   if V=G
    %     \ 240 + 60*(R-G) / (V - min(R,G,B))   if V=B
    %
    % If `H<0` then `H=H+360`. On output `0<=V<=1`, `0<=S<=1`, `0<=H<=360`.

    red   = (double(R)-16)*255/224;              % \
    green = (double(G)-16)*255/224;              % }- R,G,B (0 <-> 255) -> (-18.2143 <-> 272.0759)
    blue  = (min(double(B)*2,240)-16)*255/224;   % /

    minV  = min(red,min(green,blue));
    value = max(red,max(green,blue));
    delta = value - minV;

    if(value~=0)
        sat = (delta*255) / value;  % s
        if (delta ~= 0)
            if( red == value )
                hue = 60*( green - blue ) / delta;        % between yellow & magenta
            elseif( green == value )
                hue = 120 + 60*( blue - red ) / delta;    % between cyan & yellow
            else
                hue = 240 + 60*( red - green ) / delta;   % between magenta & cyan
            end
            if( hue < 0 )
                hue = hue + 360;
            end
        else
            hue = 0;
            sat = 0;
        end
    else
        % r = g = b = 0
        sat = 0;
        hue = 0;
    end

    H = max(min(floor(((hue*255)/360)),255),0);
    S = max(min(floor(sat),255),0);
    V = max(min(floor(value),255),0);
end
To access the value of a pixel in a 3-channel, 8-bit precision image (type CV_8UC3) you have to do it like this:
cv::Mat image;
cv::Vec3b BGR = image.at<cv::Vec3b>(i,j);
If, as you say, 8-bit precision and range are not enough, you can declare a cv::Mat of type CV_32F to store floating point 32-bit numbers.
cv::Mat image(height, width, CV_32FC3);
//fill your image with data
for(int i = 0; i < image.rows; i++) {
    for(int j = 0; j < image.cols; j++) {
        cv::Vec3f BGR = image.at<cv::Vec3f>(i,j);
        //process your pixel
        cv::Vec3f HSV; //your calculated HSV values
        image.at<cv::Vec3f>(i,j) = HSV;
    }
}
Be aware that OpenCV stores the channel values in BGR order, not RGB. Take a look at the OpenCV docs to learn more about it.
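As a side note, if your source frame is a normal 8-bit CV_8UC3 image, cv::Mat::convertTo is the usual way to get the data into that float format before processing (a minimal sketch; bgr8 and bgr32 are placeholder names):
cv::Mat bgr8;  // your CV_8UC3 source image
cv::Mat bgr32;
bgr8.convertTo(bgr32, CV_32FC3);  // same channel values, now stored as 32-bit floats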
If you are concerned about performance and fairly comfortable with pixel indexing, you can directly use the Mat ptr.
For example:
cv::Mat img = cv::Mat::zeros(4, 8, CV_8UC3);
uchar *ptr_row_img;
int cpt = 0;
for(int i = 0; i < img.rows; i++) {
    ptr_row_img = img.ptr<uchar>(i);
    for(int j = 0; j < img.cols; j++) {
        for(int c = 0; c < img.channels(); c++, cpt++, ++ptr_row_img) {
            *ptr_row_img = cpt;
        }
    }
}
std::cout << "img=\n" << img << std::endl;
The previous code should print:
img=
[  0,   1,   2,   3,   4,   5,   6,   7,   8,   9,  10,  11,  12,  13,  14,  15,  16,  17,  18,  19,  20,  21,  22,  23;
  24,  25,  26,  27,  28,  29,  30,  31,  32,  33,  34,  35,  36,  37,  38,  39,  40,  41,  42,  43,  44,  45,  46,  47;
  48,  49,  50,  51,  52,  53,  54,  55,  56,  57,  58,  59,  60,  61,  62,  63,  64,  65,  66,  67,  68,  69,  70,  71;
  72,  73,  74,  75,  76,  77,  78,  79,  80,  81,  82,  83,  84,  85,  86,  87,  88,  89,  90,  91,  92,  93,  94,  95]
The at access should be enough for most cases, and it is much more readable and less error-prone than the ptr access.
References:
How to scan images, lookup tables and time measurement with OpenCV
C++: OpenCV: fast pixel iteration
Thanks everybody for the help.
Thanks to your hints I constructed the custom rgb2hsv function in C++/openCV.
From the top left respectively: edges after bgr->gray->edges, bgr->HSV->edges, bgr->customHSV->edges.
Below each of them are the corresponding filter settings needed to achieve approximately equally clear results. The bigger the radius of a filter, the more complex and time-consuming the computations.
It produces clearer edges in the next steps of image processing.
It can be tweaked further by experimenting with the parameters of the r, g, b channels:
red = (red-16)*1.1384; //255/224=1.1384
Here 16: the bigger this number, the cleaner V becomes.
255/224: also affects the outcome, extending it beyond the 0-255 range, to be clipped later.
These numbers seem to be the golden ratio here, but anyone can adjust them for specific needs.
With this function, translating BGR to RGB can be avoided by connecting the colors directly to the proper channels of the raw image.
It is probably a little clumsy performance-wise. In my case it serves as the first step of color balance and histogram adjustment, so speed is not that critical.
To use it in constant video-stream processing it needs speed optimization, I think by using pointers and reducing loop complexity. Optimization is not exactly my cup of tea, so if someone helped to optimize it for the community, that would be great.
Here it is, ready to use:
Mat bgr2hsvCustom ( Mat& image )
{
    //smallParam = 16;
    for(int x = 0; x < image.rows; x++)
    {
        for(int y = 0; y < image.cols; y++)
        {
            //assigning vector to individual float BGR values
            float blue  = image.at<cv::Vec3b>(x,y)[0];
            float green = image.at<cv::Vec3b>(x,y)[1];
            float red   = image.at<cv::Vec3b>(x,y)[2];

            float sat, hue, minValue, maxValue, delta;
            float const ang0 = 0; // min and max don't accept a variable and a numeric literal together
            float const ang240 = 240;
            float const ang255 = 255;

            red   = (red-16)*1.1384;   //255/224
            green = (green-16)*1.1384;
            blue  = (min(blue*2,ang240)-16)*1.1384;

            minValue = min(red,min(green,blue));
            maxValue = max(red,max(green,blue));
            delta = maxValue - minValue;

            if (maxValue != 0)
            {
                sat = (delta*255) / maxValue;
                if ( delta != 0)
                {
                    if (red == maxValue){
                        hue = 60*(green - blue)/delta;
                    }
                    else if( green == maxValue ) {
                        hue = 120 + 60*( blue - red )/delta;
                    }
                    else{
                        hue = 240 + 60*( red - green )/delta;
                    }
                    if( hue < 0 ){
                        hue = hue + 360;
                    }
                }
                else{
                    sat = 0;
                    hue = 0;
                }
            }
            else{
                hue = 0;
                sat = 0;
            }

            image.at<cv::Vec3b>(x,y)[0] = max(min(floor(maxValue),ang255),ang0);         //V
            image.at<cv::Vec3b>(x,y)[1] = max(min(floor(sat),ang255),ang0);              //S
            image.at<cv::Vec3b>(x,y)[2] = max(min(floor(((hue*255)/360)),ang255),ang0);  //H
        }
    }
    return image;
}
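Since the speed question was left open: an untested sketch (not the original poster's code; bgr2hsvCustomFast is a hypothetical name) of the same per-pixel math using cv::Mat::ptr row pointers as the earlier answer suggested, so the element address is computed once per row instead of once per at<>() call. It assumes the same CV_8UC3 input and needs <algorithm> and <cmath>.
// Untested sketch of the same math with row-pointer access
Mat bgr2hsvCustomFast( Mat& image )
{
    for(int x = 0; x < image.rows; x++)
    {
        Vec3b* row = image.ptr<Vec3b>(x);   // one address computation per row
        for(int y = 0; y < image.cols; y++)
        {
            float blue  = row[y][0];
            float green = row[y][1];
            float red   = row[y][2];

            red   = (red - 16) * 1.1384f;   //255/224
            green = (green - 16) * 1.1384f;
            blue  = (std::min(blue * 2, 240.0f) - 16) * 1.1384f;

            float minValue = std::min(red, std::min(green, blue));
            float maxValue = std::max(red, std::max(green, blue));
            float delta    = maxValue - minValue;

            float hue = 0, sat = 0;  // the delta == 0 cases collapse to hue = sat = 0
            if(maxValue != 0 && delta != 0)
            {
                sat = (delta * 255) / maxValue;
                if(red == maxValue)
                    hue = 60 * (green - blue) / delta;
                else if(green == maxValue)
                    hue = 120 + 60 * (blue - red) / delta;
                else
                    hue = 240 + 60 * (red - green) / delta;
                if(hue < 0)
                    hue += 360;
            }

            row[y][0] = (uchar)std::max(std::min(std::floor(maxValue), 255.0f), 0.0f);          //V
            row[y][1] = (uchar)std::max(std::min(std::floor(sat), 255.0f), 0.0f);               //S
            row[y][2] = (uchar)std::max(std::min(std::floor((hue * 255) / 360), 255.0f), 0.0f); //H
        }
    }
    return image;
}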

Which color gradient is used to color the Mandelbrot set in Wikipedia?

At Wikipedia's Mandelbrot set page there are really beautiful generated images of the Mandelbrot set.
I also just implemented my own Mandelbrot algorithm. Given n is the number of iterations used to calculate each pixel, I color them quite simply from black to green to white, like this (with C++ and Qt 5.0):
QColor mapping(Qt::white);
if (n <= MAX_ITERATIONS){
    double quotient = (double) n / (double) MAX_ITERATIONS;
    double color = _clamp(0.f, 1.f, quotient);
    if (quotient > 0.5) {
        // Close to the mandelbrot set the color changes from green to white
        mapping.setRgbF(color, 1.f, color);
    }
    else {
        // Far away it changes from black to green
        mapping.setRgbF(0.f, color, 0.f);
    }
}
return mapping;
My result looks like this:
I like it pretty much already, but which color gradient is used for the images on Wikipedia? How do I calculate that gradient for a given n of iterations?
(This question is not about smoothing.)
The gradient is probably from Ultra Fractal. It is defined by 5 control points:
Position = 0.0 Color = ( 0, 7, 100)
Position = 0.16 Color = ( 32, 107, 203)
Position = 0.42 Color = (237, 255, 255)
Position = 0.6425 Color = (255, 170, 0)
Position = 0.8575 Color = ( 0, 2, 0)
where Position is in range [0, 1) and Color is RGB in range [0, 255].
The catch is that the colors are not linearly interpolated. The interpolation of colors is likely cubic (or something similar). The following image shows the difference between linear and monotone cubic interpolation:
As you can see the cubic interpolation results in smoother and "prettier" gradient. I used monotone cubic interpolation to avoid "overshooting" of the [0, 255] color range that can be caused by cubic interpolation. Monotone cubic ensures that interpolated values are always in the range of input points.
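For reference, here is a C++ sketch of Fritsch-Carlson monotone cubic interpolation (my own illustration, not the answerer's actual code; positions past the last control point are simply extrapolated here, whereas a real gradient would wrap around as the scipy-based answer further down does):
#include <cmath>
#include <vector>

// Sketch: monotone cubic (Fritsch-Carlson) interpolation through control points.
// Assumes strictly increasing knot positions.
struct MonotoneCubic {
    std::vector<double> x, y, m; // knot positions, values, tangents

    MonotoneCubic(std::vector<double> xs, std::vector<double> ys)
        : x(std::move(xs)), y(std::move(ys)), m(x.size(), 0.0) {
        const size_t n = x.size();
        std::vector<double> d(n - 1);                 // secant slopes
        for (size_t i = 0; i + 1 < n; ++i)
            d[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i]);
        m[0] = d[0];
        m[n - 1] = d[n - 2];
        for (size_t i = 1; i + 1 < n; ++i)            // zero tangent at local extrema
            m[i] = (d[i - 1] * d[i] <= 0.0) ? 0.0 : 0.5 * (d[i - 1] + d[i]);
        for (size_t i = 0; i + 1 < n; ++i) {          // limit tangents: no overshoot
            if (d[i] == 0.0) { m[i] = m[i + 1] = 0.0; continue; }
            double a = m[i] / d[i], b = m[i + 1] / d[i];
            double s = a * a + b * b;
            if (s > 9.0) {
                double t = 3.0 / std::sqrt(s);
                m[i] = t * a * d[i];
                m[i + 1] = t * b * d[i];
            }
        }
    }

    double eval(double t) const {                     // cubic Hermite on t's segment
        size_t i = 0;
        while (i + 2 < x.size() && t > x[i + 1]) ++i;
        double h = x[i + 1] - x[i], u = (t - x[i]) / h;
        double u2 = u * u, u3 = u2 * u;
        return y[i] * (2 * u3 - 3 * u2 + 1) + m[i] * h * (u3 - 2 * u2 + u)
             + y[i + 1] * (-2 * u3 + 3 * u2) + m[i + 1] * h * (u3 - u2);
    }
};

// Usage, e.g. for the red channel of the control points above:
//   MonotoneCubic red({0.0, 0.16, 0.42, 0.6425, 0.8575}, {0, 32, 237, 255, 0});
//   double r = red.eval(0.3);  // interpolated red value at gradient position 0.3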
I use the following code to compute the color based on iteration i:
double smoothed = Math.Log2(Math.Log2(re * re + im * im) / 2); // log_2(log_2(|p|))
int colorI = (int)(Math.Sqrt(i + 10 - smoothed) * gradient.Scale) % colors.Length;
Color color = colors[colorI];
where i is the diverged iteration number, re and im are the diverged coordinates, gradient.Scale is 256, and colors is an array with the pre-computed gradient colors shown above. Its length is 2048 in this case.
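In C++ terms the same lookup reads roughly as follows (a hedged translation of the snippet above; gradientScale and numColors are stand-ins for gradient.Scale and colors.Length):
// Sketch: smoothed palette lookup, translated from the C# above
double smoothed = std::log2(std::log2(re * re + im * im) / 2); // log_2(log_2(|p|))
int colorI = (int)(std::sqrt(i + 10 - smoothed) * gradientScale) % numColors;
QColor color = colors[colorI];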
Well, I did some reverse engineering on the colours used in Wikipedia using the Photoshop eyedropper. There are 16 colours in this gradient:
R G B
66 30 15 # brown 3
25 7 26 # dark violett
9 1 47 # darkest blue
4 4 73 # blue 5
0 7 100 # blue 4
12 44 138 # blue 3
24 82 177 # blue 2
57 125 209 # blue 1
134 181 229 # blue 0
211 236 248 # lightest blue
241 233 191 # lightest yellow
248 201 95 # light yellow
255 170 0 # dirty yellow
204 128 0 # brown 0
153 87 0 # brown 1
106 52 3 # brown 2
Simply using a modulo and a QColor array allows me to iterate through all colours in the gradient:
if (n < MAX_ITERATIONS && n > 0) {
    int i = n % 16;
    QColor mapping[16];
    mapping[0].setRgb(66, 30, 15);
    mapping[1].setRgb(25, 7, 26);
    mapping[2].setRgb(9, 1, 47);
    mapping[3].setRgb(4, 4, 73);
    mapping[4].setRgb(0, 7, 100);
    mapping[5].setRgb(12, 44, 138);
    mapping[6].setRgb(24, 82, 177);
    mapping[7].setRgb(57, 125, 209);
    mapping[8].setRgb(134, 181, 229);
    mapping[9].setRgb(211, 236, 248);
    mapping[10].setRgb(241, 233, 191);
    mapping[11].setRgb(248, 201, 95);
    mapping[12].setRgb(255, 170, 0);
    mapping[13].setRgb(204, 128, 0);
    mapping[14].setRgb(153, 87, 0);
    mapping[15].setRgb(106, 52, 3);
    return mapping[i];
}
else return Qt::black;
The result looks pretty much like what I was looking for:
:)
I believe they're the default colours in Ultra Fractal. The evaluation version comes with source for a lot of the parameters, and I think that includes that colour map (if you can't infer it from the screenshot on the front page) and possibly also the logic behind dynamically scaling that colour map appropriately for each scene.
This is an extension of NightElfik's great answer.
The Python library SciPy has monotone cubic interpolation methods (pchip_interpolate, used here with version 1.5.2). I included the code I used to create my gradient below. I decided to include helper values less than 0 and larger than 1 to help the interpolation wrap from the end to the beginning (no sharp corners).
#imports assumed by this snippet
import numpy as np
from scipy.interpolate import pchip_interpolate
#import matplotlib.pyplot as plt  #only needed for the optional plot at the end

#set up the control points for your gradient
yR_observed = [0, 0, 32, 237, 255, 0, 0, 32]
yG_observed = [2, 7, 107, 255, 170, 2, 7, 107]
yB_observed = [0, 100, 203, 255, 0, 0, 100, 203]
x_observed = [-.1425, 0, .16, .42, .6425, .8575, 1, 1.16]
#Create the arrays with the interpolated values
x = np.linspace(min(x_observed), max(x_observed), num=1000)
yR = pchip_interpolate(x_observed, yR_observed, x)
yG = pchip_interpolate(x_observed, yG_observed, x)
yB = pchip_interpolate(x_observed, yB_observed, x)
#Convert them back to python lists
x = list(x)
yR = list(yR)
yG = list(yG)
yB = list(yB)
#Find the indices where x crosses 0 and crosses 1 for slicing
start = 0
end = 0
for i in x:
    if i > 0:
        start = x.index(i)
        break
for i in x:
    if i > 1:
        end = x.index(i)
        break
#Slice away the helper data at the beginning and end, leaving just 0 to 1
x = x[start:end]
yR = yR[start:end]
yG = yG[start:end]
yB = yB[start:end]
#Plot the values if you want
#plt.plot(x, yR, color = "red")
#plt.plot(x, yG, color = "green")
#plt.plot(x, yB, color = "blue")
#plt.show()

c++ defined 16bit (high) color

I am working on a project with a TFT touch screen. The screen comes with an included library, but after some reading, I still don't get something. In the library there are some defines regarding colors:
/* some RGB color definitions */
#define Black 0x0000 /* 0, 0, 0 */
#define Navy 0x000F /* 0, 0, 128 */
#define DarkGreen 0x03E0 /* 0, 128, 0 */
#define DarkCyan 0x03EF /* 0, 128, 128 */
#define Maroon 0x7800 /* 128, 0, 0 */
#define Purple 0x780F /* 128, 0, 128 */
#define Olive 0x7BE0 /* 128, 128, 0 */
#define LightGrey 0xC618 /* 192, 192, 192 */
#define DarkGrey 0x7BEF /* 128, 128, 128 */
#define Blue 0x001F /* 0, 0, 255 */
#define Green 0x07E0 /* 0, 255, 0 */
#define Cyan 0x07FF /* 0, 255, 255 */
#define Red 0xF800 /* 255, 0, 0 */
#define Magenta 0xF81F /* 255, 0, 255 */
#define Yellow 0xFFE0 /* 255, 255, 0 */
#define White 0xFFFF /* 255, 255, 255 */
#define Orange 0xFD20 /* 255, 165, 0 */
#define GreenYellow 0xAFE5 /* 173, 255, 47 */
#define Pink 0xF81F
Those are 16-bit colors. But how do they go from 0, 128, 128 (dark cyan) to 0x03EF? I mean, how do you convert a 24-bit color triplet to a uint16? This doesn't need to be answered in code; I just want to add some colors to the library. A link to an online converter (which I could not find) would be okay as well :)
Thanks
From these one can easily find out the formula:
#define Red 0xF800 /* 255, 0, 0 */
#define Magenta 0xF81F /* 255, 0, 255 */
#define Yellow 0xFFE0 /* 255, 255, 0 */
0xF800 has the 5 most significant bits set, and 0xFFE0 has the 5 least significant bits clear.
0xF81F obviously has both the 5 LSBs and the 5 MSBs set, which proves the format to be RGB565.
The formula to convert a value like 173 to red is not as straightforward as it may look -- you can't simply drop the 3 least significant bits, but have to linearly interpolate so that 255 corresponds to 31 (or, for green, 255 corresponds to 63).
NewValue = (31 * old_value) / 255;
(And this is still just a truncating division -- proper rounding could be needed)
With proper rounding and scaling:
Uint16_value = (((31*(red+4))/255)<<11) |
               (((63*(green+2))/255)<<5) |
               ((31*(blue+4))/255);
EDIT: Added parentheses as helpfully suggested by JasonD.
You need to know the exact format of the display; just "16-bit" is not enough.
There's RGB555, in which each of the three components gets 5 bits. This drops the total color space to just 32,768 colors, wasting one bit, but it's very simple to manage since there's the same number of shades for each component.
There's also RGB565, in which the green component is given 6 bits (since the human eye is more sensitive to green). This is probably the format you have, since the "dark green" example is 0x03e0, which in binary is 0b0000 0011 1110 0000. Since there are 6 bits set to 1 there, I guess that's the green component's total allocation, showing its maximum value.
It's like this, then (with spaces separating every four bits and re-using the imaginary 0b prefix):
0bRRRR RGGG GGGB BBBB
Of course, the bit ordering within the word can differ too.
The task of converting a triplet of numbers into a bit-packed word is quite easily done in typical programming languages that have bit manipulation operators.
In C, it's often done in a macro, but we can just as well have a function:
#include <stdint.h>

uint16_t rgb565_from_triplet(uint8_t red, uint8_t green, uint8_t blue)
{
    red >>= 3;
    green >>= 2;
    blue >>= 3;
    return (red << 11) | (green << 5) | blue;
}
note that the above assumes full 8-bit precision for the components, so maximum intensity for a component is 255, not 128 as in your example. If the color space really is using 7-bit components then some additional scaling would be necessary.
It looks like you're using RGB565, first 5 bits for Red, then 6 bits for Green, then 5 bits for Blue.
You should mask with 0xF800 and shift right 11 bits to get the red component (or shift 8 bits to get a value from 0-255).
Mask with 0x07E0 and shift right 5 bits to get green component (or 3 to get a 0-255 value).
Mask with 0x001F to get the blue component (and shift left 3 bits to get 0-255).
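For example, one way to code that mask-and-shift extraction (a hedged sketch, not from the answer; rgb565_to_triplet is a hypothetical name), filling the low bits by duplicating the top bits so full intensity maps back to 255 exactly:
#include <stdint.h>

/* Sketch: unpack RGB565 into 8-bit components */
void rgb565_to_triplet(uint16_t c, uint8_t *r, uint8_t *g, uint8_t *b)
{
    uint8_t r5 = (c & 0xF800) >> 11;  /* 5-bit red field */
    uint8_t g6 = (c & 0x07E0) >> 5;   /* 6-bit green field */
    uint8_t b5 =  c & 0x001F;         /* 5-bit blue field */
    *r = (r5 << 3) | (r5 >> 2);       /* duplicate top bits into the low bits, */
    *g = (g6 << 2) | (g6 >> 4);       /* so 31 -> 255 and 63 -> 255 exactly */
    *b = (b5 << 3) | (b5 >> 2);
}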
Your colours are in 565 format. It would be more obvious if you wrote them out in binary.
Blue, (0,0,255) is 0x001f, which is 00000 000000 11111
Green, (0, 255, 0) is 0x07e0, which is 00000 111111 00000
Red, (255, 0, 0) is 0xf800, which is 11111 000000 00000
To convert a 24 bit colour to 16 bit in this format, simply mask off the upper bits needed from each component, shift into position, and OR together.
To convert back into 24-bit colour from 16-bit, mask each component, shift into position, and then duplicate the upper bits into the lower bits.
In your examples it seems that some colours have been scaled and rounded rather than shifted.
I strongly recommend using the bit-shift method rather than scaling by a factor like 31/255, because the bit-shifting is not only likely to be faster, but should give better results.
The 3-part numbers you're showing are applicable to 24-bit color. 128 in hex is 0x80, but in your color definitions, it's being represented as 0x0f. Likewise, 255 is 0xff, but in your color definitions, it's being represented as 0x1f. This suggests that you need to take the 3-part numbers and shift them down by 3 bits (losing 3 bits of color data for each color). Then combine them into a single 16-bit number:
uint16 color = ((red>>3)<<11) | ((green>>2)<<5) | (blue>>3);
...revised from earlier because green uses 6 bits, not 5.
You need to know how many bits there are per colour channel. So yes, there are 16 bits for a colour, but the RGB components are each some subset of those bits. In your case, red is 5 bits, green is 6, and blue is 5. The format in binary would look like so:
RRRRRGGG GGGBBBBB
There are other 16 bit colour formats, such as red, green, and blue each being 5 bits and then use the remaining bit for an alpha value.
The range of values for both the red and blue channels will be from 0 to 2^5-1 = 31, while the range for green will be 0 to 2^6-1 = 63. So to convert from colours in the form of (0->255),(0->255),(0->255) you will need to map values from one range to the other. For example, a red value of 128 in the range 0->255 will be mapped to (128/255) * 31 = 15.6 in the red channel with range 0-31. If we round down, we get 15, which is represented as 01111 in five bits. Similarly, for the green channel (with six bits) you will get 011111. So the colour (128,128,128) will map to 01111011 11101111, which is 0x7BEF in hexadecimal.
You can apply this to the other values too: 0,128,128 becomes 00000011 11101111 which is 0x03EF.
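A small sketch of that scale-then-pack mapping (my own, for verification; pack565_scaled is a hypothetical name), which reproduces the constants from the header:
#include <stdint.h>
#include <stdio.h>

/* Sketch: scale each 8-bit component into its field, then pack as RGB565 */
uint16_t pack565_scaled(uint8_t r, uint8_t g, uint8_t b)
{
    uint16_t r5 = r * 31 / 255;  /* 128 -> 15 */
    uint16_t g6 = g * 63 / 255;  /* 128 -> 31 */
    uint16_t b5 = b * 31 / 255;  /* 128 -> 15 */
    return (r5 << 11) | (g6 << 5) | b5;
}

int main(void)
{
    printf("0x%04X\n", pack565_scaled(0, 128, 128));    /* prints 0x03EF (DarkCyan) */
    printf("0x%04X\n", pack565_scaled(128, 128, 128));  /* prints 0x7BEF (DarkGrey) */
    return 0;
}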
Those colours shown in your code are RGB565. As shown by
#define Blue 0x001F /* 0, 0, 255 */
#define Green 0x07E0 /* 0, 255, 0 */
#define Red 0xF800 /* 255, 0, 0 */
If you simply want to add some new colours to this #defined list, the simplest way to convert from a 16-bit UINT per channel is just to shift your values down to lose the low-order bits, and then shift and OR them into position in the 16-bit RGB value.
This could well produce banding artefacts though, and there may well be a better conversion method.
i.e.
UINT16 blue  = 0xFFFF;  // 16-bit per-channel values at full intensity
UINT16 green = 0xFFFF;
UINT16 red   = 0xFFFF;
blue  >>= 11; // top 5 bits
green >>= 10; // top 6 bits
red   >>= 11; // top 5 bits
UINT16 RGBvalue = blue | (green << 5) | (red << 11);
You may need to mask off any unwanted stray bits after the shifts, as I am unsure how this works, but I think the code above should work.
Building on unwind's answer, specifically for the Adafruit GFX library using the Arduino 2.8" TFT Touchscreen(v2), you can add this function to your Arduino sketch and use it inline to calculate colors from rgb:
uint16_t getColor(uint8_t red, uint8_t green, uint8_t blue)
{
    red >>= 3;
    green >>= 2;
    blue >>= 3;
    return (red << 11) | (green << 5) | blue;
}
Now you can use it inline like so, illustrated with a function that creates a 20x20 square at (0,0):
void setup() {
    tft.begin();
    makeSquare(getColor(20, 157, 217));
}

void makeSquare(uint16_t color1) {
    tft.fillRect(0, 0, 20, 20, color1);
}
Docs for the Adafruit GFX library can be found here

RaphaelJS - I need help understanding a transform

I have a question about the following demo - http://raphaeljs.com/hand.html.
Here is code from the sample...
var r = Raphael("holder", 640, 480), angle = 0;
while (angle < 360) {
    var color = Raphael.getColor();
    (function(t, c) {
        r.circle(320, 450, 20).attr({
            stroke : c,
            fill : c,
            transform : t,
            "fill-opacity" : .4
        }).click(function() {
            s.animate({
                transform : t,
                stroke : c
            }, 2000, "bounce");
        }).mouseover(function() {
            this.animate({
                "fill-opacity" : .75
            }, 500);
        }).mouseout(function() {
            this.animate({
                "fill-opacity" : .4
            }, 500);
        });
    })("r" + angle + " 320 240", color);
    angle += 30;
}
Raphael.getColor.reset();
var s = r.set();
s.push(r.path("M320,240c-50,100,50,110,0,190").attr({
    fill : "none",
    "stroke-width" : 2
}));
s.push(r.circle(320, 450, 20).attr({
    fill : "none",
    "stroke-width" : 2
}));
s.push(r.circle(320, 240, 5).attr({
    fill : "none",
    "stroke-width" : 10
}));
s.attr({
    stroke : Raphael.getColor()
});
The question I have is about the following line of code...
("r" + angle + " 320 240", color);
In the anonymous function the circle is initially drawn at 320, 450 with a radius of 20. Then a transform is applied, for example ("r30 320 240") when the angle is 30.
How does this transform work? The way I read this transform is: rotate the circle 30 degrees around 320,450, then move 320 horizontally (to the right) and 240 vertically (down).
But I'm obviously reading this transform wrong, because this is not what is happening.
What am I missing?
Thanks
The transform "r30 320 240" sets the rotation of the object about the point (320,240) by 30 degrees. It does not add to the rotation. It overrides any previous transformations.
If you look at this example:
http://jsfiddle.net/jZyyy/1/
You can see that I am setting the rotation of the circle about the point (0,0). If you consider the point (0,0) to be the centre of a clock, then the circle begins at 3 o'clock. If I use the transform "r90 0 0" the circle will be rotated from 3 o'clock to 6 o'clock. If I then later set the transform to be "r30 0 0" the circle will be at 4 o'clock, rotated 30 degrees from the original 3 o'clock position about the point (0,0).