How can I set an RGB color? - c++

How can I set an RGB color on any component? FireMonkey, C++Builder XE8.
I have tried this code, but it doesn't work:
Rectangle1->Fill->Color = RGB(255, 50, 103);
Rectangle1->Fill->Color = (TColor)RGB(255, 50, 103);
Maybe I need to use RGBA, but I don't know how to do that.
I solved it:
UnicodeString s;
s = "0xFF";             // alpha channel: fully opaque
s += IntToHex(255, 2);  // red
s += IntToHex(50, 2);   // green
s += IntToHex(103, 2);  // blue
Rectangle1->Fill->Color = StringToColor(s);

This function will convert integer RGB values to a TAlphaColor, which is what FireMonkey uses.
TAlphaColor GetAlphaColor(int R, int G, int B)
{
    TAlphaColorRec acr;
    acr.R = R;
    acr.G = G;
    acr.B = B;
    acr.A = 255;
    return acr.Color;
}

Related

What is the meaning of Bayer pattern codes in OpenCV?

What is the meaning of the Bayer pattern codes in OpenCV?
For example, I have this code:
CV_BayerGR2BGR
My questions are:
What is the color of the first pixel in the matrix (i.e., the color of pixel (0,0))?
What is the color of pixel (x,y)?
Is there a simple and efficient way to find the color of pixel (x,y) for a given Bayer pattern? (That is, to extend the code I am writing to work with every Bayer pattern.)
Update:
As Miki pointed out, the OpenCV documentation says:
The two letters C_1 and C_2 in the conversion constants CV_Bayer C_1 C_2 2BGR and CV_Bayer C_1 C_2 2RGB indicate the particular pattern type. These are components from the second row, second and third columns, respectively. For example, the above pattern has a very popular “BG” type.
which answers my first question, but what about the other two questions?
Knowing the type of the pattern, COLOR_Bayer<type>2BGR, you can easily find the color of each pixel by checking whether its coordinates are odd or even, since the pattern is simply a 2x2 tile repeated over the whole image.
The OpenCV patterns are:
COLOR_BayerBG2BGR = 46,
COLOR_BayerGB2BGR = 47,
COLOR_BayerRG2BGR = 48,
COLOR_BayerGR2BGR = 49,
COLOR_BayerBG2RGB = COLOR_BayerRG2BGR,
COLOR_BayerGB2RGB = COLOR_BayerGR2BGR,
COLOR_BayerRG2RGB = COLOR_BayerBG2BGR,
COLOR_BayerGR2RGB = COLOR_BayerGB2BGR,
so you can simply check the first four.
The function
#define BLUE 0
#define GREEN 1
#define RED 2
int getColorFromBayer(int r, int c, int type)
will return the color, given the row r, the column c, and the pattern type.
The following code shows how to recover the color of each pixel, and generates a BGR color image of the Bayer pattern.
#include <opencv2/opencv.hpp>
using namespace cv;
//COLOR_BayerBG2BGR = 46,
//COLOR_BayerGB2BGR = 47,
//COLOR_BayerRG2BGR = 48,
//COLOR_BayerGR2BGR = 49,
//
//COLOR_BayerBG2RGB = COLOR_BayerRG2BGR,
//COLOR_BayerGB2RGB = COLOR_BayerGR2BGR,
//COLOR_BayerRG2RGB = COLOR_BayerBG2BGR,
//COLOR_BayerGR2RGB = COLOR_BayerGB2BGR,
#define BLUE 0
#define GREEN 1
#define RED 2
int getColorFromBayer(int r, int c, int type)
{
    // 2x2 tables in row-major order: index = 2 * (r % 2) + (c % 2)
    static int bg[] = { RED, GREEN, GREEN, BLUE };
    static int gb[] = { GREEN, RED, BLUE, GREEN };
    static int rg[] = { BLUE, GREEN, GREEN, RED };
    static int gr[] = { GREEN, BLUE, RED, GREEN };

    int rr = r % 2;
    int cc = c % 2;

    switch (type)
    {
    case COLOR_BayerBG2BGR: return bg[2 * rr + cc];
    case COLOR_BayerGB2BGR: return gb[2 * rr + cc];
    case COLOR_BayerRG2BGR: return rg[2 * rr + cc];
    case COLOR_BayerGR2BGR: return gr[2 * rr + cc];
    }
    return -1;
}
int main()
{
    Mat3b bayer(10, 10, Vec3b(0, 0, 0));

    // Create bayer pattern BG
    for (int r = 0; r < bayer.rows; ++r)
    {
        for (int c = 0; c < bayer.cols; ++c)
        {
            int color = getColorFromBayer(r, c, COLOR_BayerBG2BGR);
            switch (color)
            {
            case BLUE : bayer(r, c) = Vec3b(255, 0, 0); break;
            case GREEN: bayer(r, c) = Vec3b(0, 255, 0); break;
            case RED  : bayer(r, c) = Vec3b(0, 0, 255); break;
            }
        }
    }
    return 0;
}
Something like this pseudocode?
(note that CV_BayerBG2RGB = CV_BayerRG2BGR, so you only need to handle one triplet type)
if (((x + y) & 1) != (CV_Bayerxxxxxx & 1)) then
    C(x,y) = G
else
    if ((pattern is CV_BayerBGxxx or CV_BayerGBxxx) == (y & 1)) then
        C(x,y) = B
    else
        C(x,y) = R

Bilinear resizing with C++ and a vector of RGBA pixels

I am trying to resize an image using the bilinear technique I found here, but all I see is a black image.
First, I decode the image with LodePNG and the pixels go into a vector<unsigned char>. The documentation says they are stored as RGBARGBA, but when I drew the image in an X11 window I realized they were stored as BGRABGRA. I don't know whether it is the X11 API or the LodePNG decoder that changes the order. In any case, before anything else, I swap the B and R channels:
// Here is where I have the pixels stored
vector<unsigned char> Image;

// Converting BGRA to RGBA, or vice-versa, I don't know, but it's how it is
// shown correctly on the window
unsigned char red, blue;
unsigned int i;
for (i = 0; i < Image.size(); i += 4)
{
    red = Image[i + 2];
    blue = Image[i];
    Image[i] = red;
    Image[i + 2] = blue;
}
So now, before drawing it to the window, I want to change the size of the image to the size of the window (i.e., stretch it).
First, I convert the RGBA bytes to packed int values, like this:
vector<int> IntImage;
for (unsigned i = 0; i < Image.size(); i += 4)
{
    IntImage.push_back(256 * 256 * Image[i + 2] + 256 * Image[i + 1] + Image[i]);
}
Now I have this function from the link I specified above, which is supposed to do the interpolation:
vector<int> resizeBilinear(vector<int> pixels, int w, int h, int w2, int h2) {
    vector<int> temp(w2 * h2);
    int a, b, c, d, x, y, index;
    float x_ratio = ((float)(w - 1)) / w2;
    float y_ratio = ((float)(h - 1)) / h2;
    float x_diff, y_diff, blue, red, green;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = (y * w + x);
            a = pixels[index];
            b = pixels[index + 1];
            c = pixels[index + w];
            d = pixels[index + w + 1];

            // blue element
            // Yb = Ab(1-w)(1-h) + Bb(w)(1-h) + Cb(h)(1-w) + Db(wh)
            blue = (a & 0xff) * (1 - x_diff) * (1 - y_diff) + (b & 0xff) * (x_diff) * (1 - y_diff) +
                   (c & 0xff) * (y_diff) * (1 - x_diff) + (d & 0xff) * (x_diff * y_diff);

            // green element
            // Yg = Ag(1-w)(1-h) + Bg(w)(1-h) + Cg(h)(1-w) + Dg(wh)
            green = ((a >> 8) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 8) & 0xff) * (x_diff) * (1 - y_diff) +
                    ((c >> 8) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 8) & 0xff) * (x_diff * y_diff);

            // red element
            // Yr = Ar(1-w)(1-h) + Br(w)(1-h) + Cr(h)(1-w) + Dr(wh)
            red = ((a >> 16) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 16) & 0xff) * (x_diff) * (1 - y_diff) +
                  ((c >> 16) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 16) & 0xff) * (x_diff * y_diff);

            temp.push_back(
                ((((int)red) << 16) & 0xff0000) |
                ((((int)green) << 8) & 0xff00) |
                ((int)blue) |
                0xff); // hardcode alpha
        }
    }
    return temp;
}
and I use it like this:
vector<int> NewImage = resizeBilinear(IntImage, image_width, image_height, window_width, window_height);
which is supposed to return the RGBA vector of the resized image. Then I convert back from int to RGBA:
Image.clear();
for (unsigned i = 0; i < NewImage.size(); i++)
{
    Image.push_back(NewImage[i] & 255);
    Image.push_back((NewImage[i] >> 8) & 255);
    Image.push_back((NewImage[i] >> 16) & 255);
    Image.push_back(0xff);
}
and what I get is a black window (the default background color), so I don't know what I am missing. If I comment out the call that computes the new image and just convert IntImage straight back to RGBA, I get the correct values, so the RGBA/int conversions themselves seem fine. I'm just lost now. I know this can be optimized/simplified, but for now I just want to make it work.
The array access in your code is incorrect:
vector<int> temp(w2 * h2); // initializes the array to contain zeros
...
temp.push_back(...); // appends to the array, leaving the zeros unchanged
You should overwrite instead of appending; for that, calculate the array position:
temp[i * w2 + j] = ...;
Alternatively, initialize the array to an empty state, and append your stuff:
vector<int> temp;
temp.reserve(w2 * h2); // reserves some memory; array is still empty
...
temp.push_back(...); // appends to the array

Does videoInput guarantee RGB camera input? (Transferring images from videoInput/dshow -> Java BufferedImage)

I am using videoInput to get a live stream from my webcam, but I've run into a problem: videoInput's documentation implies that I should always get BGR/RGB, yet the "verbose" output tells me the pixel format is YUY2.
***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****
SETUP: Setting up device 0
SETUP: 1.3M WebCam
SETUP: Couldn't find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480
SETUP: trying format RGB24 # 640 by 480
SETUP: trying format RGB32 # 640 by 480
SETUP: trying format RGB555 # 640 by 480
SETUP: trying format RGB565 # 640 by 480
SETUP: trying format YUY2 # 640 by 480
SETUP: Capture callback set
SETUP: Device is setup and ready to capture.
My first thought was to convert to RGB (assuming I was really getting YUY2 data), but I ended up with a highly distorted blue image.
Here is my code for converting YUY2 to BGR (note: this is part of a much larger program, and the code is borrowed; I can provide the URL on request):
#define CLAMP_MIN( in, min ) ((in) < (min))?(min):(in)
#define CLAMP_MAX( in, max ) ((in) > (max))?(max):(in)
#define FIXNUM 16
#define FIX(a, b) ((int)((a)*(1<<(b))))
#define UNFIX(a, b) ((a+(1<<(b-1)))>>(b))
#define ICCIRUV(x) (((x)<<8)/224)
#define ICCIRY(x) ((((x)-16)<<8)/219)
#define CLIP(t) CLAMP_MIN( CLAMP_MAX( (t), 255 ), 0 )
#define GET_R_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.402, FIXNUM)*(v)), FIXNUM)
#define GET_G_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(-0.344, FIXNUM)*(u) + FIX(-0.714, FIXNUM)*(v)), FIXNUM)
#define GET_B_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.772, FIXNUM)*(u)), FIXNUM)
bool yuy2_to_rgb24(int streamid) {
    int i;
    unsigned char y1, u, y2, v;
    int Y1, Y2, U, V;
    unsigned char r, g, b;
    int size = stream[streamid]->config.g_h * (stream[streamid]->config.g_w / 2);
    unsigned long srcIndex = 0;
    unsigned long dstIndex = 0;

    try {
        for (i = 0; i < size; i++) {
            y1 = stream[streamid]->vi_buffer[srcIndex];
            u  = stream[streamid]->vi_buffer[srcIndex + 1];
            y2 = stream[streamid]->vi_buffer[srcIndex + 2];
            v  = stream[streamid]->vi_buffer[srcIndex + 3];

            Y1 = ICCIRY(y1);
            U  = ICCIRUV(u - 128);
            Y2 = ICCIRY(y2);
            V  = ICCIRUV(v - 128);

            r = CLIP(GET_R_FROM_YUV(Y1, U, V));
            //r = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (1.596f * (float(V) - 128)) );
            g = CLIP(GET_G_FROM_YUV(Y1, U, V));
            //g = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
            b = CLIP(GET_B_FROM_YUV(Y1, U, V));
            //b = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );

            stream[streamid]->rgb_buffer[dstIndex]     = b;
            stream[streamid]->rgb_buffer[dstIndex + 1] = g;
            stream[streamid]->rgb_buffer[dstIndex + 2] = r;
            dstIndex += 3;

            r = CLIP(GET_R_FROM_YUV(Y2, U, V));
            //r = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (1.596f * (float(V) - 128)) );
            g = CLIP(GET_G_FROM_YUV(Y2, U, V));
            //g = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
            b = CLIP(GET_B_FROM_YUV(Y2, U, V));
            //b = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );

            stream[streamid]->rgb_buffer[dstIndex]     = b;
            stream[streamid]->rgb_buffer[dstIndex + 1] = g;
            stream[streamid]->rgb_buffer[dstIndex + 2] = r;
            dstIndex += 3;
            srcIndex += 4;
        }
        return true;
    } catch (...) {
        return false;
    }
}
Since this wasn't working, I assume either a) my color space conversion function is wrong, or b) videoInput is lying to me.
Well, I wanted to double-check that videoInput was indeed telling me the truth, and it turns out there is no way to see the pixel format you get from the videoInput::getPixels() function, outside of the verbose text (unless I'm extremely crazy and just can't see it). This makes me suspect that videoInput does some sort of color space conversion behind the scenes so you always get a consistent image, regardless of the webcam. With this in mind, and following some of the documentation in videoInput.h:96, it appears that it just hands out RGB or BGR images.
The utility I'm using to display the image takes RGB images (a Java BufferedImage), so I figured I could feed it the raw data directly from videoInput and it should be fine.
Here is how I've got my image setup in Java:
BufferedImage buffer = new BufferedImage(directShow.device_stream_width(stream),directShow.device_stream_height(stream), BufferedImage.TYPE_INT_RGB );
int rgbdata[] = directShow.grab_frame_stream(stream);
if( rgbdata.length > 0 ) {
buffer.setRGB(
0, 0,
directShow.device_stream_width(stream),
directShow.device_stream_height(stream),
rgbdata,
0, directShow.device_stream_width(stream)
);
}
And here is how I send it to Java (C++/JNI):
JNIEXPORT jintArray JNICALL Java_directshowcamera_dsInterface_grab_1frame_1stream(JNIEnv *env, jobject obj, jint streamid)
{
    //jclass bbclass = env->FindClass( "java/nio/IntBuffer" );
    //jmethodID putMethod = env->GetMethodID(bbclass, "put", "(B)Ljava/nio/IntBuffer;");
    int buffer_size;
    jintArray ia;
    jint *intbuffer = NULL;
    unsigned char *buffer = NULL;

    append_stream( streamid );

    buffer_size = stream_device_rgb24_size(streamid);
    ia = env->NewIntArray( buffer_size );
    intbuffer = (jint *)calloc( buffer_size, sizeof(jint) );

    buffer = stream_device_buffer_rgb( streamid );
    if( buffer == NULL ) {
        env->DeleteLocalRef( ia );
        return env->NewIntArray( 0 );
    }

    for( int i = 0; i < buffer_size; i++ ) {
        intbuffer[i] = (jint)buffer[i];
    }

    env->SetIntArrayRegion( ia, 0, buffer_size, intbuffer );
    free( intbuffer );

    return ia;
}
This has been driving me absolutely nuts for the past two weeks, and I've tried variations of everything suggested to me, with absolutely no success.

Bitmap in C# into C++

I think this must be an easy question for somebody who uses bitmaps in C++. I have working code in C#; how can I do something similar in C++? Thanks for your help!
public Bitmap Visualize ()
{
    PixelFormat fmt = System.Drawing.Imaging.PixelFormat.Format24bppRgb;
    Bitmap result = new Bitmap( Width, Height, fmt );
    BitmapData data = result.LockBits( new Rectangle( 0, 0, Width, Height ), ImageLockMode.ReadOnly, fmt );
    unsafe
    {
        byte* ptr;
        for ( int y = 0; y < Height; y++ )
        {
            ptr = (byte*)data.Scan0 + y * data.Stride;
            for ( int x = 0; x < Width; x++ )
            {
                float num = 0.44f;
                byte c = (byte)(255.0f * num);
                ptr[0] = ptr[1] = ptr[2] = c;
                ptr += 3;
            }
        }
    }
    result.UnlockBits( data );
    return result;
}
Here is a raw translation to C++/CLI. I didn't run the example, so it may contain some typos. There are also different ways to get the same result in C++ (for instance, using the standard CRT APIs).
Bitmap^ Visualize ()
{
    PixelFormat fmt = System::Drawing::Imaging::PixelFormat::Format24bppRgb;
    Bitmap^ result = gcnew Bitmap( Width, Height, fmt );
    BitmapData^ data = result->LockBits( Rectangle( 0, 0, Width, Height ), ImageLockMode::ReadOnly, fmt );
    for ( int y = 0; y < Height; y++ )
    {
        unsigned char* ptr = reinterpret_cast<unsigned char*>((data->Scan0 + y * data->Stride).ToPointer());
        for ( int x = 0; x < Width; x++ )
        {
            float num = 0.44f;
            unsigned char c = static_cast<unsigned char>(255.0f * num);
            ptr[0] = ptr[1] = ptr[2] = c;
            ptr += 3;
        }
    }
    result->UnlockBits( data );
    return result;
}
You can write very similar loops using the EasyBMP library.
C++ itself contains nothing for images or image processing. Many libraries are available, and the way you operate on the data may differ for each.
At its most basic level, an image is just a bunch of bytes. If you can extract the pixel data (i.e., not the headers or other metadata) into an unsigned char[] (or another type appropriate to your image format), you can iterate through each pixel much as in your C# example.

Open a YML file in OpenCV 1.0

I have a YML file and I want to open it for reading using the existing OpenCV 1.0 functions. The file contains something like this:
%YAML:1.0
Image file: "00961010.jpg"
Contours count: 8
Contours:
-
Name: FO
Count: 41
Closed: 0
Points:
-
x: 740.7766113281250000
y: 853.0124511718750000
-
x: 745.1353149414062500
y: 875.5324096679687500
Can you please provide an example of how to iterate over this data? I only need the x, y points, to store them in an array. I have searched but did not find a similar example. Thanks in advance!
You're going to want to look at the CvFileStorage data structures and functions.
Here is an example from OpenCV to get you started. Note that your x and y values are floating point (so you will likely want cvReadRealByName rather than cvReadIntByName), and in your file the points live under the nested Contours/Points nodes rather than a top-level points node:
#include "cxcore.h"

int main( int argc, char** argv )
{
    CvFileStorage* fs = cvOpenFileStorage( "points.yml", 0, CV_STORAGE_READ );
    CvStringHashNode* x_key = cvGetHashedKey( fs, "x", -1, 1 );
    CvStringHashNode* y_key = cvGetHashedKey( fs, "y", -1, 1 );
    CvFileNode* points = cvGetFileNodeByName( fs, 0, "points" );

    if( CV_NODE_IS_SEQ(points->tag) )
    {
        CvSeq* seq = points->data.seq;
        int i, total = seq->total;
        CvSeqReader reader;
        cvStartReadSeq( seq, &reader, 0 );
        for( i = 0; i < total; i++ )
        {
            CvFileNode* pt = (CvFileNode*)reader.ptr;
            int x = cvReadIntByName( fs, pt, "x", 0 /* default value */ );
            int y = cvReadIntByName( fs, pt, "y", 0 /* default value */ );
            CV_NEXT_SEQ_ELEM( seq->elem_size, reader );
            printf("point%d is (x = %d, y = %d)\n", i, x, y);
        }
    }
    cvReleaseFileStorage( &fs );
    return 0;
}