How to find the color of objects (cloth color of a human being)? - C++

I have come across several Stack Overflow questions and solutions, but in all of them the solution is based on one particular color (red, green, or blue). I need to identify the color of objects of multiple types, i.e. detect colors anywhere in the 0 to 255 range. Can anybody help me with a solution based on OpenCV?
Thanks in advance.

If you already know what the possible colors could be, then it is very simple. I will talk about one example and you can follow the same procedure for the rest.
Let's say that you have several possible combinations; for example, a t-shirt could have red and cyan colors and you already have an image of such a sample. Then you should do the following:
Step-1: Load the template/sample image. Calculate its Hue histogram (or Hue-Saturation histogram).
Step-2: Load the image for which you want to know the color. Calculate the histogram for this image as well.
Step-3: Perform histogram matching between each sample/template/possible image's histogram (i.e. step-1) and the histogram of the image for which you want to know the color (i.e. step-2).
Step-4: Whichever combination gives you the maximum matching value, your image has that color. For example, let's say your sample images include an image of a red & cyan t-shirt, another image of a blue & purple t-shirt, and so on. If you get the maximum histogram-matching value for blue & purple, it means the image for which you want to know the color contains blue and purple.
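Below is a minimal sketch of that procedure using OpenCV's calcHist and compareHist. This is my own illustration, not the answerer's code; the file names, bin counts and the correlation metric are placeholder choices, and the constant names assume OpenCV 3+.

#include <opencv2/opencv.hpp>
#include <iostream>

// Compute a 2D Hue-Saturation histogram of a BGR image.
static cv::Mat hueSatHist(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    int histSize[] = {30, 32};                       // bins for H and S (placeholder values)
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};
    int channels[] = {0, 1};
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 0, 1, cv::NORM_MINMAX);
    return hist;
}

int main()
{
    // "red_cyan.jpg", "blue_purple.jpg" and "query.jpg" are placeholder file names.
    cv::Mat sample1 = cv::imread("red_cyan.jpg");
    cv::Mat sample2 = cv::imread("blue_purple.jpg");
    cv::Mat query   = cv::imread("query.jpg");

    cv::Mat hQuery = hueSatHist(query);
    double score1 = cv::compareHist(hueSatHist(sample1), hQuery, cv::HISTCMP_CORREL);
    double score2 = cv::compareHist(hueSatHist(sample2), hQuery, cv::HISTCMP_CORREL);

    // The sample with the highest correlation tells you the color combination.
    std::cout << (score1 > score2 ? "red & cyan" : "blue & purple") << std::endl;
    return 0;
}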

Related

How to project IR image on a 2D plane using OpenCV and PCL

I have a Kinect and I'm using OpenCV and the Point Cloud Library. I would like to project the IR image onto a 2D plane for forklift pallet detection. How would I do that?
I'm trying to detect the pallet for the forklift; here is an image:
Where is the RGB data? You can use it to help with the detection. You do not need to project the image onto any plane to detect a pallet. There are basically 2 approaches used for detection:
non-deterministic, based on neural networks, fuzzy logic, machine learning, etc.
This approach needs a training dataset to recognize the object. Much experience is needed for proper training-set and classifier architecture/topology selection. But other than that you do not need to program it... usually some readily available lib/tool is used; just configure it and pass the data.
deterministic, based on distance or correlation coefficients
I would start with detecting specific features like:
pallet has a specific size
pallet has sharp edges and a specific geometric shape in the depth data
pallet has a specific range of colors (yellowish wood +/- lighting and dirt)
wood has specific texture patterns
So compute a coefficient for each feature describing how close the object is to a real pallet, and then just threshold the combined distance of all coefficients (possibly weighted, as some features are more robust).
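As a rough sketch of that weighted combination (the feature names, weights and the 0.7 threshold below are made-up example values, not something from this answer):

// Hypothetical per-object feature scores in the range 0..1 (1 = looks like a pallet).
struct FeatureScores { double size, shape, color, texture; };

// Weighted combination; the weights and the 0.7 threshold are arbitrary example values.
bool looksLikePallet(const FeatureScores& f)
{
    double score = 0.40 * f.size
                 + 0.30 * f.shape
                 + 0.20 * f.color
                 + 0.10 * f.texture;
    return score > 0.7;
}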
I do not use the #1 approach so I would go for #2. So combine the RGB and depth data (they have to be matched exactly). Then segment the image (based on depth and color). After that, classify each found object as pallet or not...
[Edit1]
Your colored image does not correspond to the depth data. The aligned gray-scale image has poor quality and the depth-data image is also very poor. Is the depth data processed somehow (losing precision)? If you look at your data from different sides:
You can see how poor it is, so I doubt you can use the depth data for detection at all...
PS. I used my Align already captured rgb and depth images answer for the visualization.
So the only thing left is the colored image: detect areas with matching color only, then detect the features and classify. The color of your pallet in the image is almost white. Here is the HSV image reduced to the basic 16 colors (too lazy to segment):
You should obtain the range of possible pallet colors for your setup to ease the detection. Then check those objects for features like size, shape, area, circumference...
[Edit2]
So I would start with image preprocessing:
convert to HSV
threshold only pixels close to the pallet color
I chose (H=40, S=18, V>100) as the pallet color. My HSV ranges are <0,255> per channel, so the hue angle difference can be at most <-180deg,+180deg>, which corresponds to <-128,+128> in my ranges.
remove too thin areas
Just scan all horizontal and vertical lines, count consecutive set pixels, and if a run is too small, recolor it to black...
This is the result:
On the left is the original image (downsized so it fits on this page), in the middle is the color-threshold result, and last is the result of filtering out small areas. You can play with the thresholds and pallet color to change the behavior to suit your needs.
Here is the C++ code:
int tr_d=10;            // min size of pallet [pixels]
int h,s,v,x,y,xx;
color c;
pic1=pic0;
pic1.pf=_pf_rgba;
pic2.resize(pic1.xs*3,pic1.ys); xx=0;
pic2.bmp->Canvas->Draw(xx,0,pic0.bmp); xx+=pic1.xs;
// [color selection]
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
    {
    // get color from image
    c=pic0.p[y][x];
    rgb2hsv(c);
    // distance to white-yellowish color in HSV (H=40,S=18,V>100)
    h=c.db[picture::_h]-40;
    s=c.db[picture::_s]-18;
    v=c.db[picture::_v];
    // hue is cyclic angular so use only the shorter angle
    if (h<-128) h+=256;
    if (h>+128) h-=256;
    // abs value
    if (h< 0) h=-h;
    if (s< 0) s=-s;
    // threshold close colors
    c.dd=0;
    if (h<25)
     if (s<25)
      if (v>100)
       c.dd=0x00FFFFFF;
    pic1.p[y][x]=c;
    }
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
// [remove too thin areas]
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;)
    {
    for (   ;x<pic1.xs;x++) if ( pic1.p[y][x].dd) break;   // find set pixel
    for (h=x;x<pic1.xs;x++) if (!pic1.p[y][x].dd) break;   // find unset pixel
    if (x-h<tr_d) for (;h<x;h++) pic1.p[y][h].dd=0;        // if too small recolor to zero
    }
for (x=0;x<pic1.xs;x++)
 for (y=0;y<pic1.ys;)
    {
    for (   ;y<pic1.ys;y++) if ( pic1.p[y][x].dd) break;   // find set pixel
    for (h=y;y<pic1.ys;y++) if (!pic1.p[y][x].dd) break;   // find unset pixel
    if (y-h<tr_d) for (;h<y;h++) pic1.p[h][x].dd=0;        // if too small recolor to zero
    }
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
See how to extract the borders of an image (OCT/retinal scan image) for a description of picture and color. Or look at any of my DIP/CV tagged answers. The code is well commented and straightforward, but I just need to add:
You can ignore the pic2 stuff; it is just the image posted above, so I do not need to manually print-screen and merge the sub-results in Paint... To improve robustness you should add dynamic-range enhancement (so the thresholds have the same conditions for any input image). Also, you should compare against more than just a single color (if pallets of more wood types are present).
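The answer does not prescribe a method for the dynamic-range enhancement; one possible sketch (my own assumption) is to stretch the V channel in HSV before the color threshold:

#include <opencv2/opencv.hpp>
#include <vector>

// Stretch the V (brightness) channel so thresholds behave similarly across input images.
cv::Mat normalizeBrightness(const cv::Mat& bgr)
{
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::normalize(ch[2], ch[2], 0, 255, cv::NORM_MINMAX);  // stretch V to the full 0..255 range
    cv::merge(ch, hsv);
    cv::Mat out;
    cv::cvtColor(hsv, out, cv::COLOR_HSV2BGR);
    return out;
}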
Now you should segment or label the areas:
loop through the entire image
find the first pixel set with the pallet color
flood fill the area with some distinct ID color different from the set pallet color
I use black 0x00000000 for empty space and white 0x00FFFFFF as the pallet pixel color, so use ID = {1,2,3,4,5,...}. Also remember the number of filled pixels (that is your area) so you do not need to compute it again. You can also compute the bounding box directly while filling.
compute and compare features
You need to experiment with more than one image to find out which properties are good for detection. I would go for the circumference-length vs. area ratio and/or the bounding-box size... The circumference can be extracted by simply selecting all pixels with the proper ID color neighboring a black pixel.
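If you prefer OpenCV over a hand-rolled flood fill, a minimal sketch of this labeling + feature step could use connectedComponentsWithStats (OpenCV 3+). This is my own sketch, not the answerer's code; the width and compactness thresholds below are illustrative guesses, not tuned values.

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

// mask: 8-bit binary image, white = pallet-colored pixels (result of the thresholding above)
void classifyAreas(const cv::Mat& mask)
{
    // Label connected white areas and get area + bounding box per label.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(mask, labels, stats, centroids, 8);

    for (int id = 1; id < n; id++)   // label 0 is the background
    {
        int area = stats.at<int>(id, cv::CC_STAT_AREA);
        int w    = stats.at<int>(id, cv::CC_STAT_WIDTH);
        int h    = stats.at<int>(id, cv::CC_STAT_HEIGHT);

        // Circumference: contour length of this component only.
        cv::Mat one = (labels == id);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(one, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
        double perimeter = contours.empty() ? 0.0 : cv::arcLength(contours[0], true);

        // Example decision: pallets are wide, flat and fairly compact (made-up thresholds).
        double compactness = perimeter > 0.0 ? area / (perimeter * perimeter) : 0.0;
        bool candidate = (w > 2 * h) && (compactness > 0.02);
        std::cout << "object " << id << " area=" << area
                  << " bbox=" << w << "x" << h
                  << (candidate ? "  -> pallet candidate" : "") << std::endl;
    }
}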
See also similar Fracture detection in hand using image proccessing
Good luck and have fun ...

Create mask to select the black area

I have a black area around my image and I want to create a mask using OpenCV C++ that selects just this black area so that I can paint it later. How can I do that without affecting the image itself?
I tried to convert the image to grayscale and then use thresholding to convert it to binary, but it affects my image, since the result contains black pixels from inside the image.
Another question: if I want to crop the image instead of painting it, how can I do that?
Thanks in advance.
I would solve the problem like this:
Inverse-binarize the image with a threshold of 1 (i.e. all pixels with the value 0 are set to 1, all others to 0)
use cv::findContours to find white segments
remove segments that don't touch image borders
use cv::drawContours to draw the remaining segments to a mask.
There is probably a more efficient solution in terms of runtime efficiency, but you should be able to prototype my solution quite quickly.
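A quick sketch of those four steps follows. It assumes an 8-bit grayscale input, and the "touches the border" test here simply checks the contour's bounding box against the image edges (one possible interpretation of the step, not the answerer's code).

#include <opencv2/opencv.hpp>
#include <vector>

// Build a mask of the black border region around the image.
cv::Mat blackBorderMask(const cv::Mat& gray)
{
    // 1) inverse-binarize: pixels that are 0 become white, everything else black
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV);

    // 2) find the white segments
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // 3) keep only segments that touch the image border
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    for (size_t i = 0; i < contours.size(); i++)
    {
        cv::Rect r = cv::boundingRect(contours[i]);
        bool touchesBorder = r.x == 0 || r.y == 0 ||
                             r.x + r.width  == gray.cols ||
                             r.y + r.height == gray.rows;
        // 4) draw the remaining segments filled into the mask (-1 thickness = filled)
        if (touchesBorder)
            cv::drawContours(mask, contours, (int)i, cv::Scalar(255), -1);
    }
    return mask;
}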

Range of HSV values to sample an Image as done by adobe

I have an image as shown in the inset. I sampled it in Adobe Photoshop using the blue color, as the image shows. The sampled image is shown in gray-scale on the left.
I know that OpenCV provides a similar method to sample images, the inRange() function. How can I find out the range of HSV values that Adobe checked for to sample my image? Since the resultant image is pretty much what I want and I am not able to determine the range myself, it would be a great help if someone could guide me.
You can convert your image to HSV with cv::cvtColor(...); here is the documentation.
Then, according to Wikipedia, blue is near 240° on the hue channel of your image.
You can set something like maxHue = 270 and minHue = 180, or other values, to scan your image.
Maybe you should also set a minSaturation and a minValue to avoid black and white.
To find the best ranges you can link them to some sliders in a Qt GUI and change them until you get the same result as Photoshop...
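For example (a sketch of my own, not part of the answer): OpenCV's 8-bit hue runs 0-179, so the 180°-270° band above maps to roughly 90-135; the minimum saturation/value of 50 are guesses you would tune with the sliders.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.jpg");   // placeholder file name
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Hue 90..135 is ~180°..270° on the usual 0..360° wheel (OpenCV stores H/2).
    // minSaturation=50 and minValue=50 reject near-black and near-white pixels.
    cv::inRange(hsv, cv::Scalar(90, 50, 50), cv::Scalar(135, 255, 255), mask);

    cv::imwrite("blue_mask.png", mask);
    return 0;
}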

OpenCV - HSV range of values for tracking red color

Could you please tell me what the ranges of the Hue, Saturation and Value channels are for intense red?
I am trying to use these values for color tracking and I couldn't find a specific answer via Google.
You can map any color to OpenCV HSV. OpenCV actually uses a 180° hue cylinder (hue values 0-179) while ideally it would be 360°; MS Paint, on the other hand, uses a 240° cylinder.
So to get the OpenCV HSV value, simply open MS Paint, open the color mixer, and read off the HSV value; to map this value into OpenCV HSV, multiply the hue by 180/240.
The saturation and value channels differ in range too (MS Paint uses 0-240, OpenCV uses 0-255), so scale them accordingly.
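As a concrete illustration of tracking intense red in OpenCV HSV (the band widths and the minimum saturation/value below are my own guesses, not part of this answer): red wraps around hue 0, so it usually needs two ranges ORed together.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.jpg");   // placeholder file name
    cv::Mat hsv, lowRed, highRed, redMask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Red wraps around hue 0 on OpenCV's 0..179 scale, so threshold two bands
    // and OR them together. High S and V minimums keep only "intense" red.
    cv::inRange(hsv, cv::Scalar(0, 150, 100),   cv::Scalar(10, 255, 255),  lowRed);
    cv::inRange(hsv, cv::Scalar(170, 150, 100), cv::Scalar(179, 255, 255), highRed);
    cv::bitwise_or(lowRed, highRed, redMask);

    cv::imwrite("red_mask.png", redMask);
    return 0;
}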
You are the only one who can answer this question, since we don't know your criteria for "intense red". Collect as many samples as you can, some of which you consider intense red and some which are close but just miss the cut. Convert them all to HSL. Study the pattern.
You might put together a small app that has sliders for the H, S, and L parameters and displays a block of color corresponding to the settings. That will tell you your limits very quickly.
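A rough sketch of such a slider app using OpenCV trackbars (note that OpenCV exposes the HLS channel order rather than HSL, and its 8-bit hue runs 0-179; this is only one way to build the tool the answer describes):

#include <opencv2/opencv.hpp>

// Slider-driven color swatch: move H/S/L and see the resulting color.
int H = 0, S = 255, L = 128;

static void redraw(int, void*)
{
    cv::Mat swatch(200, 200, CV_8UC3, cv::Scalar(H, L, S));  // OpenCV HLS channel order: H, L, S
    cv::cvtColor(swatch, swatch, cv::COLOR_HLS2BGR);
    cv::imshow("swatch", swatch);
}

int main()
{
    cv::namedWindow("swatch");
    cv::createTrackbar("H (0-179)", "swatch", &H, 179, redraw);
    cv::createTrackbar("L (0-255)", "swatch", &L, 255, redraw);
    cv::createTrackbar("S (0-255)", "swatch", &S, 255, redraw);
    redraw(0, nullptr);
    cv::waitKey(0);
    return 0;
}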

Recoloring an image based on current theme?

I want to develop a program which recolors an input image based on a given theme, the same way the MS PowerPoint application does.
I am giving the following link, which shows exactly what I want to do.
I want to generate images like the ones in the link below under the Dark Variations and Light Variations titles, based on the current theme.
http://blogs.msdn.com/powerpoint/archive/2006/07/06/658238.aspx
Can anybody give me ideas/info regarding how to achieve this efficiently?
You can have a look at the HSL colorspace to get the same result. HSL means Hue, Saturation, Lightness.
You can keep the lightness of each pixel of your image and change only the hue. I think this will allow you to achieve what you want. You can find the RGB-to-HSL conversion on the wiki page.
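A minimal sketch of that idea using OpenCV's HLS conversion (the themeHue/themeSat parameters are illustrative; forcing the saturation is optional and my own addition, the answer only mentions replacing the hue):

#include <opencv2/opencv.hpp>
#include <vector>

// Recolor an image toward a theme hue while keeping each pixel's lightness.
// themeHue is on OpenCV's 0..179 scale; themeSat is 0..255.
cv::Mat recolorToHue(const cv::Mat& bgr, int themeHue, int themeSat)
{
    cv::Mat hls;
    cv::cvtColor(bgr, hls, cv::COLOR_BGR2HLS);
    std::vector<cv::Mat> ch;
    cv::split(hls, ch);
    ch[0].setTo(themeHue);   // replace hue; lightness (ch[1]) stays untouched
    ch[2].setTo(themeSat);   // optionally force the theme saturation as well
    cv::merge(ch, hls);
    cv::Mat out;
    cv::cvtColor(hls, out, cv::COLOR_HLS2BGR);
    return out;
}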
Hope that helps.
Step 1: Choose the colors you want to represent black and white. For the dark variations, choose black and a light color; for the light variations, choose a dark color and white.
Step 2: Convert a pixel to gray. A common formula for this is L = R*0.3 + G*0.59 + B*0.11.
Step 3: Interpolate between the colors using the gray value. output.R = (L/255)*light.R + (1-(L/255))*dark.R and likewise for green and blue.
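A minimal sketch of these three steps in OpenCV (my own illustration; the endpoint colors in the usage comment are placeholders):

#include <opencv2/opencv.hpp>

// Map an image onto a dark->light color ramp (duotone), as in steps 1-3 above.
cv::Mat themeRecolor(const cv::Mat& bgr, cv::Vec3b dark, cv::Vec3b light)
{
    // Step 2: convert each pixel to gray (cvtColor uses ~0.299R + 0.587G + 0.114B).
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    cv::Mat out(bgr.size(), CV_8UC3);
    for (int y = 0; y < gray.rows; y++)
        for (int x = 0; x < gray.cols; x++)
        {
            // Step 3: interpolate between dark and light using the gray value.
            double t = gray.at<uchar>(y, x) / 255.0;
            for (int c = 0; c < 3; c++)
                out.at<cv::Vec3b>(y, x)[c] =
                    cv::saturate_cast<uchar>(t * light[c] + (1.0 - t) * dark[c]);
        }
    return out;
}

// Example (placeholder colors, BGR order): dark variation = black to a light blue.
// cv::Mat dark_variation = themeRecolor(img, cv::Vec3b(0, 0, 0), cv::Vec3b(230, 180, 120));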
You can use a library like CxImage: convert the image to grayscale, then mix it with another image you have made that is the same size as the original, using the Mix command with its filters. You can do mix-screen, and this should tint the pixels with the color of the second image in the resulting image. Try playing with CxImage a bit and see if it will do what you want. This is all coming off the top of my head, and it's been a while since I have tried to do anything like this. YMMV, but this would be the simplest implementation. You could always look at how CxImage does the blend and apply it to the image yourself.
I must say thanks to Mark and Patrice for your guidance, which helped me achieve it.
For the light variations, I did it by converting the theme colors to the HSV colorspace and finding the relation between the output color and the theme color for black as the input.
The relation was found to be linear for saturation and value, and the hue was almost constant.
I used an interpolation formula to make it generic for any given theme.
I also made use of a color matrix to achieve the desired result.
Similarly, for the dark variations I used white as the input and applied the same technique.