I'm developing software that detects a boxer's punching motion. At the moment I use color-based segmentation with the inRange function, set to a minimum and maximum blue value. The problem is that the range is quite wide, and my camera sometimes picks up noise and segments objects of no interest. To improve the software I thought of scanning an image of a boxing glove and establishing the exact blue color value before further processing.
It would make sense to me to store that value in a vector and pass it to the inRange function.
// My current function which takes the Minimum and Maximum values of Blue Color
Mat range_out;
inRange(blur_out, Scalar(100, 100, 100), Scalar(120, 255, 255), range_out);
So I would imagine the vector would go somewhere here:
Scan the glove image above and compute its blue value
Store this value in an array
Recall the array in the inRange function
Could someone suggest a solution to this problem or direct me to a source of information where I can look for answers?
Since you are detecting the boxing gloves in motion, first use motion to separate them from the other elements in the scene: use frame differencing or optical flow to separate the gloves and other moving areas from non-moving areas. Then try color detection within those moving areas, roughly as sketched below.
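A rough Python sketch of that idea, assuming a BGR webcam stream and reusing the blue range from the question; both the motion threshold and the color range are placeholders to tune:

import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Frame differencing: keep only pixels that changed noticeably between frames.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion_mask = cv2.dilate(motion_mask, None, iterations=2)
    # Color segmentation restricted to the moving areas.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 100, 100), (120, 255, 255))
    gloves = cv2.bitwise_and(blue, motion_mask)
    cv2.imshow("gloves", gloves)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:
        break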
Separate luminosity and chromaticity - your fixed range will not work very well under different lighting conditions. Your range is wide probably because you are trying to see "blue" both in the dark and in the light at the same time. Convert your image to HSV (or La*b*) and discard V (or L), keeping H and S (or a* and b*).
Learn a color distribution instead of a simple range - take some samples and compute a 2D color histogram on H and S (or a* and b*) for pixels on the glove. This histogram will be a model for the color distribution of your object. Then use cv2.calcBackProject to detect the pixels of interest in your scene.
Clean the result using a morphological close operation.
Important: in step 2, play a little with different quantization values (i.e., different numbers of bins). A sketch of steps 1-3 follows below.
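A minimal Python sketch of steps 1-3, assuming a cropped glove image ("glove_sample.png") and a scene frame ("scene.png"); the file names and bin counts are placeholders, not tuned values:

import cv2
import numpy as np

sample = cv2.imread("glove_sample.png")
sample_hsv = cv2.cvtColor(sample, cv2.COLOR_BGR2HSV)
# 2D histogram on H and S only (V discarded to reduce lighting sensitivity).
hist = cv2.calcHist([sample_hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

frame = cv2.imread("scene.png")
frame_hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Back-project the color model: bright pixels match the glove color distribution.
backproj = cv2.calcBackProject([frame_hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)

# Clean the result with a morphological close (step 3).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(backproj, cv2.MORPH_CLOSE, kernel)
_, mask = cv2.threshold(mask, 50, 255, cv2.THRESH_BINARY)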
Is it possible to get a depth/disparity map from a moving camera? Say I capture an image at location x, then after travelling, say, 5 cm I capture another picture, and from there I calculate the depth map of the image.
I have tried using block matching in OpenCV but the result is not good. The first and second images are as follows:
first image, second image,
disparity map (colour), disparity map
My code is as following:
GpuMat leftGPU;
GpuMat rightGPU;
leftGPU.upload(left);
rightGPU.upload(right);
GpuMat disparityGPU;
GpuMat disparityGPU2;
Mat disparity;
Mat disparity1, disparity2;
Ptr<cuda::StereoBM> stereo = createStereoBM(256,3);
stereo->setMinDisparity(-39);
stereo->setPreFilterCap(61);
stereo->setPreFilterSize(3);
stereo->setSpeckleRange(1);
stereo->setUniquenessRatio(0);
stereo->compute(leftGPU,rightGPU,disparityGPU);
drawColorDisp(disparityGPU, disparityGPU2,256);
disparityGPU.download(disparity);
disparityGPU2.download(disparity2);
imshow("display img",disparityGPU);
How can I improve upon this? In the colour disparity map there are quite a lot of errors (e.g. the tall circle is red, the same colour as some parts of the table). Also, in the disparity map there is a lot of small noise (all the black dots in the picture); how can I fill those black dots with nearby disparities?
It is possible if the object is static.
To properly do stereo matching, you first need to rectify your images! If you don't have calibrated cameras, you can do this from detected feature points (see the sketch at the end of this answer). Also note that for cuda::StereoBM the default minimum disparity is 0. (I have never used cuda, but I don't think your setMinDisparity is doing anything; see this answer.)
Now, in your example images corresponding points are only about 1 row apart, so your disparity map actually doesn't look too bad. Maybe a larger blockSize would already help in this special case.
Finally, your objects have very low texture, therefore the block matching algorithm can't detect much.
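If it helps, here is a rough Python sketch of rectification from matched feature points followed by block matching; the file names and parameter values are placeholders, not values tuned for your scene:

import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Fundamental matrix from the matches, then uncalibrated rectification.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0)
pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]
h, w = left.shape
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
left_r = cv2.warpPerspective(left, H1, (w, h))
right_r = cv2.warpPerspective(right, H2, (w, h))

# Block matching on the rectified pair; a larger blockSize may help on low texture.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=21)
disparity = stereo.compute(left_r, right_r)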
What I need
I'm currently working on an augmented reality kind of game. The controller that the game uses (I'm talking about the physical input device here) is a mono-colored, rectangular piece of paper. I have to detect the position, rotation and size of that rectangle in the capture stream of the camera. The detection should be invariant to scale and invariant to rotation along the X and Y axes.
The scale invariance is needed in case the user moves the paper away from or towards the camera. I don't need to know the distance of the rectangle, so scale invariance translates to size invariance.
The rotation invariance is needed in case the user tilts the rectangle along its local X and / or Y axis. Such a rotation changes the shape of the paper from rectangle to trapezoid. In this case, the object oriented bounding box can be used to measure the size of the paper.
What I've done
At the beginning there is a calibration step. A window shows the camera feed and the user has to click on the rectangle. On click, the color of the pixel the mouse is pointing at is taken as the reference color. The frames are converted into HSV color space to improve color distinguishing. I have 6 sliders that adjust the upper and lower thresholds for each channel. These thresholds are used to binarize the image (using OpenCV's inRange function).
After that I erode and dilate the binary image to remove noise and unite nearby chunks (using OpenCV's erode and dilate functions).
The next step is finding contours (using OpenCV's findContours function) in the binary image. These contours are used to detect the smallest oriented rectangles (using OpenCV's minAreaRect function). As the final result I use the rectangle with the largest area.
A short summary of the procedure (a code sketch follows the list):
Grab a frame
Convert that frame to HSV
Binarize it (using the color that the user selected and the thresholds from the sliders)
Apply morph ops (erode and dilate)
Find contours
Get the smallest oriented bounding box of each contour
Take the largest of those bounding boxes as result
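For reference, a minimal Python sketch of this pipeline; the HSV bounds stand in for the calibration click plus the slider thresholds and are not the values I actually use:

import cv2
import numpy as np

def detect_rectangle(frame, lower_hsv=(100, 80, 80), upper_hsv=(130, 255, 255)):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    binary = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Morph ops: erode to remove noise, dilate to unite nearby chunks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.erode(binary, kernel, iterations=2)
    binary = cv2.dilate(binary, kernel, iterations=2)
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    # Smallest oriented bounding box of the largest contour.
    largest = max(contours, key=cv2.contourArea)
    return cv2.minAreaRect(largest)   # ((cx, cy), (w, h), angle)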
As you may have noticed, I don't take advantage of the knowledge about the actual shape of the paper, simply because I don't know how to use this information properly.
I've also thought about using the tracking algorithms of OpenCV. But there were three reasons that prevented me from using them:
Scale invariance: as far as I read about some of the algorithms, some don't support different scales of the object.
Movement prediction: some algorithms use movement prediction for better performance, but the object I'm tracking moves completely random and therefore unpredictable.
Simplicity: I'm just looking for a mono colored rectangle in an image, nothing fancy like car or person tracking.
Here is a - relatively - good catch (binary image after erode and dilate)
and here is a bad one
The Question
How can I improve the detection in general and especially to be more resistant against lighting changes?
Update
Here are some raw images for testing.
Can't you just use thicker material?
Yes, I can, and I already do (unfortunately I can't access these pieces at the moment). However, the problem still remains. Even if I use material like cardboard, which isn't bent as easily as paper, one can still bend it.
How do you get the size, rotation and position of the rectangle?
The minAreaRect function of OpenCV returns a RotatedRect object. This object contains all the data I need.
Note
Because the rectangle is mono colored, there is no possibility to distinguish between top and bottom or left and right. This means that the rotation is always in the range [0, 180], which is perfectly fine for my purposes. The ratio of the two sides of the rect is always w:h > 2:1. If the rectangle were a square, the range of rotation would change to [0, 90], but this can be considered irrelevant here.
As suggested in the comments, I will try histogram equalization to reduce brightness issues and take a look at ORB, SURF and SIFT.
I will update on progress.
The H channel in HSV space is the hue, and it is not sensitive to lighting changes. The red hue range is about [150, 180].
Based on this information, I do the following:
Change into the HSV space, split the H channel, threshold and normalize it.
Apply morph ops (open)
Find contours, filter by some properties (width, height, area, ratio and so on).
PS: I cannot fetch the image you uploaded to Dropbox because of network restrictions, so I just cropped the right side of your second image as the input.
import cv2
import numpy as np

imgname = "src.png"
img = cv2.imread(imgname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
## Split the H channel in HSV, and get the red range
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
h[h<150]=0
h[h>180]=0
## normalize, do the open-morp-op
normed = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(3,3))
opened = cv2.morphologyEx(normed, cv2.MORPH_OPEN, kernel)
res = np.hstack((h, normed, opened))
cv2.imwrite("tmp1.png", res)
Now, we get the result as this (h, normed, opened):
Then find contours and filter them.
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours))
bboxes = []
rboxes = []
cnts = []
dst = img.copy()
for cnt in contours:
    ## Get the straight bounding rect
    bbox = cv2.boundingRect(cnt)
    x,y,w,h = bbox
    if w<30 or h<30 or w*h<2000 or w>500:
        continue
    ## Draw rect
    cv2.rectangle(dst, (x,y), (x+w,y+h), (255,0,0), 1, 16)
    ## Get the rotated rect
    rbox = cv2.minAreaRect(cnt)
    (cx,cy), (w,h), rot_angle = rbox
    print("rot_angle:", rot_angle)
    ## backup
    bboxes.append(bbox)
    rboxes.append(rbox)
    cnts.append(cnt)
The result is like this:
rot_angle: -2.4540319442749023
rot_angle: -1.8476102352142334
Because of the blue rectangular tag in the source image, the card is split into two parts. But a clean image will have no such problem.
I know it's been a while since I asked the question. I recently continued on the topic and solved my problem (although not through rectangle detection).
Changes
Used wood to strengthen my controllers (the "rectangles"), as shown below.
Placed 2 ArUco markers on each controller.
How it works
Convert the frame to grayscale,
downsample it (to increase performance during detection),
equalize the histogram using cv::equalizeHist,
find markers using cv::aruco::detectMarkers,
correlate markers (if multiple controllers),
analyze markers (position and rotation),
compute the result and apply some error correction (a sketch of this pipeline follows).
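A minimal Python sketch of the first steps (grayscale, downsample, equalize, detect); the dictionary choice and the downsampling factor are assumptions, not my actual values, and newer OpenCV versions expose the same functionality through cv2.aruco.ArucoDetector:

import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def detect_controller_markers(frame, downscale=2):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, None, fx=1.0 / downscale, fy=1.0 / downscale)
    small = cv2.equalizeHist(small)
    corners, ids, _ = cv2.aruco.detectMarkers(small, dictionary)
    # Scale the corner coordinates back to the original resolution.
    corners = [c * downscale for c in corners]
    return corners, ids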
It turned out that the marker detection is very robust to lighting changes and different viewing angles, which allows me to skip any calibration steps.
I placed 2 markers on each controller to increase the detection robustness even more. Both markers have to be detected only once (to measure how they correlate). After that, it's sufficient to find only one marker per controller, as the other can be extrapolated from the previously computed correlation.
Here is a detection result in a bright environment:
in a darker environment:
and when hiding one of the markers (the blue point indicates the extrapolated marker position):
Failures
The initial shape detection that I implemented didn't perform well. It was very fragile to lighting changes. Furthermore, it required an initial calibration step.
After the shape detection approach I tried SIFT and ORB in combination with brute-force and kNN matchers to extract and locate features in the frames. It turned out that mono-colored objects don't provide many keypoints (what a surprise). The performance of SIFT was terrible anyway (ca. 10 fps @ 540p).
I drew some lines and other shapes on the controller, which resulted in more keypoints being available. However, this didn't yield huge improvements.
I need to detect this ball: and find its position and radius using OpenCV. I have downloaded many codes, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, and that the ball is distinguishable from its surroundings, for example by painting it a different colour.
Once your setup is optimized for detection, you may proceed via different ways to track your ball (stationary or not). A few ways may be:
Feature detection: via Hough circles, detect 2D circles (and their radii) that lie within a certain range of color, as explained below.
There are many more ways to detect objects via feature detection, such as this clever blog points out.
Object detection: via SURF, SIFT and many other methods, you may detect your ball, calculate its radius and even predict its motion.
This code uses Hough circles to compute the ball position, display it in real time and calculate its radius in real time. I am using Qt 5.4 with OpenCV version 2.4.12.
void Dialog::TrackMe() {
    webcam.read(cim); // read the live webcam feed and store the current frame in the OpenCV matrix 'cim'
    if(cim.empty()==false) // the webcam is running and there is some form of input
    {
        // keep the pixels of cim whose BGR values lie between (0,0,175) and (100,100,256); store the mask in cproc
        cv::inRange(cim, cv::Scalar(0,0,175), cv::Scalar(100,100,256), cproc);
        // detect circles in cproc with the CV_HOUGH_GRADIENT method and store them in veccircles
        cv::HoughCircles(cproc, veccircles, CV_HOUGH_GRADIENT, 2, cproc.rows/4, 100, 50, 10, 100);
        for(itcircles=veccircles.begin(); itcircles!=veccircles.end(); itcircles++)
        {
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED); // draw the center point
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0,0,255), 3); // draw the detected circle
        }
        QImage qimgprocess((uchar*)cproc.data, cproc.cols, cproc.rows, cproc.step, QImage::Format_Indexed8); // convert cv::Mat to QImage
        ui->output->setPixmap(QPixmap::fromImage(qimgprocess)); // render the QImage to the screen
    }
    else
        return; // no input, return to the calling function
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should be able to show where the ball is. To do so, the captured frame is divided into buckets, which are further divided into grids. Within each grid, an edge is detected (if it exists) and thus a circle is detected. However, only those circles that pass through grids lying within the color range mentioned above (in cv::Scalar) are considered. Thus, for every circle that passes through a grid in the specified range, a number corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio. This means that the accumulator matrix has only half the resolution of the input image cproc. After voting, we can find local maxima in the accumulator matrix; their positions correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between centers of the detected circles.
100 and 50 are, respectively, the higher and lower thresholds passed to the Canny edge function, which basically detects edges only between the mentioned thresholds.
10 and 100 are the minimum and maximum radius to be detected. Anything above or below these values will not be detected.
Now, the for loop processes each circle detected in the frame and stored in veccircles. It draws a circle and a center point for each detection.
For the above, you may visit this link
I have a Kinect and I'm using OpenCV and the Point Cloud Library. I would like to project the IR image onto a 2D plane for forklift pallet detection. How would I do that?
I'm trying to detect the pallet in the forklift here is an image:
Where is the RGB data? You can use it to help with the detection. You do not need to project the image onto any plane to detect a pallet. There are basically 2 approaches used for detection:
non-deterministic, based on neural networks, fuzzy logic, machine learning, etc.
This approach needs a training dataset to recognize the object. Much experience is needed for proper training-set and classifier architecture/topology selection. But other than that you do not need to program it... as usually some readily available lib/tool is used; just configure it and pass the data.
deterministic, based on distance or correlation coefficients
I would start with detecting specific features like:
pallet has specific size
pallet has sharp edges and specific geometry shape in depth data
pallet has specific range of colors (yellowish wood +/- lighting and dirt)
wood has specific texture patterns
So compute a coefficient for each feature measuring how close the object is to a real pallet, and then just threshold the distance of all coefficients combined (possibly weighted, as some features are more robust than others).
I do not use approach #1, so I would go for #2. Combine the RGB and depth data (they have to be matched exactly). Then segment the image (based on depth and color). After that, classify each found object as pallet or not...
[Edit1]
Your colored image does not correspond to the depth data. The aligned gray-scale image has poor quality and the depth data image is also very poor. Is the depth data processed somehow (losing precision)? If you look at your data from different sides:
You can see how poor it is so I doubt you can use depth data for detection at all...
PS: I used my Align already captured rgb and depth images answer for the visualization.
The only thing left is the colored image: detect areas with matching color only, then detect the features and classify. The color of your pallet in the image is almost white. Here HSV reduced the colors to the basic 16 colors (too lazy to segment):
You should obtain the range of pallet colors possible in your setup to ease up the detection. Then check those objects for features like size, shape, area, circumference...
[Edit2]
So I would start with Image preprocessing:
convert to HSV
threshold only pixels close to the pallet color
I chose (H=40, S=18, V>100) as the pallet color. My HSV ranges are <0,255> per channel, so the hue angle difference can only be <-180deg,+180deg> max, which corresponds to <-128,+128> in my ranges.
remove too thin areas
Just scan all horizontal and vertical lines, count consecutive set pixels, and if the run is too small, recolor those pixels to black...
This is the result:
On the left is the original image (downsized so it fits on this page), in the middle is the color threshold result, and last is the result of filtering out small areas. You can play with the thresholds and pallet color to change the behavior to suit your needs.
Here is the C++ code:
int tr_d=10; // min size of pallet [pixels]
int h,s,v,x,y,xx;
color c;
pic1=pic0;
pic1.pf=_pf_rgba;
pic2.resize(pic1.xs*3,pic1.ys); xx=0;
pic2.bmp->Canvas->Draw(xx,0,pic0.bmp); xx+=pic1.xs;
// [color selection]
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
// get color from image
c=pic0.p[y][x];
rgb2hsv(c);
// distance to white-yellowish color in HSV (H=40,S=18,V>100)
h=c.db[picture::_h]-40;
s=c.db[picture::_s]-18;
v=c.db[picture::_v];
// hue is cyclic angular so use only shorter angle
if (h<-128) h+=256;
if (h>+128) h-=256;
// abs value
if (h< 0) h=-h;
if (s< 0) s=-s;
// treshold close colors
c.dd=0;
if (h<25)
if (s<25)
if (v>100)
c.dd=0x00FFFFFF;
pic1.p[y][x]=c;
}
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
// [remove too thin areas]
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;)
{
for ( ;x<pic1.xs;x++) if ( pic1.p[y][x].dd) break; // find set pixel
for (h=x;x<pic1.xs;x++) if (!pic1.p[y][x].dd) break; // find unset pixel
if (x-h<tr_d) for (;h<x;h++) pic1.p[y][h].dd=0; // if too small size recolor to zero
}
for (x=0;x<pic1.xs;x++)
for (y=0;y<pic1.ys;)
{
for ( ;y<pic1.ys;y++) if ( pic1.p[y][x].dd) break; // find set pixel
for (h=y;y<pic1.ys;y++) if (!pic1.p[y][x].dd) break; // find unset pixel
if (y-h<tr_d) for (;h<y;h++) pic1.p[h][x].dd=0; // if too small size recolor to zero
}
pic2.bmp->Canvas->Draw(xx,0,pic1.bmp); xx+=pic1.xs;
See how to extract the borders of an image (OCT/retinal scan image) for the description of picture and color, or look at any of my DIP/CV tagged answers. The code is well commented and straightforward, but I just need to add:
You can ignore the pic2 stuff; it is just the image posted above, so I do not need to manually print-screen and merge the sub-results in Paint... To improve robustness you should add dynamic range enhancement (so the thresholds have the same conditions for any input image). Also you should compare to more than just a single color (if more wood types of pallets are present).
Now you should segment or label the areas:
loop through entire image
find first pixel set with the pallet color
flood fill the area with some distinct ID color different from set pallet color
I use black 0x00000000 for empty space and white 0x00FFFFFF as the pallet pixel color, so use ID={1,2,3,4,5,...}. Also remember the number of filled pixels (that is your area) so you do not need to compute it again. You can also compute the bounding box directly while filling.
compute and compare features
You need to experiment with more than one image to find out which properties are good for detection. I would go for the circumference-to-area ratio and/or the bounding box size... The circumference can be extracted by simply selecting all pixels with the proper ID color that neighbor a black pixel (see the sketch below).
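Here is a rough Python sketch of the labeling and feature step, using OpenCV's connected components instead of a manual flood fill; the input mask file and the thresholds are placeholders:

import cv2
import numpy as np

mask = cv2.imread("pallet_mask.png", cv2.IMREAD_GRAYSCALE)  # binary output of the color threshold
num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

candidates = []
for label in range(1, num):                      # label 0 is the background
    x, y, w, h, area = stats[label]
    if area < 500:                               # drop tiny blobs
        continue
    # Circumference vs. area as a simple shape feature.
    blob = (labels == label).astype(np.uint8)
    cnts = cv2.findContours(blob, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    perimeter = cv2.arcLength(cnts[0], True)
    compactness = perimeter * perimeter / float(area)
    candidates.append((label, (x, y, w, h), area, compactness))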
See also the similar Fracture detection in hand using image processing.
Good luck and have fun ...
I want to find the background in multiple images captured with a fixed camera. The camera detects moving objects (animals) and captures sequential images. So I need to find a simple background model image by processing 5 to 10 captured images with the same background.
Can someone help me, please?
Is your eventual goal to find foreground? Can you show some images?
If the animals move fast enough they will create a lot of intensity changes, while background pixels will remain closely correlated among most of the frames. I won't write you real code but will give you pseudo-code in OpenCV. The main idea is to average only correlated pixels:
Mat Iseq[10];// your sequence
Mat result, Iacc=0, Icnt=0; // Iacc and Icnt are float types
loop through your sequence, i=0; i<N-1; i++
matchTemplate(Iseq[i], Iseq[i+1], result, CV_TM_CCOEFF_NORMED);
mask = 1 & (result>0.9); // get correlated part, which is probably background
Iacc += Iseq[i] & mask + Iseq[i+1] & mask; // accumulate background infer
Icnt += 2*mask; // keep count
end of loop;
Mat Ibackground = Iacc.mul(1.0/Icnt); // average background (moving parts fade away)
To improve the result you may reduce the image resolution or apply blur to enhance correlation. You can also clean every mask of small connected components by erosion, for example.
If
each pixel location appears as background in more than half the frames, and
the colour of a pixel does not vary much across the subset of frames in which it is background,
then there's a very simple algorithm: for each pixel location, just take the median intensity over all frames.
How come? Suppose the image is greyscale (this makes it easier to explain, but the process will work for colour images too -- just treat each colour component separately). If a particular pixel appears as background in more than half the frames, then when you take the intensities of that pixel across all frames and sort them, a background-coloured pixel must appear at the half-way (median) position. (In the worst case, all background-coloured pixels get pushed to the very front or the very back in this order, but even then there are enough of them to cover the half-way point.)
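A minimal Python sketch of the per-pixel median, assuming the frames are the same size and loaded as a list of BGR images (the file names are placeholders):

import cv2
import numpy as np

frames = [cv2.imread("frame_%d.png" % i) for i in range(10)]  # hypothetical file names
stack = np.stack(frames, axis=0)                              # shape: (N, H, W, 3)
# Median over the frame axis, independently for each pixel and channel.
background = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("background.png", background)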
If you only have 5 images, it's going to be hard to identify the background, and most sophisticated techniques probably won't work. For general background identification methods, see Link.