I want the hand image to be a black and white shape of the hand. Here's a sample of the input and the desired output:
Using a threshold doesn't give the desired output because some of the colors inside the hand are the same as the background color. How can I get the desired output?
Adaptive threshold, find contours, floodfill?
Basically, adaptive threshold turns your image into black and white, but picks the threshold level from local conditions around each pixel - that way, you should avoid the problem you're experiencing with an ordinary threshold. In fact, I'm not sure why anyone would ever want to use a normal threshold.
If that doesn't work, an alternative approach is to find the largest contour in the image, draw it onto a separate matrix and then floodfill everything inside it with black. (Floodfill is like the bucket tool in MSPaint - it starts at a particular pixel, and fills in everything connected to that pixel which is the same colour with another colour of your choice.)
Possibly the most robust approach against varying lighting conditions is to do all three in the sequence at the top, but you may be able to get away with only the threshold or the contours/floodfill.
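For reference, here's roughly what the adaptive-threshold and floodfill steps look like in C++ (a minimal sketch: the block size, offset, and seed point are assumptions you'd tune for your images):

#include <opencv2/opencv.hpp>
using namespace cv;

Mat handMask(const Mat& gray) {
    // Threshold each pixel against the mean of its 15x15 neighbourhood
    // minus a small offset, instead of one global level.
    Mat bw;
    adaptiveThreshold(gray, bw, 255, ADAPTIVE_THRESH_MEAN_C,
                      THRESH_BINARY, 15, 5);
    // Fill everything connected to the seed with white. Here we assume the
    // image centre lands somewhere on the hand.
    floodFill(bw, Point(bw.cols / 2, bw.rows / 2), Scalar(255));
    return bw;
}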
By the way, perhaps the trickiest part is actually finding the contours, because findContours returns a list (an ArrayList in Java, a vector in C++, depending on the platform) of MatOfPoint objects. MatOfPoint is a subclass of Mat, but you can't draw it directly - you need to use drawContours. Here's some code for OpenCV4Android that I know works:
private Mat drawLargestContour(Mat input) {
    /** Allocates and returns a black matrix with the
     * largest contour of the input matrix drawn in white. */
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(input, contours, new Mat() /* hierarchy */,
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    double maxArea = 0;
    int index = -1;
    for (int i = 0; i < contours.size(); i++) { // iterate over every contour in the list
        double area = Imgproc.contourArea(contours.get(i));
        if (area > maxArea) {
            maxArea = area;
            index = i;
        }
    }
    // Use Mat.zeros here: a plain new Mat(...) wraps uninitialized native
    // memory and is NOT guaranteed to be zero-filled, despite being Java.
    Mat border = Mat.zeros(input.rows(), input.cols(), CvType.CV_8UC1);
    if (index == -1) {
        Log.e(TAG, "Fatal error: no contours in the image!");
        return border; // nothing to draw; return the all-black matrix
    }
    Imgproc.drawContours(border, contours, index, new Scalar(255)); // 255 = draw contours in white
    return border;
}
Two quick things you can try:
After thresholding you can:
Do a morphological closing,
or, the most straightforward: cv::findContours, keep the largest if there's more than one, then draw it filled using cv::fillConvexPoly and you will get this mask (fillConvexPoly will fill the holes for you; if the contour isn't convex, cv::drawContours with CV_FILLED is the safer choice). A sketch of both options follows.
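A minimal C++ sketch of both options (the kernel size is an assumption, and the input is assumed to be the single-channel thresholded image):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Option 1: morphological closing - dilation followed by erosion,
// which seals small dark holes inside the white blob.
Mat closeHoles(const Mat& binary) {
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(9, 9));
    Mat closed;
    morphologyEx(binary, closed, MORPH_CLOSE, kernel);
    return closed;
}

// Option 2: keep the largest contour and draw it filled.
Mat largestContourMask(const Mat& binary) {
    std::vector<std::vector<Point> > contours;
    findContours(binary.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    int best = -1;
    double bestArea = 0;
    for (size_t i = 0; i < contours.size(); i++) {
        double a = contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = (int)i; }
    }
    Mat mask = Mat::zeros(binary.size(), CV_8UC1);
    if (best >= 0)
        drawContours(mask, contours, best, Scalar(255), CV_FILLED);
    return mask;
}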
Related
I am able to find the contours and have labelled all of them. Now I want to remove some contours from the image and keep only specific ones, using OpenCV.
I have used the following code to get the contours. It works fine for finding the contours and their labels, as you can see from the binary image and its contours in the picture. Here, I want to remove the contours which are above contour 45 and below contour 22. Basically, I need the center part between the two long horizontal lines.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main(int argc, char *argv[])
{
    if (argc != 2) {
        cerr << "usage: " << argv[0] << " <input_file with path>" << endl;
        return -1;
    }
    cv::Mat im_bw = cv::imread(argv[1], cv::IMREAD_GRAYSCALE); // load the image
    // Binarize the input image
    cv::Mat binarized_image;
    cv::threshold(im_bw, binarized_image, 128, 255, cv::THRESH_BINARY);
    cv::imshow("binary_image.png", binarized_image);
    cv::waitKey(0);
    vector<vector<cv::Point> > contours;
    vector<cv::Vec4i> hierarchy;
    cv::findContours(binarized_image, contours, hierarchy,
                     cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
    cv::Mat mask = cv::Mat::zeros(im_bw.size(), CV_8UC3);
    cv::drawContours(mask, contours, -1, cv::Scalar(0, 255, 255), 1);
    for (int i = 0; i < (int)contours.size(); i++)
    {
        cv::putText(mask, to_string(i), contours[i][0], 1, 1,
                    cv::Scalar(255, 0, 0), 1);
    }
    cout << "Contours : " << contours.size() << endl;
    for (cv::Vec4i k : hierarchy)
    {
        cout << k << endl;
    }
    cv::imshow("Contours_binary_image.png", mask);
    cv::waitKey(0);
    return 0;
}
There are probably a few ways you could do it depending on what you care about. Do you want it to be fast, or accurate?
Here is a way you could do it geometrically that should be fairly accurate, and with some optimizations, could be pretty fast.
You'll need a way to identify contours 45 and 22. In the question you identified them as the two large horizontal lines. That is a good place to start.
A simple way to do it would be to iterate through all the contours and keep track of each one's min and max X and Y values. The horizontal lines will have the largest min-to-max X span and a relatively small span between min and max Y. It will probably require some tweaking and defining some more rules for what is allowed to be considered a "horizontal line".
Once you have the horizontal lines identified, the next step is removing all the contours above and below them. Doing this for the top and bottom will be the same, just in opposite directions. I'll walk through how you could do it with the top line.
Each contour is made up of smaller individual line segments. For each line segment in the horizontal line's contour, check whether every other contour (each segment in that contour) is either above it or intersects with it. If either is true, then you can remove it. For a large number of contours, or very complex contours, this will be quite slow. There are a number of optimizations you could make: you could simplify the contour shapes, or compute bounding boxes around them and check whether the bounding box is above the horizontal line; if it intersects, you can look closer and see whether the line itself intersects. A rough bounding-box sketch follows.
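For example, a hedged sketch of the bounding-box shortcut (keepBetweenLines is a hypothetical helper; the "wide and flat" thresholds are assumptions to tune):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Keep only the contours whose bounding boxes lie inside the band
// spanned by the two horizontal lines.
std::vector<std::vector<Point> > keepBetweenLines(
        const std::vector<std::vector<Point> >& contours, int imageWidth) {
    int topY = -1, bottomY = -1;
    for (size_t i = 0; i < contours.size(); i++) {
        Rect r = boundingRect(contours[i]);
        // "Horizontal line": very wide and very flat (thresholds are guesses).
        if (r.width > imageWidth / 2 && r.height < r.width / 10) {
            if (topY < 0 || r.y < topY) topY = r.y;
            if (bottomY < 0 || r.y + r.height > bottomY) bottomY = r.y + r.height;
        }
    }
    std::vector<std::vector<Point> > kept;
    for (size_t i = 0; i < contours.size(); i++) {
        Rect r = boundingRect(contours[i]);
        if (topY >= 0 && bottomY >= 0 && r.y >= topY && r.y + r.height <= bottomY)
            kept.push_back(contours[i]);
    }
    return kept;
}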
Here are the steps I can suggest to handle this issue:
First you need to get the mass center of each contour by using OpenCV moments. This will give you the weighted central point of each contour as x,y coordinates.
Then make a filter based on the mass centers' y coordinates: contours whose mass center lies between those of the 45th and 22nd contours are the valid ones.
Only draw the contours which are valid according to your filter.
This may help to find the mass centers; a short sketch follows.
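A minimal sketch using cv::moments (massCenter is a hypothetical helper name):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Mass center of a contour via image moments:
// cx = m10/m00, cy = m01/m00 (m00 is the contour area).
Point2f massCenter(const std::vector<Point>& contour) {
    Moments m = moments(contour);
    if (m.m00 == 0) return Point2f(-1, -1); // degenerate contour
    return Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
}

You would then keep a contour only if massCenter(contour).y lies between the mass centers of contours 22 and 45.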
I am trying to implement automatic perspective correction in my iOS program, and when I use the test image I found in the tutorial everything works as expected. But when I take a picture I get back a weird result.
I am using code found in this tutorial
When I give it an image that looks like this:
I get this as the result:
Here is what dst gives me that might help.
I am using this to call the method which contains the code.
quadSegmentation(Img, bw, dst, quad);
Can anyone tell me why I am getting so many green lines compared to the tutorial, and how I might fix this and properly crop the image to contain only the card?
For a perspective transform you need:
source points -> coordinates of the quadrangle vertices in the source image.
destination points -> coordinates of the corresponding quadrangle vertices in the destination image.
Here we will calculate these points by contour processing.
Calculate Coordinates of quadrangle vertices in the source image
You will get your card as a contour just by blurring, thresholding, finding the contours, keeping the largest one, etc.
After finding the largest contour, approximate it with a polygonal curve; here you should get 4 points which represent the corners of your card. You can adjust the epsilon parameter until you get exactly 4 coordinates.
Calculate Coordinates of the corresponding quadrangle vertices in the destination image
These can easily be found by calculating the bounding rectangle of the largest contour.
In below image the red rectangle represent source points and green for destination points.
Adjust the co-ordinate order and apply the perspective transform
Here I adjusted the co-ordinate order manually; you could use a sorting trick instead (a sketch follows after the code).
Then calculate the transformation matrix and apply warpPerspective.
See the final result
Code
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("card.jpg");
    Mat thr;
    cvtColor(src, thr, CV_BGR2GRAY);
    threshold(thr, thr, 70, 255, CV_THRESH_BINARY);

    vector<vector<Point> > contours; // vector for storing contours
    vector<Vec4i> hierarchy;
    int largest_contour_index = 0;
    double largest_area = 0;
    Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0)); // create destination image

    findContours(thr.clone(), contours, hierarchy,
                 CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // find the contours in the image
    for (int i = 0; i < (int)contours.size(); i++) {
        double a = contourArea(contours[i], false); // find the area of this contour
        if (a > largest_area) {
            largest_area = a;
            largest_contour_index = i; // store the index of the largest contour
        }
    }
    drawContours(dst, contours, largest_contour_index,
                 Scalar(255, 255, 255), CV_FILLED, 8, hierarchy);

    vector<vector<Point> > contours_poly(1);
    approxPolyDP(Mat(contours[largest_contour_index]), contours_poly[0], 5, true);
    Rect boundRect = boundingRect(contours[largest_contour_index]);

    if (contours_poly[0].size() == 4) {
        std::vector<Point2f> quad_pts;
        std::vector<Point2f> square_pts;
        // Source corners: note the manually adjusted order (0, 1, 3, 2).
        quad_pts.push_back(Point2f(contours_poly[0][0].x, contours_poly[0][0].y));
        quad_pts.push_back(Point2f(contours_poly[0][1].x, contours_poly[0][1].y));
        quad_pts.push_back(Point2f(contours_poly[0][3].x, contours_poly[0][3].y));
        quad_pts.push_back(Point2f(contours_poly[0][2].x, contours_poly[0][2].y));
        // Destination corners: the bounding rectangle.
        square_pts.push_back(Point2f(boundRect.x, boundRect.y));
        square_pts.push_back(Point2f(boundRect.x, boundRect.y + boundRect.height));
        square_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y));
        square_pts.push_back(Point2f(boundRect.x + boundRect.width, boundRect.y + boundRect.height));

        Mat transmtx = getPerspectiveTransform(quad_pts, square_pts);
        Mat transformed = Mat::zeros(src.rows, src.cols, CV_8UC3);
        warpPerspective(src, transformed, transmtx, src.size());

        // Draw the detected quad (red) and the bounding rect (green) for debugging.
        Point P1 = contours_poly[0][0];
        Point P2 = contours_poly[0][1];
        Point P3 = contours_poly[0][2];
        Point P4 = contours_poly[0][3];
        line(src, P1, P2, Scalar(0, 0, 255), 1, CV_AA, 0);
        line(src, P2, P3, Scalar(0, 0, 255), 1, CV_AA, 0);
        line(src, P3, P4, Scalar(0, 0, 255), 1, CV_AA, 0);
        line(src, P4, P1, Scalar(0, 0, 255), 1, CV_AA, 0);
        rectangle(src, boundRect, Scalar(0, 255, 0), 1, 8, 0);
        rectangle(transformed, boundRect, Scalar(0, 255, 0), 1, 8, 0);

        imshow("quadrilateral", transformed);
        imshow("thr", thr);
        imshow("dst", dst);
        imshow("src", src);
        imwrite("result1.jpg", dst);
        imwrite("result2.jpg", src);
        imwrite("result3.jpg", transformed);
        waitKey();
    }
    else
        cout << "Make sure that you are getting 4 corners using approxPolyDP..." << endl;

    return 0;
}
This typically happens when you rely on somebody else's code to solve your particular problem instead of adapting the code. Look at the processing stages and also at the difference between their image and yours (by the way, it is a good idea to start with their image and make sure the code works on it):
Get the edge map. - will probably work since your edges are fine
Detect lines with Hough transform. - fail, since you have lines not only on the contour but also inside your card, so expect a lot of false-alarm lines
Get the corners by finding intersections between lines. - fail, for the above-mentioned reason
Check if the approximate polygonal curve has 4 vertices. - fail
Determine top-left, bottom-left, top-right, and bottom-right corner. - fail
Apply the perspective transformation. - fail completely
To fix your problem you have to ensure that only lines on the periphery are extracted. If you always have a dark background you can use this fact to discard the lines with other contrasts/polarities. Alternatively, you can extract all the lines and then select the ones that are closest to the image boundary (if your background doesn't have lines). A rough sketch of the boundary filter follows.
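As a hedged sketch (peripheryLines is a hypothetical helper; the Hough parameters and the margin are assumptions to tune):

#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace cv;

// Keep only the detected lines whose endpoints are both near the image border.
std::vector<Vec4i> peripheryLines(const Mat& edges) {
    std::vector<Vec4i> lines, kept;
    HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 50, 10);
    int margin = edges.cols / 5; // how close to a border counts as "periphery"
    for (size_t i = 0; i < lines.size(); i++) {
        Vec4i l = lines[i];
        // distance from each endpoint (x, y) to the nearest image border
        int d1 = std::min(std::min(l[0], edges.cols - l[0]),
                          std::min(l[1], edges.rows - l[1]));
        int d2 = std::min(std::min(l[2], edges.cols - l[2]),
                          std::min(l[3], edges.rows - l[3]));
        if (d1 < margin && d2 < margin)
            kept.push_back(l);
    }
    return kept;
}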
The program I'm working on right now is almost done, but I'm not very satisfied with the result. By using the Canny algorithm, I managed to get a very clear picture of the object's contour, but the program has some problems recognizing the contour and drawing it with a red line. The program:
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <cmath>

using namespace cv;
using namespace std;

// Trackbar-controlled Canny thresholds (not declared in the original snippet;
// the initial values here are assumptions).
int lowerC = 100;
int upperC = 200;

void setwindowSettings()
{
    namedWindow("Contours", CV_WINDOW_AUTOSIZE);
    createTrackbar("LowerC", "Contours", &lowerC, 255, NULL);
    createTrackbar("UpperC", "Contours", &upperC, 255, NULL);
}

void wait(void)
{
    // Busy-wait delay; consider waitKey(ms) instead.
    long t = 30000000;
    while (t--);
}

int main(void)
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;
    Mat frame, foreground, image;
    double area;
    vector<vector<Point> > contours;
    vector<vector<Point> > largest_contours;
    namedWindow("Capture", CV_WINDOW_AUTOSIZE);
    setwindowSettings();
    while (1) {
        cap >> frame; // get a new frame from camera
        if (frame.empty())
            break;
        image = frame.clone();
        cvtColor(image, foreground, CV_BGR2GRAY);
        GaussianBlur(foreground, foreground, Size(9, 11), 0, 0);
        Canny(foreground, foreground, lowerC, upperC, 3);
        findContours(foreground, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        if (contours.empty())
            continue;
        double largest_area = 0;
        for (size_t i = 0; i < contours.size(); i++) { // get the largest contour
            area = fabs(contourArea(contours[i]));
            if (area >= largest_area) {
                largest_area = area;
                largest_contours.clear();
                largest_contours.push_back(contours[i]);
            }
        }
        if (largest_area >= 3000) { // draw the largest contour if it exceeds the minimum area
            drawContours(image, largest_contours, -1, Scalar(0, 0, 255), 2);
            printf("area = %.f\n", largest_area);
        }
        wait();
        imshow("Capture", image);
        imshow("Contours", foreground);
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
Program summary:
Get images from camera
Noise filtration (Convert to gray → blur → Canny)
Find contours
Find the largest contour and its area in the image aka the object
Draw a red line around the object and print out the largest area
Rinse and repeat
And the results:
Rarely, I get what I want: contour detected, red line drawn (GOOD ONE):
...and usually I get this: no contour detected, no red line (BAD ONE):
The chance of getting the GOOD ONE is about 1 in 20, which is not very good. Also, the object's contour line in the Contours window blinks when the red line appears around the object (see the GOOD ONE picture).
I'm using one of my object (A small black square box) for this question but please note that the main objective of this object detection program is to detect the object regardless of its shape or its color.
So my questions are:
Why do I still get the BAD ONES even though the contour is as clear as day?
Can anyone share a better idea of how to improve the contour detection? (e.g. a better blur algorithm)
How do I stop the contour line from blinking when the red line is drawn around the object?
EDIT: I just discovered that the contour line's blinking is not caused by the red line drawn around it (whether with the drawContours or line function); it happens after the largest contour is detected by the findContours function and identified as the largest.
For question about no. 3 click HERE. VIDEO HERE, CLICK IT!!!
Thanks in advance.
PS: I'm using OpenCV 2.4.3 on Ms Visual C++ 2010 Exp.
Since you are using the largest contour, I presume you are trying to detect the largest object appearing in the camera's field of view. I wonder why the window light/bright light source at the top right doesn't create any contour (maybe due to the blurring). You can store the background image and subtract it from the image where the object appears; this way you can derive the object, then apply contour finding to the difference image: absdiff(frame_now, frame_backgrnd, diff), where diff is the difference image. A rough sketch follows.
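For example, a minimal sketch (objectMask is a hypothetical helper; 'background' is a frame you captured while the scene was empty, and the threshold of 30 is a guess to tune):

#include <opencv2/opencv.hpp>
using namespace cv;

// Difference the current frame against a stored background frame.
Mat objectMask(const Mat& frame, const Mat& background) {
    Mat diff, gray, mask;
    absdiff(frame, background, diff); // per-pixel absolute difference
    cvtColor(diff, gray, CV_BGR2GRAY);
    threshold(gray, mask, 30, 255, CV_THRESH_BINARY);
    return mask; // feed this to findContours instead of the Canny output
}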
If the object is in motion, you can use optical flow combined with the largest contour to detect it; a sketch is below.
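A hedged sketch of the optical-flow idea (motionMask is a hypothetical helper; the magnitude threshold and the Farneback parameters are assumptions to tune):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Dense optical flow between two frames -> binary motion mask.
Mat motionMask(const Mat& prevFrame, const Mat& frame) {
    Mat prevGray, gray, flow;
    cvtColor(prevFrame, prevGray, CV_BGR2GRAY);
    cvtColor(frame, gray, CV_BGR2GRAY);
    calcOpticalFlowFarneback(prevGray, gray, flow, 0.5, 3, 15, 3, 5, 1.2, 0);
    std::vector<Mat> xy;
    split(flow, xy);              // flow is CV_32FC2: x and y displacement
    Mat mag;
    magnitude(xy[0], xy[1], mag); // per-pixel motion magnitude
    Mat mask;
    threshold(mag, mask, 1.0, 255, CV_THRESH_BINARY);
    mask.convertTo(mask, CV_8U);  // findContours needs an 8-bit image
    return mask;
}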
Try doing your process without the blurring function and then detecting the largest contour area.
For plotting the points, try this (note the extra line that closes the contour):
for (int i = 1; i < (int)largest_contours[0].size(); i++)
    line(image, largest_contours[0][i-1], largest_contours[0][i],
         cv::Scalar(0, 0, 255), 2, 8, 0);
// close the loop back to the first point
line(image, largest_contours[0].back(), largest_contours[0][0],
     cv::Scalar(0, 0, 255), 2, 8, 0);
I am developing some image processing tools in iOS. Currently, I have a contour of features computed, which is of type InputArrayOfArrays.
Declared as:
std::vector<std::vector<cv::Point> > contours_final( temp_contours.size() );
Now, I would like to extract the areas of the original RGB picture enclosed by the contours, and possibly store each sub-image in cv::Mat format. How can I do that?
Thanks in advance!
I'm guessing what you want to do is just extract the regions in the detected contours. Here is a possible solution:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main(void)
{
    // Placeholders for the variables from your code:
    Mat image = imread("input.jpg"); // the image you used to compute the contours
    vector<vector<Point> > contours_final; // filled by your findContours step

    vector<Mat> subregions;
    for (size_t i = 0; i < contours_final.size(); i++)
    {
        // Get the bounding box for this contour
        Rect roi = boundingRect(contours_final[i]); // This is an OpenCV function
        // Create a mask for this contour to mask out that region from the image.
        Mat mask = Mat::zeros(image.size(), CV_8UC1);
        drawContours(mask, contours_final, (int)i, Scalar(255), CV_FILLED); // This is an OpenCV function
        // At this point, mask is 255 for pixels within the contour and 0 elsewhere.
        // Extract the region using the mask
        Mat contourRegion;
        Mat imageROI;
        image.copyTo(imageROI, mask);
        contourRegion = imageROI(roi);
        // Mat maskROI = mask(roi); // Save this if you want a mask for pixels
        //                          // within the contour in contourRegion.
        // Store contourRegion: a rectangular image the size of the contour's
        // bounding rect, BUT only pixels within the contour are visible.
        // All other pixels are set to (0,0,0).
        subregions.push_back(contourRegion);
    }
    return 0;
}
You might also want to consider saving the individual masks to optionally use as an alpha channel, in case you want to save the subregions in a format that supports transparency (e.g. PNG). A minimal sketch follows.
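A hedged sketch of doing that (assumes contourRegion and maskROI from the loop above, with the maskROI line uncommented):

#include <opencv2/opencv.hpp>
using namespace cv;

// Save a contour region as a transparent PNG.
void saveWithAlpha(const Mat& contourRegion, const Mat& maskROI) {
    Mat bgra;
    cvtColor(contourRegion, bgra, CV_BGR2BGRA); // add a 4th channel
    insertChannel(maskROI, bgra, 3);            // use the mask as alpha
    imwrite("subregion.png", bgra);             // PNG preserves transparency
}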
NOTE: I'm NOT extracting ALL the pixels in the bounding box for each contour, just those within the contour. Pixels that are not within the contour but in the bounding box are set to 0. The reason is that your Mat object is an array and that makes it rectangular.
Lastly, I don't see any reason for you to just save the pixels in the contour in a specially created data structure because you would then need to store the position for each pixel in order to recreate the image. If your concern is saving space, that would not save you much space if at all. Saving the tightest bounding box would suffice. If instead you wish to just analyze the pixels in the contour region, then save a copy of the mask for each contour so that you can use it to check which pixels are within the contour.
You are looking for the cv::approxPolyDP() function to connect the points.
I shared a similar use of the overall procedure in this post. Check the for loop after the findContours() call.
I think what you're looking for is cv::boundingRect().
Something like this:
using namespace cv;
Mat img = ...;
...
int minArea = 100; // not in the original snippet - pick a threshold that suits you
vector<Mat> roiVector;
for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); ++it) {
    if (boundingRect(*it).area() > minArea) {
        roiVector.push_back(img(boundingRect(*it)));
    }
}
cv::boundingRect() takes a vector of Points and returns a cv::Rect. Initializing a Mat myRoi = img(myRect) gives you a header that points into that part of the image (so modifying myRoi will ALSO modify img; use img(myRect).clone() if you need an independent copy).
See more here.
I set up an area of interest somewhere near the center of my image using:
Mat frame;
// frame has been initialized as a frame from a camera input
Rect roi(frame.cols * 0.45, frame.rows * 0.45, 10, 8);
image_roi = frame(roi);
// I stopped here, not knowing what to do next
I'm using a camera, and any time I grab a frame, the ROI will be anywhere between 30% and 100% filled with my desired color, which is red in this case. What is the most efficient method to know if red is present in my current frame?
Solution:
image_roi = frame(roi); // a frame from my camera as a cv::Mat
// note: converting in place writes into the ROI's pixels, i.e. into 'frame'
cvtColor(image_roi, image_roi, CV_BGR2HSV);
Mat thrs; // inRange allocates the output for us
inRange(image_roi, Scalar(0, 100, 100), Scalar(12, 255, 255), thrs); // HSV thresholding for red
int sum = 0;
for (int i = 0; i < thrs.rows; i++) // sum up
{
    for (int j = 0; j < thrs.cols; j++)
    {
        // index with the row stride (cols), not rows - the original
        // 'rows * i + j' only works for square images
        sum = sum + thrs.data[thrs.cols * i + j];
    }
}
// (equivalently: int sum = countNonZero(thrs) * 255;)
if (sum > 100) // my application only cares about red
    cout << "Red" << endl;
else
    cout << "White" << endl;
sum = 0;
This solution should address not only red but any color distribution:
Get a color histogram for your ROI: a two-dimensional hue and saturation histogram (follow the example here).
Use calcBackProject to project the histogram back in the full image. You will get larger values in pixels presenting a color near the modes of the histogram (in this case, reds).
Threshold the result to get the pixels that better match the distribution (in this case, the "best reds").
This solution can be used, for example, to get a simple but very functional skin detector. A hedged sketch of the pipeline follows.
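Here is a rough sketch (matchColorDistribution is a hypothetical helper; the histogram sizes and the back-projection threshold are assumptions to tune):

#include <opencv2/opencv.hpp>
using namespace cv;

// Build a hue/saturation histogram from 'roi' and find the pixels of
// 'image' that best match that color distribution.
Mat matchColorDistribution(const Mat& roi, const Mat& image) {
    Mat roiHsv, imgHsv;
    cvtColor(roi, roiHsv, CV_BGR2HSV);
    cvtColor(image, imgHsv, CV_BGR2HSV);
    int channels[] = {0, 1};                    // hue and saturation
    int histSize[] = {30, 32};
    float hranges[] = {0, 180}, sranges[] = {0, 256};
    const float* ranges[] = {hranges, sranges};
    Mat hist;
    calcHist(&roiHsv, 1, channels, Mat(), hist, 2, histSize, ranges);
    normalize(hist, hist, 0, 255, NORM_MINMAX);
    Mat backproj;
    calcBackProject(&imgHsv, 1, channels, hist, backproj, ranges);
    // keep only the strongest matches (e.g. the "best reds")
    Mat bestMatches;
    threshold(backproj, bestMatches, 50, 255, CV_THRESH_BINARY);
    return bestMatches;
}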
I'm assuming you just want to know the percentage of red in the ROI. If that's not correct, please clarify.
I'd scan the ROI and convert each pixel into a better color space for color comparison, such as YCbCr or HSV. I'd then count the number of pixels where the hue is within some delta of red's hue (usually 0 degrees on the color wheel). You might need to deal with some edge cases where the brightness or saturation are too low for a human to consider the pixel red, even though technically it is, depending on what you're trying to achieve. Something like the sketch below.
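A hedged sketch of the hue-delta count (redFraction is a hypothetical helper; the hue delta and the saturation/value floors are assumptions; note that OpenCV stores hue as 0..179 and red wraps around both ends of that range):

#include <opencv2/opencv.hpp>
#include <algorithm>
using namespace cv;

// Fraction of ROI pixels whose hue is within a delta of red.
double redFraction(const Mat& roiBgr) {
    Mat hsv;
    cvtColor(roiBgr, hsv, CV_BGR2HSV);
    int redCount = 0;
    for (int y = 0; y < hsv.rows; y++) {
        for (int x = 0; x < hsv.cols; x++) {
            Vec3b px = hsv.at<Vec3b>(y, x);
            int hue = px[0], sat = px[1], val = px[2];
            int dist = std::min(hue, 180 - hue); // hue distance to red, with wraparound
            // ignore pixels too dark or too washed out to look red
            if (dist <= 10 && sat > 100 && val > 100)
                redCount++;
        }
    }
    return (double)redCount / (hsv.rows * hsv.cols);
}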