I am working on a small OpenCV project to detect lines of a certain colour from a mobile phone camera.
In short, I would like to:
Transform the input image into an image of a certain colour (e.g. red, within a specific lower and upper range)
Apply Hough line transformation to the resulting image so that it detects only lines of that specific colour
Superimpose on the original image the lines detected
These are the functions that I'd like to use, but I'm not quite sure how to fill in the missing bits.
This is the processImage function, called from a smartphone app when processing images from an instance of CvVideoCamera:
- (void)processImage:(Mat&)image;
{
cv::Mat orig_image = image.clone();
cv::Mat red_image = ??
// Apply Hough transformation to detect lines between a minimum length and a maximum length (I was thinking of using the CV_HOUGH_PROBABILISTIC method..)
// Comment.. see below..
I am unable to understand the documentation here, as the C++ method signature does not have a method field:
vector<Vec2f> lines;
From the official documentation:
C++: void HoughLines(InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0 )
HoughLines(dst, lines, 1, CV_PI/180, 100, 0, 0 );
Taken from sample code; I haven't properly understood how it works (e.g. what's the usage of theta? How does giving a different angle reflect on line detection?).
for( size_t i = 0; i < lines.size(); i++ )
{
Here I should only consider lines above a certain size.. (no idea how)
}
Here I should then add the resulting lines to the original image (no idea how) so that they can be shown on the screen.
Any help would be greatly appreciated.
You can use HSV color space to extract color tone information.
Here's some code with comments; if there are any questions, feel free to ask:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/coloredLines.png");
// convert to HSV color space
cv::Mat hsvImage;
cv::cvtColor(input, hsvImage, CV_BGR2HSV);
// split the channels
std::vector<cv::Mat> hsvChannels;
cv::split(hsvImage, hsvChannels);
// the hue channel tells you the color tone, if saturation and value aren't too low.
// red color is a special case, because the hue space is circular and red is exactly at the beginning/end of the circle.
// in literature, hue space goes from 0 to 360 degrees, but OpenCV rescales the range to 0 up to 180, because 360 does not fit in a single byte. Alternatively there is another mode where 0..360 is rescaled to 0..255 but this isn't as common.
int hueValue = 0; // red color
int hueRange = 15; // how much difference from the desired color we want to include in the result. If you increase this value, a red color would detect some orange values, too.
int minSaturation = 50; // I'm not sure which value is good here...
int minValue = 50; // not sure whether 50 is a good min value here...
cv::Mat hueImage = hsvChannels[0]; // [hue, saturation, value]
// is the color within the lower hue range?
cv::Mat hueMask;
cv::inRange(hueImage, hueValue - hueRange, hueValue + hueRange, hueMask);
// if the desired color is near the border of the hue space, check the other side too:
// TODO: this won't work if "hueValue + hueRange > 180" - maybe use two different if-cases instead... with int lowerHueValue = hueValue - 180
if (hueValue - hueRange < 0 || hueValue + hueRange > 180)
{
cv::Mat hueMaskUpper;
int upperHueValue = hueValue + 180; // in reality this would be + 360 instead
cv::inRange(hueImage, upperHueValue - hueRange, upperHueValue + hueRange, hueMaskUpper);
// add this mask to the other one
hueMask = hueMask | hueMaskUpper;
}
// now we have to filter out all the pixels where saturation and value do not fit the limits:
cv::Mat saturationMask = hsvChannels[1] > minSaturation;
cv::Mat valueMask = hsvChannels[2] > minValue;
hueMask = (hueMask & saturationMask) & valueMask;
cv::imshow("desired color", hueMask);
// now perform the line detection
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(hueMask, lines, 1, CV_PI / 360, 50, 50, 10);
// draw the result as big green lines:
for (unsigned int i = 0; i < lines.size(); ++i)
{
cv::line(input, cv::Point(lines[i][0], lines[i][1]), cv::Point(lines[i][2], lines[i][3]), cv::Scalar(0, 255, 0), 5);
}
cv::imwrite("C:/StackOverflow/Output/coloredLines_mask.png", hueMask);
cv::imwrite("C:/StackOverflow/Output/coloredLines_detection.png", input);
cv::imshow("input", input);
cv::waitKey(0);
return 0;
}
using this input image:
Will extract this "red" color (adjust hueValue and hueRange to detect different colors):
and HoughLinesP detects those lines from the mask (should work with HoughLines similarly):
Here's another set of images with non-lines too...
About your different questions:
There are two functions, HoughLines and HoughLinesP. HoughLines does not extract a line length, but you can compute it in post-processing by checking which pixels of the edge mask (the HoughLines input) correspond to the extracted line.
parameters:
image - edge image (should be clear?)
lines - lines given by angle and position, with no length or endpoints; they are interpreted as infinitely long
rho - the accumulator's distance resolution. The bigger it is, the more robust the detection should be against slightly distorted lines, but the less accurate the extracted lines' position/angle
threshold - the bigger, the fewer false positives, but you might miss some lines
theta - the angle resolution: the smaller, the more different line orientations can be detected. If your line's orientation does not fit the angle steps, the line might not be detected. For example, CV_PI/180 will detect in 1° resolution; if your line's orientation falls between steps (e.g. 33.5°), it might be missed.
I'm not so extremely sure about all the parameters, maybe you'll have to look at the literature about hough line detection, or someone else can add some hints here.
If you instead use cv::HoughLinesP, line segments with start and end points will be detected, which is easier to interpret, and you can compute the line length from cv::norm(cv::Point(lines[i][0], lines[i][1]) - cv::Point(lines[i][2], lines[i][3])).
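For example, to handle the "only consider lines above a certain size" part of the question, you could filter the HoughLinesP output by that length. A minimal sketch (the minimum length of 50 is an assumed placeholder):
double minLength = 50.0; // assumed value; tune for your use case
std::vector<cv::Vec4i> longLines;
for (size_t i = 0; i < lines.size(); ++i)
{
    cv::Point p1(lines[i][0], lines[i][1]);
    cv::Point p2(lines[i][2], lines[i][3]);
    if (cv::norm(p1 - p2) >= minLength)
        longLines.push_back(lines[i]); // keep only the sufficiently long segments
}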
I will not show the code, but rather the steps, with some tricks.
Assume that you want to detect road lanes (which are lines with a white or light yellow color and some specific properties).
Original image (I added some extra lines to create noise):
Step 1: Remove the parts of the image that don't need to be processed, to save CPU usage (simple but useful)
Step 2: Convert to a grayscale image
Step 3: Threshold
Use a threshold suited to the color of your line; that color will become white and the others will become black
Step 4: Use contours to find the bounds of the objects
Step 5: Use fitLine with the contours from the previous step as input to obtain the equations of the lines
fitLine returns a point (x0, y0) on the line and a direction vector v = (vx, vy)
Step 6: With the equations of the lines you can draw any line you want (a sketch of steps 3-6 follows below)
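For reference, a minimal sketch of steps 3-6 might look like this, assuming input is the BGR road image; the threshold value and drawing length are rough guesses:
cv::Mat gray, bw;
cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, bw, 200, 255, cv::THRESH_BINARY); // white/light-yellow lanes become white
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i)
{
    cv::Vec4f lineParams; // fitLine outputs (vx, vy, x0, y0)
    cv::fitLine(contours[i], lineParams, cv::DIST_L2, 0, 0.01, 0.01);
    cv::Point p0(cvRound(lineParams[2]), cvRound(lineParams[3]));
    cv::Point dir(cvRound(1000 * lineParams[0]), cvRound(1000 * lineParams[1]));
    cv::line(input, p0 - dir, p0 + dir, cv::Scalar(0, 0, 255), 2); // draw through (x0, y0) along v
}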
First, I integrated the OpenCV framework into Xcode. All the OpenCV code is in Objective-C, and I am using it from Swift via a bridging header. I am new to the OpenCV framework and am trying to get a count of the vertical lines in an image.
Here is my code:
First, I am converting the image to grayscale:
+ (UIImage *)convertToGrayscale:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
cv::Mat gray;
cv::cvtColor(mat, gray, CV_RGB2GRAY);
UIImage *grayscale = MatToUIImage(gray);
return grayscale;
}
Then, I am detecting edges so I can find the lines of gray color:
+ (UIImage *)detectEdgesInRGBImage:(UIImage *)image {
cv::Mat mat;
UIImageToMat(image, mat);
//Prepare the image for findContours
cv::threshold(mat, mat, 128, 255, CV_THRESH_BINARY);
//Find the contours. Use the contourOutput Mat so the original image doesn't get overwritten
std::vector<std::vector<cv::Point> > contours;
cv::Mat contourOutput = mat.clone();
cv::findContours( contourOutput, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
NSLog(@"Count => %lu", contours.size());
//For Blur
/*cv::GaussianBlur(mat, gray, cv::Size(11, 11), 0); */
UIImage *grayscale = MatToUIImage(mat);
return grayscale;
}
Both of these functions are written in Objective-C.
Here, I am calling both functions from Swift:
override func viewDidLoad() {
super.viewDidLoad()
let img = UIImage(named: "imagenamed")
let img1 = Wrapper.convert(toGrayscale: img)
self.capturedImageView.image = Wrapper.detectEdges(inRGBImage: img1)
}
I have been working on this for some days and found some useful documents (reference links):
OpenCV - how to count objects in photo?
How to count number of lines (Hough Trasnform) in OpenCV
OPENCV Documents
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?#findcontours
Basically, I understand that we first need to convert the image to black and white, and then, using cvtColor, threshold and findContours, we can find the colors or lines.
I am attaching the image whose vertical lines I want to count.
Original Image
Output Image that I am getting
I got a line count => 10.
I am not able to get an accurate count here.
Please guide me on this. Thank you!
Since you want to detect the number of vertical lines, there is a very simple approach I can suggest. You already got a clear output, and I used this output in my code. Here are the steps, before the code:
Preprocess the input image to get the lines clearly
Check each row, scanning until you get a pixel whose value is higher than 100 (the threshold value I chose)
Then increase the line counter for that row
Continue along that row until you get a pixel whose value is lower than 100
Restart from step 3 and repeat until the row is finished; do this for each row
At the end, find the most repeated element in the array in which you stored the line count for each row. This number will be the number of vertical lines.
Note: If the steps are difficult to understand, think of it this way:
"I am checking the first row. I found a pixel which is higher than 100; this is a line edge starting, so I increase the counter for this row. I search along this row until I get a pixel smaller than 100, and then search again for a pixel bigger than 100. When the row is finished, I assign the line count for this row to a big array. Do this for the whole image. At the end, since some lines look like two lines at the top and some noise can occur, you should take the most repeated element in the big array as the number of lines."
Here is the code part in C++:
#include <vector>
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>
int main()
{
cv::Mat img = cv::imread("/ur/img/dir/img.jpg",cv::IMREAD_GRAYSCALE);
std::vector<int> numberOfVerticalLinesForEachRow;
cv::Rect r(0,0,img.cols-10,200);
img = img(r);
for(int i=0; i<img.rows; i++)
{
int numberOfLines = 0;
bool blackCheck = true; // reset per row; true while we are on dark pixels
for(int j=0; j<img.cols; j++)
{
if((int)img.at<uchar>(cv::Point(j,i))>100 && blackCheck)
{
numberOfLines++;
blackCheck = 0;
}
if((int)img.at<uchar>(cv::Point(j,i))<100)
blackCheck = 1;
}
numberOfVerticalLinesForEachRow.push_back(numberOfLines);
}
// In this part you need a simple algorithm to find the most repeated element (see the sketch after this code)
for(int k:numberOfVerticalLinesForEachRow)
std::cout<<k<<std::endl;
cv::namedWindow("WinWin",0);
cv::imshow("WinWin",img);
cv::waitKey(0);
}
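The "most repeated element" step is not in the code above. One simple way to finish it, placed inside main right after the row loop, is a small histogram (a sketch, assuming the vector is non-empty):
// (requires #include <map>)
std::map<int, int> histogram;
for (int k : numberOfVerticalLinesForEachRow)
    histogram[k]++;
int lineCount = 0, bestFrequency = 0;
for (const auto& bin : histogram)
{
    if (bin.second > bestFrequency)
    {
        bestFrequency = bin.second; // most frequent per-row count so far
        lineCount = bin.first;
    }
}
std::cout << "Number of vertical lines: " << lineCount << std::endl;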
Here's another possible approach. It relies mainly on the cv::thinning function from the extended image processing module to reduce the lines to a width of 1 pixel. We can then crop a ROI from this image and count the number of transitions from 255 (white) to 0 (black). These are the steps:
Threshold the image using Otsu's method
Apply some morphology to clean up the binary image
Get the skeleton of the image
Crop a ROI from the center of the image
Count the number of jumps from 255 to 0
This is the code; be sure to include the extended image processing module (ximgproc) and link against it when compiling:
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // The extended image processing module
// Read Image:
std::string imagePath = "D://opencvImages//";
cv::Mat inputImage = cv::imread( imagePath+"IN2Xh.png" );
// Convert BGR to Grayscale:
cv::cvtColor( inputImage, inputImage, cv::COLOR_BGR2GRAY );
// Get binary image via Otsu:
cv::threshold( inputImage, inputImage, 0, 255, cv::THRESH_OTSU );
The above snippet produces the following image:
Note that there's a little bit of noise due to the thresholding; let's try to remove those isolated blobs of white pixels by applying some morphology, maybe an opening, which is an erosion followed by a dilation. The structuring elements and iterations, though, are not the same, and these were found by experimentation. I wanted to remove the majority of the isolated blobs without modifying the original image too much:
// Apply Morphology. Erosion + Dilation:
// Set rectangular structuring element of size 3 x 3:
cv::Mat SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
// Set the iterations:
int morphoIterations = 1;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_ERODE, SE, cv::Point(-1,-1), morphoIterations);
// Set rectangular structuring element of size 5 x 5:
SE = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(5, 5) );
// Set the iterations:
morphoIterations = 2;
cv::morphologyEx( inputImage, inputImage, cv::MORPH_DILATE, SE, cv::Point(-1,-1), morphoIterations);
This combination of structuring elements and iterations yield the following filtered image:
It's looking alright. Now comes the main idea of the algorithm. If we compute the skeleton of this image, we "normalize" all the lines to a width of 1 pixel, which is very handy, because we can reduce the image to a single row and count the number of jumps. Since the lines are "normalized", we can get rid of possible overlaps between lines. Now, skeletonized images sometimes produce artifacts near the borders of the image. These artifacts resemble thickened anchors at the first and last rows of the image. To prevent these artifacts we can extend the borders prior to computing the skeleton:
// Extend borders to avoid skeleton artifacts, extend 5 pixels in all directions:
cv::copyMakeBorder( inputImage, inputImage, 5, 5, 5, 5, cv::BORDER_CONSTANT, 0 );
// Get the skeleton:
cv::Mat imageSkelton;
cv::ximgproc::thinning( inputImage, imageSkelton );
This is the skeleton obtained:
Nice. Before we count jumps, though, we must observe that the lines are skewed. If we reduce this image directly to one row, some overlapping could indeed happen between two lines that are too skewed. To prevent this, I crop a middle section of the skeleton image and count transitions there. Let's crop the image:
// Crop middle ROI:
cv::Rect linesRoi;
linesRoi.x = 0;
linesRoi.y = 0.5 * imageSkelton.rows;
linesRoi.width = imageSkelton.cols;
linesRoi.height = 1;
cv::Mat imageROI = imageSkelton( linesRoi );
This would be the new ROI, which is just the middle row of the skeleton image:
Let me prepare a BGR copy of this just to draw some results:
// BGR version of the Grayscale ROI:
cv::Mat colorROI;
cv::cvtColor( imageROI, colorROI, cv::COLOR_GRAY2BGR );
Ok, let's loop through the image and count the transitions from 255 to 0. That happens when we look at the value of the current pixel and compare it with the value obtained one iteration earlier. The current pixel must be 0 and the past pixel 255. There's more than one way to loop through a cv::Mat in C++; here I use a cv::MatIterator_:
// Set the loop variables:
cv::MatIterator_<uchar> it, end;
uchar pastPixel = 0;
int jumpsCounter = 0;
int i = 0;
// Loop thru image ROI and count 255-0 jumps:
for (it = imageROI.begin<uchar>(), end = imageROI.end<uchar>(); it != end; ++it) {
// Get the current pixel (the skeleton ROI is single-channel):
uchar currentPixel = *it;
// Compare it with past pixel:
if ( (currentPixel == 0) && (pastPixel == 255) ){
// We have a jump:
jumpsCounter++;
// Draw the point on the BGR version of the image:
cv::line( colorROI, cv::Point(i, 0), cv::Point(i, 0), cv::Scalar(0, 0, 255), 1 );
}
// current pixel is now past pixel:
pastPixel = currentPixel;
i++;
}
// Show image and print number of jumps found:
cv::namedWindow( "Jumps Found", CV_WINDOW_NORMAL );
cv::imshow( "Jumps Found", colorROI );
cv::waitKey( 0 );
std::cout<<"Jumps Found: "<<jumpsCounter<<std::endl;
The points where the jumps were found are drawn in red, and the number of total jumps printed is:
Jumps Found: 9
I'd like to detect a custom "multiple-bar pattern" in an image.
The pattern looks like this, kind of a group of parallel black bars with the same width but different height, see this image:
This pattern could be in the image or not, but if it is, I'd like to get its position.
Note: The color of the pattern is black in every case.
Note: The size of the pattern is unknown, so it could be big or could be super small.
Note: The pattern bar count is a fixed number. It will be the same ( in this case 7) for every occurrence.
An image could look like this:
And after performing the search algorithm, this should happen:
Any help would be very appreciated. Thanks a million in advance, Tempi.
Note: this is the code I have so far (not working):
Mat myImage; // this is the mat of the photo you can see above
Mat algorithmImage;
myImage.copyTo(algorithmImage);
cvtColor(algorithmImage, algorithmImage, CV_RGB2HSV);
double imgThreshold = 20;
cv::inRange(algorithmImage, cv::Scalar(0, 0, 0, 0), cv::Scalar(180, 255, 30, 0), algorithmImage); // the last argument must be the output mask
Mat canny;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
Canny( algorithmImage, canny, 3, 6, 3 );
findContours( canny, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );
for( int i = 0; i<contours.size(); i++ ) {
// ??
}
bool isLineAlreadyFound(const Vec4i& _l1, const Vec4i& _l2) {
Vec4i l1(_l1), l2(_l2);
float length1 = sqrtf((l1[2] - l1[0])*(l1[2] - l1[0]) + (l1[3] - l1[1])*(l1[3] - l1[1]));
float length2 = sqrtf((l2[2] - l2[0])*(l2[2] - l2[0]) + (l2[3] - l2[1])*(l2[3] - l2[1]));
float product = (l1[2] - l1[0])*(l2[2] - l2[0]) + (l1[3] - l1[1])*(l2[3] - l2[1]);
if (fabs(product / (length1 * length2)) < cos(CV_PI / 30))
return false;
float mx1 = (l1[0] + l1[2]) * 0.5f;
float mx2 = (l2[0] + l2[2]) * 0.5f;
float my1 = (l1[1] + l1[3]) * 0.5f;
float my2 = (l2[1] + l2[3]) * 0.5f;
float dist = sqrtf((mx1 - mx2)*(mx1 - mx2) + (my1 - my2)*(my1 - my2));
if (dist > std::max(length1, length2) * 0.5f)
return false;
return true;
}
Here's how I would approach your problem.
Since your pattern is totally black, maybe you can benefit from that and just do a threshold. Try a very low threshold (if it's a perfect black, use 1 as the threshold).
After the above step, using findContours should already do a good job of detecting the shapes. But what you'll need is a way to recognize your pattern. Since your patterns may show up at different scales and/or orientations, you will need a scale/orientation-invariant descriptor; I mean here a shape descriptor. You can look at https://en.wikipedia.org/wiki/Image_moment for Hu moments.
That will be for only one bar in each pattern. You should complete
your search by designing a specific feature for your pattern with the
help of the features extracted from each bar.
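To make that concrete, here is a hedged sketch of computing Hu moments per contour; binaryImage stands for the thresholded image from the first step, and the matchShapes comparison is an assumption, not something prescribed above:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binaryImage, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i)
{
    cv::Moments m = cv::moments(contours[i]);
    double hu[7];
    cv::HuMoments(m, hu); // translation/scale/rotation-invariant shape descriptors
    // Compare hu (or use cv::matchShapes with a reference bar contour) to decide
    // whether this contour looks like one bar; then check the count and relative
    // positions of the matching contours to recognize the whole pattern.
}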
The trick here is that you have 14 parallel edges. Those are impossible to miss. As noted in the other answer, threshold to isolate them. Directly after that, run the edge detector. This turns the black/white edges into lines, while the rest of the image might produce a few scattered pixels. Visualize this to see for yourself.
Next, run the Hough Transform from OpenCV. This gives you lines in the format {offset, direction}. Even if a few scattered background pixels would coincidentally line up, they wouldn't form 14 lines with the same direction.
You can calculate the maximum offset difference to find the scale of the bar pattern, and double-check the relative offsets to check individual bar widths and spacings. Remember, the 14 lines are edges, so you need to pair them.
At this point you've identified the direction and scale of the bar pattern. It might be a bit surprising to realize you haven't actually identified the bar positions yet. The reason for this procedure is that we tackle the hard problem first. We now go back to the output of the edge detection, and divide all the edges there into three categories: the parallel edges, the rounded end caps between pairs of parallel edges (follow the contours), and random background pixels. You might need a MORPH_CLOSE operation to close gaps in the contours.
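As a rough sketch of that pipeline (all parameter values are guesses to be tuned, and myImage is the photo from the question):
cv::Mat gray, bw, edges;
cv::cvtColor(myImage, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, bw, 1, 255, cv::THRESH_BINARY_INV); // near-black bars become white
cv::Canny(bw, edges, 50, 150);
std::vector<cv::Vec2f> lines; // each entry is {rho, theta}
cv::HoughLines(edges, lines, 1, CV_PI / 180, 80);
// Group the lines by theta: a cluster of ~14 entries sharing one direction is a
// strong candidate for the 7-bar pattern (each bar contributes two edges), and
// the spread of their rho values gives the pattern's scale.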
I have a project to create a face detection program. One of the requirements is for me to draw a line across the center of the face.
Now the way I have done this is by first drawing a line across the two eyes (by getting each eye's coordinates and using them as line points) and then drawing a line perpendicular to that. This way I can account for face rotation as well.
However this line is the same length as the line across the eyes. I want this perpendicular line to cut across the entire face if possible. Attached is an example of my output for one of the rotated face images. I need the green line to be longer.
Unfortunately your image is a JPEG, so it has compression artifacts.
I first tried to extract the line, but that didn't give good angle information, so I used cv::minAreaRect after green-area segmentation.
cv::Mat green;
cv::inRange(input, cv::Scalar(0,100,0), cv::Scalar(100,255,100),green);
// instead of this, maybe try to find contours and choose the right one, if there are more green "objects" in the image
std::vector<cv::Point2i> locations;
cv::findNonZero(green, locations);
// find the used color to draw in same color later
cv::Vec3d averageColorTmp(0,0,0);
for(unsigned int i=0; i<locations.size(); ++i)
{
averageColorTmp += input.at<cv::Vec3b>(locations[i]);
}
averageColorTmp *= 1.0/locations.size();
// compute direction of the segmented region
cv::RotatedRect line = cv::minAreaRect(locations);
// extract directions:
cv::Point2f start = line.center;
cv::Point2f rect_points[4];
line.points(rect_points);
cv::Point2f direction1 = rect_points[0] - rect_points[1];
cv::Point2f direction2 = rect_points[0] - rect_points[3];
// use dominant direction
cv::Point2f lineWidthDirection;
lineWidthDirection = (cv::norm(direction1) < cv::norm(direction2)) ? direction1 : direction2;
double lineWidth = cv::norm(lineWidthDirection);
cv::Point2f lineLengthDirection;
lineLengthDirection = (cv::norm(direction1) > cv::norm(direction2)) ? direction1 : direction2;
lineLengthDirection = 0.5*lineLengthDirection; // because we operate from center;
// input parameter:
// how much do you like to increase the line size?
// value of 1.0 is original size
// value of > 1 is increase of line
double scaleFactor = 3.0;
// draw the line
cv::line(input, start- scaleFactor*lineLengthDirection, start+ scaleFactor*lineLengthDirection, cv::Scalar(averageColorTmp[0],averageColorTmp[1],averageColorTmp[2]), lineWidth);
giving this result:
This method should be quite robust, IF there is no other green in the image but the green line. If there are other green parts in the image, you should extract contours first and choose the right contour to assign to the variable locations (see the sketch below).
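That alternative might look like this sketch, which picks the largest contour of the green mask and uses its points as locations:
std::vector<std::vector<cv::Point> > greenContours;
cv::findContours(green.clone(), greenContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
int largest = -1;
double maxArea = 0;
for (int i = 0; i < (int)greenContours.size(); ++i)
{
    double area = cv::contourArea(greenContours[i]);
    if (area > maxArea) { maxArea = area; largest = i; }
}
if (largest >= 0)
    locations = greenContours[largest]; // then continue with cv::minAreaRect as above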
I'll write it out in pseudocode:
Take the furthest right-hand edge coordinate (row1, col1) in your image
Take the right-hand point of your line (row2, col2)
Draw a line from your line's point (row2, col2) to (row2, col1)
Repeat for the left-hand side
This will result in two additional lines drawn from either side of your line to the edges of your image (a rough sketch in code follows below).
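In code, that pseudocode might translate to the following sketch; image is the cv::Mat being drawn on, and the two endpoints are hypothetical values standing in for your line's real endpoints:
cv::Point lineLeft(120, 200), lineRight(300, 180); // assumed endpoints of the existing line
// extend to the right image border on the same row:
cv::line(image, lineRight, cv::Point(image.cols - 1, lineRight.y), cv::Scalar(0, 255, 0), 2);
// and to the left image border:
cv::line(image, lineLeft, cv::Point(0, lineLeft.y), cv::Scalar(0, 255, 0), 2);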
I'm fairly new to OpenCV, and very excited to learn more. I've been toying with the idea of outlining edges, shapes.
I've come across this code (running on an iOS device), which uses Canny. I'd like to be able to render this in color, and circle each shape. Can someone point me in the right direction?
Thanks!
// Convert the BGRA frame to grayscale
IplImage *grayImage = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 1);
cvCvtColor(iplImage, grayImage, CV_BGRA2GRAY);
cvReleaseImage(&iplImage);
// Blur to reduce noise before edge detection
IplImage* img_blur = cvCreateImage( cvGetSize( grayImage ), grayImage->depth, 1);
cvSmooth(grayImage, img_blur, CV_BLUR, 3, 0, 0, 0);
cvReleaseImage(&grayImage);
// Canny edge detection, then invert so edges are dark on light
IplImage* img_canny = cvCreateImage( cvGetSize( img_blur ), img_blur->depth, 1);
cvCanny( img_blur, img_canny, 10, 100, 3 );
cvReleaseImage(&img_blur);
cvNot(img_canny, img_canny);
An example might be these burger patties. OpenCV would detect the patty and outline it.
Original Image:
Color information is often handled by conversion to HSV color space, which represents "color" directly instead of dividing it into R/G/B components; this makes it easier to handle the same color at different brightness levels, etc.
If you convert your image to HSV, you'll get this:
cv::Mat hsv;
cv::cvtColor(input,hsv,CV_BGR2HSV);
std::vector<cv::Mat> channels;
cv::split(hsv, channels);
cv::Mat H = channels[0];
cv::Mat S = channels[1];
cv::Mat V = channels[2];
Hue channel:
Saturation channel:
Value channel:
Typically, the hue channel is the first one to look at if you are interested in segmenting "color" (e.g. all red objects). One problem is that hue is a circular/angular value, which means that the highest values are very similar to the lowest values; this results in the bright artifacts at the border of the patties. To overcome this for a particular value, you can shift the whole hue space. If shifted by 50° you'll get something like this instead:
cv::Mat shiftedH = H.clone();
int shift = 25; // in openCV hue values go from 0 to 180 (so have to be doubled to get to 0 .. 360) because of byte range from 0 to 255
for(int j=0; j<shiftedH.rows; ++j)
for(int i=0; i<shiftedH.cols; ++i)
{
shiftedH.at<unsigned char>(j,i) = (shiftedH.at<unsigned char>(j,i) + shift)%180;
}
Now you can use simple Canny edge detection to find edges in the hue channel:
cv::Mat cannyH;
cv::Canny(shiftedH, cannyH, 100, 50);
You can see that the regions are a little bigger than the real patties, that might be because of the tiny reflections on the ground around the patties, but I'm not sure about that. Maybe it's just because of jpeg compression artifacts ;)
If you instead use the saturation channel to extract edges, you'll end up with something like this:
cv::Mat cannyS;
cv::Canny(S, cannyS, 200, 100);
where the contours aren't completely closed. Maybe you can combine hue and saturation during preprocessing to extract edges in the hue channel, but only where the saturation is high enough (see the snippet below).
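That combination could be as simple as masking the hue edges with a saturation threshold; the threshold value here is a guess:
cv::Mat cannyCombined = cannyH & (S > 100); // keep hue edges only where saturation is high enough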
At this stage you have edges. Regard that edges aren't contours yet. If you directly extract contours from edges they might not be closed/separated etc:
// extract contours of the canny image:
std::vector<std::vector<cv::Point> > contoursH;
std::vector<cv::Vec4i> hierarchyH;
cv::findContours(cannyH,contoursH, hierarchyH, CV_RETR_TREE , CV_CHAIN_APPROX_SIMPLE);
// draw the contours to a copy of the input image:
cv::Mat outputH = input.clone();
for( int i = 0; i< contoursH.size(); i++ )
{
cv::drawContours( outputH, contoursH, i, cv::Scalar(0,0,255), 2, 8, hierarchyH, 0);
}
You can remove those small contours by checking cv::contourArea(contoursH[i]) > someThreshold before drawing. But do you see how the two patties on the left are connected? Here comes the hardest part... use some heuristics to "improve" your result.
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
cv::dilate(cannyH, cannyH, cv::Mat());
Dilation before contour extraction will "close" the gaps between different objects but increase the object size too.
If you extract contours from that, it will look like this:
If you instead choose only the "inner" contours, it is exactly what you want:
cv::Mat outputH = input.clone();
for( int i = 0; i< contoursH.size(); i++ )
{
if(cv::contourArea(contoursH[i]) < 20) continue; // ignore contours that are too small to be a patty
if(hierarchyH[i][3] < 0) continue; // ignore "outer" contours
cv::drawContours( outputH, contoursH, i, cv::Scalar(0,0,255), 2, 8, hierarchyH, 0);
}
Mind that the dilation and inner-contour steps are a little fuzzy, so they might not work for different images; if the initial edges are placed better around the object border, it might 1. not be necessary to do the dilation and inner-contour steps at all, and 2. if it is still necessary, the dilation will make the object smaller (which luckily is fine for the given sample image).
EDIT: Some important information about HSV: The hue channel will give every pixel a color of the spectrum, even if the saturation is very low (= gray/white) or if the brightness (value) is very low, so it is often desirable to threshold the saturation and value channels to find a specific color! This might be much easier and much more stable to handle than the dilation I've used in my code.
I want the hand image to be a black and white shape of the hand. Here's a sample of the input and the desired output:
Using a threshold doesn't give the desired output because some of the colors inside the hand are the same as the background color. How can I get the desired output?
Adaptive threshold, find contours, floodfill?
Basically, adaptive threshold turns your image into black and white, but takes the threshold level based on local conditions around each pixel - that way, you should avoid the problem you're experiencing with an ordinary threshold. In fact, I'm not sure why anyone would ever want to use a normal threshold.
If that doesn't work, an alternative approach is to find the largest contour in the image, draw it onto a separate matrix and then floodfill everything inside it with black. (Floodfill is like the bucket tool in MSPaint - it starts at a particular pixel, and fills in everything connected to that pixel which is the same colour with another colour of your choice.)
Possibly the most robust approach against various lighting conditions is to do them all in the sequence at the top. But you may be able to get away with only the threshold or the contours/floodfill (a rough sketch follows below).
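For the first steps, a rough C++ sketch might be as follows (the block size, constant and seed point are assumptions, and the OpenCV4Android calls are analogous):
cv::Mat gray = cv::imread("hand.png", cv::IMREAD_GRAYSCALE); // hypothetical input
cv::Mat bw;
cv::adaptiveThreshold(gray, bw, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, 15, 5);
// ... then find and draw the largest contour (see the function below) and
// flood-fill the inside, starting from a seed point assumed to lie within the hand:
cv::Point seed(bw.cols / 2, bw.rows / 2);
cv::floodFill(bw, seed, cv::Scalar(0));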
By the way, perhaps the trickiest part is actually finding the contours, because findContours returns an arraylist/vector/whatever (depending on the platform, I think) of MatOfPoints. MatOfPoint is a subclass of Mat, but you can't draw it directly - you need to use drawContours. Here's some code for OpenCV4Android that I know works:
private Mat drawLargestContour(Mat input) {
/** Allocates and returns a black matrix with the
* largest contour of the input matrix drawn in white. */
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(input, contours, new Mat() /* hierarchy */,
Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
double maxArea = 0;
int index = -1;
for (MatOfPoint contour : contours) { // iterate over every contour in the list
double area = Imgproc.contourArea(contour);
if (area > maxArea) {
maxArea = area;
index = contours.indexOf(contour);
}
}
if (index == -1) {
Log.e(TAG, "Fatal error: no contours in the image!");
}
Mat border = new Mat(input.rows(), input.cols(), CvType.CV_8UC1); // initialized to 0 (black) by default because it's Java :)
Imgproc.drawContours(border, contours, index, new Scalar(255)); // 255 = draw contours in white
return border;
}
Two quick things you can try:
After thresholding you can:
Do a morphological closing,
or, the most straightforward: cv::findContours, keep the largest one if there's more than one, then draw it using cv::fillConvexPoly and you will get this mask. (fillConvexPoly will fill the holes for you.)