OpenCV - approxPolyDP for edge maps (not contours) - c++

I have successfully applied the method cv::approxPolyDP on contours (cv::findContours), in order to represent a contour with a simpler polygon and implicitly do some denoising.
I would like to do the same thing on an edge map acquired from an RGBD camera (which is in general very noisy), but with not much success so far, and I cannot find relevant examples online. The reason I need this is that with an edge map one can also use the edges between fingers, edges created by finger occlusion, or edges created in the palm.
Is this method applicable to general edge maps, other than contours?
Could someone point me to an example?
Some images attached:
Successful example for contours:
Problematic case for edge maps:
Most probably I am drawing things the wrong way, but drawing just the pixels returned by the method shows that large areas are not represented in the end result (and this doesn't change much with the epsilon parameter).
I attach also a depth image, similar to the ones I use in the experimental pipeline described above. This depth image was not acquired by a depth camera, but was synthetically generated by reading the depth buffer of the GPU, using OpenGL.
Just for reference, this is also the edge map of the depth image acquired straight from the depth camera (using the raw image, no smoothing etc. applied).
(hand as viewed from a depth camera, palm facing upwards, fingers "closing" towards the palm)

Your issue with approxPolyDP is due to the format of the input passed to approxPolyDP.
Explanation
approxPolyDP expects its input to be a vector of Points. These points define a polygonal curve that will be processed by approxPolyDP. The curve could be open or closed, which can be controlled by a flag.
The ordering of the points in the list is important. Just as one traces out a polygon by hand, each subsequent point in the vector must be the next vertex of the polygon, clockwise or counter-clockwise.
If the list of points is stored in raster order (sorted by Y and then X), then point[k] and point[k+1] do not necessarily belong to the same curve. This is the cause of the problem.
This issue is explained with illustrations in OpenCV - How to extract edges from result of Canny Function?. Quote from Mikhail: "Canny doesn't connect pixels into chains or segments."
Illustration of "raster order" that is generated by Canny.
Illustration of "contour order" that is expected by approxPolyDP
What is needed
What you need is a list of "chains of edge pixels". Each chain must contain edge pixels that are adjacent to each other, just like someone tracing out an object's outline with a pencil, without the tip of the pencil leaving the paper.
This is not what is returned from edge detection methods, such as Canny. Further processing is needed to convert an edge map into chains of adjacent (continuous) edge pixels.
Suggested solutions
(1) Use binary threshold instead of edge detection as the input to findContours
This would be applicable if there exists a threshold value that separates the hand from the background, and that this value works for the whole hand (not just part of the hand).
(2) Scan the edge map, and build the list of adjacent pixels by examining the neighbors of each edge pixel.
This is similar to the connected-components algorithm, except that instead of finding a blob (where you only need to know each pixel's membership), you try to find chains of pixels such that you can tell the previous and next edge pixels along the chain.
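A rough sketch of what option #2 could look like, assuming a CV_8UC1 edge map with edge pixels set to 255 (this naive version only follows simple, non-branching chains; junctions and loops would need extra handling):

#include <opencv2/opencv.hpp>
#include <vector>

// Chain adjacent edge pixels (8-connectivity) into ordered lists of points.
std::vector<std::vector<cv::Point> > chainEdgePixels(const cv::Mat& edges)
{
    std::vector<std::vector<cv::Point> > chains;
    cv::Mat visited = cv::Mat::zeros(edges.size(), CV_8UC1);

    for (int y = 0; y < edges.rows; ++y)
    {
        for (int x = 0; x < edges.cols; ++x)
        {
            if (edges.at<uchar>(y, x) == 0 || visited.at<uchar>(y, x))
                continue;

            // Start a new chain and greedily walk to an unvisited neighbor.
            std::vector<cv::Point> chain;
            cv::Point p(x, y);
            bool foundNext = true;
            while (foundNext)
            {
                chain.push_back(p);
                visited.at<uchar>(p) = 255;
                foundNext = false;
                for (int dy = -1; dy <= 1 && !foundNext; ++dy)
                {
                    for (int dx = -1; dx <= 1 && !foundNext; ++dx)
                    {
                        cv::Point q(p.x + dx, p.y + dy);
                        if (q.x < 0 || q.y < 0 || q.x >= edges.cols || q.y >= edges.rows)
                            continue;
                        if (edges.at<uchar>(q) && !visited.at<uchar>(q))
                        {
                            p = q;
                            foundNext = true;
                        }
                    }
                }
            }
            chains.push_back(chain);
        }
    }
    return chains;
}

Each resulting chain can then be passed to approxPolyDP individually.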
(3) Use an alternative edge detection algorithm, such as Edge Drawing.
Details can be found at http://ceng.anadolu.edu.tr/cv/EdgeDrawing/
Unfortunately, this is not provided out-of-the-box from OpenCV, so you may have to find an implementation elsewhere.
Sample code for option #1.
#include <stdint.h>
#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
    // Load the depth image as grayscale.
    Mat matInput = imread("~/Data/mA9EE.png", false);

    // ---- Preprocessing of depth map. (Optional.) ----
    GaussianBlur(matInput, matInput, cv::Size(9, 9), 4.0);

    // ---- Here, we use cv::threshold instead of cv::Canny as explained above ----
    Mat matEdge;
    //Canny(matInput, matEdge, 0.1, 1.0);
    threshold(matInput, matEdge, 192.0, 255.0, THRESH_BINARY_INV);

    // ---- Use findContours to find chains of consecutive edge pixels ----
    vector<vector<Point> > contours;
    findContours(matEdge, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // ---- Code below is only used for visualizing the result. ----
    // (Initialize to zeros; an uninitialized Mat would contain garbage.)
    Mat matContour = Mat::zeros(matEdge.size(), CV_8UC1);
    for (size_t k = 0; k < contours.size(); ++k)
    {
        const vector<Point>& contour = contours[k];
        for (size_t k2 = 0; k2 < contour.size(); ++k2)
        {
            const Point& p = contour[k2];
            matContour.at<uint8_t>(p) = 255;
        }
    }
    imwrite("~/Data/output.png", matContour);

    cout << "Done!" << endl;
    return 0;
}

Related

Incomplete Delaunay triangulation

I'm using C++ and OpenCV to create a Delaunay triangle mesh from user-specified sample points on an image (which will then be extrapolated across the domain using the FEM for the relevant ODE).
Since the 4 corners of the (rectangular) image are in the list of vertices supplied to Subdiv2D, I expect the outer convex hull of the triangulation to trace the perimeter of the image. However, very frequently, there are missing elements around the outside.
Sometimes I can get the expected result by nudging the coordinates of certain points to avoid high aspect ratio triangles. But this is not a solution, as in general the user must be able to specify any valid coordinates.
An example output is like this: CV Output. Elements are in white with black edges. At the bottom and right edges, no triangles have been added, and you can see through to the black background.
How can I make the outer convex hull of the triangulation trace the image perimeter with no gaps please?
Here is a MWE (with a plotting function included):
#include <opencv2/opencv.hpp>
#include <vector>
void DrawDelaunay(cv::Mat& image,cv::Subdiv2D& subdiv);
int main(int argc, char** argv)
{
    // image dim
    int width = 3440;
    int height = 2293;

    // sample coords
    std::vector<int> x = {0,width-1,width-1,0,589,1015,1674,2239,2432,3324,2125,2110,3106,3295,1298,1223,277,208,54,54,1749,3245,431,1283,1397,3166};
    std::vector<int> y = {0,0,height-1,height-1,2125,1739,1154,817,331,143,1377,2006,1952,1501,872,545,812,310,2180,54,2244,2234,1387,1412,118,1040};

    // add delaunay nodes
    cv::Rect rect(0, 0, width, height);
    cv::Subdiv2D subdiv(rect);
    for(size_t i = 0; i < x.size(); ++i)
    {
        cv::Point2f p(x[i], y[i]);
        subdiv.insert(p);
    }

    // draw elements (start from a black background)
    cv::Mat image = cv::Mat::zeros(height, width, CV_8U);
    DrawDelaunay(image, subdiv);
    cv::resize(image, image, cv::Size(), 0.3, 0.3);
    cv::imshow("Delaunay", image);
    cv::waitKey(0);
    return 0;
}

void DrawDelaunay(cv::Mat& image, cv::Subdiv2D& subdiv)
{
    std::vector<cv::Vec6f> elements;
    subdiv.getTriangleList(elements);
    std::vector<cv::Point> pt(3);
    for(size_t i = 0; i < elements.size(); ++i)
    {
        // node coords
        cv::Vec6f t = elements[i];
        pt[0] = cv::Point(cvRound(t[0]), cvRound(t[1]));
        pt[1] = cv::Point(cvRound(t[2]), cvRound(t[3]));
        pt[2] = cv::Point(cvRound(t[4]), cvRound(t[5]));

        // element edges
        cv::Scalar black(0, 0, 0);
        cv::line(image, pt[0], pt[1], black, 3);
        cv::line(image, pt[1], pt[2], black, 3);
        cv::line(image, pt[2], pt[0], black, 3);

        // element fill
        int nump = 3;
        const cv::Point* pp[1] = {&pt[0]};
        cv::fillPoly(image, pp, &nump, 1, cv::Scalar(255, 0, 0));
    }
}
If relevant, I coded this in Matlab first where the Delaunay triangulation worked exactly as I expected.
My solution was to add a border around the 'cv::Rect rect' provided to cv::Subdiv2D, making it larger in width and height than the image (20% larger seems to work well).
Then instead of adding nodes to the corners of the image, I added 4 corner nodes and 4 edge nodes to the perimeter of this enlarged 'cv::Rect rect' variable which holds the Delaunay points.
This seems to solve the problem. I think what was happening was that if the user placed any samples near the edge of the image, it resulted in high aspect ratio triangles at the edges. This ticket suggests there is a bug around this in the OpenCV implementation of the Delaunay algorithm.
My solution hopefully means that corner and edge nodes are never too close to user samples, side-stepping the issue.
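To illustrate (this reuses the width/height from the MWE above; the 20% margin and the exact border-node positions are just the values that happened to work for me):

// Enlarge the bounding rect by ~20% so user samples are never close to it.
int mx = width / 5, my = height / 5;
cv::Rect rect(-mx, -my, width + 2 * mx, height + 2 * my);
cv::Subdiv2D subdiv(rect);

// 4 corner nodes and 4 edge-midpoint nodes on the enlarged perimeter
// (kept just inside the rect so Subdiv2D accepts them).
std::vector<cv::Point2f> border = {
    {(float)rect.x,                    (float)rect.y},
    {(float)(rect.x + rect.width - 1), (float)rect.y},
    {(float)(rect.x + rect.width - 1), (float)(rect.y + rect.height - 1)},
    {(float)rect.x,                    (float)(rect.y + rect.height - 1)},
    {(float)(rect.x + rect.width / 2), (float)rect.y},
    {(float)(rect.x + rect.width - 1), (float)(rect.y + rect.height / 2)},
    {(float)(rect.x + rect.width / 2), (float)(rect.y + rect.height - 1)},
    {(float)rect.x,                    (float)(rect.y + rect.height / 2)}
};
for (const cv::Point2f& p : border)
    subdiv.insert(p);
// ...then insert the user sample points exactly as before.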
I haven't tested this extensively yet. I'm not sure how robust the solution will turn out to be. It has worked so far.
I'm still interested to know of other solutions.
I ran your data points through the Tinfour project's demo application and got the results shown below. It looks like your data is fine. Unfortunately, the Tinfour project is written in Java and you're working in C++, so it will have limited value to you.
Since you plan on using Finite Element Methods, you might want to see whether there is any way you can run a Delaunay Refinement operation over your data to improve the geometry. The skinny triangles sometimes lead to numerical issues when using FEM software.

Rectangle detection / tracking using OpenCV

What I need
I'm currently working on an augmented reality kind of game. The controller that the game uses (I'm talking about the physical input device here) is a mono-colored, rectangular piece of paper. I have to detect the position, rotation and size of that rectangle in the capture stream of the camera. The detection should be invariant to scale and to rotation along the X and Y axes.
The scale invariance is needed in case that the user moves the paper away or towards the camera. I don't need to know the distance of the rectangle so scale invariance translates to size invariance.
The rotation invariance is needed in case the user tilts the rectangle along its local X and/or Y axis. Such a rotation changes the shape of the paper from a rectangle to a trapezoid. In this case, the oriented bounding box can be used to measure the size of the paper.
What I've done
At the beginning there is a calibration step. A window shows the camera feed and the user has to click on the rectangle. On click, the color of the pixel the mouse is pointing at is taken as reference color. The frames are converted into HSV color space to improve color distinguishing. I have 6 sliders that adjust the upper and lower thresholds for each channel. These thresholds are used to binarize the image (using opencv's inRange function).
After that I'm eroding and dilating the binary image to remove noise and unite nearby chunks (using opencv's erode and dilate functions).
The next step is finding contours (using opencv's findContours function) in the binary image. These contours are used to detect the smallest oriented rectangles (using opencv's minAreaRect function). As final result I'm using the rectangle with the largest area.
A short conclusion of the procedure:
Grab a frame
Convert that frame to HSV
Binarize it (using the color that the user selected and the thresholds from the sliders)
Apply morph ops (erode and dilate)
Find contours
Get the smallest oriented bounding box of each contour
Take the largest of those bounding boxes as result
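The core of one frame's processing looks roughly like this (detectPaper is just an illustrative name; the kernel size is an example and the thresholds come from the calibration sliders):

#include <opencv2/opencv.hpp>
#include <vector>

// One iteration of the detection pipeline described above.
// 'lower'/'upper' are the HSV thresholds taken from the calibration sliders.
cv::RotatedRect detectPaper(const cv::Mat& frameBGR,
                            const cv::Scalar& lower, const cv::Scalar& upper)
{
    cv::Mat hsv, mask;
    cv::cvtColor(frameBGR, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, lower, upper, mask);                  // binarize by color

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::erode(mask, mask, kernel);                         // remove noise
    cv::dilate(mask, mask, kernel);                        // unite nearby chunks

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Keep the oriented bounding box with the largest area.
    cv::RotatedRect best;
    double bestArea = 0.0;
    for (size_t i = 0; i < contours.size(); ++i)
    {
        cv::RotatedRect box = cv::minAreaRect(contours[i]);
        double area = box.size.area();
        if (area > bestArea) { bestArea = area; best = box; }
    }
    return best;   // best.center, best.size, best.angle
}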
As you may have noticed, I don't take advantage of the knowledge about the actual shape of the paper, simply because I don't know how to use this information properly.
I've also thought about using the tracking algorithms of opencv. But there were three reasons that prevented me from using them:
Scale invariance: as far as I have read, some of the algorithms don't support different scales of the object.
Movement prediction: some algorithms use movement prediction for better performance, but the object I'm tracking moves completely random and therefore unpredictable.
Simplicity: I'm just looking for a mono colored rectangle in an image, nothing fancy like car or person tracking.
Here is a - relatively - good catch (binary image after erode and dilate)
and here is a bad one
The Question
How can I improve the detection in general and especially to be more resistant against lighting changes?
Update
Here are some raw images for testing.
Can't you just use thicker material?
Yes I can and I already do (unfortunately I can't access these pieces at the moment). However, the problem still remains. Even if I use material like cardboard, it isn't bent as easily as paper, but one can still bend it.
How do you get the size, rotation and position of the rectangle?
The minAreaRect function of opencv returns a RotatedRect object. This object contains all the data I need.
Note
Because the rectangle is mono-colored, there is no possibility to distinguish between top and bottom or left and right. This means that the rotation is always in the range [0, 180], which is perfectly fine for my purposes. The ratio of the two sides of the rect is always w:h > 2:1. If the rectangle were a square, the range of rotation would change to [0, 90], but this can be considered irrelevant here.
As suggested in the comments I will try histogram equalization to reduce brightness issues and take a look at ORB, SURF and SIFT.
I will update on progress.
The H channel in HSV space is the hue, and it is not sensitive to lighting changes. The red hue range is roughly [150, 180].
Based on that information, I did the following:
Change into the HSV space, split the H channel, threshold and normalize it.
Apply morph ops (open)
Find contours, filter by some properties (width, height, area, ratio and so on).
PS. I cannot fetch the image you uploaded to Dropbox because of network restrictions, so I just cropped the right side of your second image as the input.
import cv2
import numpy as np

imgname = "src.png"
img = cv2.imread(imgname)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

## Split the H channel in HSV, and get the red range
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
h[h<150]=0
h[h>180]=0

## normalize, do the open-morph-op
normed = cv2.normalize(h, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
kernel = cv2.getStructuringElement(shape=cv2.MORPH_ELLIPSE, ksize=(3,3))
opened = cv2.morphologyEx(normed, cv2.MORPH_OPEN, kernel)
res = np.hstack((h, normed, opened))
cv2.imwrite("tmp1.png", res)
Now, we get the result as this (h, normed, opened):
Then find contours and filter them.
## Find contours (the [-2] handles the differing return signature across OpenCV versions)
contours = cv2.findContours(opened, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
print(len(contours))

bboxes = []
rboxes = []
cnts = []
dst = img.copy()

for cnt in contours:
    ## Get the straight bounding rect
    bbox = cv2.boundingRect(cnt)
    x,y,w,h = bbox
    if w<30 or h < 30 or w*h < 2000 or w > 500:
        continue

    ## Draw rect
    cv2.rectangle(dst, (x,y), (x+w,y+h), (255,0,0), 1, 16)

    ## Get the rotated rect
    rbox = cv2.minAreaRect(cnt)
    (cx,cy), (w,h), rot_angle = rbox
    print("rot_angle:", rot_angle)

    ## backup
    bboxes.append(bbox)
    rboxes.append(rbox)
    cnts.append(cnt)
The result is like this:
rot_angle: -2.4540319442749023
rot_angle: -1.8476102352142334
Because of the blue rectangle tag in the source image, the card is split into two parts, but a clean image will have no problem.
I know it's been a while since I asked the question. I recently continued on the topic and solved my problem (although not through rectangle detection).
Changes
Using wood to strengthen my controllers (the "rectangles") like below.
Placed 2 ArUco markers on each controller.
How it works
Convert the frame to grayscale,
downsample it (to increase performance during detection),
equalize the histogram using cv::equalizeHist,
find markers using cv::aruco::detectMarkers,
correlate markers (if multiple controllers),
analyze markers (position and rotation),
compute result and apply some error correction.
It turned out that the marker detection is very robust to lighting changes and different viewing angles which allows me to skip any calibration steps.
I placed 2 markers on each controller to increase the detection robustness even more. Both markers have to be detected only once (to measure how they correlate). After that, it's sufficient to find only one marker per controller, as the other can be extrapolated from the previously computed correlation.
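The detection itself boils down to something like this simplified sketch (using the opencv_contrib aruco module with its classic API; the dictionary choice and downsample factor are just examples, not necessarily my exact values):

#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <vector>

// Detect ArUco markers in one frame, roughly as described in the steps above.
void detectControllers(const cv::Mat& frameBGR)
{
    cv::Mat gray, small;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);
    cv::resize(gray, small, cv::Size(), 0.5, 0.5);   // downsample for speed
    cv::equalizeHist(small, small);                  // reduce lighting influence

    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f> > corners;
    cv::aruco::detectMarkers(small, dict, corners, ids);

    // Marker corners are in downsampled coordinates; scale them back up
    // before correlating markers and computing position/rotation.
    for (size_t i = 0; i < corners.size(); ++i)
        for (size_t j = 0; j < corners[i].size(); ++j)
            corners[i][j] *= 2.0f;
}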
Here is a detection result in a bright environment:
in a darker environment:
and when hiding one of the markers (the blue point indicates the extrapolated marker position):
Failures
The initial shape detection that I implemented didn't perform well. It was very fragile to lighting changes. Furthermore, it required an initial calibration step.
After the shape detection approach I tried SIFT and ORB in combination with brute force and knn matchers to extract and locate features in the frames. It turned out that mono-colored objects don't provide many keypoints (what a surprise). The performance of SIFT was terrible anyway (ca. 10 fps @ 540p).
I drew some lines and other shapes on the controller, which resulted in more keypoints being available. However, this didn't yield huge improvements.

How can I detect the position and the radius of the ball using opencv?

I need to detect this ball: and find its position and radius using opencv. I have downloaded many codes, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, as well as making the ball distinguishable from its surroundings by painting it a different colour.
Once your setup is optimized for detection, you may proceed via different ways to track your ball (stationary or not). A few ways may be:
Feature detection: via Hough circles, detect 2D circles (and their radii) that lie within a certain color range, as explained below.
There are many more ways to detect objects via feature detection, as this clever blog points out.
Object detection: via SURF, SIFT and many other methods, you may detect your ball, calculate its radius and even predict its motion.
This code uses Hough circles to compute the ball position, display it in real time and calculate its radius in real time. I am using Qt 5.4 with OpenCV version 2.4.12.
void Dialog::TrackMe() {
    webcam.read(cim);   /* read the live feed from the webcam and store each frame in the OpenCV matrix 'cim' */
    if(cim.empty()==false)   /* if there is something stored in cim, i.e. the webcam is running and there is some input */
    {
        cv::inRange(cim, cv::Scalar(0,0,175), cv::Scalar(100,100,256), cproc);
        /* keep only the parts of cim whose BGR values lie between (0,0,175) and (100,100,256) and store the mask in the OpenCV matrix cproc */

        cv::HoughCircles(cproc, veccircles, CV_HOUGH_GRADIENT, 2, cproc.rows/4, 100, 50, 10, 100);
        /* take cproc, detect circles with the CV_HOUGH_GRADIENT method and store them in veccircles */

        for(itcircles=veccircles.begin(); itcircles!=veccircles.end(); itcircles++)
        {
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED);              //draw center point
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0,0,255), 3);   //draw circle
        }

        QImage qimgprocess((uchar*)cproc.data, cproc.cols, cproc.rows, cproc.step, QImage::Format_Indexed8);   //convert cv::Mat to QImage
        ui->output->setPixmap(QPixmap::fromImage(qimgprocess));
        /* render QImage to screen */
    }
    else
        return;   /* no input, return to calling function */
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should be able to show where the ball is. To do so, the captured frame is divided into buckets which are further divided into grids. Within each grid, an edge is detected (if it exists) and thus a circle is detected. However, only those circles that pass through the grids that lie within the range mentioned above (in cv::Scalar) are considered. Thus, for every circle that passes through a grid that lies in the specified range, a number corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio. This means that the accumulator matrix will store only half as many values as the resolution of the input image cproc. After voting, we can find local maxima in the accumulator matrix. The positions of the local maxima correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between centers of the detected circles.
100 and 50 are respectively the higher and lower threshold passed to the canny edge function, which basically detects edges only between the mentioned thresholds
10 and 100 are the minimum and maximum radius to be detected. Anything above or below these values will not be detected.
Now, the for loop processes each frame captured and stored in veccircles. It creates a circle and a point as detected in the frame.
For the above, you may visit this link

Extract one object from bunch of objects and detect edges

For my college project I need to identify the species of a plant from the leaf shape by detecting the edges of a leaf (I use OpenCV 2.4.9 and C++), but the source image was taken in the real environment of the plant and has more than one leaf. See the below example image. So here I need to extract the edge pattern of just one leaf to process further.
Using Canny Edge Detector I can identify edges of the whole image.
But I don't know how to proceed from here to extract the edge pattern of just one leaf, ideally the clearest and most complete one. I don't even know if this is possible. Can anyone please tell me, if this is possible, how to extract the edges of one leaf? I just want to know the image processing steps that I need to apply to the image. I don't want any code samples. I'm new to image processing and OpenCV and am learning by doing experiments.
Thanks in advance.
Edit
As Luis said, I have done a morphological close on the image after edge detection using the Canny edge detector, but it still seems difficult for me to find the largest contour in the image.
Here are the steps I have taken to process the image
Apply Bilateral Filter to reduce noise
bilateralFilter(img_src, img_blur, 31, 31 * 2, 31 / 2);
Adjust contrast by histogram equalization
cvtColor(img_blur,img_equalized,CV_BGR2GRAY);
Apply Canny edge detector
Canny(img_equalized, img_edge_detected, 20, 60, 3);
Threshold binary image to remove some background data
threshold(img_edge_detected, img_threshold, 1, 255,THRESH_BINARY_INV);
Morphological close of the image
morphologyEx(img_threshold, img_closed, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(2, 2)));
Following are the resulting images I'm getting.
This is the result I'm getting for the above original image
Source image and result for second image
Source :
Result :
Is there any way to detect the largest contour and extract it from the image ?
Note that my final target is to create a plant identification system using real environmental image, but here I cannot use template matching or masking kind of things because the user has to take an image and upload it so the system doesn't have any prior idea about the leaf.
Here is the full code
#include <opencv\cv.h>
#include <opencv\highgui.h>

using namespace cv;

int main()
{
    Mat img_src, img_blur, img_gray, img_equalized, img_edge_detected, img_threshold, img_closed;

    //Load original image
    img_src = imread("E:\\IMAG0196.jpg");

    //Apply Bilateral Filter to reduce noise
    bilateralFilter(img_src, img_blur, 31, 31 * 2, 31 / 2);

    //Adjust contrast by histogram equalization
    cvtColor(img_blur, img_equalized, CV_BGR2GRAY);

    //Apply Canny edge detector
    Canny(img_equalized, img_edge_detected, 20, 60, 3);

    //Threshold binary image to remove some background data
    threshold(img_edge_detected, img_threshold, 15, 255, THRESH_BINARY_INV);

    //Morphological close of the image
    morphologyEx(img_threshold, img_closed, MORPH_CLOSE, getStructuringElement(MORPH_ELLIPSE, Size(2, 2)));

    imshow("Result", img_closed);
    waitKey(0);
    return 0;
}
Thank you.
Well there is a similar question that was asked here:
opencv matching edge images
It seems that edge information is not a good descriptor for the image; still, if you want to try it, I'd do the following steps:
Load image and convert it to grayscale
Detect edges - Canny, Sobel; try them and find what suits you best
Set threshold to a given value that eliminates most background - Binarize image
Close the image - morphological close, don't close the window!
Count and identify objects in the image (Blobs, Watershed)
Check each object for a shape (comparing against leaf shapes you described beforehand, or a standard shape like an ellipse), using features like:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
http://www.math.uci.edu/icamp/summer/research_11/park/shape_descriptors_survey.pdf
If a given object has a shape that you described as a leaf, then you have detected the leaf!
I believe that, given the images are taken in the real world, this algorithm will perform poorly, but it's a start. Well, hope it helps :).
-- POST EDIT 06/07
Well since you have no prior information about the leaf, I think the best we could do is the following:
Load image
Bilateral filter
Canny
Extract contours
Assume: that the contour with the largest perimeter is the leaf
Convex hull the 2 or 3 largest contours (the blue line is the resulting convex hull)
Use this convex hull to do a graph cut on the image and segment it
If you do those steps, you'll end up with images like these:
I won't post the code here, but you can check it out in my messy github. I hope you don't mind that it was made in Python.
Leaf - Github
Still, I have a couple of things to finish that could improve the result. The roadmap would be:
Define the mask in the graph cut (as described in the docs)
Applying region growing may give a better convex hull
Removing all edges that touch the border of the image can help to identify larger edges
Well, again, I hope it helps

Detect clusters of circular objects by iterative adaptive thresholding and shape analysis

I have been developing an application to count circular objects such as bacterial colonies from pictures.
What makes it easy is the fact that the objects are generally well distinct from the background.
However, a few difficulties make the analysis tricky:
The background will present gradual as well as rapid intensity changes.
At the edges of the container, the objects will be elliptic rather than circular.
The edges of the objects are sometimes rather fuzzy.
The objects will cluster.
The objects can be very small (6 px in diameter)
Ultimately, the algorithms will be used (via a GUI) by people who do not have a deep understanding of image analysis, so the parameters must be intuitive and few.
The problem has been addressed many times in the scientific literature and "solved", for instance, using circular Hough transform or watershed approaches, but I have never been satisfied by the results.
One simple approach that was described is to get the foreground by adaptive thresholding and split (as I described in this post) the clustered objects using distance transform.
I have successfully implemented this method, but it could not always deal with sudden changes in intensity. Also, I have been asked by peers to come up with a more "novel" approach.
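For reference, the splitting step of that earlier approach looked roughly like this (the 0.5 factor and the foreground conventions are illustrative only; constant names follow the same OpenCV 2.x style as the code further below):

#include <opencv2/opencv.hpp>

// Split touching blobs in a binary foreground mask via the distance transform.
// 'fg' is CV_8UC1 with foreground pixels set to 255 (e.g. from adaptive thresholding).
cv::Mat splitClusters(const cv::Mat& fg)
{
    cv::Mat dist;
    cv::distanceTransform(fg, dist, CV_DIST_L2, 3);

    // Keep only points far from the background: they give one marker per
    // object even when the objects touch each other.
    double maxVal;
    cv::minMaxLoc(dist, 0, &maxVal);
    cv::Mat peaks;
    cv::threshold(dist, peaks, 0.5 * maxVal, 255, cv::THRESH_BINARY);
    peaks.convertTo(peaks, CV_8UC1);

    // These markers can then seed cv::watershed (or connected components)
    // to split the clustered objects.
    return peaks;
}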
I was therefore looking for a new method to extract the foreground and investigated other thresholding/blob detection methods.
I tried MSERs but found out that they were not very robust and quite slow in my case.
I eventually came up with an algorithm that, so far, gives me excellent results:
I split the three channels of my image and reduce their noise (blur/median blur). For each channel:
I apply a manual implementation of the first step of adaptive thresholding by calculating the absolute difference between the original channel and a convolved (by a large kernel blur) one. Then, for all the relevant values of threshold:
I apply a threshold on the result of 2)
find contours
validate or invalidate contours on the basis of their shape (size, area, convexity...)
only the valid continuous regions (i.e. delimited by contours) are then redrawn in an accumulator (1 accumulator per channel).
After accumulating continuous regions over the values of threshold, I end up with a map of "region scores". The regions with the highest intensity are those that fulfilled the morphology filter criteria most often.
The three maps (one per channel) are then converted to grey-scale and thresholded (the threshold is controlled by the user)
Just to show you the kind of image I have to work with:
This picture represents part of 3 sample images in the top and the result of my algorithm (blue = foreground) of the respective parts in the bottom.
Here is my C++ implementation of steps 3-7:
/*
 * cv::Mat dst[3] is the result of the absolute difference between original and convolved channel.
 * MCF(std::vector<cv::Point>, int, int) is a filter function that returns a positive int only if the input contour is valid.
 */

/* Allocate 3 matrices (1 per channel) */
cv::Mat accu[3];

/* We define the maximal threshold to be tried as half of the absolute maximal value in each channel */
int maxBGR[3];
for(unsigned int i=0; i<3; i++){
    double min, max;
    cv::minMaxLoc(dst[i], &min, &max);
    maxBGR[i] = max/2;
    /* In addition, we fill the accumulators with zeros */
    accu[i] = cv::Mat(compos[0].rows, compos[0].cols, CV_8U, cv::Scalar(0));
}

/* These loops are intended to be multithreaded using
   #pragma omp parallel for collapse(2) schedule(dynamic) */
/* For each channel */
for(unsigned int i=0; i<3; i++){
    /* For each value of threshold (m_step can be > 1 in order to save time) */
    for(int j=0; j<maxBGR[i]; j += m_step){
        /* Temporary matrix */
        cv::Mat tmp;
        std::vector<std::vector<cv::Point> > contours;
        /* Threshold dst by j */
        cv::threshold(dst[i], tmp, j, 255, cv::THRESH_BINARY);
        /* Find continuous regions */
        cv::findContours(tmp, contours, CV_RETR_LIST, CV_CHAIN_APPROX_TC89_L1);
        if(contours.size() > 0){
            /* Test each contour */
            for(unsigned int k=0; k<contours.size(); k++){
                int valid = MCF(contours[k], m_minRad, m_maxRad);
                if(valid>0){
                    /* I found that redrawing was very much faster if the given contour was copied into a smaller container.
                     * I do not really understand why though. For instance,
                     *   cv::drawContours(miniTmp,contours,k,cv::Scalar(1),-1,8,cv::noArray(), INT_MAX, cv::Point(-rect.x,-rect.y));
                     * is slower, especially if contours is very long.
                     */
                    std::vector<std::vector<cv::Point> > tpv(1);
                    std::copy(contours.begin()+k, contours.begin()+k+1, tpv.begin());
                    /* We make a ROI here */
                    cv::Rect rect = cv::boundingRect(tpv[0]);
                    cv::Mat miniTmp(rect.height, rect.width, CV_8U, cv::Scalar(0));
                    cv::drawContours(miniTmp, tpv, 0, cv::Scalar(1), -1, 8, cv::noArray(), INT_MAX, cv::Point(-rect.x, -rect.y));
                    accu[i](rect) = miniTmp + accu[i](rect);
                }
            }
        }
    }
}

/* Make the global score map */
cv::merge(accu, 3, scoreMap);
/* Conditional noise removal */
if(m_minRad > 2)
    cv::medianBlur(scoreMap, scoreMap, 3);
cvtColor(scoreMap, scoreMap, CV_BGR2GRAY);
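The final user-controlled threshold of step 9 is not part of the snippet above; it amounts to a single call (m_userThreshold being a hypothetical member holding the slider value):

/* Step 9: user-controlled threshold on the grey-scale score map */
cv::Mat foreground;
cv::threshold(scoreMap, foreground, m_userThreshold, 255, cv::THRESH_BINARY);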
I have two questions:
What is the name of such a foreground extraction approach, and do you see any reason why it could be improper to use in this case?
Since recursively finding and drawing contours is quite intensive, I would like to make my algorithm faster. Can you suggest any way to achieve this goal?
Thank you very much for your help,
Several years ago I wrote an application that detects cells in a microscope image. The code is written in Matlab, and I think now that it is more complicated than it should be (it was my first CV project), so I will only outline tricks that will actually be helpful for you. Btw, it was deadly slow, but it was really good at separating large groups of twin cells.
I defined a metric by which to evaluate the chance that a given point is the center of a cell:
- Luminosity decreases in a circular pattern around it
- The variance of the texture luminosity follows a given pattern
- a cell will not cover more than % of a neighboring cell
With it, I started to iteratively find the best cell, mark it as found, then look for the next one. Because such a search is expensive, I employed genetic algorithms to search faster in my feature space.
Some results are given below: