OpenCV, how to use arrays of points for smoothing and sampling contours? - c++

I am having trouble getting my head around smoothing and sampling contours in OpenCV (C++ API).
Let's say I have a sequence of points retrieved from cv::findContours (for instance, applied to this image):
Ultimately, I want:
To smooth a sequence of points using different kernels.
To resize the sequence using different types of interpolations.
After smoothing, I hope to have a result like this:
I also considered drawing my contour in a cv::Mat, filtering the Mat (using blur or morphological operations) and re-finding the contours, but this is slow and suboptimal. So, ideally, I would like to do the job using exclusively the point sequence.
I read a few posts on it and naively thought that I could simply convert a std::vector of cv::Point to a cv::Mat, and then OpenCV functions like blur/resize would do the job for me... but they did not.
Here is what I tried:
int main( int argc, char** argv ){
    cv::Mat conv, ori;
    ori = cv::imread(argv[1]);
    ori.copyTo(conv);
    cv::cvtColor(ori, ori, CV_BGR2GRAY);
    std::vector<std::vector<cv::Point> > contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(ori, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
    for(int k = 0; k < 100; k += 2){
        cv::Mat smoothCont;
        smoothCont = cv::Mat(contours[0]);
        std::cout << smoothCont.rows << "\t" << smoothCont.cols << std::endl;
        /* Try smoothing: no modification of the array */
        // cv::GaussianBlur(smoothCont, smoothCont, cv::Size(k+1,1), k);
        /* Try sampling: "Assertion failed (func != 0) in resize" */
        // cv::resize(smoothCont, smoothCont, cv::Size(0,0), 1, 1);
        std::vector<std::vector<cv::Point> > v(1);
        smoothCont.copyTo(v[0]);
        cv::drawContours(conv, v, 0, cv::Scalar(255,0,0), 2, CV_AA);
        std::cout << k << std::endl;
        cv::imshow("conv", conv);
        cv::waitKey();
    }
    return 1;
}
Could anyone explain how to do this?
In addition, since I am likely to work with much smaller contours, I was wondering how this approach would deal with border effects (e.g. when smoothing, since contours are circular, the last elements of the sequence must be used to calculate the new values of the first elements...).
Thank you very much for your advice.
Edit:
I also tried cv::approxPolyDP() but, as you can see, it tends to preserve extremal points (which I want to remove):
Epsilon=0
Epsilon=6
Epsilon=12
Epsilon=24
Edit 2:
As suggested by Ben, it seems that cv::GaussianBlur() is not supported but cv::blur() is. The result is much closer to what I expected. Here are my results using it:
k=13
k=53
k=103
To get around the border effect, I did:
cv::copyMakeBorder(smoothCont,smoothCont, (k-1)/2,(k-1)/2 ,0, 0, cv::BORDER_WRAP);
cv::blur(smoothCont, result, cv::Size(1,k),cv::Point(-1,-1));
result.rowRange(cv::Range((k-1)/2,1+result.rows-(k-1)/2)).copyTo(v[0]);
I am still looking for solutions to interpolate/sample my contour.

Your Gaussian blurring doesn't work because you're blurring in the column direction, but there is only one column. Using GaussianBlur() leads to a "feature not implemented" error in OpenCV when trying to copy the vector back to a cv::Mat (that's probably why you have this strange resize() in your code), but everything works fine using cv::blur(), no need to resize(). Try cv::Size(1,41), for example. Passing cv::BORDER_WRAP directly to the filter for the border issue doesn't seem to work either, but here is another thread where someone found a workaround for that.
Oh... one more thing: you said that your contours are likely to be much smaller. Smoothing a contour this way will shrink it. The extreme case is k = size_of_contour, which results in a single point. So don't choose your k too big.
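For reference, here is the whole pipeline packed into a helper, as a minimal sketch (the function name smoothContour is mine, and converting to float first is a precaution, since blur() is guaranteed to support float data):
std::vector<cv::Point> smoothContour(const std::vector<cv::Point>& contour, int k)
{
    /* k is assumed odd and smaller than the contour size; the contour is assumed closed */
    cv::Mat pts(contour, true);                  // N x 1, CV_32SC2 (deep copy)
    pts.convertTo(pts, CV_32F);
    /* Wrap the ends so the circular contour has no border artefacts */
    cv::Mat padded;
    cv::copyMakeBorder(pts, padded, (k-1)/2, (k-1)/2, 0, 0, cv::BORDER_WRAP);
    cv::blur(padded, padded, cv::Size(1, k));
    /* Crop the padding off and convert back to integer points */
    cv::Mat out = padded.rowRange((k-1)/2, padded.rows - (k-1)/2);
    out.convertTo(out, CV_32S);
    std::vector<cv::Point> result;
    out.copyTo(result);
    return result;
}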

Another possibility is to use the algorithm openFrameworks uses:
https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/graphics/ofPolyline.cpp#L416-459
It traverses the contour and essentially applies a low-pass filter using the points around each one. It should do exactly what you want, with low overhead (there's no reason to run a big filter over an image that's essentially just a contour).
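In spirit, it is just a windowed weighted average with circular indexing; here is a rough sketch of the idea (not the actual openFrameworks code; the name lowPassContour and the linear weight falloff are my own choices):
std::vector<cv::Point2f> lowPassContour(const std::vector<cv::Point>& in, int window)
{
    const int n = (int)in.size();
    std::vector<cv::Point2f> out(n);
    for (int i = 0; i < n; i++) {
        cv::Point2f sum(0.f, 0.f);
        float wsum = 0.f;
        for (int j = -window; j <= window; j++) {
            /* weight falls off linearly with distance from the centre point */
            float w = 1.0f - (float)(j < 0 ? -j : j) / (float)(window + 1);
            int idx = ((i + j) % n + n) % n;   /* circular index */
            cv::Point2f p((float)in[idx].x, (float)in[idx].y);
            sum += w * p;
            wsum += w;
        }
        out[i] = sum * (1.0f / wsum);
    }
    return out;
}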

How about approxPolyDP()?
It uses the Douglas-Peucker algorithm to 'smooth' a contour (basically getting rid of most of the contour's points and leaving the ones that represent a good approximation of your contour).
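For completeness, a typical call looks like this (the epsilon value is just an example to tune):
std::vector<cv::Point> approx;
cv::approxPolyDP(contours[0], approx, 6.0 /* epsilon, in pixels */, true /* closed contour */);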

From the OpenCV 2.1 documentation, section Basic Structures:
template<typename T>
explicit Mat::Mat(const vector<T>& vec, bool copyData=false)
You probably want to set the 2nd parameter to true in:
smoothCont = cv::Mat(contours[0]);
and try again (this way cv::GaussianBlur should be able to modify the data).
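i.e. something along these lines:
/* deep-copy the points so the filter is free to modify the buffer */
smoothCont = cv::Mat(contours[0], true);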

I know this was written a long time ago, but have you tried a big erode followed by a big dilate (an opening), and then finding the contours? It looks like a simple and fast solution, and I think it could work, at least to some degree.

Basically, sudden changes in the contour correspond to high-frequency content. An easy way to smooth your contour would be to compute the Fourier coefficients, treating the coordinates as points in the complex plane (x + iy), and then eliminate the high-frequency coefficients.
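Here is a rough sketch of that idea using cv::dft, treating the contour as a one-dimensional complex signal (the name fourierSmooth and the keep fraction, between 0 and 1, are my own assumptions to tune):
std::vector<cv::Point> fourierSmooth(const std::vector<cv::Point>& in, double keep)
{
    cv::Mat z(in, true);
    z.convertTo(z, CV_32F);                    /* x + iy as a 2-channel float column */
    cv::dft(z, z, cv::DFT_COMPLEX_OUTPUT);     /* 1-D DFT of the complex signal */
    int n = z.rows;
    int cutoff = (int)(keep * n / 2);
    if (cutoff < 1) cutoff = 1;
    /* the high frequencies live in the middle of the spectrum: zero them out */
    for (int i = cutoff; i < n - cutoff; i++)
        z.at<cv::Vec2f>(i) = cv::Vec2f(0.f, 0.f);
    cv::idft(z, z, cv::DFT_SCALE);
    z.convertTo(z, CV_32S);
    std::vector<cv::Point> out;
    z.copyTo(out);
    return out;
}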

My take ... many years later ...!
Maybe two easy ways to do it:
Loop a few times with dilate, blur, erode, and find the contours on that updated shape. I found 6-7 iterations give good results (a sketch of this follows below).
Create a bounding box of the contour, and draw an ellipse inside the bounding rectangle.
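A minimal sketch of the first option (the input mask name shape, the 5x5 kernel, and the re-binarizing threshold are assumptions to adapt):
cv::Mat mask = shape.clone();   /* 'shape' is your binary input image */
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
for (int i = 0; i < 7; i++) {   /* 6-7 iterations, as suggested above */
    cv::dilate(mask, mask, kernel);
    cv::blur(mask, mask, cv::Size(5, 5));
    cv::erode(mask, mask, kernel);
}
cv::threshold(mask, mask, 127, 255, cv::THRESH_BINARY);   /* re-binarize after the blur */
std::vector<std::vector<cv::Point> > contours;
cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);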
Adding the visual results below:

This works for me. The edges are smoother than before:
medianBlur(mat, mat, 7)
morphologyEx(mat, mat, MORPH_OPEN, getStructuringElement(MORPH_RECT, Size(12.0, 12.0)))
val contours = getContours(mat)
This is opencv4android code.


Search contours on the image

I'm trying to solve a recognition problem with the help of the OpenCV library for C++.
I have some text (below) and I want to separate each symbol in it using the cvFindContours(...) function. Afterwards, I want to feed each separated symbol into a neural network for recognition. That part is all fine: I can get all the contours in my image and I can draw them on my image with the cvDrawContours(...) function (below). But cvFindContours(...) returns an unordered sequence (a pointer to the first contour in this sequence) containing all the found contours, and for my task the order is very important.
CVAPI(int) cvFindContours( CvArr* image, CvMemStorage* storage, CvSeq** first_contour,
                           int header_size CV_DEFAULT(sizeof(CvContour)),
                           int mode CV_DEFAULT(CV_RETR_LIST),
                           int method CV_DEFAULT(CV_CHAIN_APPROX_SIMPLE),
                           CvPoint offset CV_DEFAULT(cvPoint(0,0)));
image - the source image
storage - memory storage that will contain the retrieved contours
first_contour - pointer to the first contour in the storage
mode - retrieval mode (I use CV_RETR_EXTERNAL to search for external contours)
method - approximation method (I'm using the default, CV_CHAIN_APPROX_SIMPLE)
How can I make the cvFindContours(...) function return the contours in the order in which they appear in the picture? Is it possible?
Thanks!
You can't directly force findContours to yield contours in a certain order (I mean there is no parameter to tune this in the function call).
To sort your contours in "read text" order, you could do a loop which goes through all your contours and retrieves, for each contour, the top-left-most point, either by going directly through all points in each contour object, or by using a bounding box (see minAreaRect, for example).
Once you have all these points, sort them from left to right and top to bottom (some adjustments will probably have to be made, like treating all contours starting within a range of heights as part of the same text line).
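Here is a rough sketch of that idea using the C++ API (the helper name sortInReadingOrder and the rowTolerance parameter, which quantizes y coordinates into text-line buckets, are mine to tune):
void sortInReadingOrder(std::vector<std::vector<cv::Point> >& contours, int rowTolerance)
{
    std::sort(contours.begin(), contours.end(),
        [rowTolerance](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            cv::Rect ra = cv::boundingRect(a), rb = cv::boundingRect(b);
            int rowA = ra.y / rowTolerance, rowB = rb.y / rowTolerance; /* text-line bucket */
            if (rowA != rowB) return rowA < rowB;   /* top lines first */
            return ra.x < rb.x;                     /* left to right within a line */
        });
}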
You have found bounding rectangles for all the contours present in your image. Instead of going with the left-most-point approach, you can sort your contours based on the centroid of each contour, which is more robust given that your application is text.
This answer from the OpenCV community might help provide a start.

How to align 2 images based on their content with OpenCV

I am totally new to OpenCV and I have started to dive into it. But I'd need a little bit of help.
So I want to combine these 2 images:
I would like the 2 images to match along their edges (ignoring the very right part of the image for now)
Can anyone please point me in the right direction? I have tried using the findTransformECC function. Here's my implementation:
cv::Mat im1 = [imageArray[1] CVMat3];
cv::Mat im2 = [imageArray[0] CVMat3];

// Convert images to gray scale
cv::Mat im1_gray, im2_gray;
cvtColor(im1, im1_gray, CV_BGR2GRAY);
cvtColor(im2, im2_gray, CV_BGR2GRAY);

// Define the motion model
const int warp_mode = cv::MOTION_AFFINE;

// Set a 2x3 or 3x3 warp matrix depending on the motion model
// and initialize it to identity
cv::Mat warp_matrix;
if ( warp_mode == cv::MOTION_HOMOGRAPHY )
    warp_matrix = cv::Mat::eye(3, 3, CV_32F);
else
    warp_matrix = cv::Mat::eye(2, 3, CV_32F);

// Specify the number of iterations.
int number_of_iterations = 50;

// Specify the threshold of the increment
// in the correlation coefficient between two iterations
double termination_eps = 1e-10;

// Define termination criteria
cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                          number_of_iterations, termination_eps);

// Run the ECC algorithm. The results are stored in warp_matrix.
findTransformECC(im1_gray, im2_gray, warp_matrix, warp_mode, criteria);

// Storage for warped image.
cv::Mat im2_aligned;
if (warp_mode != cv::MOTION_HOMOGRAPHY)
    // Use warpAffine for Translation, Euclidean and Affine
    warpAffine(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);
else
    // Use warpPerspective for Homography
    warpPerspective(im2, im2_aligned, warp_matrix, im1.size(), cv::INTER_LINEAR + cv::WARP_INVERSE_MAP);

UIImage* result = [UIImage imageWithCVMat:im2_aligned];
return result;
I have tried playing around with the termination_eps and number_of_iterations and increased/decreased those values, but they didn't really make a big difference.
So here's the result:
What can I do to improve my result?
EDIT: I have marked the problematic edges with red circles. The goal is to warp the bottom image and make it match the lines from the image above:
I did a little bit of research and I'm afraid the findTransformECC function won't give me the result I'd like to have :-(
Something important to add:
I actually have an array of those image "stripes", 8 in this case. They all look similar to the images shown here and they all need to be processed to match the lines. I have tried experimenting with the stitch function of OpenCV, but the results were horrible.
EDIT:
Here are the 3 source images:
The result should be something like this:
I transformed every image along the lines that should match. Lines that are too far away from each other can be ignored (the shadow and the piece of road on the right portion of the image)
From your images, it seems that they overlap. Since you said the stitch function didn't give you the desired results, implement your own stitching. I'm trying to do something similar myself. Here is a tutorial on how to implement it in C++: https://ramsrigoutham.com/2012/11/22/panorama-image-stitching-in-opencv/
You can use the Hough algorithm with a high threshold on the two images and then compare the vertical lines in both of them - most of them should be shifted a bit, but keep their angle.
This is what I've got from running this algorithm on one of the pictures:
Filtering out horizontal lines should be easy (they are represented as Vec4i), and then you can align the remaining lines together.
Here is an example of using it in OpenCV's documentation.
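A sketch of the line-detection and filtering part (all threshold values here are assumptions to tune, and gray stands for your input image):
std::vector<cv::Vec4i> lines, vertical;
cv::Mat edges;
cv::Canny(gray, edges, 50, 150);
cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80 /* votes */, 50 /* min length */, 10 /* max gap */);
for (size_t i = 0; i < lines.size(); i++) {
    const cv::Vec4i& l = lines[i];
    double dx = std::abs((double)(l[2] - l[0])), dy = std::abs((double)(l[3] - l[1]));
    if (std::atan2(dy, dx) > CV_PI / 3)   /* steeper than 60 degrees: treat as vertical */
        vertical.push_back(l);
}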
UPDATE: another thought. Aligning the lines together can be done with a concept similar to how the cross-correlation function works. It doesn't matter if picture 1 has 10 lines and picture 2 has 100 lines: the shift position with the most lines aligned (which is, roughly, the maximum of the CCF) should be pretty close to the answer, though this might require some tweaking - for example, giving a weight to every line based on its length, angle, etc. Computer vision never has a direct way, huh :)
UPDATE 2: I actually wonder if taking the bottom pixel row of the top image as array 1 and the top pixel row of the bottom image as array 2, running a general CCF over them, and then using its maximum as the shift could work too... But I think it would be a well-known method if it worked well.

Can findContour in OpenCV work like bwlabel in Matlab?

Some people on this Q&A site suggested I use findContour to imitate what bwlabel does in Matlab. But I am not sure, because I think a contour is the closed shape of detected edges, while an element from bwlabel is a connected shape. I guess they might be logically the same. What about in practice? Are they really the same?
Use either of these two libraries: cvBlobsLib or cvBlob. You will get many features of the connected components, such as size, contour, ellipticity, and bounding box. You can filter blobs and merge two or more blobs. Try it. Under the hood, bwlabel is a two-scan connected-component algorithm, whereas cvBlob/cvBlobsLib use a one-scan algorithm.
bwlabel will give you the image's connected components, i.e. a different label for each connected object on the background.
Probably what you mean is what the combination of im2bw and imcontour provides, i.e. binarizing the image and trivially finding the single contour (boundary) per retained object in the output.
Consider the following example:
I = imread('coins.png'); % grayscale
level = graythresh(I); % find threshold
BW = im2bw(I, level); % threshold image
imcontour(BW, 1); % plot single contour
For a grayscale image you can increase the number of requested contour levels, though note that OpenCV's findContours operates on binary images.
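A rough C++ counterpart of that idea fills each found object with its own label, which is essentially what bwlabel's output looks like (a sketch, not a drop-in replacement):
cv::Mat bw;   /* binary input image, CV_8U */
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
std::vector<std::vector<cv::Point> > contours;
cv::findContours(bw.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE); /* clone: old versions modify the input */
cv::Mat labels = cv::Mat::zeros(bw.size(), CV_32S);
for (size_t i = 0; i < contours.size(); i++)
    cv::drawContours(labels, contours, (int)i, cv::Scalar((double)(i + 1)), cv::FILLED); /* label i+1 per object */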
I found an exact article about this. The quick answer is: yes, their eventual output will be the same. So I might go with findContour after all, considering cvBlob still uses the old C-style API and has its own implementation of finding contours.

Detect clusters of circular objects by iterative adaptive thresholding and shape analysis

I have been developing an application to count circular objects, such as bacterial colonies, from pictures.
What makes it easy is the fact that the objects are generally well distinct from the background.
However, a few difficulties make the analysis tricky:
The background presents gradual as well as rapid intensity changes.
At the edges of the container, the objects are elliptic rather than circular.
The edges of the objects are sometimes rather fuzzy.
The objects cluster.
The objects can be very small (6 px in diameter).
Ultimately, the algorithms will be used (via a GUI) by people who do not have a deep understanding of image analysis, so the parameters must be intuitive and very few.
The problem has been addressed many times in the scientific literature and "solved", for instance using circular Hough transforms or watershed approaches, but I have never been satisfied by the results.
One simple approach that was described is to get the foreground by adaptive thresholding and to split the clustered objects (as I described in this post) using a distance transform.
I have successfully implemented this method, but it could not always deal with sudden changes in intensity. Also, I have been asked by peers to come up with a more "novel" approach.
I was therefore looking for a new method to extract the foreground, and investigated other thresholding/blob detection methods.
I tried MSERs but found out that they were not very robust and quite slow in my case.
I eventually came up with an algorithm that, so far, gives me excellent results:
I split the three channels of my image and reduce their noise (blur/median blur). For each channel:
I apply a manual implementation of the first step of adaptive thresholding by calculating the absolute difference between the original channel and a convolved (by a large kernel blur) one. Then, for all the relevant values of threshold:
I apply a threshold on the result of 2)
find contours
validate or invalidate contours on the basis of their shape (size, area, convexity...)
only the valid continuous regions (i.e. delimited by contours) are then redrawn in an accumulator (1 accumulator per channel).
After accumulating continuous regions over the values of threshold, I end up with a map of "region scores". The regions with the highest intensity are those that fulfilled the morphology filter criteria most often.
The three maps (one per channel) are then converted to grayscale and thresholded (the threshold is controlled by the user).
Just to show you the kind of image I have to work with:
This picture shows parts of 3 sample images on the top, and the result of my algorithm (blue = foreground) on the respective parts on the bottom.
Here is my C++ implementation of steps 3-7:
/*
 * cv::Mat dst[3] is the result of the absolute difference between the original and convolved channel.
 * MCF(std::vector<cv::Point>, int, int) is a filter function that returns a positive int only if the input contour is valid.
 */
/* Allocate 3 matrices (1 per channel) */
cv::Mat accu[3];
/* We define the maximal threshold to be tried as half of the absolute maximal value in each channel */
int maxBGR[3];
for(unsigned int i=0; i<3; i++){
    double min, max;
    cv::minMaxLoc(dst[i], &min, &max);
    maxBGR[i] = max/2;
    /* In addition, we fill the accumulators with zeros */
    accu[i] = cv::Mat(compos[0].rows, compos[0].cols, CV_8U, cv::Scalar(0));
}
/* These loops are intended to be multithreaded using
   #pragma omp parallel for collapse(2) schedule(dynamic)
   For each channel */
for(unsigned int i=0; i<3; i++){
    /* For each value of threshold (m_step can be > 1 in order to save time) */
    for(int j=0; j<maxBGR[i]; j += m_step){
        /* Temporary matrix */
        cv::Mat tmp;
        std::vector<std::vector<cv::Point> > contours;
        /* Threshold dst by j */
        cv::threshold(dst[i], tmp, j, 255, cv::THRESH_BINARY);
        /* Find continuous regions */
        cv::findContours(tmp, contours, CV_RETR_LIST, CV_CHAIN_APPROX_TC89_L1);
        if(contours.size() > 0){
            /* Test each contour */
            for(unsigned int k=0; k<contours.size(); k++){
                int valid = MCF(contours[k], m_minRad, m_maxRad);
                if(valid > 0){
                    /* I found that redrawing was very much faster if the given contour was copied into a smaller container.
                     * I do not really understand why, though. For instance,
                     *   cv::drawContours(miniTmp, contours, k, cv::Scalar(1), -1, 8, cv::noArray(), INT_MAX, cv::Point(-rect.x, -rect.y));
                     * is slower, especially if contours is very long.
                     */
                    std::vector<std::vector<cv::Point> > tpv(1);
                    std::copy(contours.begin()+k, contours.begin()+k+1, tpv.begin());
                    /* We make a ROI here */
                    cv::Rect rect = cv::boundingRect(tpv[0]);
                    cv::Mat miniTmp(rect.height, rect.width, CV_8U, cv::Scalar(0));
                    cv::drawContours(miniTmp, tpv, 0, cv::Scalar(1), -1, 8, cv::noArray(), INT_MAX, cv::Point(-rect.x, -rect.y));
                    accu[i](rect) = miniTmp + accu[i](rect);
                }
            }
        }
    }
}
/* Make the global score map */
cv::merge(accu, 3, scoreMap);
/* Conditional noise removal */
if(m_minRad > 2)
    cv::medianBlur(scoreMap, scoreMap, 3);
cvtColor(scoreMap, scoreMap, CV_BGR2GRAY);
I have two questions:
What is the name of this kind of foreground extraction approach, and do you see any reason why it could be improper to use in this case?
Since recursively finding and drawing contours is quite intensive, I would like to make my algorithm faster. Can you suggest any way to achieve this goal?
Thank you very much for your help.
Several years ago I wrote an application that detects cells in a microscope image. The code is written in Matlab, and I think now that it is more complicated than it should be (it was my first CV project), so I will only outline the tricks that will actually be helpful for you. Btw, it was deadly slow, but it was really good at separating large groups of twin cells.
I defined a metric by which to evaluate the chance that a given point is the center of a cell:
- Luminosity decreases in a circular pattern around it
- The variance of the texture luminosity follows a given pattern
- A cell will not cover more than a certain % of a neighboring cell
With it, I started to iteratively find the best cell, mark it as found, then look for the next one. Because such a search is expensive, I employed genetic algorithms to search faster in my feature space.
Some results are given below:

Contours opencv : How to eliminate small contours in a binary image

I am currently working on an image processing project. I am using OpenCV 2.3.1 with VC++.
I have written the code such that the input image is filtered to keep only the blue color and converted to a binary image. The binary image has some small objects which I don't want. I wanted to eliminate those small objects, so I used OpenCV's cvFindContours() method to detect contours in the binary image. But the problem is that I can't eliminate the small objects in the image output. I tried the cvContourArea() function, but it didn't work properly; the erode function also didn't work properly.
Could someone please help me with this problem?
The binary image which I obtained :
The result/output image which I want to obtain :
Ok, I believe your problem could be solved with the bounding box demo recently introduced by OpenCV.
As you have probably noticed, the object you are interested in should be inside the largest rectangle drawn in the picture. Luckily, this code is not very complex, and I'm sure you can figure it all out by investigating and experimenting with it.
Here is my solution to eliminate small contours.
The basic idea is to check the length/area of each contour, then delete the smaller ones from the vector container.
Normally you will get contours like this:
Mat canny_output; // example from the OpenCV tutorial
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
Canny(src_img, canny_output, thresh, thresh*2, 3); // with or without, explained later
findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0,0));
With Canny() pre-processing, you will get contour segments; however, each segment is stored with its boundary pixels as a closed ring. In this case, you can check the length and delete the small ones like this:
for (vector<vector<Point> >::iterator it = contours.begin(); it != contours.end(); )
{
    if (it->size() < contour_length_threshold)
        it = contours.erase(it);
    else
        ++it;
}
Without Canny() preprocessing, you will get contours of objects.
Similarly, you can also use area to define a threshold to eliminate small objects, as the OpenCV tutorial shows:
vector<Point> contour = contours[i];
double area0 = contourArea(contour);
Note that contourArea() computes the area via Green's formula, so it can differ from the number of non-zero pixels inside the contour.
Are you sure filtering by small contour area didn't work? It's always worked for me. Can we see your code?
Also, as sue-ling mentioned, it's a good idea to use both erode and dilate to approximately preserve area. To remove small noisy bits, use erode first, and to fill in holes, use dilate first.
And another aside, you may want to check out the new C++ versions of the cv* functions if you weren't aware of them already (documentation for findContours). They're much easier to use, in my opinion.
Judging by the before and after images, you need to determine the area of all the white blobs, then apply a threshold area value. This would eliminate all areas less than the value and leave only the large white region seen in the 2nd image. After using the cvFindContours function, try using 0th-order moments. This would return the area of the blobs in the image. This link might be helpful in implementing what I've just described.
http://www.aishack.in/2010/07/tracking-colored-objects-in-opencv/
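For instance, a sketch of that moments-based filtering using the C++ API (minArea and the output image are assumptions to adapt):
for (size_t i = 0; i < contours.size(); i++) {
    cv::Moments m = cv::moments(contours[i]);
    if (m.m00 < minArea)   /* m00 is the 0th-order moment, i.e. the area */
        continue;          /* skip small blobs */
    cv::drawContours(output, contours, (int)i, cv::Scalar(255), cv::FILLED);
}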
I believe you can use morphological operators like erode and dilate (read more here).
You need to perform erosion with a kernel size close to the radius of the circle on the right (the one you want to eliminate), followed by dilation using the same kernel to fill the gaps created by the erosion step.
FYI, erosion followed by dilation with the same kernel is called opening.
The code will be something like this:
int erosion_size = 30; // adjust to your application
Mat erode_element = getStructuringElement( MORPH_ELLIPSE,
                                           Size( 2*erosion_size + 1, 2*erosion_size + 1 ),
                                           Point( erosion_size, erosion_size ) );
erode( binary_img, binary_img, erode_element );
dilate( binary_img, binary_img, erode_element );
It is not a fast way, but it may be useful in some cases.
There is a new function in OpenCV 3.0: connectedComponentsWithStats. With it we can get the area of each connected component and eliminate the unnecessary ones. So we can easily remove a circle with holes that has the same bounding box as a solid circle.
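A sketch of that approach (minArea is a threshold you would pick for your images):
cv::Mat labels, stats, centroids;
int n = cv::connectedComponentsWithStats(binary_img, labels, stats, centroids, 8, CV_32S);
cv::Mat cleaned = cv::Mat::zeros(binary_img.size(), CV_8U);
for (int i = 1; i < n; i++) {   /* label 0 is the background */
    if (stats.at<int>(i, cv::CC_STAT_AREA) >= minArea)
        cleaned.setTo(255, labels == i);   /* keep this component */
}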