I have performed a morphological closing operation and I am getting different results with the C and C++ APIs (OpenCV 2.4.2).
Input:
With OpenCV 'C':
//Set ROI
//Perform Gaussian smoothing
//Perform Canny edge analysis
cvMorphologyEx( src, dst, temp, NULL, CV_MOP_CLOSE, 5 ); // NULL element = default 3x3 structuring element
RESULT:
http://i47.tinypic.com/33e0yfb.png
With OpenCV C++:
//Set ROI
//Perform Gaussian smoothing
//Perform Canny edge analysis
cv::morphologyEx( src, dst, cv::MORPH_CLOSE, cv::Mat(), cv::Point(-1,-1), 5 );
RESULT:
http://i50.tinypic.com/i5vxjo.png
As you can see, the C++ API yields an output with a white/gray border, so the two APIs produce different results.
I have tried different borderType values with the C++ API, but it always yields the same result.
How can I get the same output as the C API in C++? I need this because the difference affects the detected contours.
Thanks in advance
Thank you everybody for answering this question. I have found my error and will describe it briefly below. I hope it helps others facing this problem.
1) I had executed the C and C++ commands on a ROI image. Apparently, the OpenCV 'C' and 'C++' APIs treat ROIs differently.
2) In 'C', a ROI is treated as a completely separate image. Hence, when you execute functions such as cvSmooth, cvDilate, etc., where one needs to specify a border pixel extrapolation method, the 'C' API does not refer back to the original image for pixels beyond the left/right/top/bottom-most pixel; it extrapolates the pixel values according to the method you specified.
3) In 'C++', however, I have found that it always refers back to the original image for pixels beyond the left/right/top/bottom-most pixel. Hence, the border pixel extrapolation method you specify doesn't affect your output if there are pixels in the original image around your ROI.
I think it applies the border pixel extrapolation method to the original image instead of the ROI, unlike the 'C' API. I don't know if this is a bug; I haven't completely read the OpenCV 2.4.2 C++ API documentation. (Please correct me if I am wrong.)
To support my claim, I have posted the input/output images below.
Output for the 'C' and 'C++' APIs:
INPUT:
(input image)
OpenCV 'C' API:
IplImage *src = cvLoadImage("input.png", 0);
cvSetImageROI( src, cvRect(33,19,250,110));
cvSaveImage( "before_gauss.png", src );
cvSmooth( src, src, CV_GAUSSIAN );
cvSaveImage("after_gauss.png", src);
IplConvKernel *element = cvCreateStructuringElementEx(3,3,1,1,CV_SHAPE_RECT);
cvCanny( src, src, 140, 40 );
cvSaveImage("after_canny.png", src);
cvDilate( src, src, element, 5);
cvSaveImage("dilate.png", src);
OUTPUT:
(before_gauss image)
(after_gauss image)
(after_canny image)
(dilate image)
OpenCV 'C++' API:
cv::Mat src = cv::imread("input.png", 0);
cv::Mat src_ROI = src( cv::Rect(33,19,250,110));
cv::imwrite( "before_gauss.png", src_ROI );
cv::GaussianBlur( src_ROI, src_ROI, cv::Size(3,3),0 );
cv::imwrite( "after_gauss.png", src_ROI );
cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3), cv::Point(1,1));
cv::Canny( src_ROI, src_ROI, 140, 40);
cv::imwrite( "after_canny.png", src_ROI );
cv::dilate( src_ROI, src_ROI, element, cv::Point(1,1), 5);
cv::imwrite( "dilate.png", src_ROI );
OUTPUT:
(before_gauss image)
(after_gauss image; NOTE: the borders are no longer completely black, they are grayish)
(after_canny image)
(dilate image)
SOLUTION:
Create a separate copy of the ROI and use it for further analysis:
cv::Mat new_src_ROI;
src_ROI.copyTo( new_src_ROI );
Use new_src_ROI for further analysis.
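For reference, here is the whole workaround in one piece (a minimal sketch assuming the same input and ROI as above):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", 0);
    cv::Mat src_ROI = src( cv::Rect(33,19,250,110) );

    // Deep-copy the ROI so that filters cannot read pixels of the parent image
    cv::Mat new_src_ROI;
    src_ROI.copyTo( new_src_ROI );

    cv::GaussianBlur( new_src_ROI, new_src_ROI, cv::Size(3,3), 0 );
    cv::Canny( new_src_ROI, new_src_ROI, 140, 40 );

    cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3,3), cv::Point(1,1) );
    cv::dilate( new_src_ROI, new_src_ROI, element, cv::Point(1,1), 5 );

    cv::imwrite( "dilate.png", new_src_ROI );
    return 0;
}

As an aside, later OpenCV versions also let you OR cv::BORDER_ISOLATED into the borderType of the filtering functions to stop them from reading outside the submatrix; I haven't verified that on 2.4.2, so the copy above is the safe route.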
If anyone has a better solution, please post it below.
The defaults are not the same between C and C++, especially the structuring element.
In C, the default structuring element is:
cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_RECT)
whereas in C++, the default structuring element is:
getStructuringElement(MORPH_RECT, Size(1+iterations*2,1+iterations*2));
You should specify all fields (including the anchor) if you want the same results.
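For example, a sketch of the C++ call with everything spelled out (matching the C default above, rather than relying on an empty Mat()):

// Explicit 3x3 rectangular element with anchor (1,1), the same as
// cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_RECT) in C
cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3,3), cv::Point(1,1) );
cv::morphologyEx( src, dst, cv::MORPH_CLOSE, element, cv::Point(1,1), 5 );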
Check out this sample code from the OpenCV v2.4.2 documentation. You might also want to check this code for using the Canny edge detector. These will hopefully help you track down the error :)
Also note that morphological closing is an idempotent operator, so it can be applied multiple times without changing the result beyond the initial application.
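If you want to convince yourself of that, here is a quick check (a sketch; it assumes a single-channel image src, and exact equality may be broken near the border depending on the extrapolation):

cv::Mat once, twice;
cv::Mat element = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3,3) );
cv::morphologyEx( src, once, cv::MORPH_CLOSE, element );
cv::morphologyEx( once, twice, cv::MORPH_CLOSE, element );
// Closing is idempotent: applying it a second time changes nothing
CV_Assert( cv::countNonZero( once != twice ) == 0 );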
Related
I have a black and white image with lines. Some of these lines, however, are not perfectly connected where they should be (though they are close). I have attached an example.
I want to make it so that the lines are close to 1 px thick. I have been playing with a few ideas but not having much success. I have tried dilate, erode, and dilate, like so:
int dsize = 5;
cv::Mat element = getStructuringElement(cv::MORPH_CROSS,
cv::Size(2*dsize + 1, 2*dsize + 1),
cv::Point( dsize, dsize ) );
cv::dilate( src, src, element );
Is there a better way, as opposed to just dilating and eroding, to do specifically what I am after?
There are at least a couple of solutions we can try out, but I'm going to need more info about your problem. For example, are you trying to close the (in)complete contour of a detected object? How much "contour degradation" are you willing to accept to approximate a fully closed contour?
Here's a first and very basic solution, assuming you need a 1-pixel-wide contour. It involves dilating the image N times and then applying a thinning/skeletonizing transformation. (The thinning function is part of the Extended Image Processing module of OpenCV.)
Let's see the code:
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/ximgproc.hpp>
//Read input image:
std::string imagePath = "C://opencvImages//lineImg.png";
cv::Mat imageInput = cv::imread( imagePath );
//Convert it to grayscale:
cv::Mat grayImg;
cv::cvtColor( imageInput, grayImg, cv::COLOR_BGR2GRAY );
//Get binary image via Otsu:
cv::threshold( grayImg, grayImg, 0, 255, cv::THRESH_OTSU );
//Dilate the binary image with 5 iterations:
cv::Mat morphKernel = cv::getStructuringElement( cv::MORPH_RECT, cv::Size(3, 3) );
int morphIterations = 5;
cv::morphologyEx( grayImg, grayImg, cv::MORPH_DILATE, morphKernel, cv::Point(-1,-1), morphIterations );
This is the Dilated image:
//Get the skeleton:
cv::Mat skel;
int algorithmType = 1; // 1 = cv::ximgproc::THINNING_GUOHALL (0 = THINNING_ZHANGSUEN)
cv::ximgproc::thinning( grayImg, skel, algorithmType );
This is the skeleton image. The line has been "thinned" back to a width of 1 pixel:
I don't know if this is good enough for your application, but, as I said, depending on what you are doing we can try a couple of alternative solutions.
Are you the one drawing the lines onto the Mat? If so, it seems like the problem should be handled earlier, before it appears.
You could draw the lines into a bigger cv::Mat and then resize it down to make your lines thicker.
If you want complete lines, don't draw each point on the Mat individually; draw lines between consecutive points, so that Bresenham's algorithm gives you connected lines.
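A minimal sketch of that idea (the points and canvas size are illustrative):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

int main()
{
    // Illustrative sample positions; in practice these come from your data
    std::vector<cv::Point> points;
    points.push_back( cv::Point(10, 10) );
    points.push_back( cv::Point(40, 25) );
    points.push_back( cv::Point(70, 12) );

    cv::Mat canvas = cv::Mat::zeros( 100, 100, CV_8UC1 );
    for( size_t i = 1; i < points.size(); i++ )
    {
        // cv::line rasterizes with Bresenham's algorithm, so consecutive
        // samples stay connected even when they are several pixels apart
        cv::line( canvas, points[i-1], points[i], cv::Scalar(255), 1 );
    }
    return 0;
}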
OpenCV 2.4.10.
At the end of the code below, a dilation is called with a 9-pixel-wide disk structuring element on a matrix, Img2. Originally, Img2 was created from Img1 by a simple header copy (Img2 = Img1). Note that Img1 was made without copying data from Img0, via Ranges, such that Img1 doesn't have the first and last 3 rows of Img0. The result of the dilation was incorrect.
However, if I used a full copy for Img2 via clone, Img2 = Img1.clone(), the dilation worked correctly.
Note that the output of imwrite (not shown in the code below) on Img2 was the same regardless of which copy method I used. So shouldn't the morphological operators work the same too?
Mat Tmp;
Mat Img1=Img0(Range(3-1, Img0.rows - 3+1),Range::all());
Img1(Range(0,1), Range::all()) = 0;
Img1(Range(Img1.rows-1,Img1.rows), Range::all()) = 0;
// bad
//Mat Img2 = Img1; // header only copy: the dilation results are wrong on the top and bottom
// good
Mat Img2 = Img1.clone(); // full copy, dilation works right.
Mat Disk4;
// exact replacement for MATLAB's strel('disk',4,0); somewhat different from OpenCV's ellipse structuring element
MakeFilledEllipse( 4, 4, Disk4);
// If I use Img2 from clone, this is the same as MATLAB's result.
// If I just do a header copy, some areas at the top and bottom are different.
dilate(Img2, Tmp,Disk4, Point(-1,-1),1,BORDER_CONSTANT, Scalar(0));
EDIT: I subsequently simplified the code so that Img2 replaces Img1 (there is no Img1 anymore), in order to reproduce the problem with only one level of Mat header indirection. It still failed (was incorrect) in the same way.
Mat Tmp;
Mat Img2=Img0(Range(3-1, Img0.rows - 3+1),Range::all());
Img2(Range(0,1), Range::all()) = 0;
Img2(Range(Img2.rows-1,Img2.rows), Range::all()) = 0;
Mat Disk4;
// exact replacement for MATLAB's strel('disk',4,0); somewhat different from OpenCV's ellipse structuring element
MakeFilledEllipse( 4, 4, Disk4);
// bad result
dilate(Img2, Tmp,Disk4, Point(-1,-1),1,BORDER_CONSTANT, Scalar(0));
The Mat became non-continuous as a result of selecting a ROI within it.
I'm not sure that in your case Mat Mat::operator()( Range _rowRange, Range _colRange ) const will set CONTINUOUS_FLAG to false, but SUBMATRIX_FLAG will surely be set, and that can lead to a different operation.
Here, I guess some parts of cv::dilate() (such as the border pixel extrapolation method, or the structuring element that determines the shape of a pixel neighborhood) affect your output if there are pixels in the original image around your ROI.
I suggest using the following to reorder the memory before calling cv::dilate():
if (!mat.isContinuous() || mat.isSubmatrix())
{
mat = mat.clone();
}
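Applied to your simplified code, that would look like this (a sketch; Img0, Tmp and MakeFilledEllipse are from your post):

Mat Img2 = Img0(Range(3-1, Img0.rows - 3+1), Range::all());
Img2(Range(0,1), Range::all()) = 0;
Img2(Range(Img2.rows-1, Img2.rows), Range::all()) = 0;

// Reorder the memory so that dilate sees a plain, standalone matrix
if (!Img2.isContinuous() || Img2.isSubmatrix())
{
    Img2 = Img2.clone();
}

Mat Disk4;
MakeFilledEllipse( 4, 4, Disk4 );
dilate(Img2, Tmp, Disk4, Point(-1,-1), 1, BORDER_CONSTANT, Scalar(0));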
I'm trying to segment an input image and blur it tile by tile, but after the cv::blur invocations on adjacent tiles I get border pixels that differ from what I get when applying cv::blur to the whole image at once.
Mat upper(im, Rect( 0,0, 10,10 ));
Mat lower(im, Rect( 0,11, 10,20 ));
blur( upper, upper, Size( 5, 5 ) );
blur( lower, lower, Size( 5, 5 ) );
It looks like the library version I use (2.4.8) doesn't do what I expect, judging from the following:
"Unlike the earlier versions of OpenCV, now the filtering operations fully support the notion of image ROI, that is, pixels outside of the ROI but inside the image can be used in the filtering operations."
(Taken from the FilterEngine::apply description in the documentation.)
P.S. 1: I've tried to extract the cv::boxFilter implementation and change the srcRoi parameter value, but got wrong results as well.
Mat src = im.clone();
Mat dst = src; // Trying to perform the operation in-place
Size ksize( 5, 5 );
Point anchor(-1,-1);
Ptr<FilterEngine> f = createBoxFilter(
src.type(), dst.type(),
ksize, anchor, true, BORDER_DEFAULT
);
f->apply(
src, dst,
Rect(0,0,10,10),
Point(0,0), false
);
f->apply(
src, dst,
Rect(0,0,10,10),
Point(0,11), false
);
The problem you are seeing occurs because you are doing this in-place. Once you've blurred part of the image, you have overwritten source pixels that are needed for blurring the adjacent part of the image. The solution is to not work in-place, so that the original source pixels remain available for whatever part of the image you want to blur.
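A minimal sketch of the fix with two adjacent 10x10 tiles (im is the input from the question; the key point is that src is never written to):

cv::Mat src = im.clone();   // pristine source, read-only from here on
cv::Mat dst = im.clone();   // separate destination

cv::Mat srcUpper( src, cv::Rect(0, 0, 10, 10) );
cv::Mat dstUpper( dst, cv::Rect(0, 0, 10, 10) );
cv::blur( srcUpper, dstUpper, cv::Size(5, 5) );

cv::Mat srcLower( src, cv::Rect(0, 10, 10, 10) );
cv::Mat dstLower( dst, cv::Rect(0, 10, 10, 10) );
cv::blur( srcLower, dstLower, cv::Size(5, 5) );

Because each source tile is a submatrix of an untouched src, the filter is free to read pixels just outside each tile, and the tile borders come out the same as blurring the whole image at once.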
I want to detect circles in an image using OpenCV and C++. I COULD do that by referring to the official documentation and adjusting the parameters of the piece of code written by the OpenCV Team.
So, the code I'm working with is as follows: (parameters already adjusted)
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
using namespace cv;
int main(int, char** argv)
{
Mat src, src_gray;
/// Read the image
src = imread( argv[1], 1 );
if( !src.data )
{ return -1; }
/// Convert it to gray
cvtColor( src, src_gray, CV_BGR2GRAY );
/// Reduce the noise so we avoid false circle detection
GaussianBlur( src_gray, src_gray, Size(9, 9), 2, 2 );
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles( src_gray, circles, CV_HOUGH_GRADIENT, 6.0, 5, 110, 70, 3, 20 );
/// Draw the circles detected
for( size_t i = 0; i < circles.size(); i++ )
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][2]));
int radius = cvRound(circles[i][3]);
// circle center
circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
// circle outline
circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
/// Show your results
namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
imshow( "Hough Circle Transform Demo", src );
waitKey(0);
src.release();
src_gray.release();
return 0;
}
And the image whose circles I want to detect is the following: Test image
These are actually the contours of two blobs that I obtained using cvBlobsLib and redrew as a new image.
The algorithm is able to identify the centers of each circle, but when I hit any key to close the program, it crashes... :( and I have to close it forcefully.
I need to adapt that algorithm to run in a camera, so I cannot proceed with the implementation while it crashes like that.
So, does anyone know what could be causing this problem?
I'm doing the development on Visual Studio 2012 and OpenCV version 2.4.2.
If someone could give me a suggestion of what it could be or maybe try running the algorithm, I would be very grateful!
I have four pieces of advice for you.
First: to see whether a Mat is empty or not, use
if( src.empty() ) // instead of !src.data
Chances are src.data holds a random (stale) value for an empty Mat.
Second: correct the indices like this:
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
(actually you don't need cvRound, but whatever).
Third: it is worth checking whether imread understood that you want to load the image in color mode, by looking at its number of channels:
src.channels()==3
//or
src.type() == CV_8UC3 // that is what you are really counting on.
Otherwise, a conversion like CV_BGR2GRAY applied to an already-grayscale image could cause weird behaviour.
Fourth: you don't need to release Mats. That's the reason the Mat class was created in the first place: it automatically takes care of releasing its data.
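Putting the four points together, the relevant parts of your program would become (a sketch of just the changed lines):

src = imread( argv[1], 1 );
if( src.empty() )                       // 1) robust emptiness check
    return -1;
CV_Assert( src.type() == CV_8UC3 );     // 3) we really did get a 3-channel color image

// ... GaussianBlur and HoughCircles as before ...

for( size_t i = 0; i < circles.size(); i++ )
{
    // 2) correct indices: [0]=x, [1]=y, [2]=radius
    Point center( cvRound(circles[i][0]), cvRound(circles[i][1]) );
    int radius = cvRound( circles[i][2] );
    circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );
    circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 );
}
// 4) no src.release()/src_gray.release() needed; Mat cleans up after itself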
I don't see anything obvious except that you are overrunning the Vec3f bounds:
Point center(cvRound(circles[i][0]), cvRound(circles[i][2]));
int radius = cvRound(circles[i][3]);
Instead of index 2 and 3, I think you meant 1 and 2.
That wouldn't necessarily be causing the crash (by corrupting the stack or heap), but then again it is undefined behaviour...
The other thing I suggest is removing the two lines that follow the waitKey call:
src.release();
src_gray.release();
These are handled automatically by the destructor in the object, so I don't see why you need to do it manually. That might not change a thing, of course.
From there, if you are still getting crashes you should start omitting sections of your code until you can isolate the one that crashes it.
I started feeling suspicious about the environment, so I got a friend who had OpenCV all set up to try out my code, and he could run it with no problem...
So I reinstalled everything, but this time I chose Microsoft Visual Studio 2010 SP1 and OpenCV 2.4.3, and it worked correctly.
If someone is having the same problem, I recommend downgrading to VS2010. Also, this video installation guide was really helpful when I was setting up the environment!
Thank you :)
I was having the same problem. Please ensure that, when running your application in release mode, you are using the OpenCV release DLLs. Doing this solved my problem.
Reference:
https://code.ros.org/trac/opencv/ticket/953
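For instance, with the prebuilt OpenCV 2.4.2 Windows packages the debug import libraries carry a d suffix, so the linker inputs differ per configuration (an illustrative subset; the exact names depend on your version and modules):

Release: opencv_core242.lib  opencv_imgproc242.lib  opencv_highgui242.lib
Debug:   opencv_core242d.lib opencv_imgproc242d.lib opencv_highgui242d.lib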
I'm trying to make a program that detects an object of any shape using a video camera/webcam, based on the Canny filter and the contour-finding function. Here is my program:
int main( int argc, char** argv )
{
CvCapture *cam;
CvMoments moments;
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = NULL;
CvSeq* contours2 = NULL;
CvPoint2D32f center;
int i;
cam=cvCaptureFromCAM(0);
if(cam==NULL){
fprintf(stderr,"Cannot find any camera. \n");
return -1;
}
while(1){
IplImage *img=cvQueryFrame(cam);
if(img==NULL){return -1;}
IplImage *src_gray= cvCreateImage( cvSize(img->width,img->height), 8, 1);
cvCvtColor( img, src_gray, CV_BGR2GRAY );
cvSmooth( src_gray, src_gray, CV_GAUSSIAN, 5, 11);
cvCanny(src_gray, src_gray, 70, 200, 3);
cvFindContours( src_gray, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
if(contours==NULL){ contours=contours2;}
contours2=contours;
cvMoments(contours, &moments, 1);
double m_00 = cvGetSpatialMoment( &moments, 0, 0 );
double m_10 = cvGetSpatialMoment( &moments, 1, 0 );
double m_01 = cvGetSpatialMoment( &moments, 0, 1 );
float gravityX = (m_10 / m_00)-150;
float gravityY = (m_01 / m_00)-150;
if(gravityY>=0&&gravityX>=0){
printf("center point=(%.f, %.f) \n",gravityX,gravityY); }
for (; contours != 0; contours = contours->h_next){
CvScalar color = CV_RGB(250,0,0);
cvDrawContours(img,contours,color,color,-1,-1, 8, cvPoint(0,0));
}
cvShowImage( "Input", img );
cvShowImage( "Contours", src_gray );
cvClearMemStorage(storage);
if(cvWaitKey(33)>=0) break;
}
cvDestroyWindow("Contours");
cvDestroyWindow("Source");
cvReleaseCapture(&cam);
}
This program detects all contours captured by the camera, and the average coordinate of the contours is printed. My question is: how do I filter out just one object/contour so I can get a more precise (x,y) position of the object? If possible, can anyone show me how to mark the center of the object using its (x,y) coordinates?
Thanks in advance. Cheers.
P.S.: Sorry, I couldn't upload a screenshot yet, but if it helps, here's the link.
Edit: to make my question clearer:
For example, if I only want to filter out the square from my screenshot above, what should I do?
The object I want to filter out has the biggest contour area and, most importantly, has a shape (any shape), not a straight or curved line.
I'm still experimenting with the smooth and Canny values, so if anybody has trouble detecting the contours using my program, please alter the values.
I think this can be solved fairly easily. I would suggest some morphological operations before contour detection. Also, I would suggest filtering out smaller elements, keeping the biggest element as the only one still in the image.
I suggest:
for filtering out lines (straight or curved): you have to decide what you yourself consider the border between a "line" and a "shape". Let's say you consider all objects with a thickness of 5 pixels or more to be objects, while the ones that are less than 5 pixels across are lines. A morphological opening that uses a 5x5 square or a 3-pixel-sized diamond shape as the structuring element would take care of this.
for filtering out small objects in general: if objects are of arbitrary shape, a purely morphological opening won't do: you have to do an algebraic opening. A special type of algebraic opening is the area opening: an operation that removes all connected components in the image whose (pixel) area is smaller than a given threshold. If you have an upper bound on the size of uninteresting objects, or a lower bound on the size of interesting ones, that value should be used as the threshold. You can probably get a similar effect with a larger morphological opening, but it will not be as flexible.
for filtering out all the objects except the largest: removing connected components from the smallest to the largest should work. Try labeling the connected components. On a binary (black and white) image, this transformation works by creating a greyscale image, labeling the background as 0 (black) and each component with a different, increasing grey value. In the end, the pixels of each object are marked by a different value. You can now simply look at the grey-level histogram and find the grey value with the most pixels. Set all the other grey levels to 0 (black), and the only object left in the image is the biggest one (see the sketch after this answer).
The suggestions are written from the simplest to the most complex. Still, I think OpenCV can be of help with any of these. Morphological erosion, dilation, opening, and closing are implemented in OpenCV. You might need to construct an algebraic opening operator on your own (or by combining OpenCV's basic morphology), but I'm sure OpenCV can help you both with labeling the connected components and with examining the histogram of the resulting greyscale image.
In the end, when only pixels from one object are left, you do the Canny contour detection.
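Here is a sketch of the keep-only-the-largest step, using findContours/contourArea (available in OpenCV's C++ API) instead of explicit grey-level labeling; the effect is the same, and the function/variable names are illustrative:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Return a binary image containing only the largest connected component
cv::Mat keepLargestComponent( const cv::Mat &binary )
{
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat work = binary.clone();      // findContours modifies its input
    cv::findContours( work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE );

    int largest = -1;
    double largestArea = 0.0;
    for( size_t i = 0; i < contours.size(); i++ )
    {
        double area = cv::contourArea( contours[i] );
        if( area > largestArea ) { largestArea = area; largest = (int)i; }
    }

    cv::Mat result = cv::Mat::zeros( binary.size(), CV_8UC1 );
    if( largest >= 0 )
        cv::drawContours( result, contours, largest, cv::Scalar(255), CV_FILLED );
    return result;
}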
This is a blob-processing problem that cannot (easily) be solved by OpenCV itself. Have a look at cvBlobsLib, a library that extends OpenCV with functions/classes for connected component labeling.
http://opencv.willowgarage.com/wiki/cvBlobsLib