I'm using OpenCV to filter an image for certain colours, so I've got a binary image of the detected regions.
Now I want to erode those areas, then get rid of the smaller ones, and find the x,y coordinates of the largest 'blob'.
I was looking for recommendations as to what the best library would be to use? I've seen cvBlob and cvBlobsLib but I'm not too sure how to set them up. Do I want to compile them along with the project or do I want to compile and install them to the system (like I did with OpenCV)?
I'm currently using the Code::Blocks IDE on Ubuntu (although that shouldn't restrict things)
I'm late to the party, but I'd just like to chime in that there is a way to do connected components in OpenCV; it just isn't mainlined yet.
Update: it is mainlined now, it had just been stuck waiting for the 3.0 release for multiple years. Link to the documentation.
See http://code.opencv.org/issues/1236 and http://code.opencv.org/attachments/467/opencv-connectedcomponents.patch
Disclaimer - I'm the author.
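For completeness, here's a minimal sketch of how the original task (erode, drop the small blobs, find the centroid of the largest one) could look with connectedComponentsWithStats once you're on OpenCV 3.0 (the file name and the area threshold below are made up):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::Mat bin = cv::imread("mask.png", cv::IMREAD_GRAYSCALE); // binary mask from the colour filter
    cv::erode(bin, bin, cv::Mat(), cv::Point(-1, -1), 2);       // 2 erosion passes, default 3x3 kernel

    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

    int best = -1, bestArea = 100;   // ignore blobs smaller than 100 px (arbitrary cut-off)
    for (int i = 1; i < n; ++i)      // label 0 is the background
    {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    if (best >= 0)
    {
        double cx = centroids.at<double>(best, 0);
        double cy = centroids.at<double>(best, 1);
        printf("largest blob at (%.1f, %.1f), area %d\n", cx, cy, bestArea);
    }
    return 0;
}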
You can use findContours to do that; see the OpenCV manual and a tutorial on finding connected components.
Edit: Code from the tutorial (via Archive.org)
#include <stdio.h>
#include <stdlib.h>   /* for rand() */
#include <cv.h>
#include <highgui.h>

int main(int argc, char *argv[])
{
    IplImage *img, *cc_color; /* IplImage is an image in OpenCV */
    CvMemStorage *mem;
    CvSeq *contours, *ptr;

    img = cvLoadImage(argv[1], 0); /* loads the image from the command line as grayscale */
    cc_color = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 3);
    cvZero(cc_color); /* start from a black canvas */
    cvThreshold(img, img, 150, 255, CV_THRESH_BINARY);

    mem = cvCreateMemStorage(0);
    cvFindContours(img, mem, &contours, sizeof(CvContour), CV_RETR_CCOMP,
                   CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));

    /* draw each connected component in a random colour */
    for (ptr = contours; ptr != NULL; ptr = ptr->h_next) {
        CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
        cvDrawContours(cc_color, ptr, color, CV_RGB(0,0,0), -1, CV_FILLED, 8, cvPoint(0,0));
    }

    cvSaveImage("result.png", cc_color);
    cvReleaseImage(&img);
    cvReleaseImage(&cc_color);
    return 0;
}
Unfortunately OpenCV doesn't have any connected component labelling functionality, which seems like a serious omission for a computer vision library. Anyway, I had a similar requirement recently, so I implemented my own CCL routine; there are a couple of different algorithms described on the CCL Wikipedia page and they are both pretty simple to implement.
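For illustration, here's a minimal sketch (my own, not the routine mentioned above) of the one-component-at-a-time approach: a stack-based flood fill that assigns a label to every 8-connected region of a binary (CV_8U) cv::Mat:

#include <opencv2/opencv.hpp>
#include <stack>

// Label the 8-connected components of a binary image (non-zero = foreground).
// Returns the number of components; 'labels' gets one integer label per pixel.
int labelComponents(const cv::Mat& bin, cv::Mat& labels)
{
    labels = cv::Mat::zeros(bin.size(), CV_32S);
    int next = 0;
    for (int y = 0; y < bin.rows; ++y)
    {
        for (int x = 0; x < bin.cols; ++x)
        {
            if (bin.at<uchar>(y, x) == 0 || labels.at<int>(y, x) != 0)
                continue;                       // background or already labelled
            ++next;                             // start a new component
            std::stack<cv::Point> todo;
            todo.push(cv::Point(x, y));
            labels.at<int>(y, x) = next;
            while (!todo.empty())
            {
                cv::Point p = todo.top(); todo.pop();
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        int nx = p.x + dx, ny = p.y + dy;
                        if (nx < 0 || ny < 0 || nx >= bin.cols || ny >= bin.rows)
                            continue;
                        if (bin.at<uchar>(ny, nx) != 0 && labels.at<int>(ny, nx) == 0)
                        {
                            labels.at<int>(ny, nx) = next;   // same component
                            todo.push(cv::Point(nx, ny));
                        }
                    }
            }
        }
    }
    return next;   // number of components found
}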
I think the best and easiest option for working with blobs in OpenCV is the cvBlob library. It's a complementary library to OpenCV and it's very easy to use.
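As a rough sketch of a typical cvBlob workflow (written from memory, so please double-check the exact signatures against the library's cvblob.h; the file name and area threshold are placeholders):

#include <stdio.h>
#include <cv.h>
#include <highgui.h>
#include <cvblob.h>
using namespace cvb;

int main()
{
    IplImage *img = cvLoadImage("mask.png", CV_LOAD_IMAGE_GRAYSCALE);      // binary mask
    IplImage *labelImg = cvCreateImage(cvGetSize(img), IPL_DEPTH_LABEL, 1);

    CvBlobs blobs;
    cvLabel(img, labelImg, blobs);               // label the connected regions
    cvFilterByArea(blobs, 100, 1000000);         // drop blobs smaller than 100 px
    CvLabel largest = cvGreaterBlob(blobs);      // label of the biggest remaining blob (assumes one survived)
    CvPoint2D64f c = cvCentroid(blobs[largest]); // its x,y centroid
    printf("largest blob centroid: (%f, %f)\n", c.x, c.y);

    cvReleaseBlobs(blobs);
    cvReleaseImage(&labelImg);
    cvReleaseImage(&img);
    return 0;
}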
I am trying to find and separate all edges in an edge-detected image using Python OpenCV. The edges can form closed contours, but they don't have to; I just want all connected edge pixels to be grouped together. So technically the algorithm would proceed like this:
1. For each edge pixel, find a neighbouring (connected) edge pixel and add it to the current subdivision of the image, until you can't find one anymore.
2. Then move on to the next unchecked edge pixel, start a new subdivision, and do 1) again.
I have looked through cv2.findContours but the results weren't satisfying, maybe because it is intended for closed contours rather than free-ended edges. Here are the results:
Original Edge Detected:
After Contour Processing:
I expected each of the five edges to be grouped into its own subdivision of the image, but apparently the cv2.findContours function breaks 2 of the edges into further subdivisions, which I don't want.
Here is the code I used to save these 2 images:
def contourForming(imgData):
    cv2.imshow('Edge', imgData)
    cv2.imwrite('EdgeOriginal.png', imgData)
    # in OpenCV 2.x findContours returns (contours, hierarchy) and modifies imgData in place
    contours, hierarchy = cv2.findContours(imgData, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.imshow('Contours', imgData)
    cv2.imwrite('AfterFindContour.png', imgData)
    cv2.waitKey(0)
There are restrictions on my implementation, however: I have to use Python 2.7 and OpenCV 2, and I cannot use any other versions or languages. I mention this because I know newer OpenCV versions have a connectedComponents function in C++; I could have used that, but I cannot due to these limitations.
So, any idea how I should approach the problem?
Using findContours is the correct approach; you're simply doing it wrong.
Take a closer look to the documentation:
Note: Source image is modified by this function.
Your "After Contour Processing" image is in fact the garbage result from findContours. Because of this, if you want the original image to be intact after the call to findContours, it's common practice to pass a cloned image to the function.
The meaningful result of findContours is in contours. You need to draw them using drawContours, usually on a new image.
This is the result I get:
with the following C++ code:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main(int argc, char** argv)
{
    // Load the grayscale image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Prepare the result image: 3 channels, same size as img, all black
    Mat3b res(img.rows, img.cols, Vec3b(0, 0, 0));

    // Call findContours on a clone, since findContours modifies its input
    std::vector<std::vector<Point>> contours;
    findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    // Draw each contour with a random color
    for (size_t i = 0; i < contours.size(); ++i)
    {
        drawContours(res, contours, (int)i, Scalar(rand() & 255, rand() & 255, rand() & 255));
    }

    // Show results
    imshow("Result", res);
    waitKey();

    return 0;
}
It should be fairly easy to port to Python (I'm sorry but I can't give you Python code, since I cannot test it). You can also have a look at the specific OpenCV - Python tutorial to check how to correctly use findContours and drawContours.
I have not worked with OpenCV for a while, so please bear with my beginner questions. Something made me curious as I was looking through OpenCV tutorials and sample code.
Why do people create multiple Mat images when going through multiple transformations? Here is an example:
Mat mat, gray, thresh, equal;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
Here is an example of code that uses only two Mat images:
Mat mat, process;
mat = imread("E:/photo.jpg");
cvtColor(mat, process, CV_BGR2GRAY);
equalizeHist(process, process);
threshold(process, process, 50, 255, THRESH_BINARY);
Is there anything different between the two examples? Also, another beginner question: will OpenCV run faster when it only creates two Mat images, or will it still be the same?
Thank you in advance.
The question comes down to whether you still need the unequalized grayscale image later on in the code. If you want to further process the gray image, then the first option is better. If not, use the second option.
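For instance (a made-up continuation of the first example), keeping gray around lets you feed the unequalized image into a second, independent step later:

Mat mat, gray, equal, thresh, edges;
mat = imread("E:/photo.jpg");
cvtColor(mat, gray, CV_BGR2GRAY);
equalizeHist(gray, equal);
threshold(equal, thresh, 50, 255, THRESH_BINARY);
// 'gray' is still the plain, unequalized grayscale image,
// so it can be reused, e.g. for an edge map:
Canny(gray, edges, 50, 150);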
Some functions might not work in-place; specifically, ones that transform the matrix to a different format, either by changing its dimensions (such as copyMakeBorder) or number of channels (such as cvtColor).
For your use case, the two blocks of code perform the same number of calculations, so the speed wouldn't change at all. The second option is obviously more memory efficient.
As the figure shows, I'd like to input an image and get several segments as a result, like that.
It's essentially clustering the closest colours into segments, so I think it's close to the concept of "mean shift"?
I've searched the relevant questions here but still don't know how to start and how to structure this in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to use as a reference! Thanks for any help!
==================================================
Edit 5/19/2015
Let me add that one implementation I've tried is the watershed approach here: http://blog.csdn.net/fdl19881/article/details/6749976.
It's not perfect, but the result is close to what I want. In that implementation the user has to intervene manually (draw the watershed seeds), so I'm looking for an AUTOMATIC version of it. That sounds a little hard, but I'd appreciate any suggestion or piece of code for doing it.
OpenCV documentation: Link
Parameters: here
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3);
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image, you might need to use some pre-processing before or maybe find some better parameters:
EDIT: Output with some Gaussian blur applied beforehand (see the commented-out line in the code):
The problem with the existing segmentation approaches is that they are either implemented in Matlab (which hardly anyone outside a university can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest, or levels that indicate how to split colours, is not useful because it is not automatic. If you like, you can try my OpenCV-based implementation of segmentation described in this blog post. It is not perfect, but it is automatic, does most of the job, and you can actually download the source and try it out.
Hey, I have a circular image that I want to convert to Cartesian coordinates in OpenCV.
I've successfully done it in Matlab, but now I want to do it in OpenCV.
After some digging on the internet, I figured out there are actually functions called logPolar, polarToCart and cartToPolar. However, the official OpenCV documentation lacks information on how to use them, and since I don't really understand the parameters those functions take, I couldn't really use them.
So could someone give me (and, I think, the many other people looking for it) an appropriate example of how to use those functions, please?
Just in case I am sharing my sample image too.
thanks in advance
If you're using OpenCV 3, you probably want linearPolar.
Note that for both versions you need a separate src and dst image (they do not work in-place):
#include "opencv2/opencv.hpp" // needs imgproc, imgcodecs & highgui
Mat src = imread("my.png", 0); // read a grayscale img
Mat dst; // empty.
linearPolar(src,dst, Point(src.cols/2,src.rows/2), 120, INTER_CUBIC );
imshow("linear", dst);
waitKey();
or logPolar:
logPolar(src,dst,Point(src.cols/2,src.rows/2),40,INTER_CUBIC );
[edit:]
If you're still using OpenCV 2.4, you can only use the legacy C-API functions, and you need IplImage conversions (not recommended):
Mat src=...;
Mat dst(src.size(), src.type()); // yes, you need to preallocate here
IplImage ipsrc = src; // new header, points to the same pixels
IplImage ipdst = dst;
cvLogPolar( &ipsrc, &ipdst, cvPoint2D32f(src.cols/2,src.rows/2), 40, CV_INTER_CUBIC);
// result is in dst, no need to release ipdst (and please don't do so.)
(polarToCart and cartToPolar work on point coords, not images)
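To illustrate the difference, here is a tiny sketch (the values are arbitrary) of cartToPolar operating on arrays of coordinates rather than an image:

Mat xs = (Mat_<float>(1, 3) << 1, 0, -1);
Mat ys = (Mat_<float>(1, 3) << 0, 1,  0);
Mat mag, ang;
cartToPolar(xs, ys, mag, ang, true); // per-element magnitude and angle (in degrees)
// mag is now {1, 1, 1} and ang is roughly {0, 90, 180}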
I am currently trying to implement a basic image-stitching C++ (OpenCV) program in Eclipse. The feature detection part shows great results for SURF features. However, when I attempt to warp the 2 images together, I get only half the image as the output. I have tried to find a solution everywhere but to no avail. I even tried to offset the homography matrix, like in this answer: OpenCV warpperspective. Nothing has helped so far.
I'll attach the output images in the comments since I don't have enough reputation points.
For feature detection and homography, I used the exact code from here
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html
And then I added the following piece of code after the given code:
Mat result;
warpPerspective(img_object, result, H, Size(2 * img_object.cols, img_object.rows));
Mat half(result, Rect(0, 0, img_scene.cols, img_scene.rows));
img_scene.copyTo(half);
imshow("Warped Image", result);
I'm quite new at this and just trying to put the pieces together. So I apologize if there's some basic error.
If you're only trying to put the pieces together, you could try the built-in OpenCV image stitcher class: http://docs.opencv.org/modules/stitching/doc/high_level.html#stitcher
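For example, a minimal sketch using the 2.4-style Stitcher API (the image paths are placeholders):

#include <opencv2/opencv.hpp>
#include <opencv2/stitching/stitcher.hpp>
#include <vector>
using namespace cv;

int main()
{
    std::vector<Mat> imgs;
    imgs.push_back(imread("left.jpg"));
    imgs.push_back(imread("right.jpg"));

    Mat pano;
    Stitcher stitcher = Stitcher::createDefault(false); // false = don't try to use the GPU
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
        return 1; // stitching failed
    imwrite("pano.jpg", pano);
    return 0;
}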
I found a related question here (Stitching 2 images in OpenCV) and implemented the additional code given there. It worked!
For reference, the edited code I wrote was:
Mat result;
warpPerspective(img_scene, result, H, Size(img_scene.cols*2, img_scene.rows*2), INTER_CUBIC);
Mat final(Size(img_scene.cols + img_object.cols, img_scene.rows*2),CV_8UC3);
Mat roi1(final, Rect(0, 0, img_object.cols, img_object.rows));
Mat roi2(final, Rect(0, 0, result.cols, result.rows));
result.copyTo(roi2);
img_object.copyTo(roi1);