I've a problem that I don't really know how to solve. The situation is that I've got a picture, inside this picture I have one ROI (a large rect). Inside this rect I then have X number of smaller ROI:s.
I then use the getPerspectiveTransform() method on the larger of the ROI. What I then want to do is to apply this matrix on the smaller ROI:s in-order to warp them separately but using the matrix from the larger ROI. The reason I want to do this is basically because I want to keep the bounding rects I've got before the warp. If there is any way to warp the larger ROI and keep the bounding rects inside of this (i.e. keep the smaller ROI:s) that would be very helpful!
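To make that second idea concrete, here is a minimal sketch of what I imagine (a hypothetical helper, not code I have working): if H is the 3x3 matrix returned by getPerspectiveTransform() for the larger ROI, each smaller rect's corners could be pushed through it with perspectiveTransform() and re-boxed with boundingRect():

// Hypothetical sketch: map each smaller rect through the larger ROI's
// 3x3 perspective matrix H and return the bounding rects after the warp.
vector<Rect> mapRectsThroughMatrix( const Mat& H, const vector<Rect>& rects ) {
    vector<Rect> mapped;
    for( size_t i = 0; i < rects.size(); ++i ) {
        const Rect& r = rects[i];
        vector<Point2f> corners;
        corners.push_back( Point2f( r.x, r.y ) );                      // top left
        corners.push_back( Point2f( r.x + r.width, r.y ) );            // top right
        corners.push_back( Point2f( r.x + r.width, r.y + r.height ) ); // bottom right
        corners.push_back( Point2f( r.x, r.y + r.height ) );           // bottom left
        vector<Point2f> warped;
        perspectiveTransform( corners, warped, H ); // apply H to the 4 corners
        mapped.push_back( boundingRect( warped ) ); // new axis-aligned box
    }
    return mapped;
}

That way the whole larger ROI could be warped once with warpPerspective() and the mapped rects cut out of that single warped image, instead of warping every small ROI separately.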
Anyway, this is what I've tried and the results it produces:
vector<Mat> warpSmallerRoi( vector<Rect> smallerRoi ) {
    vector<Mat> quad;
    for( int i = 0; i < smallerRoi.size(); ++i ) {
        Mat wholeQuad;
        wholeQuad = Mat::zeros( originalImage(smallerRoi[i]).rows, originalImage(smallerRoi[i]).cols, CV_8UC1 );

        vector<Point2f> largerRoiCorners;
        vector<Point2f> quadPoints;

        // Takes the corners from the first and the last smaller ROI; together they
        // represent the bigger ROI
        largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].tl().y ) ); // Top left
        largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].tl().y ) ); // Top right
        largerRoiCorners.push_back( Point2f( smallerRoi[ smallerRoi.size() - 1 ].br().x, smallerRoi[ smallerRoi.size() - 1 ].br().y ) ); // Bottom right
        largerRoiCorners.push_back( Point2f( smallerRoi[0].tl().x, smallerRoi[0].br().y ) ); // Bottom left

        quadPoints.push_back( Point2f( 0, 0 ) );
        quadPoints.push_back( Point2f( wholeQuad.cols, 0 ) );
        quadPoints.push_back( Point2f( wholeQuad.cols, wholeQuad.rows ) );
        quadPoints.push_back( Point2f( 0, wholeQuad.rows ) );

        // Transform matrix for the larger ROI; this warps into a perfect result.
        Mat largerRoiTransformMatrix = getPerspectiveTransform( largerRoiCorners, quadPoints );

        // Corners of the smaller ROI
        vector<Point2f> corners;
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].tl().y ) ); // Top left
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].tl().y ) ); // Top right
        corners.push_back( Point2f( smallerRoi[i].br().x, smallerRoi[i].br().y ) ); // Bottom right
        corners.push_back( Point2f( smallerRoi[i].tl().x, smallerRoi[i].br().y ) ); // Bottom left

        Mat transformMatrix = getPerspectiveTransform( corners, quadPoints );

        /* This part is experimental and does not work all the time, though it works
           well sometimes. It copies the second and third rows of the larger ROI's
           transform matrix into the smaller ROI's matrix, then warps. */
        transformMatrix.at<double>(1,0) = largerRoiTransformMatrix.at<double>(1,0);
        transformMatrix.at<double>(1,1) = largerRoiTransformMatrix.at<double>(1,1);
        transformMatrix.at<double>(1,2) = largerRoiTransformMatrix.at<double>(1,2);
        transformMatrix.at<double>(2,0) = largerRoiTransformMatrix.at<double>(2,0);
        transformMatrix.at<double>(2,1) = largerRoiTransformMatrix.at<double>(2,1);
        transformMatrix.at<double>(2,2) = largerRoiTransformMatrix.at<double>(2,2);

        // Warps the smaller ROI with the patched matrix
        warpPerspective( plateRgb, wholeQuad, transformMatrix, wholeQuad.size() );

        quad.push_back( wholeQuad );
    }
    return quad;
}
This function works sometimes, especially when the larger ROI is already fairly straight (I guess because my replacement values then differ only slightly from the originals).
E.g. (before and after images):
But when the larger ROI is heavily skewed, the result is not as good (before and after images):
As you can see, the right part of the "H" ends up slightly outside the image here. How should I proceed to transform my ROIs so that the "H" (and all the others) fit into the image and are warped with the correct transformation matrix?
Sorry if I missed out on any information, ask in that case! Thanks :)
Related
I have an image with one circle-like shape that contains another similar shape. I am trying to find the areas of those two shapes. I am using OpenCV C++ Hough circle detection, but it does not detect the shapes. Are there any other functions in OpenCV that can be used to detect the shapes and find the areas?
[EDIT] The image has been added.
Here is my sample code
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat src, gray;
    src = imread( "detect_circles_simple.jpg", 1 );
    resize( src, src, Size(640, 480) );
    cvtColor( src, gray, CV_BGR2GRAY );
    // Reduce the noise so we avoid false circle detection
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    // Apply the Hough Transform to find the circles
    HoughCircles( gray, circles, CV_HOUGH_GRADIENT, 1, 30, 200, 50, 0, 0 );
    cout << "No. of circles : " << circles.size() << endl;
    // Draw the circles detected
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center( cvRound(circles[i][0]), cvRound(circles[i][1]) );
        int radius = cvRound( circles[i][2] );
        circle( src, center, 3, Scalar(0,255,0), -1, 8, 0 );     // circle center
        circle( src, center, radius, Scalar(0,0,255), 3, 8, 0 ); // circle outline
        cout << "center : " << center << "\nradius : " << radius << endl;
    }
    // Show your results
    namedWindow( "Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE );
    imshow( "Hough Circle Transform Demo", src );
    waitKey(0);
    return 0;
}
I have a similar approach.
import cv2

img1 = cv2.imread('disc1.jpg', 1)
img2 = img1.copy()
img = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)

#--- Blur the gray scale image
img = cv2.GaussianBlur(img, (5, 5), 0)

#--- Perform Canny edge detection (in my case lower = 84 and upper = 255, because I resized the image; it may vary in your case)
lower, upper = 84, 255
edges = cv2.Canny(img, lower, upper)
cv2.imshow('Edges', edges)

#--- Find and draw all existing contours
_, contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
rep = cv2.drawContours(img1, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', rep)
Since you are analyzing the shape of a circular edge, determining the eccentricity of your contours will help in this case.
import numpy as np

#--- Determine eccentricity
cnt = contours
for i in range(0, len(cnt)):
    if len(cnt[i]) < 5:   # fitEllipse needs at least 5 points
        continue
    ellipse = cv2.fitEllipse(cnt[i])
    (center, axes, orientation) = ellipse
    majoraxis_length = max(axes)
    minoraxis_length = min(axes)
    eccentricity = np.sqrt(1 - (minoraxis_length / majoraxis_length) ** 2)
    cv2.ellipse(img2, ellipse, (0, 0, 255), 2)

cv2.imshow('Detected ellipse', img2)
Now based on the value given by the eccentricity variable you can come to a conclusion whether your contour is circular or not. The threshold depends on what you consider to be circular or an approximate circle.
If you have complete shapes (the edge completely or very nearly joins) it is generally easier to edge detect -> contour -> analyse the contour shape.
Hough lines or circles are very useful when you only have small fragments of a line or circle, but they can be tricky to tune.
edit: Try cv::adaptiveThreshold to get the edges, then cv::findContours.
For each contour, compare the area to the perimeter to see if it is the right size to be your target. Then do cv::fitEllipse to check if it is a circle and get the accurate center. findContours also has a mode which tells you which contours are inside which others, so you can easily find one circle inside another.
You might (depending on lighting) find the same circle with 2 or more contours, ie. for the inner and outer edge.
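A minimal sketch of that pipeline (assuming gray is your grayscale input; the block size, area cutoff, and circularity threshold are assumptions to tune):

// Sketch: adaptive threshold -> contours -> circularity test -> fitEllipse.
cv::Mat bin;
cv::adaptiveThreshold( gray, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                       cv::THRESH_BINARY_INV, 51, 10 ); // 51/10 are assumptions

std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
// RETR_CCOMP records which contours lie inside which others
cv::findContours( bin, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE );

for( size_t i = 0; i < contours.size(); i++ )
{
    if( contours[i].size() < 5 )
        continue; // fitEllipse needs at least 5 points
    double area  = cv::contourArea( contours[i] );
    double perim = cv::arcLength( contours[i], true );
    // For a circle, 4*pi*area / perimeter^2 is close to 1
    double circularity = 4 * CV_PI * area / ( perim * perim );
    if( area > 100 && circularity > 0.8 )
    {
        cv::RotatedRect e = cv::fitEllipse( contours[i] );
        // e.center gives an accurate center; hierarchy[i][3] >= 0 means
        // this contour is nested inside another one
    }
}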
I have the following code:
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
Mat drawing = Mat::zeros( src.size(), CV_8UC3 );
double largest_area = 0;
vector<vector<Point> > largest_contours;
for( int i = 0; i < contours.size(); i++ ) { // get the largest contour
    double area = fabs( contourArea( contours[i] ) );
    if( area >= largest_area ) {
        largest_area = area;
        largest_contours.clear();
        largest_contours.push_back( contours[i] );
    }
}
if( largest_area >= 3000 ) { // draw the largest contour if it exceeds the minimum area
    drawContours( drawing, largest_contours, -1, Scalar(0,0,255), 2 );
}
... which produces the following output image:
I want to get coordinates of four points (marked with green), is that possible?
Are you trying to find the corners of a rectangle in perspective?
You may want to try several solutions:
Use HoughLines for line detection and find their intersection.
Use Generalized Hough Transform
Use Harris corner detector. But you need to filter extra corners.
For a similar task I used the following procedure (it works fine in my case):
Do cv::approxPolyDP on the input contour with an increasing epsilon parameter until it returns 4 or fewer points. If it returns exactly 4 points, you get the 4 corner points you need. If it returns fewer than 4 points, most probably something is wrong.
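A minimal sketch of that loop (the epsilon step and upper bound are assumptions):

// Sketch: grow epsilon until approxPolyDP collapses the contour to 4 points.
std::vector<cv::Point> findQuadCorners( const std::vector<cv::Point>& contour )
{
    std::vector<cv::Point> approx;
    double perim = cv::arcLength( contour, true );
    // Step epsilon in 1% increments of the perimeter, up to 10%
    for( double eps = 0.01 * perim; eps <= 0.10 * perim; eps += 0.01 * perim )
    {
        cv::approxPolyDP( contour, approx, eps, true );
        if( approx.size() <= 4 )
            break;
    }
    return approx; // exactly 4 points means we found the corners
}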
I have an image from which I want to get a vertical ROI, apply some transformations and add to another image.
I read a lot of questions and answers on StackOverflow and other forums, but I'm still stuck on this problem. For the moment I'm using the C interface of OpenCV, but I could use the C++ one if needed (I would have to write a conversion function, since I'm working with CGImageRef in Cocoa).
To get from the top image (see below) to the bottom image, I guess I have to :
Get the ROI on the first image ;
Scale it down ;
Get the intersection points on the lines between the center and the 2 circles for my "width" angle (the angle is fixed) ;
Distort the image so the corners stick to my intersection points ;
Rotate around the center point and put it in the output image.
For the moment, I manage to do these steps well:
Getting the ROI ;
Scaling it with cvResize ;
Getting the intersection points shouldn't be too complicated, as it is pure geometry and I have already implemented it for another purpose.
But I have no idea at all how to distort the resulting image of my ROI, and I don't know if it is even possible in OpenCV. Would I have to use a kind of perspective correction?
And I've been trying the solutions from the few good posts I found here for rotating with the rotated bounding box, but with no good results so far.
EDIT:
Well, I managed to do the first part of the work:
Getting a ROI in a base image ;
Rotating and placing it at a fixed distance from the center.
I used the method explained and coded in this post : https://stackoverflow.com/a/16285286/1060921
I only added a variable to set the rotation point and get my inner circle.
NB: I set the ROI BEFORE calling the method, so the ROI in the post's method is... the image size. Then I place it at the center of my final image with cvAdd.
Here I get one-pixel slices of my camera input. What I want to do now is to distort bigger slices, for example from 2 pixels on the inner circle to 5 pixels on the outer one.
See this tutorial which uses warpPerspective to correct perspective distortion.
EDIT: In your case warpAffine should be a better and simpler solution.
So, you could do something like this, just use four points instead of three:
Point2f srcTri[3];
Point2f dstTri[3];
Mat rot_mat( 2, 3, CV_32FC1 );
Mat warp_mat( 2, 3, CV_32FC1 );
Mat src, warp_dst, warp_rotate_dst;
/// Load the image
src = imread( ... );
/// Set the dst image the same type and size as src
warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Set your 3 points to calculate the Affine Transform
srcTri[0] = Point2f( 0,0 );
srcTri[1] = Point2f( src.cols - 1, 0 );
srcTri[2] = Point2f( 0, src.rows - 1 );
dstTri[0] = Point2f( src.cols*0.0, src.rows*0.33 );
dstTri[1] = Point2f( src.cols*0.85, src.rows*0.25 );
dstTri[2] = Point2f( src.cols*0.15, src.rows*0.7 );
/// Get the Affine Transform
warp_mat = getAffineTransform( srcTri, dstTri );
/// Apply the Affine Transform just found to the src image
warpAffine( src, warp_dst, warp_mat, warp_dst.size() );
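For the four-point (perspective) version the answer mentions, a minimal sketch along the same lines (the destination fractions are placeholder values; use your computed intersection points instead):

Point2f srcQuad[4];
Point2f dstQuad[4];
Mat persp_mat( 3, 3, CV_32FC1 );
Mat persp_dst = Mat::zeros( src.rows, src.cols, src.type() );
/// Four source corners of the ROI
srcQuad[0] = Point2f( 0, 0 );
srcQuad[1] = Point2f( src.cols - 1, 0 );
srcQuad[2] = Point2f( src.cols - 1, src.rows - 1 );
srcQuad[3] = Point2f( 0, src.rows - 1 );
/// Four destination corners (placeholders; these define the distortion)
dstQuad[0] = Point2f( src.cols*0.10, src.rows*0.10 );
dstQuad[1] = Point2f( src.cols*0.90, src.rows*0.20 );
dstQuad[2] = Point2f( src.cols*0.80, src.rows*0.90 );
dstQuad[3] = Point2f( src.cols*0.20, src.rows*0.80 );
/// A perspective warp needs a 3x3 matrix and warpPerspective instead of warpAffine
persp_mat = getPerspectiveTransform( srcQuad, dstQuad );
warpPerspective( src, persp_dst, persp_mat, persp_dst.size() );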
I am trying to perform a dilation on my image and want to use a disc for the dilation operation. But whatever I try, I always end up getting a black square:
int dilSize = 12;
cv::Mat kern = cv::getStructuringElement( CV_SHAPE_ELLIPSE, cv::Size( dilSize + 1, dilSize + 1 ) );
cv::dilate( im, im, kern, cv::Point( -1, -1 ), 10 );
cv::imwrite( "ker.png", kern );
The result is a 13x13 pixel black square in the PNG image...
What am I doing wrong?
Regards
OK, figured it out: since cv::getStructuringElement just creates zeros and ones, there is no visible difference.
Adding:
kern *= 255;
before writing the image, solves the mystery ;)
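Putting it together, a minimal sketch (the file name is arbitrary):

cv::Mat kern = cv::getStructuringElement( CV_SHAPE_ELLIPSE, cv::Size( 13, 13 ) );
kern *= 255;                    // scale the 0/1 values to 0/255 so the disc is visible
cv::imwrite( "ker.png", kern ); // now shows a white disc instead of a black square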
I need to detect the Sun from the space sky.
These are examples of the input images:
I've got these results after morphological filtering (the open operation applied twice):
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I just look for white points, I'll also find the white points of the big Earth (top left corner in the first example image).
Please advise me on how to proceed with detecting the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle( image,
          cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
          50, CV_RGB(125,125,0), 4, 8, 0 );
And when the Earth is in the photo, I get a result like this:
So the centroid ends up on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles( dst, storage, CV_HOUGH_GRADIENT, 12,
                                 dst->width/2, 255, 100, 0, 35 );
if ( circles->total > 0 ) {
    // getting first found circle
    float* circle = (float*)cvGetSeqElem( circles, 0 );
    // Drawing:
    // green center dot
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              3, CV_RGB(0,255,0), -1, 8, 0 );
    // wrapping red circle
    cvCircle( image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
              cvRound(circle[2]), CV_RGB(255,0,0), 3, 8, 0 );
}
First example: bingo; but the second: no ;(
I've tried different configurations of cvHoughCircles(), but couldn't find one that fits every one of my example photos.
UPDATE 3:
The matchTemplate approach worked for me (mevatron's response). It held up over a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of 3 of the sun images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
    /// Load image and template
    string inputName = "sun2.png";
    string outputName = "sun2_detect.png";
    Mat img = imread( inputName, 1 );
    Mat templ = imread( "sun_templ.png", 1 );
    /// Create the result matrix (note: the Mat constructor takes rows first, then cols)
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;
    Mat result( result_rows, result_cols, CV_32FC1 );
    /// Do the Matching and Normalize
    matchTemplate( img, templ, result, CV_TM_CCOEFF );
    normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() );
    Point maxLoc;
    minMaxLoc( result, NULL, NULL, NULL, &maxLoc );
    rectangle( img, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2 );
    rectangle( result, maxLoc, Point( maxLoc.x + templ.cols, maxLoc.y + templ.rows ), Scalar(0, 255, 0), 2 );
    imshow( "img", img );
    imshow( "result", result );
    imwrite( outputName, img );
    waitKey(0);
    return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You can identify the sun by its area (given this uniquely identifies it, i.e. the area doesn't vary much across images).
A more sophisticated approach could compute image moments, e.g. Hu moments, of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds, or value ranges, that turn out to work for all (or most) of your object/image combinations.
You can compute the actual position from the raw moments, as for the circular sun the position is equal to the center of mass:
Centroid: {x, y } = { M10/M00, M01/M00 }
Edge Map Approach
Another option would be a circle Hough transform on the edge map; this will hopefully return some candidate circles (by position and radius). You can select the sun-circle according to the radius you expect (if you are lucky, there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the Sun, or the Sun to have almost the same area in each picture, you can filter by area, as in the sketch below.
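A minimal sketch of such area filtering (the area bounds are assumptions to tune, and binary is the thresholded image):

// Sketch: keep only blobs whose area is plausible for the Sun,
// then return the centroid of the first match.
cv::Point2f findSunByArea( const cv::Mat& binary, double minArea, double maxArea )
{
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat tmp = binary.clone(); // findContours modifies its input
    cv::findContours( tmp, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE );
    for( size_t i = 0; i < contours.size(); i++ )
    {
        double area = cv::contourArea( contours[i] );
        if( area < minArea || area > maxArea )
            continue; // too small (noise) or too big (the Earth)
        cv::Moments m = cv::moments( contours[i] );
        return cv::Point2f( (float)(m.m10 / m.m00), (float)(m.m01 / m.m00) ); // centroid
    }
    return cv::Point2f( -1, -1 ); // not found
}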
Try a blob detector for this task.
And note that it may be better to apply a morphological opening/closing instead of a simple erode or dilate, so your sun will have almost the same area before and after processing; see the sketch below.
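For example, a minimal sketch of an opening (the kernel size is an assumption, and binary is the thresholded image):

// Opening = erode then dilate: removes small noise while roughly
// preserving the area of the blobs that survive.
cv::Mat kernel = cv::getStructuringElement( cv::MORPH_ELLIPSE, cv::Size( 5, 5 ) );
cv::morphologyEx( binary, binary, cv::MORPH_OPEN, kernel );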