Get the center of the detected circle in an image / opencv / c++

I am using this code to get the coordinates of the centers of the detected circles in the image.
vector<Vec3f> circles;
cv::HoughCircles( t, circles, CV_HOUGH_GRADIENT, 1, t.rows/8, 200, 100, 0, 0 );
for( size_t i = 0; i < circles.size(); i++ ){
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    cout << "center" << center << endl;
    int radius = cvRound(circles[i][2]);
    // circle center
    circle( t2, center, 3, 1, -1, 8, 0 );
    // circle outline
    circle( t2, center, radius, 1, 3, 8, 0 );
}
imshow( "circles", t2 );
I can detect the circles, but I do not get any result for the coordinates of the center points :(
thanks in advance.
After edit:
I added this line, but the output was zero.
cout << "number of circles found: " << circles.size() << endl;
Images:
The first one is the main circle and the second one is after applying a Gaussian filter and the HoughCircles function:

If I understand you correctly, your code draws the circles, but the
cout << "center" << center << endl;
line does not give the correct output.
This can happen when cv::Point cannot be printed directly via << in your setup.
Try to use:
cout << "center" << center.x << ", " << center.y << endl;
If the problem is that you can't find any circles, make sure that min_radius and max_radius are chosen correctly. Start with a wide range of allowed radii and then narrow the range until you get only the circles you want.
These values can make a huge difference in the detection rate.
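If you want to see the coordinates explicitly, here is a minimal sketch along those lines, reusing t and t2 from the question; the radius bounds are placeholders you would tune for your own images:
int minRadius = 10;  // placeholder: start with a wide range, then narrow it
int maxRadius = 100; // placeholder: start with a wide range, then narrow it
vector<Vec3f> circles;
cv::HoughCircles( t, circles, CV_HOUGH_GRADIENT, 1, t.rows/8, 200, 100, minRadius, maxRadius );
cout << "number of circles found: " << circles.size() << endl;
for( size_t i = 0; i < circles.size(); i++ ){
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    cout << "center: " << center.x << ", " << center.y << "  radius: " << radius << endl;
}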

Related

Opencv hough circle not detecting circles

I am trying to detect the circles inside a traffic light. I am able to detect only 1 out of the 2 circles, and the size of the circle I am getting seems to be too big.
Input Image: https://i.imgur.com/VkNDt2B.png
Output image: https://i.imgur.com/BBq5tE0.png
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
Mat src, gray;
src = imread("C:/test_image2.png", 1);
resize(src, src, Size(640, 480));
cvtColor(src, gray, CV_BGR2GRAY);
// Reduce the noise so we avoid false circle detection
GaussianBlur(gray, gray, Size(9, 9), 2, 2);
vector<Vec3f> circles;
// Apply the Hough Transform to find the circles
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1, 60, 200, 20, 0, 35);
// Draw the circles detected
for (size_t i = 0; i < circles.size(); i++)
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
int radius = cvRound(circles[i][2]);
circle(src, center, 3, Scalar(0, 255, 0), -1, 8, 0);// circle center
circle(src, center, radius, Scalar(0, 0, 255), 3, 8, 0);// circle outline
cout << "center : " << center << "\nradius : " << radius << endl;
}
// Show your results
namedWindow("Hough Circle Transform Demo", CV_WINDOW_AUTOSIZE);
imshow("Hough Circle Transform Demo", src);
waitKey(0);
return 0;
}
HoughCircles works best if you know in advance the approximate size of the circles you're looking for. I suggest you give better values for the min_radius and max_radius parameters.
In any case, you need to play with the param1 and param2 parameters. If the circles are not perfect circles, you can try lowering the image resolution using the dp parameter (e.g. with dp = 2 the image is downscaled to half its resolution).
Basically: play with param1 and param2 until your circles are detected, no matter if other circles are detected as well. Use this result to find out what radius your circles have, then fix the min and max radius to remove most of the circles you don't want, and finally play with param1 and param2 again until only your circles are left.
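As a rough sketch of that tuning loop (the values below are placeholders, not recommendations), you could sweep param2 on the blurred gray image from the question and watch how many circles each value yields:
// Sweep param2 (the accumulator threshold) and report how many circles
// each value yields; the other values are placeholders to adjust.
for (int param2 = 100; param2 >= 10; param2 -= 10)
{
    vector<Vec3f> circles;
    HoughCircles(gray, circles, CV_HOUGH_GRADIENT,
                 1,       // dp: try 2 to run on a half-resolution accumulator
                 60,      // minDist between circle centers
                 200,     // param1: Canny high threshold
                 param2,  // lower values -> more (possibly false) circles
                 0, 35);  // min/max radius: fix these once you know the real size
    cout << "param2 = " << param2 << " -> " << circles.size() << " circles" << endl;
}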
This is a pretty huge image. Try cropping to the traffic light part first (to get something to begin with). Then, by trying different combinations of the min_distance, param_1 and param_2 parameters, try to get most circles detected (even the wrong ones). Find out which values detect the most circles and which combination detects the fewest (or none), then fine-tune the parameters until fewer circles are detected and you finally arrive at the right combination. A sketch of the cropping step follows.
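A minimal sketch of that cropping step; the ROI rectangle below is a made-up placeholder for wherever the traffic light actually sits in your frame:
// Crop to the traffic light region first, then run HoughCircles on the crop.
Rect lightRoi(250, 50, 120, 300);              // hypothetical region
lightRoi &= Rect(0, 0, gray.cols, gray.rows);  // clamp to image bounds
Mat cropped = gray(lightRoi).clone();
vector<Vec3f> circles;
HoughCircles(cropped, circles, CV_HOUGH_GRADIENT, 1, 30, 200, 20, 5, 30);
// Add the ROI offset back when drawing on the full image.
for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]) + lightRoi.x, cvRound(circles[i][1]) + lightRoi.y);
    circle(src, center, cvRound(circles[i][2]), Scalar(0, 0, 255), 2);
}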

Head pose estimation fails with specific image sizes

I want to find the rotation angles of the head using OpenCV and dlib, so I tried to use this code from the tutorial:
cv::Mat im = imread("img.jpg");
matrix<bgr_pixel> dlibImage;
assign_image(dlibImage, cv_image<bgr_pixel>(im));
auto face = detector(dlibImage)[0];
auto shape = sp(dlibImage, face);
// 2D image points.
std::vector<cv::Point2d> image_points;
image_points.push_back(cv::Point2d(shape.part(30).x(), shape.part(30).y())); // Nose tip
image_points.push_back(cv::Point2d(shape.part(8).x(), shape.part(8).y())); // Chin
image_points.push_back(cv::Point2d(shape.part(36).x(), shape.part(36).y())); // Left eye left corner
image_points.push_back(cv::Point2d(shape.part(45).x(), shape.part(45).y())); // Right eye right corner
image_points.push_back(cv::Point2d(shape.part(48).x(), shape.part(48).y())); // Left Mouth corner
image_points.push_back(cv::Point2d(shape.part(54).x(), shape.part(54).y())); // Right mouth corner
// 3D model points.
std::vector<cv::Point3d> model_points;
model_points.push_back(cv::Point3d(0.0f, 0.0f, 0.0f)); // Nose tip
model_points.push_back(cv::Point3d(0.0f, -330.0f, -65.0f)); // Chin
model_points.push_back(cv::Point3d(-225.0f, 170.0f, -135.0f)); // Left eye left corner
model_points.push_back(cv::Point3d(225.0f, 170.0f, -135.0f)); // Right eye right corner
model_points.push_back(cv::Point3d(-150.0f, -150.0f, -125.0f)); // Left Mouth corner
model_points.push_back(cv::Point3d(150.0f, -150.0f, -125.0f)); // Right mouth corner
// Camera internals
double focal_length = im.cols; // Approximate focal length.
Point2d center = cv::Point2d(im.cols/2,im.rows/2);
cv::Mat camera_matrix = (cv::Mat_<double>(3,3) << focal_length, 0, center.x, 0 , focal_length, center.y, 0, 0, 1);
cv::Mat dist_coeffs = cv::Mat::zeros(4,1,cv::DataType<double>::type); // Assuming no lens distortion
cout << "Camera Matrix " << endl << camera_matrix << endl ;
// Output rotation and translation
cv::Mat rotation_vector; // Rotation in axis-angle form
cv::Mat translation_vector;
// Solve for pose
cv::solvePnP(model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);
// Project a 3D point (0, 0, 1000.0) onto the image plane.
// We use this to draw a line sticking out of the nose
std::vector<Point3d> nose_end_point3D;
std::vector<Point2d> nose_end_point2D;
nose_end_point3D.push_back(Point3d(0,0,1000.0));
projectPoints(nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D);
for(int i=0; i < image_points.size(); i++)
{
circle(im, image_points[i], 3, Scalar(0,0,255), -1);
}
cv::line(im,image_points[0], nose_end_point2D[0], cv::Scalar(255,0,0), 2);
cout << "Rotation Vector " << endl << rotation_vector << endl;
cout << "Translation Vector" << endl << translation_vector << endl;
cout << nose_end_point2D << endl;
// Display image.
cv::imshow("Output", im);
cv::waitKey(0);
But, unfortunately, I get completely different results depending on the size of the same image!
If I use this img.jpg, which is 299x299 px (many sizes are fine, but we take the nearest one), then everything is OK and I get the right result:
Output:
Rotation Vector
[-0,04450161828760668;
-2,133664002574712;
-0,2208024002827168]
But if I use this img.jpg, which is 298x298 px, then I get an absolutely wrong result:
Output:
Rotation Vector
[-2,999117288644056;
0,0777816930911016;
-0,7573144061217354]
I also understood that it's caused by the coordinates of the landmarks, not the size of the image, because the results are the same for the same hardcoded landmarks even when the sizes of this image differ.
How can I always get a correct pose estimation, as in the first case?
P.S. I also want to note that this problem behaves very nondeterministically: now everything is OK with 298x298, but I get a wrong result with a 297x297 size.
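One way to narrow this down (a hypothetical diagnostic, not part of the original code) is to check the reprojection error of the estimated pose, reusing the variables already defined above; a large error flags the degenerate solutions:
// Reproject the 3D model points with the estimated pose and measure the mean
// pixel error against the detected landmarks. A large error suggests solvePnP
// converged to a bad solution for this particular landmark configuration.
std::vector<cv::Point2d> reprojected;
cv::projectPoints(model_points, rotation_vector, translation_vector, camera_matrix, dist_coeffs, reprojected);
double meanError = 0.0;
for (size_t i = 0; i < image_points.size(); i++)
{
    cv::Point2d d = image_points[i] - reprojected[i];
    meanError += std::sqrt(d.x * d.x + d.y * d.y);
}
meanError /= image_points.size();
std::cout << "mean reprojection error (px): " << meanError << std::endl;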

Tracking of multiple objects in openCV using C++

I am doing a project in OpenCV on estimating the speed of a moving vehicle using captured video. Here the camera is stationary. I have estimated the speed of a single object using its centroid and the Euclidean distance. Now the problem is that I don't know how to do the same for multiple objects.
Here, I need to calculate the Euclidean distance of objects between 2 subsequent frames.
I would be grateful if anyone could help.
I have created the class-
class centroids
{
public:
vector<Point2f> ce;
vector<float> area;
};
centroids c[100];
And this is the code I've written so far:
findContours( fgMaskMOG2,
contours,
hierarchy,
CV_RETR_CCOMP,
CV_CHAIN_APPROX_SIMPLE );
int morph_size = 6;
Mat element = getStructuringElement( MORPH_RECT,
Size( 2*morph_size+1, 2*morph_size+1 ),
Point( morph_size, morph_size ) );
Scalar color( 255, 255, 255 ); // color used to draw the contours
//Draw the contour and rectangle
for( int i = 0; i < contours.size(); i++ )
{
drawContours( fgMaskMOG2,
contours,
i,
color,
CV_FILLED,
8,
hierarchy );
}
//imshow("morpho window",dst);
vector<Moments> mu( contours.size() );
vector<Point2f> mc( contours.size() );
vector<Point2f> m ;
vector<double> time;
vector<Point2f> centroid( mc.size() );
//vector< vector<Point> >::iterator itc = contours.begin();
// iterate through each contour.
double time1[1000];
for( int i = 0; i < contours.size(); i++ )
{
// Find the area of contour
double a = contourArea( contours[i], false );
if( a > 500 )
{
mu[i] = moments( contours[i], false );
mc[i] = Point2f( (mu[i].m10 / mu[i].m00), (mu[i].m01 / mu[i].m00) );
m.push_back( mc[i] );
Point2f diff;
double euclidian = 0;
for( int f = 0; f < m.size(); f++ )
{
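// note: 'k' is not defined in this snippet; it appears to be a frame counter
// maintained elsewhere, so the first frame (k == 1) has no previous centroids.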
if( k == 1 )
{
c[f].ce.push_back( m[f] );
cout << "cen" << c[f].ce << endl;
euclidian = 0;
}
else
{
c[f+1].ce.push_back( m[f] );
cout << "cent" << c[f+1].ce << endl;
diff = c[f].ce[f] - c[f-1].ce[f-1];
euclidian = abs( sqrt( (diff.x*diff.x) + (diff.y*diff.y) ) );
cout << "euclidian" << euclidian << endl;
}
}
cout << "\n centroid" << m << endl;
circle( fgMaskMOG2,
mc[i],
5,
Scalar( 0, 0, 255 ),
1,
8,
0 );
}
}
Thanks in advance :)
You can estimate the speed of a moving vehicle from video frames only if the approximate distance between the vehicle and the camera is constant throughout the calculation, i.e. the vehicle is moving in a straight line perpendicular to the camera's line of sight. So, if the camera is looking from the side, the vehicles will all be at different distances and the calculation will become highly inaccurate for multiple vehicles. The vehicles may even overlap, making their segmentation difficult.
There are two scenarios in which your calculation may work.
First, when the camera captures from the top, looking vertically down on the vehicles. In this case, there will be a stark difference between the vehicle color and the road color. You can use several ways to segment out the individual vehicles, tag them based on their features, and identify those vehicles in the next frame using those features. This way you will get the position of each individual vehicle, and then you can estimate the speed with your algorithm. The following links may be helpful for segmenting the vehicles:
How to define the markers for Watershed in OpenCV?
http://www.codeproject.com/Articles/751744/Image-Segmentation-using-Unsupervised-Watershed-Al
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Watershed_Algorithm_Marker_Based_Segmentation.php
Second, when vehicles are moving in a single line behind one another. In this case, you can use a combination of color-based and contour-based segmentation, depending on the background behind your vehicles. After segmentation you can again use object features to identify the position of the objects in the next frame, and then run your algorithm for both cases.
If you have the complete video sequence of the vehicles, you can segment out the different vehicles in the first frame automatically (or identify them manually) and then apply motion tracking to those identified objects. You can use OpenCV's motion analysis and object tracking functions for this. That will give you the position of all tracked vehicles in each frame, so you can easily run and test your speed-calculation algorithms.
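As a minimal sketch of the per-frame matching step (not the poster's code: prevCentroids, currCentroids, fps and pixelsPerMeter are assumed placeholders you would fill in and calibrate yourself), each centroid from the previous frame is matched to its nearest centroid in the current frame:
// Match each centroid from the previous frame to the nearest centroid in the
// current frame and turn the displacement into a speed estimate.
vector<Point2f> prevCentroids, currCentroids; // filled from contour moments
double fps = 25.0;            // placeholder frame rate
double pixelsPerMeter = 40.0; // placeholder calibration factor
for (size_t i = 0; i < prevCentroids.size(); i++)
{
    double bestDist = -1.0;
    size_t bestJ = 0;
    for (size_t j = 0; j < currCentroids.size(); j++)
    {
        Point2f d = currCentroids[j] - prevCentroids[i];
        double dist = sqrt(d.x * d.x + d.y * d.y);
        if (bestDist < 0 || dist < bestDist) { bestDist = dist; bestJ = j; }
    }
    if (bestDist >= 0)
    {
        double speed = (bestDist / pixelsPerMeter) * fps; // metres per second
        cout << "object " << i << " matched to " << bestJ
             << ", moved " << bestDist << " px, approx " << speed << " m/s" << endl;
    }
}
Nearest-neighbor matching like this only works while the objects stay well separated; once vehicles overlap, the feature-based tagging described above becomes necessary.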

Create Mat from vector<Point2f>

I am extremely new to computer vision and the OpenCV library.
I've done some googling to try to find out how to make a new image from a vector of Point2fs and haven't found any examples that work. I've seen vector<Point> to Mat, but when I use those examples I always get errors.
I'm working from this example and any help would be appreciated.
Code: I pass in occludedSquare.
resize(occludedSquare, occludedSquare, Size(0, 0), 0.5, 0.5);
Mat occludedSquare8u;
cvtColor(occludedSquare, occludedSquare8u, CV_BGR2GRAY);
//convert to a binary image: pixel values greater than 170 become white, otherwise black
Mat thresh;
threshold(occludedSquare8u, thresh, 170.0, 255.0, THRESH_BINARY);
GaussianBlur(thresh, thresh, Size(7, 7), 2.0, 2.0);
//Do edge detection
Mat edges;
Canny(thresh, edges, 45.0, 160.0, 3);
//Do straight line detection
vector<Vec2f> lines;
HoughLines( edges, lines, 1.5, CV_PI/180, 50, 0, 0 );
//imshow("thresholded", edges);
cout << "Detected " << lines.size() << " lines." << endl;
// compute the intersection from the lines detected...
vector<Point2f> intersections;
for( size_t i = 0; i < lines.size(); i++ )
{
for(size_t j = 0; j < lines.size(); j++)
{
Vec2f line1 = lines[i];
Vec2f line2 = lines[j];
if(acceptLinePair(line1, line2, CV_PI / 32))
{
Point2f intersection = computeIntersect(line1, line2);
intersections.push_back(intersection);
}
}
}
if(intersections.size() > 0)
{
vector<Point2f>::iterator i;
for(i = intersections.begin(); i != intersections.end(); ++i)
{
cout << "Intersection is " << i->x << ", " << i->y << endl;
circle(occludedSquare8u, *i, 1, Scalar(0, 255, 0), 3);
}
}
//Make new matrix bounded by the intersections
...
imshow("localized", localized);
Should be as simple as
std::vector<cv::Point2f> points;
cv::Mat image(points);
//or
cv::Mat image = cv::Mat(points);
The probable confusion is that a cv::Mat is an image (width x height x number of channels), but it is also a mathematical matrix (rows x columns x other dimensions).
If you make a Mat from a vector of 'n' 2D points it will create a 2-column by 'n'-row matrix. You are passing this to a function which expects an image.
If you just have a scattered set of 2D points and want to display them as an image, you need to make an empty cv::Mat of a large enough size (whatever your maximum x,y point is) and then draw the dots using the drawing functions: http://docs.opencv.org/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.html
If you just want to set the pixel values at those point coordinates, search SO for "opencv setting pixel values"; there are lots of answers.
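For the "draw the dots" option, a minimal sketch using the intersections vector from the question; the canvas size is a placeholder and would be derived from your maximum x,y if needed:
// Make an empty canvas large enough for the points and mark each one on it.
cv::Mat canvas = cv::Mat::zeros(480, 640, CV_8UC3); // placeholder size
for (size_t i = 0; i < intersections.size(); i++)
{
    cv::Point p(cvRound(intersections[i].x), cvRound(intersections[i].y));
    cv::circle(canvas, p, 2, cv::Scalar(0, 255, 0), -1);
}
cv::imshow("points", canvas);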
Martin's answer is right, but IMO it depends on how the cv::Mat image is used further down the line. I had some issues, and Haofeng's comment helped me fix them. Here is my attempt to explain it in detail:
Let's say the code looks like this:
std::vector<cv::Point2f> points = {cv::Point2f(1.0, 2.0), cv::Point2f(3.0, 4.0), cv::Point2f(5.0, 6.0), cv::Point2f(7.0, 8.0), cv::Point2f(9.0, 10.0)};
cv::Mat image(points); // or cv::Mat image = cv::Mat(points)
std::cout << image << std::endl;
This will print:
[1, 2;
3, 4;
5, 6;
7, 8;
9, 10]
So, at first glance, this looks perfectly correct and as expected: for the five 2D points in the given vector, we got a cv::Mat with 5 rows and 2 columns, right? However, that's not the case here!
If further properties are inspected:
std::cout << image.rows << std::endl; // 5
std::cout << image.cols << std::endl; // 1
std::cout << image.channels() << std::endl; // 2
it can be seen that the above cv::Mat has 5 rows, 1 column, and 2 channels. Depending on the pipeline, we may not want that. Most of the time, we want a matrix with 5 rows, 2 columns, and just 1 channel.
To fix this problem, all we need to do is reshape the matrix:
cv::Mat image = cv::Mat(points).reshape(1);
In the above code, 1 is for 1 channel. Check out OpenCV reshape() documentation for further information.
If this matrix is printed out, it will look the same as the previous one. However, that's not the whole picture (metaphorically!): the new matrix has 5 rows, 2 columns, and 1 channel.
I wish OpenCV had different ways of printing out these two similar yet different matrices (from the OpenCV data structure point of view)!
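For completeness, re-running the earlier property checks on the reshaped matrix (continuing the same example) confirms the layout:
cv::Mat reshaped = cv::Mat(points).reshape(1);
std::cout << reshaped.rows << std::endl;       // 5
std::cout << reshaped.cols << std::endl;       // 2
std::cout << reshaped.channels() << std::endl; // 1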

Track certain points in frame sequences using opencv and c++

I am new to image processing and C++ programming. This is what I have done so far to be able to keep the coordinates of certain points in a sequence of frames:
I could find the center of a circle in frame1.
cv::HoughCircles( tmp2, circles, CV_HOUGH_GRADIENT, 1, 300, 300, 100);
for( size_t i = 0; i < circles.size(); i++ ){
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
cout << "center" << center.x << ", " << center.y << endl;
Vector.push_back(std::make_pair(center.x,center.y)); //coordinates of center points
int radius = cvRound(circles[i][2]);
// circle center
circle( tmp2, center, 3, 1 , -1, 8, 0 );
// circle outline
circle( tmp2, center, radius, 1 , 3, 8, 0 );
}
What does this center point contain? Does it contain the pixel value at that point?
If I have, for example, 3 circles in frame1, is copying their coordinates into a vector (with make_pair) a good approach?
How can I track these center points in frame2 to find their new coordinates?
thanks in advance..
Yes, center contains coordinates; it is a structure with fields x and y.
It depends on what you need after this. How do you want to process them further?
Multiple object tracking depends on what kind of images you have. You cannot "track" just the centers of the circles without any prior information. Are they synthetic circles, just some real objects, or something else? Check the first answer here, it is relevant.
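If the centers do correspond to textured real objects, one common option (an assumption here, not something from the original post) is to feed them into pyramidal Lucas-Kanade optical flow to follow them into the next frame. A minimal sketch, assuming frame1 and frame2 are consecutive grayscale frames and circles holds the detections from frame1:
// Track the circle centers from frame1 into frame2 with Lucas-Kanade optical flow.
std::vector<cv::Point2f> prevPts;
for (size_t i = 0; i < circles.size(); i++)
    prevPts.push_back(cv::Point2f(circles[i][0], circles[i][1]));
std::vector<cv::Point2f> nextPts;
std::vector<uchar> status;
std::vector<float> err;
cv::calcOpticalFlowPyrLK(frame1, frame2, prevPts, nextPts, status, err);
for (size_t i = 0; i < nextPts.size(); i++)
{
    if (status[i])
        std::cout << "point " << i << " is now at " << nextPts[i].x << ", " << nextPts[i].y << std::endl;
}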