How to find the radius of an irregular circle using OpenCV C++

How can I estimate the radius of an irregular circle I have detected? I have used findContours() to find where the circles are and moments() to find the centre of mass. I now just need the radius of the circle so I can estimate its volume.
For finding contours I have used the following code:
findContours(Closed_Image, Contours, Hierarchy,
             CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
For finding the centre of mass I have used the following code:
vector<Moments> mu(Contours.size());
for (b = 0; b < Contours.size(); b++)
{
    mu[b] = moments(Contours[b], true);
}

vector<Point2f> mc(Contours.size());
for (c = 0; c < Contours.size(); c++)
{
    mc[c] = Point2f(mu[c].m10 / mu[c].m00, mu[c].m01 / mu[c].m00);
    cout << "Contours Centre = " << mc[c] << endl;
}
I'm guessing that I need to use the contour points stored in Contours, but the shape isn't a perfect circle, so the distance between each point in the contour and its centre of mass would be different. Is this just a matter of averaging the distance between the centre of mass and each point in the contour (average distance = radius)? If so, how can I actually implement it? (I'm not sure how to access the data in vector<vector<Point> > Contours.) Otherwise, how could I estimate the radius?
(Click here for the image I am working with)
Centre of mass is as follows: [424.081, 13.4634], [291.159, 10.4545], [157.833, 10.4286], [24.5198, 8.09127]
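One possible sketch of the averaging approach described above (not from the original post), assuming Contours and mc are the containers already filled in by the code shown:

// Sketch only: estimate each radius as the mean distance from the
// centre of mass to the points of that contour.
for (int i = 0; i < Contours.size(); i++)
{
    if (Contours[i].empty())
        continue;
    double sum = 0.0;
    for (int j = 0; j < Contours[i].size(); j++)
    {
        float dx = Contours[i][j].x - mc[i].x;
        float dy = Contours[i][j].y - mc[i].y;
        sum += sqrt(dx * dx + dy * dy);          // distance to the centre of mass
    }
    double radius = sum / Contours[i].size();    // mean distance = estimated radius
    cout << "Estimated radius of contour " << i << " = " << radius << endl;
}

If the blobs are close to circular, minEnclosingCircle(Contours[i], center, radius) is an alternative that gives a single (upper-bound) radius per contour in one call.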

Related

Can't detect bounding rect of ID card

I want to detect the bounding rectangle of a German ID card within an image using OpenCV.
This is what my code looks like:
capture >> frame;
cv::resize(frame, frame, cv::Size(512, 256));

cv::Mat grayScaledFrame, blurredFrame, cannyFrame;
cv::cvtColor(frame, grayScaledFrame, cv::COLOR_BGR2GRAY);
cv::GaussianBlur(grayScaledFrame, blurredFrame, cv::Size(9, 9), 1);
cv::Canny(blurredFrame, cannyFrame, 40, 70);

// CONTOURS
std::vector<std::vector<cv::Point>> contours;
cv::findContours(cannyFrame, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// SORT
int maxArea = 0;
std::vector<cv::Point> contour;
for (int i = 0; i < contours.size(); i++) {
    int thisArea = cv::contourArea(contours.at(i));
    if (thisArea > maxArea) {
        maxArea = thisArea;
        contour = contours.at(i);
    }
}

cv::Rect borderBox = cv::boundingRect(contour);
cv::rectangle(cannyFrame, borderBox, cv::Scalar{255, 32, 32}, 8);
cv::imshow("Webcam", cannyFrame);
The result looks like this (result image omitted):
There are some rectangles detected but not the big one I'm interested in.
I've already tried different thresholds for Canny and also different kernel sizes for Gaussian Blur.
Best regards
First of all, the right parameter values change as the environmental conditions change, so it is necessary to standardize the environment (lighting, distance to the object, etc.).
To get this detection right, put the card at a fixed distance from the camera and calculate the area of the rectangles.
When the card is at a certain distance from the camera, you get approximate reference values of the card's area. Then, when drawing a rectangle, you use values within a specified tolerance range.
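A minimal sketch of that area-tolerance check (the reference area, the tolerance, and the variable names are placeholders, not values from the answer), reusing contours and frame from the question's code:

// Sketch only: keep the contour whose area is close to a reference
// area measured once with the card at the fixed distance.
double refArea = 60000.0;    // placeholder reference area in pixels
double tolerance = 0.15;     // placeholder: accept +/-15% around the reference

std::vector<cv::Point> cardContour;
for (size_t i = 0; i < contours.size(); i++) {
    double area = cv::contourArea(contours.at(i));
    if (std::fabs(area - refArea) <= tolerance * refArea) {
        cardContour = contours.at(i);   // candidate for the card outline
        break;
    }
}
if (!cardContour.empty()) {
    cv::rectangle(frame, cv::boundingRect(cardContour), cv::Scalar{255, 32, 32}, 8);
}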

Calculate perimeter of convex hull in OpenCV

For table detection I need to calculate the perimeter of the convex hull of each connected component. I wrote the following code, but it is giving the wrong answer.
findContours(rois[wp], contoursc, hierarchyh, CV_RETR_EXTERNAL, CHAIN_APPROX_NONE);

double perim = 0;
vector<vector<Point> > hullh(contoursc.size());
for (int i = 0; i < contoursc.size(); i++)
{
    convexHull(contoursc[i], hullh[i], false);
}
for (int i = 0; i < hullh.size(); i++)
{
    perim = perim + arcLength(hullh[i], true);
}
cout << "Perimeter of convex hull = " << peri << "\n";
Can someone explain what could be the reason for the wrong result?

OpenCV: find perimeter of a connected component

I'm using OpenCV 2.4.13.
I'm trying to find the perimeter of a connected component. I was thinking of using connectedComponentsWithStats, but it doesn't return the perimeter, only the area, width, etc.
There is a method to find the area from the contour, but not the opposite (for a single component, I mean, not the entire image).
The method arcLength doesn't work either, because I have all the points of the component, not only the contour.
I know there is a brute-force way to find it by iterating through each pixel of the component and checking whether it has neighbors that aren't in the same component, but I'd like a function that costs less.
Otherwise, if you know a way to link a component with the contours found by findContours, that suits me as well.
Thanks
Adding to @Miki's answer, this is a faster way to find the perimeter of the connected component:
// getting the connected components with statistics
cv::Mat1i labels, stats;
cv::Mat centroids;
int lab = connectedComponentsWithStats(img, labels, stats, centroids);

for (int i = 1; i < lab; ++i)
{
    // Rectangle around the connected component
    cv::Rect rect(stats(i, 0), stats(i, 1), stats(i, 2), stats(i, 3));

    // Get the mask for the i-th contour
    Mat1b mask_i = labels(rect) == i;

    // Compute the contour
    vector<vector<Point>> contours;
    findContours(mask_i, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    if (contours.size() <= 0)
        continue;

    // Finding the perimeter
    double perimeter = contours[0].size();
    // you can use this as well for measuring the perimeter
    // double perimeter = arcLength(contours[0], true);
}
The easiest thing is probably to use findContours.
You can compute the contour of the i-th component returned by connectedComponents(WithStats), so the contours are aligned with your labels. Using CHAIN_APPROX_NONE you'll get all the points in the contour, so the size() of the vector is already a measure of the perimeter. You can optionally use arcLength(...) to get a more accurate result:
Mat1i labels;
int n_labels = connectedComponents(img, labels);

for (int i = 1; i < n_labels; ++i)
{
    // Get the mask for the i-th contour
    Mat1b mask_i = labels == i;

    // Compute the contour
    vector<vector<Point>> contours;
    findContours(mask_i.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

    if (!contours.empty())
    {
        // The first contour (and probably the only one)
        // is the one you're looking for
        // Compute the perimeter
        double perimeter_i = contours[0].size();
    }
}
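A quick worked comparison of the two measures (my note, not from the answer): along a straight horizontal or vertical boundary run of n pixels both measures give about n, but on a diagonal run size() still counts n while arcLength(contours[0], true) adds n * sqrt(2) ≈ 1.41 n, so arcLength is the better estimate of the geometric perimeter when the boundary has many diagonal steps.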

Tracking of multiple objects in openCV using C++

I am doing a project in OpenCV on estimating the speed of a moving vehicle using captured video. Here the camera is stationary. I have estimated the speed of a single object using the centroid and the Euclidean distance. Now the problem is that I do not know how to do the same for multiple objects.
Here, I need to calculate the Euclidean distance of each object between 2 subsequent frames.
I would be grateful if anyone could help.
I have created the class:
class centroids
{
public:
    vector<Point2f> ce;
    vector<float> area;
};
centroids c[100];
And this is the code I've written; I would be grateful for any help with it:
findContours( fgMaskMOG2,
              contours,
              hierarchy,
              CV_RETR_CCOMP,
              CV_CHAIN_APPROX_SIMPLE );

int morph_size = 6;
Mat element = getStructuringElement( MORPH_RECT,
                                     Size( 2*morph_size+1, 2*morph_size+1 ),
                                     Point( morph_size, morph_size ) );

Scalar color( 255, 255, 255 ); // color used to draw the contours

// Draw the contour and rectangle
for( int i = 0; i < contours.size(); i++ )
{
    drawContours( fgMaskMOG2,
                  contours,
                  i,
                  color,
                  CV_FILLED,
                  8,
                  hierarchy );
}
//imshow("morpho window",dst);
vector<Moments> mu( contours.size() );
vector<Point2f> mc( contours.size() );
vector<Point2f> m;
vector<double> time;
vector<Point2f> centroid( mc.size() );
//vector< vector<Point> >::iterator itc = contours.begin();

// iterate through each contour
double time1[1000];
for( int i = 0; i < contours.size(); i++ )
{
    // Find the area of the contour
    double a = contourArea( contours[i], false );
    if( a > 500 )
    {
        mu[i] = moments( contours[i], false );
        mc[i] = Point2f( (mu[i].m10 / mu[i].m00), (mu[i].m01 / mu[i].m00) );
        m.push_back( mc[i] );

        Point2f diff;
        double euclidian = 0;
        for( int f = 0; f < m.size(); f++ )
        {
            if( k == 1 )
            {
                c[f].ce.push_back( m[f] );
                cout << "cen" << c[f].ce << endl;
                euclidian = 0;
            }
            else
            {
                c[f+1].ce.push_back( m[f] );
                cout << "cent" << c[f+1].ce << endl;
                diff = c[f].ce[f] - c[f-1].ce[f-1];
                euclidian = abs( sqrt( (diff.x*diff.x) + (diff.y*diff.y) ) );
                cout << "euclidian" << euclidian << endl;
            }
        }
        cout << "\n centroid" << m << endl;
        circle( fgMaskMOG2,
                mc[i],
                5,
                Scalar( 0, 0, 255 ),
                1,
                8,
                0 );
    }
}
Thanks in advance :)
You can estimate the speed of a moving vehicle from video frames only if the approximate distance between the vehicle and the camera is constant throughout the calculation, i.e. the vehicle is moving in a straight line perpendicular to the camera's line of sight. So, if the camera is looking from the side, the vehicles will all be at different distances and the calculation will become highly inaccurate for multiple vehicles. The vehicles will also overlap, which makes their segmentation difficult.
There are two scenarios in which your calculation may work -
First, when the camera is capturing from the top, looking vertically down on the vehicles. In this case there will be a stark difference between the vehicle color and the road color. You can use several methods to segment out the individual vehicles, tag them based on their features, and identify those vehicles in the next frame using the same features. This way you'll get the position of each individual vehicle, and then you can predict the speed based on your algorithm. The following links will be helpful for segmenting the vehicles:
How to define the markers for Watershed in OpenCV?
http://www.codeproject.com/Articles/751744/Image-Segmentation-using-Unsupervised-Watershed-Al
http://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Watershed_Algorithm_Marker_Based_Segmentation.php
Second, when vehicles are moving in a single line behind one another. In this case, you can use a combination of color-based and contour-based segmentation, depending on the background of your vehicles. After segmentation you can again use object features to identify the position of the objects in the next frame. Then run your algorithm for both cases.
If you have the complete video sequence of the vehicles, you can segment out the different vehicles in the first frame automatically (or identify them manually) and then apply motion tracking to those identified objects. You can use OpenCV's motion analysis and object tracking functions to do so. That gives you the position of every tracked vehicle in each frame, so you can easily run and test your speed-calculating algorithm.
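As a hedged illustration of the per-object bookkeeping (my sketch, not code from the question or answer), the following matches each centroid in the current frame to the nearest centroid from the previous frame and reports the per-object displacement; prevCentroids, currCentroids, and the maxMatchDist threshold are assumed names and placeholder values:

// Sketch only: nearest-neighbour matching of centroids between two frames.
// prevCentroids / currCentroids are assumed to be filled once per frame
// (e.g. from the mc vector computed above).
std::vector<cv::Point2f> prevCentroids, currCentroids;
double maxMatchDist = 50.0;   // placeholder: largest displacement treated as the same object

for (size_t i = 0; i < currCentroids.size(); i++)
{
    int bestJ = -1;
    double bestDist = maxMatchDist;
    for (size_t j = 0; j < prevCentroids.size(); j++)
    {
        cv::Point2f d = currCentroids[i] - prevCentroids[j];
        double dist = std::sqrt(d.x * d.x + d.y * d.y);
        if (dist < bestDist)
        {
            bestDist = dist;
            bestJ = static_cast<int>(j);
        }
    }
    if (bestJ >= 0)
    {
        // bestDist is this object's displacement in pixels between the two frames;
        // divide by the frame interval (and apply a pixels-to-metres factor)
        // to turn it into a speed estimate.
        std::cout << "object " << i << " moved " << bestDist << " px" << std::endl;
    }
}

A real implementation would also have to handle vehicles that enter or leave the scene between two frames, which this sketch ignores.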

How to find coordinates of largest rectangle in OpenCV?

I have the following code:
findContours( src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
Mat drawing = Mat::zeros( src.size(), CV_8UC3 );

double largest_area = 0;
for( int i = 0; i < contours.size(); i++ ) { // get the largest contour
    area = fabs( contourArea( contours[i] ) );
    if( area >= largest_area ) {
        largest_area = area;
        largest_contours.clear();
        largest_contours.push_back( contours[i] );
    }
}

if( largest_area >= 3000 ) { // draw the largest contour if exceeded minimum largest area
    drawContours( drawing, largest_contours, -1, Scalar(0,0,255), 2 );
}
... which produces the following output image:
I want to get the coordinates of the four points (marked in green); is that possible?
Are you trying to find the corners of a rectangle in perspective?
You may want to try several solutions:
Use HoughLines for line detection and find their intersection.
Use Generalized Hough Transform
Use Harris corner detector. But you need to filter extra corners.
For a similar task I used the following procedure (it works fine in my case):
run cv::approxPolyDP on the input contour with an increasing epsilon parameter until it returns 4 or fewer points. If it returns exactly 4 points, those are the corner points you need. If it returns fewer than 4, most probably something is wrong.
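A minimal sketch of that loop (my illustration; the starting epsilon, the step, and the use of largest_contours[0] from the question's code are assumptions):

// Sketch only: grow epsilon until approxPolyDP returns at most 4 points.
std::vector<cv::Point> quad;
double epsilon = 1.0;            // placeholder starting value
const double epsStep = 1.0;      // placeholder increment
const double maxEpsilon = 100.0; // safety limit so the loop always terminates

while (epsilon < maxEpsilon)
{
    cv::approxPolyDP(largest_contours[0], quad, epsilon, true);
    if (quad.size() <= 4)
        break;
    epsilon += epsStep;
}

if (quad.size() == 4)
{
    // quad now holds the four corner candidates, in contour order
    for (const cv::Point& p : quad)
        std::cout << p << std::endl;
}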