I found the contours and hull using OpenCV methods (C++) on an image, and I want to draw the defect points. I found the defects by calling
vector<Vec4i> defects;
convexityDefects(contours, hull, defects);
There are 4 integers for each defect. Which one is the x coordinate? I want to get the defect points' coordinates, so that I can draw the start points of the black lines that are on the hand.
You'll want something like: Point p = contours[defects[d][2]]
I'll quote just the meaningful part of the documentation:
[...] 4-element integer vector: (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect [...]
So the returned values represent indexes in the original contour.
defects[d] represents the d-th defect. You then take its 3rd element, farthest_pt_index, which is at defects[d][2]. This integer is the index of the point in the original contour that is farthest from the hull, i.e. the lower arrow head in the drawing. Its coordinates:
Point p = contours[defects[d][2]]
int x = p.x
int y = p.y
And if you want to know how far this point is from the hull, you'll have to divide the 4th element by 256: float p_distance = defects[d][3] / 256.0;
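Putting it together, a minimal sketch (assuming contour is a vector<Point>, hull a vector<int> obtained with convexHull(contour, hull, false, false), and img the image you draw on; the depth threshold is arbitrary):
vector<Vec4i> defects;
convexityDefects(contour, hull, defects);
for (size_t d = 0; d < defects.size(); d++)
{
    Point farthest = contour[defects[d][2]];   // farthest_pt_index -> actual point
    float depth = defects[d][3] / 256.0f;      // fixpt_depth -> distance in pixels
    if (depth > 10.0f)                         // skip tiny defects
        circle(img, farthest, 4, Scalar(0, 0, 255), -1);
}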
The doc on convexityDefects():
convexityDefects – The output vector of convexity defects. In C++ and the new Python/Java interface each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
So each convexity defect consists of several points, from start_index to end_index in the contour parameter of convexityDefects().
The following is a more elaborate description of what I wish to achieve, and how far I have got:
I have a 3D grid, about 30x30x30, stored as a 3D array, and I want to define a function f : R^3 -> R, f(x, y, z) = v, where x ∈ [0, N_x], y ∈ [0, N_y], z ∈ [0, N_z] are float values and N_i is the number of points minus 1 in dimension i of the array. If x, y and z are integer values, v is simply the value stored in the array; otherwise v is the trilinear interpolation of the closest points in the array. For example, f(0.5, 0.5, 0.5) would be the trilinear interpolation of the values at (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0) and (1,1,1). To see why: imagine a 1D array, where only integer indices actually exist; you can make up a value at position 0.3864 by interpolating between the closest actual values at positions 0 and 1. This extends to 2D, where you need the 4 closest points, i.e. the values at (0,0), (0,1), (1,0) and (1,1), and a bilinear interpolation; and in general to any number of dimensions, where you need exactly 2^n points, n being the number of dimensions with a non-integer coordinate.
Simplified:
I have a 3D grid of floats whose values I wish to access in parallel, by the thousands, at random positions. I want to turn this memory-bound process into a CPU-bound one: flatten the 3D array, approximate it with a finite Fourier expansion (or something similar), evaluate that approximation at the required positions, and use the calculated values for the trilinear interpolation. The original code simply accesses the values by their array indices, one by one; since the positions are random and far apart in memory, I am looking for a suitable strategy to access (or, if possible, calculate) the values based on an index.
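For illustration, a minimal sketch of the interpolation described above, assuming the grid is flattened into a single std::vector<float> in x-fastest order, with at least 2 points per dimension and coordinates inside the grid (all names are made up):
#include <vector>
#include <cmath>
#include <algorithm>

float trilinear(const std::vector<float>& grid, int Nx, int Ny, int Nz,
                float x, float y, float z)
{
    auto at = [&](int i, int j, int k) {              // flat index, x fastest
        return grid[(k * Ny + j) * Nx + i];
    };
    int i0 = std::min((int)std::floor(x), Nx - 2), i1 = i0 + 1;
    int j0 = std::min((int)std::floor(y), Ny - 2), j1 = j0 + 1;
    int k0 = std::min((int)std::floor(z), Nz - 2), k1 = k0 + 1;
    float fx = x - i0, fy = y - j0, fz = z - k0;      // fractional parts

    // interpolate along x, then y, then z, using the 8 surrounding grid values
    float c00 = at(i0, j0, k0) * (1 - fx) + at(i1, j0, k0) * fx;
    float c10 = at(i0, j1, k0) * (1 - fx) + at(i1, j1, k0) * fx;
    float c01 = at(i0, j0, k1) * (1 - fx) + at(i1, j0, k1) * fx;
    float c11 = at(i0, j1, k1) * (1 - fx) + at(i1, j1, k1) * fx;
    float c0  = c00 * (1 - fy) + c10 * fy;
    float c1  = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}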
I have an X-Y plane and points (x_i, y_i), where x, y and i are integers. Now if I draw infinite lines of slope 1 and -1, I have to find the 2 points that lie on the same line; if no two points do, then I should output the following:
Case: if at most 1 point lies on a line, the 2nd point should be the point with the minimum distance from that line. In such cases we can draw the line exactly between those 2 points to minimize the distance.
I am not able to find the solution to this problem. My approach was to look at the points in opposite quadrants but I did not get any solution better than O(n^2).
First, I would transform the points into a different coordinate system that is rotated by 45°:
u = x + y
v = x - y
If the original points lie on a line with slope 1, their v coordinate will be equal. If they lie on a line with slope -1, their u coordinate will be equal.
Now, create two lists of points. One sorted by u, the other sorted by v. Then iterate all the points. To find the point that is closest to the corresponding line, you just have to check the neighbors in the sorted order. If there are neighbors with the same u/v coordinate, you are done. If not, find the neighbor with the smallest u/v difference and remember it. Do this for all the points and report the pair with the smallest distance.
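A rough sketch of that idea for the u = x + y direction (the v = x - y direction is symmetric; names are illustrative and at least two points are assumed):
#include <vector>
#include <algorithm>
#include <utility>

struct Pt { int x, y; };

// Return the pair of points whose u = x + y values are closest. Equal u means
// both points lie on the same line of slope -1; for slope +1 sort by v = x - y.
std::pair<Pt, Pt> closestOnDiagonal(std::vector<Pt> pts)
{
    std::sort(pts.begin(), pts.end(),
              [](const Pt& a, const Pt& b) { return a.x + a.y < b.x + b.y; });

    std::pair<Pt, Pt> best{pts[0], pts[1]};
    int bestDiff = (pts[1].x + pts[1].y) - (pts[0].x + pts[0].y);
    for (size_t i = 1; i + 1 < pts.size(); i++)
    {
        int diff = (pts[i + 1].x + pts[i + 1].y) - (pts[i].x + pts[i].y);
        if (diff < bestDiff) { bestDiff = diff; best = {pts[i], pts[i + 1]}; }
    }
    return best;   // bestDiff == 0 means both points share a slope -1 line
}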
C++:
void
convexHull(InputArray points,
OutputArray hull,
bool clockwise=false,
bool returnPoints=true);
The description given on OutputArray hull is as follows:
hull – Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
So what is an integer vector of indices?
If I use the output array as a vector<vector<int>>, what do I get in it?
Can I print the results?
Answering your specific questions:
An integer vector of indices (0-based) contains, for each convex hull point, its index in InputArray points, i.e. it tells you which of the input points form the convex hull.
You can use either a vector of integers or a vector of points. In the first case you get indices that let you access the actual points in the input array; in the second case you can read the coordinates of the hull points directly from the output array.
This question is not entirely clear, since you don't say where you want to print the results. Assuming you want to show them in an image, you can draw the convex hull with polylines. Specifically (see cv::polylines for more information):
void cv::polylines(InputOutputArray img,
                   InputArrayOfArrays pts,
                   bool isClosed,
                   const Scalar& color,
                   int thickness = 1,
                   int lineType = LINE_8,
                   int shift = 0)
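For instance, a minimal sketch (assuming points is a std::vector<cv::Point>, hull a std::vector<int> filled by cv::convexHull, and img the image to draw on):
std::vector<cv::Point> hullPoints;
for (size_t i = 0; i < hull.size(); i++)
    hullPoints.push_back(points[hull[i]]);              // indices -> actual points

cv::polylines(img, hullPoints, true, cv::Scalar(0, 255, 0), 2);   // closed outline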
To print the coordinates of the points in the console, assuming that the output vector is a vector of integers, thus indices:
size_t hull_size = hull.size();
for (size_t i = 0; i < hull_size; i++)
{
    std::cout << points[hull[i]] << std::endl;
}
How can I get the extreme points of a convex polygon as seen from a given point? I'm trying to do this using the points' angles, where the points with the smallest and largest angles are the extreme points, but when the observer is close to the points this is not valid.
This is my code:
Vec2* bigger = &points[0]; // pointer to point with bigger angle
Vec2* smaller = &points[0]; // pointer to point with smaller angle
Vec2 observer = rayCenter;
// iterate through all points of polygon
for(unsigned u = 0; u < points.size(); u++)
{
    Vec2 distance = observer - points[u];
    if(distance.angle() < (observer - *smaller).angle())
        smaller = &points[u];
    if(distance.angle() > (observer - *bigger).angle())
        bigger = &points[u];
}
The result:
Where the blue lines mark the excluded points and the yellow ones the desired points.
Is there a better way to solve this?
Sorry for my English.
Polygon vertex A is extreme for the given location of the observer, iff all other points of the polygon lie on the same side of the observer-to-A line (or, possibly, lie on that line).
If the polygon is known to be convex, then the criterion is greatly simplified. There's no need to analyze all other points of the polygon. The extreme point can be easily recognized by analyzing the locations of its two immediate neighbors.
If A is our candidate point and P and N are its adjacent points in the polygon (previous and next), then A is an extreme point iff both P and N lie on the same side of observer-to-A line.
Vec2 vec_of_A = A - observer; // observer-to-A vector
Vec2 vec_of_P = P - observer;
Vec2 vec_of_N = N - observer;
// 2D cross products: their signs tell on which side of the observer-to-A line P and N lie
float productP = vec_of_A.x * vec_of_P.y - vec_of_A.y * vec_of_P.x;
float productN = vec_of_A.x * vec_of_N.y - vec_of_A.y * vec_of_N.x;
if (sign(productP) == sign(productN))
    // A is an extreme point
else
    // A is not an extreme point
Some extra decision making will be necessary if P and/or N lie exactly on the observer-to-A line (depends on what point you consider extreme in such cases).
Compute a new convex hull using the points from the existing convex hull plus the observer point. In the new convex hull, the points that are adjacent to the observer point are your "extreme" points.
Here is a matlab implementation. Below is a sample output where the blue point is the observer point and the green polygon is the convex hull of the red points. The implementation returns the points (0,0) and (2,0).
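If you prefer to stay in OpenCV rather than Matlab, a rough sketch of the same idea (variable names are made up; it assumes the observer lies outside the polygon, otherwise it won't become a hull vertex):
std::vector<cv::Point2f> pts = polygonHull;          // existing convex hull vertices
pts.push_back(observer);                             // the observer point goes last
std::vector<int> hullIdx;
cv::convexHull(pts, hullIdx, false, false);          // indices into pts

int last = (int)pts.size() - 1;                      // index of the observer in pts
for (size_t i = 0; i < hullIdx.size(); i++)
{
    if (hullIdx[i] == last)
    {
        cv::Point2f extreme1 = pts[hullIdx[(i + 1) % hullIdx.size()]];
        cv::Point2f extreme2 = pts[hullIdx[(i + hullIdx.size() - 1) % hullIdx.size()]];
        // extreme1 and extreme2 are the two "extreme" points seen from the observer
        break;
    }
}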
You shouldn't compare angles directly as there is a 360° wraparound.
Prefer to test "this point is more to the left" or "to the right" by computing the signed area of the triangle formed by the observer and two points.
I wanted to detect an ellipse in an image. Since I was learning Mathematica at that time, I asked a question here and got a satisfactory result from the answer below, which used the RANSAC algorithm to detect the ellipse.
However, recently I needed to port it to OpenCV, but some functions only exist in Mathematica. One of the key functions is "GradientOrientationFilter".
Since a general ellipse has five parameters, I need to sample five points to determine one. However, more sampling points mean a lower chance of a good guess, and hence a lower success rate for the ellipse detection. Therefore, the Mathematica answer adds another condition: the gradient of the image must be parallel to the gradient of the ellipse equation. With that approach we only need three points to determine an ellipse using least squares, and the result is quite good.
However, when I try to find the image gradient using the Sobel or Scharr operator in OpenCV, it is not good enough and always leads to a bad result.
How to calculate the gradient or the tangent of an image accurately? Thanks!
Result with gradient, three points
Result without gradient, five points
----------updated----------
I did some edge detection and a median blur beforehand and drew the result on the edge image. My original test image is like this:
In general, my final goal is to detect the ellipse in a scene or on an object. Something like this:
That's why I choose to use RANSAC to fit the ellipse from edge points.
As for your final goal, you may try
findContours and fitEllipse in OpenCV
The pseudo code will be
1) some image processing
2) find all contours
3) fit each contour with fitEllipse
Here is part of the code I used before:
[... image process ....you get a bwimage ]
vector<vector<Point> > contours;
findContours(bwimage, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
for(size_t i = 0; i < contours.size(); i++)
{
    size_t count = contours[i].size();
    Mat pointsf;
    Mat(contours[i]).convertTo(pointsf, CV_32F);
    RotatedRect box = fitEllipse(pointsf);
    /* You can put some limitation about size and aspect ratio here */
    if( box.size.width > 20 &&
        box.size.height > 20 &&
        box.size.width < 80 &&
        box.size.height < 80 )
    {
        if( MAX(box.size.width, box.size.height) > MIN(box.size.width, box.size.height)*30 )
            continue;
        //drawContours(SrcImage, contours, (int)i, Scalar::all(255), 1, 8);
        ellipse(SrcImage, box, Scalar(0,0,255), 1, CV_AA);
        ellipse(SrcImage, box.center, box.size*0.5f, box.angle, 0, 360, Scalar(200,255,255), 1, CV_AA);
    }
}
imshow("result", SrcImage);
If you focus on ellipses (and no other shape), you can treat the pixel values of the ellipse as the mass of the points.
Then you can calculate the moments of inertia Ixx, Iyy, Ixy to find the angle theta that rotates a general ellipse back to the canonical form (X-Xc)^2/a^2 + (Y-Yc)^2/b^2 = 1.
Then you can find out Xc and Yc by the center of mass.
Then you can find out a and b by min X and min Y.
--------------- update -----------
This method can apply to filled ellipse too.
More than one ellipse on a single image will fail unless you segment them first.
Let me explain more,
I will use C to represent cos(theta) and S to represent sin(theta)
After rotation to canonical form, the new X is [eq0] X=xC-yS and Y is Y=xS+yC where x and y are original positions.
The rotation will give you min IYY.
[eq1]
IYY = Sum(m*Y*Y) = Sum{m*(xS+yC)^2} = Sum{m*(x^2*S^2 + y^2*C^2 + 2*x*y*S*C)} = Ixx*S^2 + Iyy*C^2 + Ixy*S*C, where Ixx = Sum(m*x^2), Iyy = Sum(m*y^2) and Ixy = 2*Sum(m*x*y).
For min IYY, d(IYY)/d(theta) = 0 that is
2IxxSC - 2IyySC + Ixy(CC-SS) = 0
2(Ixx-Iyy)/Ixy = (SS-CC)/SC = S/C - C/S = Z - 1/Z, where Z = S/C = tan(theta)
When programming, the LHS is just a number; call it N. Then
Z^2 - N*Z - 1 = 0
So there are two roots Z1 and Z2, hence two values of theta; one minimizes IYY and the other maximizes it.
----------- pseudo code --------
Compute Ixx, Iyy, Ixy for a hollow or filled ellipse.
Compute theta1=atan(Z1) and theta2=atan(Z2)
Put these two thetas into eq1 and see which gives the smaller IYY; that one is your theta.
Go back to the non-zero pixels and transform them to the new X and Y using the theta you found.
Find the center of mass Xc, Yc, and min X and min Y, by sorting.
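A rough C++ sketch of those steps (my own illustration, not the answer's original code; it assumes bw is a single-channel 8-bit image containing exactly one ellipse and that Ixy is non-zero, i.e. the ellipse is not already axis-aligned; the moments are taken about the center of mass so the angle does not depend on where the ellipse sits in the image):
double m = 0, mx = 0, my = 0;
for (int y = 0; y < bw.rows; y++)
    for (int x = 0; x < bw.cols; x++)
    {
        double w = bw.at<uchar>(y, x);              // pixel value used as mass
        m += w; mx += w * x; my += w * y;
    }
double Xc = mx / m, Yc = my / m;                    // center of mass

double Ixx = 0, Iyy = 0, Ixy = 0;                   // second moments about the center
for (int y = 0; y < bw.rows; y++)
    for (int x = 0; x < bw.cols; x++)
    {
        double w = bw.at<uchar>(y, x);
        double dx = x - Xc, dy = y - Yc;
        Ixx += w * dx * dx;
        Iyy += w * dy * dy;
        Ixy += 2.0 * w * dx * dy;                   // factor 2 matches the eq1 convention
    }

double N  = 2.0 * (Ixx - Iyy) / Ixy;                // the "LHS" number
double Z1 = (N + std::sqrt(N * N + 4.0)) / 2.0;     // roots of Z^2 - N*Z - 1 = 0
double Z2 = (N - std::sqrt(N * N + 4.0)) / 2.0;
double theta1 = std::atan(Z1), theta2 = std::atan(Z2);
// evaluate eq1 with both angles and keep the one that gives the smaller IYY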
-------------- by hand -----------
If you need the original equation of the ellipse,
just substitute [eq0] into the canonical form.
You're using terms in an unusual way.
Normally for images, the term "gradient" is interpreted as if the image is a mathematical function f(x,y). This gives us a (df/dx, df/dy) vector in each point.
Yet you're looking at the image as if it were a function y = f(x), with the gradient being df(x)/dx.
Now, if you look at your image, you'll see that the two interpretations are definitely related. Your ellipse is drawn as a set of contrasting pixels, and as a result there are two sharp gradients in the image - the inner and outer. These of course correspond to the two normal vectors, and therefore are in opposite directions.
Also note that your image is made of pixels, so the gradient is pixelated too. The way your ellipse is drawn, with a single-pixel width, means that your local gradient takes on only values that are a multiple of 45 degrees:
▄▄ ▄▀ ▌ ▀▄
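As a practical note (my own addition, not part of the answer): blurring the drawn ellipse before taking Sobel derivatives spreads the one-pixel-wide edge over several pixels, so the estimated orientation is no longer locked to 45° steps. A minimal sketch, assuming gray is the 8-bit input image:
cv::Mat blurred, dx, dy, magnitude, angle;
cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
cv::Sobel(blurred, dx, CV_32F, 1, 0);
cv::Sobel(blurred, dy, CV_32F, 0, 1);
cv::cartToPolar(dx, dy, magnitude, angle, true);    // per-pixel gradient direction in degrees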