I have an image in which I need to detect a shape. To do this, I used a template from which I extracted points, and then tried to fit these points to my image for the best match. When I find the best match, I need to draw a curve around these points.
The problem is that the extracted points are not in order (you can see this in the picture).
How can I sort the points so that the curve is smooth, with no "jumping"?
I tried stable_sort(), but without success.
Using stable_sort():
stable_sort(points.begin(), points.end(), Ordering_Functor_y());
stable_sort(points.begin(), points.end(), Ordering_Functor_x());
For drawing I used this function:
polylines(result_image, &pts, &npts, 1, true, Scalar(0, 255, 0), 3, CV_AA);
Any idea how to solve this problem? Thank you.
Edit:
Here is the code for getting points from template
for (int model_row = 0; model_row < model.rows; model_row++)
{
    uchar *curr_point = model.ptr<uchar>(model_row);
    for (int model_column = 0; model_column < model.cols; model_column++)
    {
        if (*curr_point > 0)
        {
            // Binding a non-const reference to a temporary (Point& new_point = Point(...))
            // does not compile in standard C++; push the point directly instead.
            model_points.push_back(Point(model_column, model_row));
        }
        curr_point += image_channels;
    }
}
In this part of the code you can see where the problem with the point order comes from. Is there a better way to save the points in the correct order, so that I won't have problems drawing the contour?
Your current approach is to sort by either the x or the y values, and this is not the proper ordering for drawing the contour.
One approach could be the following:
Calculate the center of mass of the detected object.
Place a polar coordinate system at the center of mass.
Calculate the direction from the center to each point.
Sort the points by that direction (angle).
It will work better than your current sorting, but might not be perfect.
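A minimal sketch of this angular sort (just an illustration, not tested against your data; it assumes the points, e.g. your model_points, roughly trace a single closed contour that is star-shaped around its centroid):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Sort contour points by their polar angle around the center of mass.
void sortByAngle(std::vector<cv::Point>& points)
{
    // Center of mass of the point set
    cv::Point2f center(0.f, 0.f);
    for (const cv::Point& p : points)
        center += cv::Point2f(p);
    center *= 1.f / points.size();

    // Sort by the direction (angle) from the center to each point
    std::sort(points.begin(), points.end(),
        [&center](const cv::Point& a, const cv::Point& b) {
            return std::atan2(a.y - center.y, a.x - center.x)
                 < std::atan2(b.y - center.y, b.x - center.x);
        });
}

After this sort, polylines() connects angular neighbours, which removes the jumps for convex and mildly concave shapes.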
A more involved approach is to determine the shortest path going through all the detected points. If the points are spread evenly around the contour this approach will locate the contour. The search term for this approach is the travelling salesman problem.
I think you are searching for the Concave Hull algorithm.
See here:
http://ubicomp.algoritmi.uminho.pt/local/concavehull.html
Java implementation: here
My idea was to implement simple square detection using OpenCV & C++ (objc++). I've already extracted the biggest areas of the image, as you can see below (the colored ones), but now I'd like to extract the corner points (like TopLeft, TopRight, BottomLeft, BottomRight) of all the areas, so that I can afterwards check whether the distance between the 4 corners is similar for each of them, or whether the angle between the lines is nearly 45°.
See the images I was talking about:
However, I got to the point where I've tried to extract the areas' corner points, to get something like this afterwards:
This was my first idea for how to get the 4 corner points (see the steps below):
1. Calculate the contour's center by:

double avgx = 0, avgy = 0;
for (int i = 0; i < contourPoints.size(); i++) {
    avgx += contourPoints[i].x;
    avgy += contourPoints[i].y;
}
avgx /= contourPoints.size(); // centerx
avgy /= contourPoints.size(); // centery
2. Loop through all contour points to get the points with the highest distance to the center --> probably the corners, if the contour is a square/rectangle:
std::vector<double> distvector;
for (int i = 0; i < contourPoints.size(); i++) {
    double dx = std::abs(avgx - contourPoints[i].x);
    double dy = std::abs(avgy - contourPoints[i].y);
    double dist = std::sqrt(dx*dx + dy*dy);
    distvector.push_back(dist);
}
// Sort distvector in descending order and take the 4 points with the highest
// distance to the center -> hopefully the corners.
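A sketch of that final sorting step (continuing the snippet above and reusing its contourPoints and distvector; std::sort needs <algorithm>):

// Pair each distance with its point index so the points survive the sort.
std::vector<std::pair<double, int>> distIdx;
for (int i = 0; i < (int)contourPoints.size(); i++)
    distIdx.push_back(std::make_pair(distvector[i], i));

// Sort descending by distance; the first 4 entries are the corner candidates.
std::sort(distIdx.begin(), distIdx.end(),
    [](const std::pair<double, int>& a, const std::pair<double, int>& b) {
        return a.first > b.first;
    });

std::vector<cv::Point> corners;
for (int k = 0; k < 4 && k < (int)distIdx.size(); k++)
    corners.push_back(contourPoints[distIdx[k].second]);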
This procedure was my idea, but I'm pretty sure there must be a better way to detect squares and extract their corner points using just the given contour coordinates.
So any help on how to improve my code and get a better & more efficient detection would be very much appreciated. Thanks a million in advance, Tempi.
To give some background to this question, I'm creating a game that needs to know whether the 'Orbit' of an object is within tolerance to another Orbit. To show this, I plot a Torus-shape with a given radius (the tolerance) using the Target Orbit, and now I need to check if the ellipse is within that torus.
I'm getting lost in the equations on Math/Stack exchange so asking for a more specific solution. For clarification, here's an image of the game with the Torus and an Orbit (the red line). Quite simply, I want to check if that red orbit is within that Torus shape.
What I believe I need to do, is plot four points in World-Space on one of those orbits (easy enough to do). I then need to calculate the shortest distance between that point, and the other orbits' ellipse. This is the difficult part. There are several examples out there of finding the shortest distance of a point to an ellipse, but all are 2D and quite difficult to follow.
If that distance is then less than the tolerance for all four points, then I think that equates to the orbit being inside the target torus.
For simplicity, the origin of all of these orbits is always at the world Origin (0, 0, 0) - and my coordinate system is Z-Up. Each orbit has a series of parameters that defines it (Orbital Elements).
Here is a simple approach:
Sample each orbit into a set of N points.
Let the points from the first orbit be A and those from the second orbit be B.
const int N=36;
float A[N][3],B[N][3];
find the 2 closest points A[i], B[j]
so that d = |A[i] - B[j]| is minimal. If d is less than or equal to your margin/threshold, then the orbits are too close to each other.
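A brute-force sketch of that closest-pair search (assuming A and B were filled by sampling the two orbits as above):

#include <cmath>
#include <cfloat>

// O(N^2) closest distance between the two sampled orbits.
float closestDistance(const float A[][3], const float B[][3], int N)
{
    float best = FLT_MAX;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
        {
            const float dx = A[i][0] - B[j][0];
            const float dy = A[i][1] - B[j][1];
            const float dz = A[i][2] - B[j][2];
            const float d  = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d < best) best = d;
        }
    return best; // compare against the margin/threshold
}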
speed vs. accuracy
Unless you are using some advanced method for #2, its computation will be O(N^2), which is a bit scary. The bigger the N, the better the accuracy of the result, but it takes a lot more time to compute. There are ways to remedy both. For example:
first sample with a small N
when the closest points are found, sample both orbits again, but only near those points in question (with a higher N)
you can recursively increase the accuracy by looping #2 until you have the desired precision
test d to see if the ellipses are too close to each other
I think I may have a new solution.
Plot the four points on the current orbit (the ellipse).
Project those points onto the plane of the target orbit (the torus).
Using the Target Orbit inclination as the normal of a plane, calculate the angle between each (normalized) point and the argument of periapse on the target orbit.
Use this angle as the mean anomaly, and compute the equivalent eccentric anomaly.
Use those eccentric anomalies to plot the four points on the target orbit - which should be the nearest points to the other orbit.
Check the distance between those points.
The difficulty here comes from computing the angle and converting it to the anomaly on the other orbit. This should be more accurate and faster than a recursive function though. Will update when I've tried this.
EDIT:
Yep, this works!
// The Four Locations we will use for the checks
TArray<FVector> CurrentOrbit_CheckPositions;
TArray<FVector> TargetOrbit_ProjectedPositions;
CurrentOrbit_CheckPositions.SetNum(4);
TargetOrbit_ProjectedPositions.SetNum(4);
// We first work out the plane of the target orbit.
const FVector Target_LANVector = FVector::ForwardVector.RotateAngleAxis(TargetOrbit.LongitudeAscendingNode, FVector::UpVector); // Vector pointing to Longitude of Ascending Node
const FVector Target_INCVector = FVector::UpVector.RotateAngleAxis(TargetOrbit.Inclination, Target_LANVector); // Vector pointing up the inclination axis (orbit normal)
const FVector Target_AOPVector = Target_LANVector.RotateAngleAxis(TargetOrbit.ArgumentOfPeriapsis, Target_INCVector); // Vector pointing towards the periapse (closest approach)
// Geometric plane of the orbit, using the inclination vector as the normal.
const FPlane ProjectionPlane = FPlane(Target_INCVector, 0.f); // Plane of the orbit. We only need the 'normal', and the plane origin is the Earths core (periapse focal point)
// Plot four points on the current orbit, using an equally-divided eccentric anomaly.
const float ECCAngle = PI / 2.f;
for (int32 i = 0; i < 4; i++)
{
    // Plot the point, then project it onto the plane
    CurrentOrbit_CheckPositions[i] = PosFromEccAnomaly(i * ECCAngle, CurrentOrbit);
    CurrentOrbit_CheckPositions[i] = FVector::PointPlaneProject(CurrentOrbit_CheckPositions[i], ProjectionPlane);

    // TODO: Distance from the plane is the 'Depth'. If the Depth is > Acceptance Radius, we are outside the torus and can early-out here

    // Normalize the point to find its direction in world-space (the origin in our case is always 0,0,0)
    const FVector PositionDirectionWS = CurrentOrbit_CheckPositions[i].GetSafeNormal();

    // Using the Inclination as the comparison plane - find the angle between the direction of this vector, and the Argument of Periapse vector of the Target orbit
    // TODO: we can probably compute this angle once, using the Periapse vectors from each orbit, and just multiply it by the index 'i'
    float Angle = FMath::Acos(FVector::DotProduct(PositionDirectionWS, Target_AOPVector));

    // Compute the 'Sign' of the Angle (-180 to 180), using the Cross Product
    const FVector Cross = FVector::CrossProduct(PositionDirectionWS, Target_AOPVector);
    if (FVector::DotProduct(Cross, Target_INCVector) > 0)
    {
        Angle = -Angle;
    }

    // Using the angle directly will give us the position at the eccentric anomaly. We want to take advantage of the Mean Anomaly, and use it as the ecc anomaly
    // We can use this to plot a point on the target orbit, as if it was the eccentric anomaly.
    Angle = Angle - TargetOrbit.Eccentricity * FMathD::Sin(Angle);
    TargetOrbit_ProjectedPositions[i] = PosFromEccAnomaly(Angle, TargetOrbit);
}
I hope the comments describe how this works. Finally solved after several months of head-scratching. Thanks all!
I have a dataset of 500 cv::Point.
For each point, I need to determine whether it is contained in a ROI modeled by a concave polygon.
This polygon can be quite large (most of the time it fits in a bounding box of 100x400 pixels, but it can be larger).
For that number of points and that size of polygon, what is the most efficient way to determine whether a point is in the polygon?
using the OpenCV pointPolygonTest function?
building a mask with drawContours and checking whether the point is white or black in the mask?
another solution? (I really want to be accurate, so convex polygons and bounding boxes are excluded.)
In general, to be both accurate and efficient, I'd go with a two-step process.
First, a bounding box on the polygon. It's a quick and simple matter to see which points are not inside the box. With that, you can discard several points right off the bat.
Secondly, pointPolygonTest. It's a relatively costly operation, but the first step guarantees that you will only perform it for those points that need better accuracy.
This way, you maintain accuracy but speed up the process. The only exception is when most points fall inside the bounding box: in that case the first step will almost always pass, so it won't optimise the algorithm and will actually make it slightly slower.
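A minimal sketch of that two-step test (just an illustration, assuming the ROI contour and the candidate points are available as std::vector<cv::Point>):

#include <opencv2/opencv.hpp>
#include <vector>

// Cheap bounding-box rejection first; the exact polygon test only runs for survivors.
std::vector<bool> insideRoi(const std::vector<cv::Point>& polygon,
                            const std::vector<cv::Point>& points)
{
    const cv::Rect box = cv::boundingRect(polygon);
    std::vector<bool> inside(points.size(), false);

    for (size_t i = 0; i < points.size(); i++)
    {
        if (!box.contains(points[i]))
            continue; // discarded by the quick test

        // Relatively costly exact test; >= 0 treats points on the edge as inside
        inside[i] = cv::pointPolygonTest(polygon, points[i], false) >= 0;
    }
    return inside;
}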
Quite some time ago I had exactly the same problem and used the masking approach (the second option in your question). I tested datasets containing millions of points this way and found this solution very effective.
This is faster than pointPolygonTest with and without a bounding box!
Scalar color(0, 255, 0);
drawContours(image, contours, k, color, CV_FILLED, 8); // k is the index of the contour in the array of arrays 'contours'

for (int y = 0; y < image.rows; y++) {
    const uchar *ptr = image.ptr(y);
    for (int x = 0; x < image.cols; x++) {
        const uchar *pixel = ptr;
        if ((int)pixel[1] == 255) {
            // point (x, y) is inside the contour
        }
        ptr += 3;
    }
}
It uses the color to check if the point is inside the contour.
For faster matrix access than Mat::at() we're using pointer access.
In my case this was up to 20 times faster than the pointPolygonTest.
I'm trying to find the rigid transformation matrix between two 3D point clouds.
The two point clouds are these:
keypoints from the kinect (kinect_keypoints).
keypoints from a 3D object (box) (object_keypoints).
I have tried two options:
[1]. Implementation of the algorithm to find rigid transformation.
**1. Calculate the centroid of each point cloud.**
**2. Center the points according to the centroid.**
**3. Calculate the covariance matrix.**
**4. Calculate the rotation matrix using the SVD decomposition of the covariance matrix:**
cvSVD( &_H, _W, _U, _V, CV_SVD_U_T );
cvMatMul( _V, _U, &_R );
float _Tsrc[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
-_gc_src.x,-_gc_src.y,-_gc_src.z,1.f }; // 1: src points to the origin
float _S[16] = { _scale,0.f,0.f,0.f,
0.f,_scale,0.f,0.f,
0.f,0.f,_scale,0.f,
0.f,0.f,0.f,1.f }; // 2: scale the src points
float _R_src_to_dst[16] = { _Rdata[0],_Rdata[3],_Rdata[6],0.f,
_Rdata[1],_Rdata[4],_Rdata[7],0.f,
_Rdata[2],_Rdata[5],_Rdata[8],0.f,
0.f,0.f,0.f,1.f }; // 3: rotate the src points
float _Tdst[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
_gc_dst.x,_gc_dst.y,_gc_dst.z,1.f }; // 4: from src to dst
// _Tdst * _R_src_to_dst * _S * _Tsrc
mul_transform_mat( _S, _Tsrc, Rt );
mul_transform_mat( _R_src_to_dst, Rt, Rt );
mul_transform_mat( _Tdst, Rt, Rt );
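For comparison, here is a compact sketch of steps 1-4 using the modern OpenCV C++ API (my own illustration of the standard Kabsch-style fit, assuming src[i] and dst[i] are matched point pairs):

#include <opencv2/opencv.hpp>
#include <vector>

// Rigid fit: finds R (3x3) and t (3x1) such that dst ~ R * src + t.
void rigidFit(const std::vector<cv::Point3f>& src,
              const std::vector<cv::Point3f>& dst,
              cv::Mat& R, cv::Mat& t)
{
    // 1. Centroid of each point cloud
    cv::Point3f cs(0, 0, 0), cd(0, 0, 0);
    for (size_t i = 0; i < src.size(); i++) { cs += src[i]; cd += dst[i]; }
    cs *= 1.f / (float)src.size();
    cd *= 1.f / (float)dst.size();

    // 2-3. Center the points and accumulate the covariance matrix H
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t i = 0; i < src.size(); i++)
    {
        cv::Mat a = (cv::Mat_<double>(3, 1) << src[i].x - cs.x, src[i].y - cs.y, src[i].z - cs.z);
        cv::Mat b = (cv::Mat_<double>(3, 1) << dst[i].x - cd.x, dst[i].y - cd.y, dst[i].z - cd.z);
        H += a * b.t();
    }

    // 4. SVD of H; R = V * D * U^T, where D guards against a reflection
    cv::SVD svd(H);
    cv::Mat D = cv::Mat::eye(3, 3, CV_64F);
    if (cv::determinant(svd.vt.t() * svd.u.t()) < 0)
        D.at<double>(2, 2) = -1.0;
    R = svd.vt.t() * D * svd.u.t();

    // The translation maps the src centroid onto the dst centroid
    t = (cv::Mat_<double>(3, 1) << cd.x, cd.y, cd.z)
      - R * (cv::Mat_<double>(3, 1) << cs.x, cs.y, cs.z);
}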
[2]. Use estimateAffine3D from OpenCV.

double _poseTrans[12]; // estimateAffine3D writes a 3x4 CV_64F matrix, so the buffer must hold doubles, not floats
std::vector<cv::Point3f> first, second; // first --> kinect_keypoints, second --> object_keypoints
std::vector<uchar> inliers;
cv::Mat aff(3, 4, CV_64F, _poseTrans);
cv::estimateAffine3D( first, second, aff, inliers );
float _poseTrans2[16];
for (int i = 0; i < 12; ++i)
{
    _poseTrans2[i] = (float)_poseTrans[i];
}
_poseTrans2[12] = 0.f;
_poseTrans2[13] = 0.f;
_poseTrans2[14] = 0.f;
_poseTrans2[15] = 1.f;
The problem with the first one is that the transformation is not correct, and with the second one, if I multiply the kinect point cloud by the resulting matrix, some values are infinite.
Is there any solution for either of these options? Or an alternative one, apart from PCL?
Thank you in advance.
EDIT: This is an old post, but an answer might be useful to someone ...
Your first approach can work in very specific cases (ellipsoid point clouds or very elongated shapes), but it is not appropriate for point clouds acquired by the kinect. As for your second approach, I am not familiar with the OpenCV function estimateAffine3D, but I suspect it assumes the two input point clouds correspond to the same physical points, which is not the case if you used a kinect point cloud (which contains noisy measurements) and points from an ideal 3D model (which are perfect).
You mentioned that you are aware of the Point Cloud Library (PCL) and do not want to use it. If possible, I think you might want to reconsider this, because PCL is much more appropriate than OpenCV for what you want to do (check the tutorial list, one of them covers exactly what you want to do: Aligning object templates to a point cloud).
However, here are some alternative solutions to your problem:
If your two point clouds correspond exactly to the same physical points, your second approach should work, but you can also check out Absolute Orientation (e.g. Matlab implementation)
If your two point clouds do not correspond to the same physical points, you actually want to register (or align) them and you can use either:
one of the many variants of the Iterative Closest Point (ICP) algorithm, if you know approximately the position of your object. Wikipedia Entry
3D feature points such as 3D SIFT, 3D SURF or NARF feature points, if you have no clue about your object's position.
Again, all these approaches are already implemented in PCL.
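To illustrate the ICP route, here is a minimal PCL sketch (an assumption-laden example: it takes two pcl::PointCloud<pcl::PointXYZ> clouds and presumes the object is already roughly positioned, since ICP only converges locally):

#include <pcl/point_types.h>
#include <pcl/registration/icp.h>

// Refines the pose of 'source' onto 'target' and returns the 4x4 rigid transform.
Eigen::Matrix4f alignIcp(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                         pcl::PointCloud<pcl::PointXYZ>::Ptr target)
{
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned); // 'aligned' is the source cloud moved onto the target

    return icp.getFinalTransformation(); // transformation taking source to target
}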
I'm trying to find the coordinates of all the shapes that HarrisCorner method marked on my image.
I have it set up so it's marking the correct corners and showing the correct results, but I can't figure out where to find the coordinates after all is said and done.
I need a list of all of the corners that are marked by this algorithm so I can find their area, center of gravity, shape, & size.
Separately I have a list of all of the pixels contained within each shape, so it would be easy for me to match the coordinates with the corresponding shape.
I'm sorry if this is a green question. I've been reading everything I can find. Thank you OpenCV pros!
im = cv.LoadImage("image.jpg")
imgray = cv.LoadImage("image.jpg", cv.CV_LOAD_IMAGE_GRAYSCALE)
cornerMap = cv.CreateMat(im.height, im.width, cv.CV_32FC1)
cv.CornerHarris(imgray,cornerMap,3)
for y in range(0, imgray.height):
    for x in range(0, imgray.width):
        harris = cv.Get2D(cornerMap, y, x)
        if harris[0] > 10e-06:
            temp = cv.Circle(im, (x, y), 2, cv.RGB(115, 0, 25))
cv.ShowImage('my window', im)
cv.SaveImage("newimage3.jpg", im)
cv.WaitKey()
The corners are the (x,y) coordinates for which your corner-ness test passes:
if harris[0] > 10e-06