From an 8UC1 edge image obtained from the Canny operator, I want to go through all white pixels and find their 8-neighbourhood.
As a first step, I apply
findNonZero(edgesFromCanny, nonZeroCoordinates);
to obtain only the white pixels and reduce computation time.
The coordinates in nonZeroCoordinates are ordered row-wise, so that p(x=100, y=1) and p(x=101, y=1) are adjacent in the nonZeroCoordinates Mat (if both are edges), while p(x=100, y=1) and p(x=100, y=2) (column-wise neighbours) can be far apart.
How can I (fast) retrieve the 8-neighbourhood of p(x=100, y=1), i.e. those of its neighbours that are edges too?
I found a solution using kNN, but I am not sure whether it takes too much computation or whether there is a simpler one:
vector<Point2f> edgesVec; //Insert all 2D points to this vector
flann::KDTreeIndexParams indexParams;
flann::Index kdtree(Mat(edgesVec).reshape(1), indexParams);
vector<float> query;
query.push_back(i); // x coordinate of the point whose neighbours we need
query.push_back(j); // y coordinate of that point
vector<int> indices;
vector<float> dists;
kdtree.radiusSearch(query, indices, dists, 1.5, 8);
Related
I have a set of discrete points and using them, I performed Delaunay's triangulation.
I want to calculate all the edge lengths from a vertex to the neighboring vertices.
How can I do/code this in C++?
I haven't tested the code that you posted, but the problem seems trivial.
In your main function after you draw all the triangles/points, get the list of all the triangles from subdiv with:
vector<Vec6f> triangleList;
subdiv.getTriangleList(triangleList);
(just like in the draw_delaunay(...) function)
Now you just iterate the triangles and compare each point of each triangle to your vertex.
If it's the same point as yours, then you calculate the lengths of edges with two other points of the triangle.
Length here = L2 norm of the vector v = point - your_vertex = Sqrt(v.x^2 + v.y^2).
Some edges may be visited twice (once per adjacent triangle); if you want to avoid duplicates, just add all the points to a set and calculate the norms afterwards.
I have data sets 1 and 2, both containing 2D data.
For example,
Data1 :
(x1, y1), (x2, y2), (x3, y3) .... (xn, yn)
Data2 : (x1', y1'), (x2', y2'), .... (xm', ym')
I'd like to compare them using histograms and the Earth Mover's Distance (EMD) if possible.
Because I have 2D data, the data should be placed on a 2D map, and the height of the histogram at each position should be the frequency of the data, so I guess it should be a 3D histogram. I succeeded in creating an example that builds and compares histograms of 1D data, but I failed when trying to change it to 2D data. How does it work?
For example,
calcHist(&greyImg, 1, channel_numbers, Mat(), histogram1, 1, &number_bins, &channel_ranges);
This code turns the image's grayscale intensity (1D data) into a histogram. But I could not change it to handle 2D data.
My idea is this:
I create cv::Mat Data1Mat, Data2Mat; (the Mat size is set to the maximum value of x and y)
Then, push Data1's x values to Data1Mat's first channel and its y values to the second channel (same for Data2 and Data2Mat).
For example, for (x1, y1), set
Data1Mat.at(x1, y1)[0] = x1; Data1Mat.at(x1, y1)[1] = y1;
like this. Then create histograms of them and compare. Is my thinking correct?
I think it is more correct to say: histogram of 1D data, or histogram of 2D data.
You need a histogram of 2D data.
A 1D histogram counts how many scalar values fall into each bin interval.
A 2D histogram divides the plane into regions and counts how many 2D points fall into each region.
Here is a computed H-S 2D histogram for an image: Calculate HSV histogram of a coloured image is it different from H-S histogram?
You have nearly the same problem, but use your x instead of H, and your y instead of S.
C++:
void convexHull(InputArray points, OutputArray hull,
                bool clockwise = false, bool returnPoints = true);
The description given on OutputArray hull is as follows:
hull – Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
So what is an integer vector of indices?
If I use the output array as a vector<vector<int>>, what do I get in it?
Can I print the results?
Answering to your specific questions:
An integer vector of indices (0-based) holds the indices that indicate which points from InputArray points belong to the set of convex hull points.
You can use either a vector of integers or a vector of points. In the first case, you get indices that let you access the actual points in the input array; in the second case, you can read the coordinates of the points directly from the output array.
This question is not entirely clear as you don't mention where you want to print the results. Assuming that you want to show it in an image, you can draw the convex hull with polylines. Specifically (look at cv::polylines for more information):
void cv::polylines (
InputOutputArray img,
InputArrayOfArrays pts,
bool isClosed,
const Scalar & color,
int thickness = 1,
int lineType = LINE_8,
int shift = 0
)
To print the coordinates of the points in the console, assuming that the output vector is a vector of integers, thus indices:
size_t hull_size = hull.size();
for (size_t i = 0; i < hull_size; i++)
{
std::cout << points[hull[i]] << std::endl;
}
I have a 3 channel Mat image, type is CV_8UC3.
I want to compare, in a loop, the intensity value of a pixel with that of its neighbours and then set 0 or 1 depending on whether the neighbour is greater.
I can get the intensity by calling Img.at<Vec3b>(x, y).
But my question is: how can I compare two Vec3b?
Should I compare pixels value for every channel (BGR or Vec3b[0], Vec3b[1] and Vec3b[2]), and then merge the three channels results into a single Mat object?
Me again :)
If you want to compare (greater or less) two RGB values you need to project the 3-dimensional RGB space onto a plane or axis.
Of course, there are many possibilities to do this, but an easy way would be to use the HSV color space. The hue (H), however, is not appropriate as a linear order function because it is circular (i.e. the value 1.0 is identical with 0.0, so you cannot decide if 0.5 > 0.0 or 0.5 < 0.0). However, the saturation (S) or the value (V) are appropriate projection functions for your purpose:
If you want to have colored pixels "larger" than monochrome pixels, you will prefer S.
If you want to have lighter pixels larger than darker pixels, you will probably prefer V.
Also any combination of S and V would be a valid projection function, e.g. S+V.
As far as I understand, you want a measure to calculate the distance/similarity between two Vec3b pixels. This reduces to the general problem of finding the distance between two vectors in an n-dimensional space.
One of the famous measures (and I think this is what you're asking for), is the Euclidean distance.
If you are using Opencv then you can simply use:
cv::Vec3b a(1, 1, 1);
cv::Vec3b b(5, 5, 5);
double dist = cv::norm(a, b, cv::NORM_L2);
You can refer to this for reading about cv::norm and its options.
Edit: If you are doing this to measure color similarity, it's recommended to use the LAB color space as it's proved that Euclidean distance in LAB space is a good approximation for human perception of colors.
Edit 2: I see what you mean, for this you can get the magnitude of each vector and then compare them, something like this:
double a_magnitude = cv::norm(a, cv::NORM_L2);
double b_magnitude = cv::norm(b, cv::NORM_L2);
if (a_magnitude > b_magnitude) {
    // do something
} else {
    // do something else
}
I found contours and a hull using OpenCV methods (C++) on an image, and I want to draw the defect points. I found the defect points by calling
vector<Vec4i> defects;
convexityDefects(contours, hull, defects);
Each defect is a 4-element integer vector. Which element is the x coordinate? I want to get the defect points' coordinates, so that I can draw the start points of the black lines on the hand.
You'll want something like: Point p = contours[defects[d][2]]
I'll quote just the meaningful part of the documentation:
[...] 4-element integer vector: (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect [...]
So the returned values represent indexes in the original contour.
defects[d] represents the d-th defect. You take its 3rd element, farthest_pt_index, which is at defects[d][2]. This integer is the index of the point in the original contour that is farthest from the hull, i.e. the lower arrow head on the drawing. Its coordinates:
Point p = contours[defects[d][2]]
int x = p.x
int y = p.y
And if you want to know how far this point is from the hull, divide the 4th element by 256: float p_distance = defects[d][3] / 256.0
The doc on convexityDefects():
convexityDefects – The output vector of convexity defects. In C++ and the new Python/Java interface each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
So each convexity defect consists of several points, from start_index to end_index in the contour parameter of convexityDefects().