I have a dataset of 500 cv::Point.
For each point, I need to determine whether it is contained in an ROI modeled by a concave polygon.
This polygon can be quite large (most of the time it fits in a bounding box of about 100x400, but it can be larger).
For that number of points and that size of polygon, what is the most efficient way to determine whether a point is in the polygon?
Using the pointPolygonTest OpenCV function?
Building a mask with drawContours and checking whether the point is white or black in the mask?
Another solution? (I really want to be accurate, so convex polygons and bounding boxes are excluded.)
In general, to be both accurate and efficient, I'd go with a two-step process.
First, a bounding box on the polygon. It's a quick and simple matter to see which points are not inside the box. With that, you can discard several points right off the bat.
Second, pointPolygonTest. It's a relatively costly operation, but the first step guarantees that you only perform it for the points that need the exact test.
This way, you maintain accuracy while speeding up the process. The only exception is when most points fall inside the bounding box: in that case the first step almost never rejects anything, so it doesn't help and actually makes the whole test slightly slower.
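A minimal sketch of that two-step filter (the function and variable names are just for illustration):

#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Point> pointsInsidePolygon(const std::vector<cv::Point>& polygon,
                                           const std::vector<cv::Point>& points)
{
    std::vector<cv::Point> inside;
    cv::Rect box = cv::boundingRect(polygon);           // step 1: cheap rejection test
    for (const cv::Point& p : points) {
        if (!box.contains(p))
            continue;                                    // definitely outside the polygon
        // step 2: exact test, valid for concave polygons (measureDist = false)
        if (cv::pointPolygonTest(polygon, p, false) >= 0)
            inside.push_back(p);
    }
    return inside;
}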
Quite some time ago I had exactly the same problem and used the masking approach (the second option in your question). I was testing datasets containing millions of points this way and found the solution very effective.
This is faster than pointPolygonTest with and without a bounding box!
Scalar color(0, 255, 0);
drawContours(image, contours, k, color, CV_FILLED, 1); // k is the index of the contour in the array of arrays 'contours'
for (int y = 0; y < image.rows; y++) {
    const uchar *ptr = image.ptr(y);
    for (int x = 0; x < image.cols; x++) {
        const uchar *pixel = ptr;
        if ((int)pixel[1] == 255) {
            // point is inside the contour
        }
        ptr += 3; // advance to the next 3-channel pixel
    }
}
It uses the color (the green channel) to check whether a pixel is inside the contour.
For faster matrix access than Mat::at(), we use pointer arithmetic.
In my case this was up to 20 times faster than the pointPolygonTest.
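For a fixed set of query points like yours, a variant of the same idea is to draw the filled contour once into a single-channel mask and then answer each query with a single pixel lookup. A sketch, assuming image, contours, k and your vector of points are available as above:

cv::Mat mask = cv::Mat::zeros(image.size(), CV_8UC1);
cv::drawContours(mask, contours, k, cv::Scalar(255), cv::FILLED);
for (const cv::Point& p : points) {
    if (mask.at<uchar>(p) == 255) {
        // point is inside the contour
    }
}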
In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale and although it's a better result than bilinear or bicubic interpolation, it also takes quite a hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100
// Variables
float fracx, fracy;
int Xnew, Ynew, p, q, Calc;
int x, y, z, p1, q1, i, j;
// New image dimensions
Xnew = image->width * ctl(0) / 100;
Ynew = image->height * ctl(0) / 100;
for (y = 0; y < Ynew; y++) {          // rows of the output image
    for (x = 0; x < Xnew; x++) {      // columns of the output image
        p1 = (int)x * image->width / Xnew;    // corresponding source column
        q1 = (int)y * image->height / Ynew;   // corresponding source row
        for (z = 0; z < 3; z++) {     // channels
            Calc = 0;                 // reset the accumulator for each channel
            for (i = -3; i <= 3; i++) {
                for (j = -3; j <= 3; j++) {
                    Calc += (int)(src(p1 - i, q1 - j, z));
                } //j
            } //i
            Calc /= 49;
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexed access for every pixel. When you write src(p1-i, q1-j, z) or pset(x, y, z, Calc), how much computation is being done behind the scenes? Use pointers to the data and manipulate those directly.
Second: your algorithm is wrong. You don't want an average filter; you want to lay a grid over your source image and, for every grid cell, compute the average and put it in the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /*pointer to input data*/;
uint8_t* pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
memset(paccum = accum.data(), 0, Xnew*4);
memset(pcount = count.data(), 0, Xnew*4);
while (sr * Ynew / h == dr) {
paccum = accum.data();
pcount = count.data();
for (int dc = 0, sc = 0; sc < w; ++sc) {
*paccum += *pin; // accumulate the current source pixel
*pcount += 1;
++pin;
if (sc * Xnew / w > dc) {
++dc;
++paccum;
++pcount;
}
}
sr++;
}
std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I later changed the variable names to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32 bit ints which can hold the partial sum of all pixels in the rows which fall in a row of the output image. Then you divide by the cell count and save the output to the final image.
The first thing you should do is to set up a performance evaluation system to measure how much any change impacts on the performance.
As said previously, you should use pointers rather than indexed access for a (probably) substantial speed-up, and you should not simply average pixels, since a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is the matrix of weights describing how much each source pixel contributes to an output pixel. That way, you will be able to test different strategies and optimize quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code it seems you apply a 7x7 averaging kernel (49 taps, divided by 49). The equivalent 3x3 averaging kernel would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
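As an illustration only (not tied to any library), applying a generic (2r+1)x(2r+1) kernel at one output position could look like the sketch below, reusing the src() accessor from your code; bounds handling is omitted:

float applyKernel(const float *kernel, int r, int x, int y, int z)
{
    float sum = 0.0f;
    for (int i = -r; i <= r; i++)
        for (int j = -r; j <= r; j++)
            sum += kernel[(i + r) * (2 * r + 1) + (j + r)] * src(x + i, y + j, z);
    return sum; // the kernel weights are assumed to sum to 1
}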
I want to perform template matching with a mask. In general, template matching can be made faster by converting the image from the spatial domain into the frequency domain. But is there any method I can apply if I want to do the same with a mask? I'm using OpenCV with C++. Is there a matching function already in OpenCV for this task?
My current approach:
Bitwise XOR image A and image B with the mask.
Count the non-zero pixels.
Fill the result matrix with this count.
Search for maxima.
A few heuristics I'm considering are:
Skip the tile position if the matches are less than 25%.
Skip the tile position if the previous tile has less than 50% matches.
My question: is there already an algorithm for this kind of masked matching? Is there any mathematical operation which can speed up this process?
With binary images, you can directly use Hu moments and the Mahalanobis distance to find out whether image A is similar to image B. If the distance tends to 0, the images are the same.
Of course you can also use feature detectors to see what matches, but for pictures like these, Hu moments and feature detectors will give approximately the same results, and Hu moments are more efficient.
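For example, a minimal sketch for two binary single-channel images imgA and imgB (a full Mahalanobis distance would additionally need a covariance matrix estimated from training samples; this simply compares the Hu moments directly):

// requires <opencv2/imgproc.hpp> and <cmath>
cv::Moments mA = cv::moments(imgA, true);   // true = treat the input as binary
cv::Moments mB = cv::moments(imgB, true);
double huA[7], huB[7];
cv::HuMoments(mA, huA);
cv::HuMoments(mB, huB);
double dist = 0.0;
for (int i = 0; i < 7; i++)
    dist += std::abs(huA[i] - huB[i]);      // distance close to 0 => similar shapes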
Using findContours, you can extract the black regions inside the white star and fill them, in order to have image A = image B.
Another approach: using findContours on your mask and applying the result to image A (extracting the region of interest), you can extract what's inside the star and count how many black pixels you have (the mismatching ones).
I have the same requirement and tried almost the same approach. As in the image, I want to match the castle. The castle has a different shield image and a variable-length clan name, as well as a grass background (the image comes from the game Clash of Clans). The normal OpenCV matchTemplate does not work, so I wrote my own.
I follow the structure of matchTemplate to create a result image, but with a different algorithm.
The core idea is to count the matched pixels under the mask. The code below is simple.
This works fine, but the time cost is high. As you can see, it costs 457 ms.
Now I am working on the optimization.
The source and template images are both CV_8UC3; the mask image is CV_8U. Matching a single channel is OK and is faster, but it still costs a lot.
Mat tmp(matTempl.rows, matTempl.cols, matTempl.type()); // Mat takes (rows, cols, type)
int matchCount = 0;
float maxVal = 0;
double areaInvert = 1.0 / countNonZero(matMask);
for (int j = 0; j < resultRows; j++)
{
float* data = imgResult.ptr<float>(j);
for (int i = 0; i < resultCols; i++)
{
Mat matROI(matSource, Rect(i, j, matTempl.cols, matTempl.rows));
tmp.setTo(Scalar(0));
bitwise_xor(matROI, matTempl, tmp);
bitwise_and(tmp, matMask, tmp);
data[i] = 1.0f - float(countNonZero(tmp) * areaInvert);
if (data[i] > matchingDegree)
{
SRect rc;
rc.left = i;
rc.top = j;
rc.right = i + imgTemplate.cols;
rc.bottom = j + imgTemplate.rows;
rcOuts.push_back(rc);
if ( data[i] > maxVal)
{
maxVal = data[i];
maxIndex = rcOuts.size() - 1;
}
if (++matchCount == maxMatchs)
{
Log_Warn("Too many matches, stopped at: " << matchCount);
return true;
}
}
}
}
http://i.stack.imgur.com/mJrqU.png
Update:
I managed to optimize the algorithm by using key points. Evaluating every position is costly, but evaluating only several key points is much faster. See the picture: the cost decreases greatly, to about 7 ms.
See: http://i.stack.imgur.com/ePcD9.png
There is a formulation for template matching with a mask in the OpenCV documentation, which works well. It can be used by calling cv::matchTemplate, and its source code is also available under the Intel License.
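A minimal usage sketch, assuming image, templ and mask are already loaded as cv::Mat (the mask argument of cv::matchTemplate historically supported only TM_SQDIFF and TM_CCORR_NORMED):

cv::Mat result;
cv::matchTemplate(image, templ, result, cv::TM_CCORR_NORMED, mask);
double minVal, maxVal;
cv::Point minLoc, maxLoc;
cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
// for correlation-based methods, maxLoc is the position of the best match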
I have a picture in which I need to detect a shape. To do this, I used a template from which I extracted points, and then tried to fit these points to my image for the best match. When I find the best match, I need to draw a curve through these points.
The problem is that the extracted points are not in order (as you can see in the picture).
How can I sort the points so that the resulting curve is smooth, with no "jumping"?
I tried stable_sort(), but without success:
stable_sort(points.begin(), points.end(), Ordering_Functor_y());
stable_sort(points.begin(), points.end(), Ordering_Functor_x());
Using stable_sort()
For drawing I used this function:
polylines(result_image, &pts, &npts, 1, true, Scalar(0, 255, 0), 3, CV_AA);
Any idea how to solve this problem? Thank you.
Edit:
Here is the code for getting the points from the template:
for (int model_row = 0; (model_row < model.rows); model_row++)
{
uchar *curr_point = model.ptr<uchar>(model_row);
for (int model_column = 0; (model_column < model.cols); model_column++)
{
if (*curr_point > 0)
{
Point new_point(model_column, model_row); // avoid binding a reference to a temporary
model_points.push_back(new_point);
}
curr_point += image_channels;
}
}
In this part of the code you can see where the problem with the point order comes from. Is there a better way to save the points in the correct order, so that I won't have problems drawing the contour?
Your current approach sorts by either the x or the y values, which is not the proper ordering for drawing the contour.
One approach could be the following:
Calculate the center of mass of the detected object.
Place a polar coordinate system at the center of mass.
Calculate the angle (direction) from the center to each point.
Sort the points by that angle.
This will be better than your current sorting, but it might not be perfect (see the sketch below).
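A sketch of that ordering, assuming model_points is the std::vector<cv::Point> filled by your extraction code:

#include <algorithm>
#include <cmath>

cv::Point2f center(0.f, 0.f);
for (const cv::Point& p : model_points)
    center += cv::Point2f((float)p.x, (float)p.y);
center *= 1.0f / (float)model_points.size();

std::sort(model_points.begin(), model_points.end(),
          [center](const cv::Point& a, const cv::Point& b) {
              // compare polar angles around the center of mass
              return std::atan2(a.y - center.y, a.x - center.x) <
                     std::atan2(b.y - center.y, b.x - center.x);
          });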
A more involved approach is to determine the shortest path going through all the detected points. If the points are spread evenly around the contour this approach will locate the contour. The search term for this approach is the travelling salesman problem.
I think you are searching for a concave hull algorithm.
See here:
http://ubicomp.algoritmi.uminho.pt/local/concavehull.html
Java implementation: here
I'm trying to find the rigid transformation matrix between two 3D point clouds.
The two point clouds are:
keypoints from the kinect (kinect_keypoints).
keypoints from a 3D object (box) (object_keypoints).
I have tried two options:
[1]. Implementation of the algorithm to find the rigid transformation.
1. Calculate the centroid of each point cloud.
2. Center the points according to the centroid.
3. Calculate the covariance matrix.
4. Calculate the rotation matrix using the SVD decomposition of the covariance matrix:
cvSVD( &_H, _W, _U, _V, CV_SVD_U_T );
cvMatMul( _V, _U, &_R );
float _Tsrc[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
-_gc_src.x,-_gc_src.y,-_gc_src.z,1.f }; // 1: src points to the origin
float _S[16] = { _scale,0.f,0.f,0.f,
0.f,_scale,0.f,0.f,
0.f,0.f,_scale,0.f,
0.f,0.f,0.f,1.f }; // 2: scale the src points
float _R_src_to_dst[16] = { _Rdata[0],_Rdata[3],_Rdata[6],0.f,
_Rdata[1],_Rdata[4],_Rdata[7],0.f,
_Rdata[2],_Rdata[5],_Rdata[8],0.f,
0.f,0.f,0.f,1.f }; // 3: rotate the src points
float _Tdst[16] = { 1.f,0.f,0.f,0.f,
0.f,1.f,0.f,0.f,
0.f,0.f,1.f,0.f,
_gc_dst.x,_gc_dst.y,_gc_dst.z,1.f }; // 4: from src to dst
// _Tdst * _R_src_to_dst * _S * _Tsrc
mul_transform_mat( _S, _Tsrc, Rt );
mul_transform_mat( _R_src_to_dst, Rt, Rt );
mul_transform_mat( _Tdst, Rt, Rt );
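For reference, steps 1-4 can be sketched with the C++ API roughly like this (illustrative only: the function name is made up, the two vectors are assumed to hold corresponding points in the same order, and no scale is estimated):

#include <opencv2/core.hpp>
#include <vector>

static void rigidTransformSVD(const std::vector<cv::Point3f>& src,
                              const std::vector<cv::Point3f>& dst,
                              cv::Mat& R, cv::Mat& t)
{
    // 1. centroids of both clouds
    cv::Point3f cs(0.f, 0.f, 0.f), cd(0.f, 0.f, 0.f);
    for (size_t k = 0; k < src.size(); ++k) { cs += src[k]; cd += dst[k]; }
    cs *= 1.0f / (float)src.size();
    cd *= 1.0f / (float)dst.size();

    // 2-3. center the points and accumulate the covariance matrix H
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t k = 0; k < src.size(); ++k) {
        cv::Mat p = (cv::Mat_<double>(3, 1) << src[k].x - cs.x, src[k].y - cs.y, src[k].z - cs.z);
        cv::Mat q = (cv::Mat_<double>(3, 1) << dst[k].x - cd.x, dst[k].y - cd.y, dst[k].z - cd.z);
        H += p * q.t();
    }

    // 4. rotation from the SVD of H (fix a possible reflection via det(R))
    cv::SVD svd(H);
    R = svd.vt.t() * svd.u.t();
    if (cv::determinant(R) < 0) {
        cv::Mat fix = cv::Mat::eye(3, 3, CV_64F);
        fix.at<double>(2, 2) = -1.0;
        R = svd.vt.t() * fix * svd.u.t();
    }
    cv::Mat centroidSrc = (cv::Mat_<double>(3, 1) << cs.x, cs.y, cs.z);
    cv::Mat centroidDst = (cv::Mat_<double>(3, 1) << cd.x, cd.y, cd.z);
    t = centroidDst - R * centroidSrc;
}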
[2]. Use estimateAffine3D from opencv.
double _poseTrans[12];                    // estimateAffine3D writes CV_64F, so the buffer must be double
std::vector<cv::Point3f> first, second;   // first --> kinect_keypoints, second --> object_keypoints
std::vector<uchar> inliers;
cv::Mat aff(3, 4, CV_64F, _poseTrans);
cv::estimateAffine3D( first, second, aff, inliers );
float _poseTrans2[16];
for (int i=0; i<12; ++i)
{
_poseTrans2[i] = _poseTrans[i];
}
_poseTrans2[12] = 0.f;
_poseTrans2[13] = 0.f;
_poseTrans2[14] = 0.f;
_poseTrans2[15] = 1.f;
The problem with the first approach is that the transformation is not correct, and with the second one, if I multiply the kinect point cloud by the resulting matrix, some values are infinite.
Is there any solution from any of these options? Or an alternative one, apart from the PCL?
Thank you in advance.
EDIT: This is an old post, but an answer might be useful to someone ...
Your first approach can work in very specific cases (ellipsoidal point clouds or very elongated shapes), but is not appropriate for point clouds acquired by the Kinect. As for your second approach, I am not familiar with the OpenCV function estimateAffine3D, but I suspect it assumes the two input point clouds correspond to the same physical points, which is not the case if you used a Kinect point cloud (which contains noisy measurements) and points from an ideal 3D model (which are perfect).
You mentioned that you are aware of the Point Cloud Library (PCL) and do not want to use it. If possible, I think you might want to reconsider this, because PCL is much more appropriate than OpenCV for what you want to do (check the tutorial list, one of them covers exactly what you want to do: Aligning object templates to a point cloud).
However, here are some alternative solutions to your problem:
If your two point clouds correspond exactly to the same physical points, your second approach should work, but you can also check out Absolute Orientation (e.g. Matlab implementation)
If your two point clouds do not correspond to the same physical points, you actually want to register (or align) them and you can use either:
one of the many variants of the Iterative Closest Point (ICP) algorithm, if you know approximately the position of your object. Wikipedia Entry
3D feature points such as 3D SIFT, 3D SURF or NARF feature points, if you have no clue about your object's position.
Again, all these approaches are already implemented in PCL.
I have a project to see whether two objects (made of about 10,000 triangles each) collide, using the brute-force collision algorithm, rendered in OpenGL. The two objects are not moving. I will have to translate them to some positions and find, e.g., 100 triangle collisions, etc.
So far I have written code that checks for line-plane intersections between the two models. If I got everything straight, I need to check every edge of every triangle of the first model against the plane of each triangle of the other model. This means 3 nested 'for' loops that take hours to finish. I suppose I have something wrong or have misunderstood the whole concept.
for (int i=0; i<model1_faces.num; i++) {
for (int j=0; j<3; j++) {
x1[j] = model1_vertices[model1_faces[i].v[j]-1].x;
y1[j] = model1_vertices[model1_faces[i].v[j]-1].y;
z1[j] = model1_vertices[model1_faces[i].v[j]-1].z;
}
A.x = x1[0];
A.y = y1[0];
A.z = z1[0];
B.x = x1[1];
B.y = y1[1];
B.z = z1[1];
C.x = x1[2];
C.y = y1[2];
C.z = z1[2];
TriangleNormal = findNormalVector((B-A)*(C-A));
RayDirection = B-A;
for (int j=0; j<model2_faces.num; j++) {
PointOnPlane = model2_vertices[model2_faces[j].v[0]-1]; // Any point of the triangle
system("PAUSE");
float D1 = (A-PointOnPlane)^(TriangleNormal); // Distance from A to the plane of j triangle
float D2 = (B-PointOnPlane)^(TriangleNormal);
if ((D1*D2) >= 0) continue; // Line AB doesn't cross the triangle
if (D1==D2) continue; // Line parallel to the plane
CollisionVect = A + (RayDirection) * (-D1/(D2-D1));
Vector temp;
temp = TriangleNormal*(RayDirection);
if (temp^(CollisionVect-A) < 0) continue;
temp = TriangleNormal*(C-B);
if (temp^(CollisionVect-B) < 0) continue;
temp = TriangleNormal*(A-C);
if (temp^(CollisionVect-A) < 0) continue;
// If I reach this point I had a collision //
cout << "Had collision!!" << endl;
} // for j (model2 faces)
} // for i (model1 faces)
Also, I do not know exactly where this function should be called: in my render function, so that it runs continuously while rendering, or just once, given that I only need to check for a collision between non-moving objects?
I would appreciate some explanation, and if you're too busy or bored to look through my code, just help me understand this whole concept a bit better.
As suggested already, you can use bounding volumes. To make best use of these, you can arrange your bounding volumes in an Octree, in which case the volumes are boxes.
At the outermost level, each bounding volume contains the entire object. So you can test whether the two objects might intersect by comparing their zero-level bounding volumes. Testing for intersection of two boxes where all the faces are axis-aligned is trivial.
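For example, a sketch of the axis-aligned box overlap test, assuming the same Vector type (with x, y, z members) used in your code:

struct AABB { Vector min, max; }; // axis-aligned bounding box

bool overlaps(const AABB &a, const AABB &b)
{
    return a.min.x <= b.max.x && a.max.x >= b.min.x &&
           a.min.y <= b.max.y && a.max.y >= b.min.y &&
           a.min.z <= b.max.z && a.max.z >= b.min.z;
}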
The octree will index which faces belong to which subdivisions of the bounding volume. So some faces will of course belong to more than one volume and may be tested multiple times.
The benefit is that you can prune away many of the brute-force tests that are guaranteed to fail, because only a handful of your subvolumes will actually intersect. The actual intersection testing is still brute force, but it runs on a small subset of faces.
Brute force collision detection often does not scale, as you have noticed. :) The usual approach is to define a bounding volume that contains your models/shapes and simplifies the intersection calculations. Bounding volumes come in all shapes and sizes depending on your models. They can be spheres, boxes, etc.
In addition to defining bounding volumes, you'll want to detect collision in your update section of code, where you are most likely passing in some delta time. That delta time is often needed to determine how far objects need to move and if a collision occurred in that timeframe.