I am using OpenCV in order to implement potential fields for a game system.
I have a cv::Mat of map size and I seed it with data describing how vulnerable my units are. The matrix uses 32-bit integers and the values range from 0 to about 1200.
I then use cv::filter2D in order to find the best position for building a turret.
int kernelSize = (turretRange * 2) + 1;
cv::Mat circleKernel = cv::Mat( kernelSize, kernelSize, __potentialDataType, cv::Scalar::all(0) );
cv::circle(circleKernel, cv::Point(turretRange + 1, turretRange + 1), turretRange, 1, -1, 8 );
cv::filter2D( vulnerabilityMap, buildMap, -1, circleKernel, cv::Point(-1,-1) );
I then calculate the min and max value positions of the buildMap, where max should give me the best position for my turret.
double min2, max2;
cv::Point min_loc2, max_loc2;
cv::minMaxLoc(buildMap, &min2, &max2, &min_loc2, &max_loc2 );
What happens is that I get the optimal position in x, while the y is short by turretRange.
That is, max_loc2 is (optimal x, optimal y - turretRange).
Any hint on what I am doing wrong would be much appreciated.
I want to use an array as input to the k-means algorithm. The array holds the displacements in the x and y directions and is the result of Lucas-Kanade optical flow estimation. The (edited) code is as follows:
int number_of_features=150;
// Lucas Kanade optical flow
cvCalcOpticalFlowPyrLK( frame1_1C, frame2_1C, pyramid1, pyramid2,
                        frame1_features, frame2_features, number_of_features,
                        optical_flow_window, 5, optical_flow_found_feature,
                        optical_flow_feature_error, optical_flow_termination_criteria, 0 );
float Dx[150],Dy[150]; // displacement matrices
float Dis[150][2]; // total displacement matrix
int K=2; // clusters selected
Mat bestLabels, centers;
for(int i = 0; i < number_of_features; i++)
{
CvPoint p,q;
p.x = (int) frame1_features[i].x;
p.y = (int) frame1_features[i].y;
q.x = (int) frame2_features[i].x;
q.y = (int) frame2_features[i].y;
//displacements
Dx[i]=p.x-q.x;
Dy[i]=p.y-q.y;
Dis[i][0] = Dx[i];
Dis[i][1] = Dy[i];
}
// k means algorithm
// Creating Mat for Input data
cv::Mat flt_Dis(150, 2, CV_32F, Dis);
cv::kmeans(flt_Dis, K, bestLabels,TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0),3, KMEANS_PP_CENTERS, centers);
I have solved my previous problem; now I want to show the clustered image. I guess bestLabels stores the cluster index for each element, e.g. whether it belongs to the 0th or 1st cluster. Am I right? How can I show the clustered image?
K-means can be implemented to work on integer input; you just have to do it.
The resulting cluster centers will not be integers, though.
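For example, something like this might work as a minimal sketch of that point (the sample values and variable names are made up, not taken from the question): integer samples are converted to CV_32F before calling cv::kmeans, and the returned centers come back as floats.
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Hypothetical integer samples: six points in 2D.
    int raw[6][2] = { {0,0}, {1,1}, {0,1}, {10,10}, {11,9}, {10,11} };

    // kmeans expects floating-point input, so wrap the array and convert.
    cv::Mat samplesInt(6, 2, CV_32S, raw);
    cv::Mat samples;
    samplesInt.convertTo(samples, CV_32F);

    cv::Mat labels, centers;
    cv::kmeans(samples, 2, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // centers is CV_32F: the cluster centers are generally not integers.
    std::cout << centers << std::endl;
    return 0;
}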
I am trying to implement a custom version of Histogram of Oriented Gradients. My gradient kernel is [-1.2 0 1.2]. This kernel has to be applied in x and y directions (along rows and along columns), to find the image gradients in x and y directions Gx and Gy.
In Matlab this would be something like
hx = [-1.2 0 1.2]
hy = hx' %transpose
Gx = imfilter(double(I),hx) %Gx is the gradient along x, I is the image
Gy = imfilter(double(I),hy) %Gy is the gradient along y
How do I do this in OpenCV? I looked at createSeparableLinearFilter, but it seems to give some sort of sum of Gx and Gy. I need to find Gx and Gy separately.
I am looking for something like
Ptr<FilterEngine> Fx = createRowFilter(...);
Ptr<FilterEngine> Fy = createColumnFilter(...);
Fx->apply(img, Gx, ...); //Gx is x gradient, Gx and Gy are float or double
Fy->apply(img, Gy, ...); //Gy is y gradient
Of course this can be done by writing my own for loop, visiting every pixel, but I was wondering whether there is any OpenCV way to do this.
I think you are looking for
filter2D
Use it once with each kernel.
Solution from Mathai:
// 3x3 kernels that embed the 1-D gradient taps: kernelY holds [-1; 0; 1] in its middle
// column, kernelX holds [-1, 0, 1] in its middle row (use +/-1.2 to match the question's kernel).
float kernelY[9] = {  0,  -1.0, 0,
                      0,   0,   0,
                      0,   1.0, 0 };
float kernelX[9] = {  0,   0,   0,
                     -1.0, 0,   1.0,
                      0,   0,   0 };
Mat filterY(3, 3, CV_32F, kernelY);
Mat filterX(3, 3, CV_32F, kernelX);
filter2D(img, dsty, -1, filterY, Point(-1, -1), 0, BORDER_DEFAULT); // gradient along y
filter2D(img, dstx, -1, filterX, Point(-1, -1), 0, BORDER_DEFAULT); // gradient along x
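If you want to apply the 1-D taps from the question directly, without padding them into 3x3 kernels, a rough sketch along these lines should also work (this uses cv::sepFilter2D and is my own suggestion, not part of the posted solution; the helper function name is made up):
#include <opencv2/imgproc/imgproc.hpp>

// img is the input image; Gx and Gy come out as CV_32F gradients.
void computeGradients(const cv::Mat& img, cv::Mat& Gx, cv::Mat& Gy)
{
    // 1-D gradient taps from the question, plus a 1x1 identity kernel.
    cv::Mat h  = (cv::Mat_<float>(1, 3) << -1.2f, 0.0f, 1.2f);
    cv::Mat hT = h.t();
    cv::Mat id = (cv::Mat_<float>(1, 1) << 1.0f);

    // Row kernel = h, column kernel = identity  ->  gradient along x only.
    cv::sepFilter2D(img, Gx, CV_32F, h, id);
    // Row kernel = identity, column kernel = h^T ->  gradient along y only.
    cv::sepFilter2D(img, Gy, CV_32F, id, hT);
}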
Follow this tutorial to make your own custom kernels. I think you need to make an NxN kernel for OpenCV to recognize it properly (basically it will be hx; hx; hx for Gx).
HTH
I have a mesh model in X, Y, Z format. Let's say:
Points *P;
In the first step, I want to normalize this mesh into the box from (-1, -1, -1) to (1, 1, 1).
Here normalizing means fitting the mesh into that box.
Then I do some processing on the normalized mesh, and finally I want to revert the dimensions so they match the original mesh.
step-1:
P = Original Mesh dimensions;
step-2:
nP = Normalize(P); // from (-1, -1, -1) to (1, 1, 1)
step-3:
cnP = do something with (nP), number of vertices has increased or decreased.
step-4:
Original Mesh dimensions = Revert(cnP); // dimensions should be the same as the original mesh
How can I do that?
I know how easy it can be to get lost in programming and completely miss the simplicity of the underlying math. But trust me, it really is simple.
The most intuitive way to go about your problem is probably this:
Determine the minimum and maximum values along all three coordinate axes (i.e., x, y and z); together they define the axis-aligned bounding box of your mesh. Save these six values in six variables (e.g., min_x, max_x, etc.).
For all points p = (x,y,z) in the mesh, compute
q = ( 2.0*(x-min_x)/(max_x-min_x) - 1.0,
      2.0*(y-min_y)/(max_y-min_y) - 1.0,
      2.0*(z-min_z)/(max_z-min_z) - 1.0 )
Now q is p mapped into the box (-1,-1,-1) -- (+1,+1,+1).
Do whatever you need to do on this intermediate grid.
Convert all coordinates q = (xx, yy, zz) back to the original grid by doing the inverse operation:
p = ( (xx+1.0)*(max_x-min_x)/2.0 + min_x,
      (yy+1.0)*(max_y-min_y)/2.0 + min_y,
      (zz+1.0)*(max_z-min_z)/2.0 + min_z )
Clean up any mess you've made and continue with the rest of your program.
This is so easy, it's probably a lot more work to find out which library contains these functions than it is to write them yourself.
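If it helps, here is a minimal C++ sketch of the normalize/revert steps described above (the Point3 and Box types and the function names are mine, just for illustration):
#include <algorithm>
#include <vector>

struct Point3 { double x, y, z; };   // hypothetical vertex type
struct Box { double min_x, max_x, min_y, max_y, min_z, max_z; };

// Find the axis-aligned bounding box of the mesh (save it for the revert step).
Box boundingBox(const std::vector<Point3>& pts)
{
    Box b = { pts[0].x, pts[0].x, pts[0].y, pts[0].y, pts[0].z, pts[0].z };
    for (const Point3& p : pts) {
        b.min_x = std::min(b.min_x, p.x);  b.max_x = std::max(b.max_x, p.x);
        b.min_y = std::min(b.min_y, p.y);  b.max_y = std::max(b.max_y, p.y);
        b.min_z = std::min(b.min_z, p.z);  b.max_z = std::max(b.max_z, p.z);
    }
    return b;
}

// Map every point into (-1,-1,-1) .. (+1,+1,+1).
void normalizeMesh(std::vector<Point3>& pts, const Box& b)
{
    for (Point3& p : pts) {
        p.x = 2.0 * (p.x - b.min_x) / (b.max_x - b.min_x) - 1.0;
        p.y = 2.0 * (p.y - b.min_y) / (b.max_y - b.min_y) - 1.0;
        p.z = 2.0 * (p.z - b.min_z) / (b.max_z - b.min_z) - 1.0;
    }
}

// Inverse mapping back to the original coordinates, using the saved bounding box.
void revertMesh(std::vector<Point3>& pts, const Box& b)
{
    for (Point3& p : pts) {
        p.x = (p.x + 1.0) * (b.max_x - b.min_x) / 2.0 + b.min_x;
        p.y = (p.y + 1.0) * (b.max_y - b.min_y) / 2.0 + b.min_y;
        p.z = (p.z + 1.0) * (b.max_z - b.min_z) / 2.0 + b.min_z;
    }
}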
It's easy - use shape functions. Here's a 1D example for two points:
-1 <= u <= +1
x(u) = x1*(1-u)/2.0 + x2*(1+u)/2.0
x(-1) = x1
x(+1) = x2
You can transform between coordinate systems using the Jacobian.
Let's see what it looks like in 2D:
-1 <= u <= +1
-1 <= v <= +1
x(u, v) = x1*(1-u)*(1-v)/4.0 + x2*(1+u)*(1-v)/4.0 + x3*(1+u)*(1+v)/4.0 + x4*(1-u)*(1+v)/4.0
y(u, v) = y1*(1-u)*(1-v)/4.0 + y2*(1+u)*(1-v)/4.0 + y3*(1+u)*(1+v)/4.0 + y4*(1-u)*(1+v)/4.0
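For concreteness, a small helper that evaluates this bilinear mapping (the function name and argument layout are mine; the corners (x1,y1)..(x4,y4) are assumed to be listed counter-clockwise):
#include <array>

// Map local coordinates (u,v) in [-1,1] x [-1,1] to world coordinates (x,y).
std::array<double, 2> mapQuad(double u, double v,
                              const std::array<double, 4>& xs,
                              const std::array<double, 4>& ys)
{
    // The four bilinear shape functions.
    const double n1 = (1 - u) * (1 - v) / 4.0;
    const double n2 = (1 + u) * (1 - v) / 4.0;
    const double n3 = (1 + u) * (1 + v) / 4.0;
    const double n4 = (1 - u) * (1 + v) / 4.0;

    std::array<double, 2> p = { n1*xs[0] + n2*xs[1] + n3*xs[2] + n4*xs[3],
                                n1*ys[0] + n2*ys[1] + n3*ys[2] + n4*ys[3] };
    return p;
}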
I've implemented a Sobel Edge Detector and had some questions about computing edge orientations.
I'm using this function to compute edge intensities after having done the Sobel kernel convolutions.
Gxy = sqrt( pow(Gx, 2) + pow(Gy, 2) )
where Gx is the result of the convolution with the Sobel kernel in the X direction and Gy is the result of the convolution with the Sobel kernel in the Y direction. (Note that the Sobel kernels for the X and Y directions are different.)
Y kernel:
1 2 1
0 0 0
-1 -2 -1
X kernel:
-1 0 1
-2 0 2
-1 0 1
When I try to compute the edge orientation (theta is in degrees), I'm using the following rules:
if Gy == 0 and Gx == 0, then theta = 0
if Gy != 0 and Gx == 0, then theta = 90
otherwise, theta = (arctan( Gy / Gx ) * 180) / PI
All my documentation tells me the angles should be > 0 and < 360, yet I keep getting edges with negative orientations. Is there something I'm doing incorrectly when computing theta or my convolution? Or should I just add 360 or 180 to negative theta values?
Thanks in advance.
It's hard to answer your question precisely because you haven't mentioned exactly how you calculate the arctan function.
For example, if you're using the standard library's atan function, then negative angles are to be expected.
Furthermore, you'll notice that atan with a single argument can only ever return values in the first and fourth quadrants (because, for example, tan 45 == tan 225 when using degrees).
If you really want an angle in one of the four quadrants (you should really ask yourself if this matters for your application), then have a look at atan2.
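For example, a small sketch of that (assuming Gx and Gy are the raw gradient responses; wrapping the result into [0, 360) is my addition, mirroring what the question asks about):
#include <cmath>

// Edge orientation in degrees, in [0, 360), from the x/y gradient responses.
double edgeOrientation(double Gx, double Gy)
{
    const double PI = 3.14159265358979323846;

    // atan2 handles Gx == 0 and picks the correct quadrant; the result is in (-180, 180].
    double theta = std::atan2(Gy, Gx) * 180.0 / PI;
    if (theta < 0.0)
        theta += 360.0;   // shift negative angles into [0, 360)
    return theta;
}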
If you are using OpenCV with C++, you could do the following:
Mat gray = some grayscale image;
Calculate the edge gradients along the x and y axes:
Mat dx(gray.rows, gray.cols, CV_32FC1);
Mat dy(gray.rows, gray.cols, CV_32FC1);
int aperture_size=3;
Sobel(gray, dx, CV_32FC1, 1, 0, aperture_size, 1, 0, BORDER_REPLICATE);
Sobel(gray, dy, CV_32FC1, 0, 1, aperture_size, 1, 0, BORDER_REPLICATE);
Calculate the angle and magnitude using OpenCV's cartToPolar:
Mat Mag(gray.size(), CV_32FC1);
Mat Angle(gray.size(), CV_32FC1);
cartToPolar(dx, dy, Mag, Angle, true); // true -> angles in degrees, in [0, 360)
I'm looking for an efficient way to convert axes coordinates to pixel coordinates for multiple screen resolutions.
For example, if I had a data set of temperature values over time, something like:
int temps[] = {-8, -5, -4, 0, 1, 0, 3};
int times[] = {0, 12, 16, 30, 42, 50, 57};
What's the most efficient way to transform the data set to pixel coordinates so I can draw a graph on an 800x600 screen?
Assuming you're going from TEMP_MIN to TEMP_MAX, just do:
y[i] = (int)((float)(temps[i] - TEMP_MIN) * ((float)Y_MAX / (float)(TEMP_MAX - TEMP_MIN)));
where #define Y_MAX (600). Similarly for the x-coordinate. This isn't tested, so you may need to modify it slightly to deal with the edge-case (temps[i] == TEMP_MAX) properly.
You first need to determine the maximum and minimum values along each axis. Then you can do:
x_coord[i] = (x_val[i] - x_min) * X_RES / (x_max - x_min);
...and the same for Y. (Although you will probably want to invert the Y axis).
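A minimal sketch of that mapping applied to the sample data from the question (the axis ranges and variable names here are mine, read off the data; the Y axis is flipped because the screen origin is at the top-left):
#include <cstdio>

int main()
{
    const int SCREEN_W = 800, SCREEN_H = 600;

    int temps[] = {-8, -5, -4, 0, 1, 0, 3};
    int times[] = {0, 12, 16, 30, 42, 50, 57};
    const int n = sizeof(temps) / sizeof(temps[0]);

    // Axis ranges taken from the data above.
    const int t_min = 0,  t_max = 57;   // time
    const int v_min = -8, v_max = 3;    // temperature

    for (int i = 0; i < n; i++) {
        int px = (int)((float)(times[i] - t_min) * (SCREEN_W - 1) / (t_max - t_min));
        int py = (int)((float)(temps[i] - v_min) * (SCREEN_H - 1) / (v_max - v_min));
        py = (SCREEN_H - 1) - py;       // invert Y so larger temperatures are drawn higher up
        std::printf("(%d, %d) -> pixel (%d, %d)\n", times[i], temps[i], px, py);
    }
    return 0;
}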