I'm looking for an efficient way to convert axes coordinates to pixel coordinates for multiple screen resolutions.
For example, if I had a data set of values for temperature over time, something like:
int temps[] = {-8, -5, -4, 0, 1, 0, 3};
int times[] = {0, 12, 16, 30, 42, 50, 57};
What's the most efficient way to transform the data set to pixel coordinates so I can draw a graph on an 800x600 screen?
Assuming you're going from TEMP_MIN to TEMP_MAX, just do:
y[i] = (int)((float)(temps[i] - TEMP_MIN) * ((float)Y_MAX / (float)(TEMP_MAX - TEMP_MIN)));
where #define Y_MAX (600). Similarly for the x-coordinate. This isn't tested, so you may need to modify it slightly to handle the edge case (temps[i] == TEMP_MAX) properly, since that value maps to y == Y_MAX, one past the last pixel row.
You first need to determine the maximum and minimum values along each axis. Then you can do:
x_coord[i] = (x_val[i] - x_min) * X_RES / (x_max - x_min);
...and the same for Y. (Although you will probably want to invert the Y axis).
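For illustration, here is a minimal, untested sketch in C++ combining both answers, assuming a top-left screen origin (hence the Y flip); scaling by RES - 1 instead of RES keeps the maximum values on screen:

#include <cstdio>

const int X_RES = 800;
const int Y_RES = 600;

int main()
{
    int temps[] = {-8, -5, -4, 0, 1, 0, 3};
    int times[] = {0, 12, 16, 30, 42, 50, 57};
    const int n = sizeof(temps) / sizeof(temps[0]);
    const int t_min = -8, t_max = 3;   // temperature range (TEMP_MIN/TEMP_MAX)
    const int x_min = 0,  x_max = 57;  // time range

    for (int i = 0; i < n; ++i) {
        int x = (times[i] - x_min) * (X_RES - 1) / (x_max - x_min);
        int y = (temps[i] - t_min) * (Y_RES - 1) / (t_max - t_min);
        y = (Y_RES - 1) - y;           // invert Y: pixel row 0 is the top
        std::printf("(%d, %d)\n", x, y);
    }
    return 0;
}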
I am using OpenCV in order to implement potential fields for a game system.
I have a cv::Mat of map size, and I seed it with data describing how vulnerable my units are. The matrix uses 32-bit integers, and the values range from 0 to about 1200.
I then use cv::filter2D in order to find the best position for building a turret.
int kernelSize = (turretRange * 2) + 1;
cv::Mat circleKernel = cv::Mat( kernelSize, kernelSize, __potentialDataType, cv::Scalar::all(0) );
cv::circle(circleKernel, cv::Point(turretRange + 1, turretRange + 1), turretRange, 1, -1, 8 );
cv::filter2D( vulnerabilityMap, buildMap, -1, circleKernel, cv::Point(-1,-1) );
I then calculate the min and max value positions of the buildMap, where max should give me the best position for my turret.
double min2, max2;
cv::Point min_loc2, max_loc2;
cv::minMaxLoc(buildMap, &min2, &max2, &min_loc2, &max_loc2 );
What happens is that I get the optimal position in x, while the y is short by turretRange.
That is, max_loc2 is (optimal x, optimal y - turretRange).
Any hint on what I am doing wrong would be much appreciated.
1,   0,           0,          0,
0,   cos(theta), -sin(theta), 0,
0,   sin(theta),  cos(theta), 0,
0,   0,           0,          1;
I'm trying to create a 'swinging' animation using a rectangular prism. The animation is very basic: the prism is going to swing back and forth, like the arms of this robot toy. I need to use the above matrix (a rotation about the x-axis).
I just need help figuring out a series of theta values that, when plugged into this matrix, will cause the rectangular prism it is applied to to swing back and forth, like in the image linked above.
You may want to use lerping (linear interpolation) to get a smooth animation. It's hard to tell from the image, but a min/max pair of -35 and +35 degrees might do the trick.
Edit: Lerping
Using a value t between 0 and 1, advanced in small increments, will give you intermediate positions between your min and max values, called a and b in the formula below:
0 <= t <= 1
x = b * t + (1 - t) * a
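A minimal sketch of that idea (untested; the step size and the -35/+35 limits are assumptions) that ping-pongs t between 0 and 1 and lerps theta between the two limits each frame:

#include <cstdio>

float lerp(float a, float b, float t)
{
    return b * t + (1.0f - t) * a;    // same formula as above
}

int main()
{
    const float minAngle = -35.0f;    // swing limits in degrees
    const float maxAngle = +35.0f;
    float t = 0.0f;
    float dir = 1.0f;                 // +1 swinging one way, -1 the other

    for (int frame = 0; frame < 120; ++frame) {
        float theta = lerp(minAngle, maxAngle, t);
        std::printf("frame %3d: theta = %7.2f degrees\n", frame, theta);

        t += dir * 0.05f;             // smaller step = slower swing
        if (t >= 1.0f)      { t = 1.0f; dir = -1.0f; }  // reverse at the ends
        else if (t <= 0.0f) { t = 0.0f; dir = 1.0f; }
    }
    return 0;
}

Feed each theta into the rotation matrix above to animate the prism.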
I would like to calculate the corner points or contours of the star in this larger image. For that, I'm scaling the image down to a smaller size, and I'm able to get these points clearly. Now how do I map these points back to the original image? I'm using OpenCV C++.
Consider a trivial example: the image size is reduced exactly by half.
So, the Cartesian coordinate (x, y) in the original image becomes coordinate (x/2, y/2) in the reduced image, and coordinate (x', y') in the reduced image corresponds to coordinate (x'*2, y'*2) in the original image.
Of course, fractional coordinates typically get rounded off in a reduced-scale image, so the exact mapping is only possible for even-numbered coordinates in this example's original image.
Generalizing this, if the image is scaled by a factor of w horizontally and h vertically, coordinate (x, y) becomes coordinate (x*w, y*h), rounded off. In the example I gave, both w and h are 1/2, or 0.5.
You should be able to figure out the values of w and h yourself, and be able to map the coordinates trivially. Of course, due to rounding off, you will not be able to compute the exact coordinates in the original image.
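As an illustration (a sketch with assumed image sizes, not from the original post), the mapping in OpenCV C++ is just a multiplication by the two scale factors:

#include <opencv2/core.hpp>
#include <cstdio>

int main()
{
    cv::Size original(1920, 1080);   // assumed full-resolution size
    cv::Size reduced(640, 360);      // assumed size used for detection

    // Scale factors from the reduced image back to the original one.
    double sx = static_cast<double>(original.width)  / reduced.width;
    double sy = static_cast<double>(original.height) / reduced.height;

    cv::Point detected(123, 45);     // a corner found in the reduced image
    cv::Point mapped(cvRound(detected.x * sx), cvRound(detected.y * sy));

    std::printf("(%d, %d) -> (%d, %d)\n",
                detected.x, detected.y, mapped.x, mapped.y);
    return 0;
}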
I realize this is an old question. I just wanted to add to Sam's answer above, to deal with the "rounding off", in case other readers are wondering about the same thing I faced.
This rounding off becomes obvious for an even number of pixels across a coordinate axis. For instance, along a 1-D axis, a point demarcating the 2nd quartile gets mapped to an inaccurate value:
axis_prev = [0, 1, 2, 3]
axis_new = [0, 1, 2, 3, 4, 5, 6, 7]
w_prev = len(axis_prev) # This is an axis of length 4
w_new = len(axis_new) # This is an axis of length 8
x_prev = 2
x_new = x_prev * w_new // w_prev  # integer division, as with pixel indices
print(x_new)
>>> 4
### x_new should be 5
In Python, one strategy would be to linearly interpolate values from one axis resolution to another axis resolution. Say for the above, we wish to map a point from the smaller image to its corresponding point of the star in the larger image:
import numpy as np
from scipy.interpolate import interp1d
# Both arrays must have the same length: each of the 641 sample positions
# on the old (smaller) axis is paired with its position on the new axis.
x_old = np.linspace(0, 640, 641)
x_new = np.linspace(0, 768, 641)
f = interp1d(x_old, x_new)
x = 35
x_prime = f(x)  # 42.0, since 35 * 768/640 = 42
I have a mesh model in X, Y, Z format. Let's say:
Points *P;
In first step, I want to normalize this mesh into (-1, -1, -1) to (1, 1, 1).
Here normalize means to fit this mesh into a box of (-1, -1, -1) to (1, 1, 1).
Then, after doing some processing on the normalized mesh, I finally want to revert the dimensions so they match the original mesh.
step-1:
P = Original Mesh dimensions;
step-2:
nP = Normalize(P); // from (-1, -1, -1) to (1, 1, 1)
step-3:
cnP = do something with (nP); // the number of vertices may have increased or decreased
step-4:
Original Mesh dimensions = Revert(cnP); // dimension should be same with the original mesh
How can I do that?
I know how easy it can be to get lost in programming and completely miss the simplicity of the underlying math. But trust me, it really is simple.
The most intuitive way to go about your problem is probably this:
Determine the minimum and maximum value for all three coordinate axes (i.e., x, y, and z); together these define the mesh's bounding box. Save these six values in six variables (e.g., min_x, max_x, etc.).
For all points p = (x,y,z) in the mesh, compute
q = ( 2.0*(x-min_x)/(max_x-min_x) - 1.0,
      2.0*(y-min_y)/(max_y-min_y) - 1.0,
      2.0*(z-min_z)/(max_z-min_z) - 1.0 )
Now q equals p mapped into the box (-1,-1,-1) -- (+1,+1,+1).
Do whatever you need to do on this intermediate grid.
Convert all coordinates q = (xx, yy, zz) back to the original grid by doing the inverse operation:
p = ( (xx+1.0)*(max_x-min_x)/2.0 + min_x,
      (yy+1.0)*(max_y-min_y)/2.0 + min_y,
      (zz+1.0)*(max_z-min_z)/2.0 + min_z )
Clean up any mess you've made and continue with the rest of your program.
This is so easy, it's probably a lot more work to find out which library contains these functions than it is to write them yourself.
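A minimal sketch of both directions (untested, and assuming a simple Point3 struct, since the Points type from the question isn't specified):

#include <vector>
#include <algorithm>

struct Point3 { double x, y, z; };   // hypothetical stand-in for the question's type
struct Box { double min_x, max_x, min_y, max_y, min_z, max_z; };

Box boundingBox(const std::vector<Point3>& mesh)
{
    Box b = { mesh[0].x, mesh[0].x, mesh[0].y, mesh[0].y, mesh[0].z, mesh[0].z };
    for (const Point3& p : mesh) {
        b.min_x = std::min(b.min_x, p.x); b.max_x = std::max(b.max_x, p.x);
        b.min_y = std::min(b.min_y, p.y); b.max_y = std::max(b.max_y, p.y);
        b.min_z = std::min(b.min_z, p.z); b.max_z = std::max(b.max_z, p.z);
    }
    return b;
}

// Step 2: map every point into (-1,-1,-1) .. (+1,+1,+1).
void normalize(std::vector<Point3>& mesh, const Box& b)
{
    for (Point3& p : mesh) {
        p.x = 2.0 * (p.x - b.min_x) / (b.max_x - b.min_x) - 1.0;
        p.y = 2.0 * (p.y - b.min_y) / (b.max_y - b.min_y) - 1.0;
        p.z = 2.0 * (p.z - b.min_z) / (b.max_z - b.min_z) - 1.0;
    }
}

// Step 4: the inverse map; it works even if the vertex count changed,
// as long as the original bounding box b was saved in step 1.
void revert(std::vector<Point3>& mesh, const Box& b)
{
    for (Point3& p : mesh) {
        p.x = (p.x + 1.0) * (b.max_x - b.min_x) / 2.0 + b.min_x;
        p.y = (p.y + 1.0) * (b.max_y - b.min_y) / 2.0 + b.min_y;
        p.z = (p.z + 1.0) * (b.max_z - b.min_z) / 2.0 + b.min_z;
    }
}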
It's easy - use shape functions. Here's a 1D example for two points:
-1 <= u <= +1
x(u) = x1*(1-u)/2.0 + x2*(1+u)/2.0
x(-1) = x1
x(+1) = x2
You can transform between coordinate systems using the Jacobian.
Let's see what it looks like in 2D:
-1 <= u <= +1
-1 <= v <= +1
x(u, v) = x1*(1-u)*(1-v)/4.0 + x2*(1+u)*(1-v)/4.0 + x3*(1+u)*(1+v)/4.0 + x4*(1-u)*(1+v)/4.0
y(u, v) = y1*(1-u)*(1-v)/4.0 + y2*(1+u)*(1-v)/4.0 + y3*(1+u)*(1+v)/4.0 + y4*(1-u)*(1+v)/4.0
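A small sketch of the 2D case (untested; the unit-square corners are assumed sample data), evaluating the four bilinear shape functions at a parametric point (u, v):

#include <cstdio>

// Map parametric coordinates (u, v) in [-1, +1] x [-1, +1] to (x, y),
// given four corner points listed counter-clockwise.
void mapPoint(double u, double v,
              const double x[4], const double y[4],
              double& xOut, double& yOut)
{
    const double n[4] = {
        (1 - u) * (1 - v) / 4.0,   // shape function for corner 1
        (1 + u) * (1 - v) / 4.0,   // corner 2
        (1 + u) * (1 + v) / 4.0,   // corner 3
        (1 - u) * (1 + v) / 4.0    // corner 4
    };
    xOut = yOut = 0.0;
    for (int i = 0; i < 4; ++i) {
        xOut += n[i] * x[i];
        yOut += n[i] * y[i];
    }
}

int main()
{
    const double x[4] = { 0.0, 1.0, 1.0, 0.0 };  // an assumed unit square
    const double y[4] = { 0.0, 0.0, 1.0, 1.0 };
    double px, py;
    mapPoint(0.0, 0.0, x, y, px, py);            // the element center
    std::printf("(%.2f, %.2f)\n", px, py);       // prints (0.50, 0.50)
    return 0;
}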
I'm trying to calculate the camera's position for an image. I have 2 images of a Rubik's Cube. The first image is considered to be the base image and the next image is the image after the camera has moved. So for the first image I assume that the camera is at (0,0,0). On this image I then identify the 4 corners of the front face of the Rubik's Cube as shown here (4 corners identified by the 4 blue circles).
Then for the next image (after camera movement), I identify the same face of the Rubik's Cube as shown here.
So by taking the first image as the base image, does anyone know if/how I can calculate how much the camera has moved for image 2, as shown here:
I would suggest you use OpenCV for this. I also think this question would be more suited to StackOverflow.
The textbook on this subject would be "Multiple View Geometry in Computer Vision" by Hartley and Zisserman. http://www.robots.ox.ac.uk/~vgg/hzbook/ (There is a sample chapter on the Fundamental Matrix on that website.)
Basically, first find the Fundamental Matrix, then by knowing the intrinsic parameters of the camera, find a solution to the position.
Fundamental Matrix: http://en.wikipedia.org/wiki/Fundamental_matrix_%28computer_vision%29
Intrinsic Parameters: Stuff like the focal length and where the principal point is on the image plane. If you have F, then E = K^t * F * K, where K is the intrinsic matrix (assumed to be the same for both images).
How to find a solution to the camera position: http://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E
Algorithm
This is how I would do it in OpenCV. I have done this before, so it ought to work; a rough sketch in code follows the list.
1. Run a Feature Detector and Descriptor Extractor on both images.
2. Match Features.
3. Use F = cv::findFundamentalMat with RANSAC.
4. E = K.t() * F * K. // K needs to be found beforehand.
5. Do SingularValueDecomposition of E such that E = U * S * V.t()
6. R = U * W.inv() * V.t() // W = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
7. Tx = V * Z * V.t() // Z = [[0, -1, 0], [1, 0, 0], [0, 0, 0]]
8. get t from Tx (matrix version of cross product)
9. Find the correct solution. R.t() and -t are possibilities.
10. Get the overall scale by knowing the edge length of the Rubik's Cube.
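Here is a rough, untested sketch of steps 3-8 (the matched point lists and the intrinsic matrix K are assumed to be available already; choosing among the sign combinations in step 9 is omitted):

#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Mat recoverCameraMotion(const std::vector<cv::Point2f>& pts1,
                            const std::vector<cv::Point2f>& pts2,
                            const cv::Mat& K)
{
    // Step 3: fundamental matrix with RANSAC.
    cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);

    // Step 4: essential matrix.
    cv::Mat E = K.t() * F * K;

    // Step 5: singular value decomposition E = U * S * V.t().
    cv::SVD svd(E);

    // Steps 6-7: rotation R and the skew-symmetric matrix Tx.
    cv::Mat W = (cv::Mat_<double>(3, 3) << 0, -1, 0, 1, 0, 0, 0, 0, 1);
    cv::Mat Z = (cv::Mat_<double>(3, 3) << 0, -1, 0, 1, 0, 0, 0, 0, 0);
    cv::Mat R  = svd.u * W.inv() * svd.vt;
    cv::Mat Tx = svd.vt.t() * Z * svd.vt;   // V * Z * V.t()

    // Step 8: read t out of the skew-symmetric matrix Tx.
    cv::Mat t = (cv::Mat_<double>(3, 1) << Tx.at<double>(2, 1),
                                           Tx.at<double>(0, 2),
                                           Tx.at<double>(1, 0));
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);                  // pack [R | t] into a 3x4 matrix
    return Rt;
}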
Alternative Solutions
I am certain that a more straightforward approach can also work. The benefit of the approach above is that no human input is needed (it is unsupervised). This is not true for the optional step 10 (determining scale).
A different solution would exploit the knowledge of the geometry of the Rubik's Cube. For example, six (really 5.5) point correspondences are needed to estimate the position of the camera, if the points' 3D positions are known.
Unfortunately, I am not aware of any software that does this for you automatically.
So here is the alternative algorithm:
Write down the 3D coordinates of the corners of the cube as (X_i, Y_i, Z_i), and possibly also points with other knowable positions.
Mark the corresponding image points u_i = (x_i, y_i).
For every correspondence, create two rows in a matrix A:
(X_i, Y_i, Z_i, 1, 0, 0, 0, 0, -x_i*X_i, -x_i*Y_i, -x_i*Z_i, -x_i)
(0, 0, 0, 0, X_i, Y_i, Z_i, 1, -y_i*X_i, -y_i*Y_i, -y_i*Z_i, -y_i)
Then find p such that Ap = 0. That is, p is the right null vector of A, or the least-squares solution to Ap = 0.
De-flatten p to create the 3x4 projection matrix P.
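A condensed, untested sketch of this alternative in OpenCV C++ (the correspondence lists are assumed inputs), using cv::SVD::solveZ to get the least-squares solution of Ap = 0:

#include <opencv2/core.hpp>
#include <vector>

cv::Mat estimateProjection(const std::vector<cv::Point3d>& obj,
                           const std::vector<cv::Point2d>& img)
{
    // Two rows of A per 2D-3D correspondence; 6 or more points needed.
    cv::Mat A((int)obj.size() * 2, 12, CV_64F);
    for (size_t i = 0; i < obj.size(); ++i) {
        const double X = obj[i].x, Y = obj[i].y, Z = obj[i].z;
        const double x = img[i].x, y = img[i].y;
        const double r1[12] = { X, Y, Z, 1, 0, 0, 0, 0, -x*X, -x*Y, -x*Z, -x };
        const double r2[12] = { 0, 0, 0, 0, X, Y, Z, 1, -y*X, -y*Y, -y*Z, -y };
        for (int j = 0; j < 12; ++j) {
            A.at<double>((int)i * 2,     j) = r1[j];
            A.at<double>((int)i * 2 + 1, j) = r2[j];
        }
    }
    cv::Mat p;
    cv::SVD::solveZ(A, p);      // least-squares solution to Ap = 0
    return p.reshape(0, 3);     // de-flatten into the 3x4 matrix P
}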