I'm trying to reproject a lat/lon coordinate in the WGS84 system to a UTM coordinate in the SIRGAS 2000 coordinate system.
Using GDAL, I started by converting a known UTM coordinate to its lat/lon counterpart in the same coordinate system (zone 29 N), just to check that I wrote the right code (I'm omitting error checking here):
#include "ogr_spatialref.h"
OGRSpatialReference monUtm;
monUtm.SetWellKnownGeogCS("WGS84");
monUtm.SetUTM(29, true);
OGRSpatialReference monGeo;
monGeo.SetWellKnownGeogCS("WGS84");
OGRCoordinateTransformation* coordTrans = OGRCreateCoordinateTransformation(&monUtm, &monGeo);
double x = 621921.3413490148;
double y = 4794536.070196861;
int reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print the coords.
delete coordTrans;
coordTrans = OGRCreateCoordinateTransformation(&monGeo, &monUtm);
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print the coords.
delete coordTrans;
The coordinates 621921.3413490148, 4794536.070196861 correspond to the Moncelos region in northern Galicia. The round-trip transformation seems to work: the lat/lon coordinates are correct, and when projecting back to UTM I get the same values as the originals:
UTM: 621921.34135 , 4794536.0702
Lat/lon: 43.293779579 , -7.4970160261
Back to UTM: 621921.34135 , 4794536.0702
Now, reprojecting from WGS84 lat/long to SIRGAS 2000 UTM:
// Rodovia dos Tamoios, Brazil:
// - UTM -> 23 S
// - WGS 84 -> EPSG:32723
// - SIRGAS 2000 -> EPSG:31983
OGRSpatialReference wgs;
wgs.SetWellKnownGeogCS("WGS84");
OGRSpatialReference sirgas;
sirgas.importFromEPSG(31983);
coordTrans = OGRCreateCoordinateTransformation(&wgs, &sirgas);
double x = -23.57014667;
double y = -45.49159617;
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print results
delete coordTrans;
coordTrans = OGRCreateCoordinateTransformation(&sirgas, &wgs);
reprojected = coordTrans->Transform(1, &x, &y);
// If OK, print results.
This doesn't give the same results:
WGS84 Lat/Lon input: -23.57014667 , -45.49159617
SIRGAS 2000 UTM output: 2173024.0216 , 4734004.2131
Back to WGS84 Lat/Lon: -23.570633824 , -45.491627598
As you can see, the original WGS84 lat/lon and the back-to-WGS84 lat/lon coordinates aren't exactly the same, unlike in the first test case. Also, the UTM x-coordinate has 7 digits (I thought it was limited to 6?).
In Google Maps, we can see that there's a difference of 27 meters between the two points (the original point is represented by a circle; my "back-reprojected" point is represented by a dagger).
Finally, the question: am I doing the reprojection right? If so, why is there a 27 meter difference between reprojections in the second test case?
The problem is the axis order: the transformation works in Cartesian X/Y space, which means Lon/Lat order, not Lat/Lon.
Setting the values like this should work:
double x = -45.49159617; // Lon
double y = -23.57014667; // Lat
The difference you saw in your round-trip conversion came from projecting outside the bounds of the UTM zone due to the swapped axis order.
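For completeness, a minimal sketch of the corrected second test with the inputs in Lon/Lat order (same GDAL calls as in the question, error checking still omitted). Note that GDAL 3+ honors the authority's axis order by default; calling SetAxisMappingStrategy(OAMS_TRADITIONAL_GIS_ORDER) on both spatial references restores the X = Lon, Y = Lat behavior assumed here.
// Corrected version of the second test: X = longitude, Y = latitude.
OGRSpatialReference wgs;
wgs.SetWellKnownGeogCS("WGS84");
OGRSpatialReference sirgas;
sirgas.importFromEPSG(31983); // SIRGAS 2000 / UTM zone 23S
OGRCoordinateTransformation* coordTrans = OGRCreateCoordinateTransformation(&wgs, &sirgas);
double x = -45.49159617; // Lon
double y = -23.57014667; // Lat
int reprojected = coordTrans->Transform(1, &x, &y);
// If OK, x and y now hold the SIRGAS 2000 UTM easting and northing.
delete coordTrans;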
Related
I am taking a Computer Vision course, and I ran into some problems while doing an exercise:
I have the intrinsic matrix K and the extrinsic matrix [R|t] of a camera, as follows:
K =
478.989 2.67423 405.437
0 476.472 306.35
0 0 1
[R|t] =
0.681951 -0.00771052 -0.734232 -46.1881
-0.344648 0.882047 -0.331892 -42.4157
0.645105 0.479386 0.598855 118.637
The real-world coordinate system is shown in the picture.
I want to calculate the camera position relative to the world coordinate system,
and the answer is supposed to be
[X, Y, Z] = [74.18, 69.421, 50.904]
How can I get the answer? I have spent a lot of time on it, but I cannot figure it out.
This OpenCV document details how to convert from world to camera co-ordinates.
x = K [R|t] X
where x is the 2D image co-ordinate and X is the 3D world co-ordinate. Using the above, what you want is X, which is:
X = inverse(K [R|t]) * x
Now plug in your values of x (u, v, 1) and you should get the value of X, which is the 3D co-ordinate you need.
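As a concrete illustration of the projection equation above, here is a minimal sketch in plain C++ (the function name and raw-array layout are mine, not from OpenCV) that maps a 3D world point X to pixel coordinates via x = K [R|t] X:
// Project a 3D world point into the image: x = K * [R|t] * [X; 1].
// K is the 3x3 intrinsic matrix, Rt the 3x4 extrinsic matrix.
void project(const double K[3][3], const double Rt[3][4],
             const double X[3], double uv[2])
{
    // Camera-space point: P = R*X + t, i.e. the [R|t] * [X; 1] product.
    double P[3];
    for (int i = 0; i < 3; ++i)
        P[i] = Rt[i][0]*X[0] + Rt[i][1]*X[1] + Rt[i][2]*X[2] + Rt[i][3];
    // Homogeneous image point: p = K * P.
    double p[3];
    for (int i = 0; i < 3; ++i)
        p[i] = K[i][0]*P[0] + K[i][1]*P[1] + K[i][2]*P[2];
    // Perspective divide gives the pixel coordinates (u, v).
    uv[0] = p[0] / p[2];
    uv[1] = p[1] / p[2];
}
Going the other way is the subtle part: K [R|t] is 3x4 and has no ordinary inverse, so "inverse" above has to be read as solving the system (or augmenting [R|t] to a 4x4 matrix) rather than literally inverting the product.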
I currently have an agent in a map, whose position is known as myPos=(myX,myY), but whose orientation myOri=(oriX,oriY) is unknown. I also see a landmark at position lm=(lmX,lmY) and I have both Cartesian and polar coordinates from my point of view to the landmark, as relLM=(relX,relY) and polLM=(r,theta), respectively.
My goal is to find how my orientation vector is related to the X and Y axes, as XX=(xX, xY) and YY=(yX, yY). Assume for the following examples that X grows to the right and Y grows upwards, and that an agent with 0 rotation follows the X axis (so an agent looking right has XX=(1,0) and YY=(0,1)). This follows from the intuition where a 0 angle rotation is on the X axis, PI/2 rotation is on Y, PI is on -X, 3PI/2 is on -Y and 2PI is X.
Example) If myOri=(1,1) (agent is facing top right), then XX=(1, -1) (since the X axis is top right to him) and YY=(1, 1) (the Y axis is top left). In the picture below, X and Y are shown in red and green. My agent is in blue and the landmark in pink. Hence, our initial data are myPos=(0,-2), lm=(0,-1), relLM=(~0.7,~0.7).
By knowing myPos and lm, as well as relLM, this should be possible. However, I'm having trouble finding the appropriate vectors. What is the correct algorithm?
bool someFunction(Landmark *lm, Vector2f myPos, Vector2f *xx, Vector2f *yy){
// Vector from agent to landmark
Vector2f agentToLandmark(lm->position.x - myPos.x,
lm->position.y - myPos.y);
// Vector from agent to landmark, regarding agent's orientation
Vector2f agentFacingLandmark = lm->relPosition.toCartesian();
// Set the XX and YY values
// how?
}
My problem is actually in 3D, but using 2D makes the problem easier to explain.
Finding myOri
Since relLM is lm relative to myOri, lm + relLM must lie on the ray myPos + µ * myOri. Thus lm + relLM - myPos = µ * myOri. Since µ > 0 is guaranteed in this case, and myOri only needs to indicate a direction, it's sufficient to choose an arbitrary µ > 0.
Finding xx and yy
I think your definition of xx is simply a vector representing the x-axis from the POV of the agent, and the same for yy and the y-axis. This can easily be achieved. The angle between myOri and the x-axis is equal to the angle between the x-axis and xx, so simply mirror myOri at the x-axis and you get xx: xx = (myOri.x, myOri.y * (-1)). The angle between myOri and the y-axis is equal to the angle between myOri and yy, so yy = myOri.
Note that this is only a guess at what you mean. It might be that I've misunderstood something; just notify me if that's the case.
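Putting the two steps together, here is a sketch of how someFunction could be completed under this reading (it assumes Vector2f has public x/y members and a two-argument constructor, and that lm->relPosition.toCartesian() returns relLM, as in the question):
bool someFunction(Landmark *lm, Vector2f myPos, Vector2f *xx, Vector2f *yy){
    // Orientation: lm + relLM - myPos = µ * myOri with µ > 0, so the
    // difference vector already points along myOri.
    Vector2f relLM = lm->relPosition.toCartesian();
    Vector2f myOri(lm->position.x + relLM.x - myPos.x,
                   lm->position.y + relLM.y - myPos.y);
    // xx is myOri mirrored at the x-axis; yy is myOri itself.
    *xx = Vector2f(myOri.x, -myOri.y);
    *yy = myOri;
    return true;
}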
In a geodetic coordinate system (WGS84), I have a pair of (latitude, longitude) points, say (45,50) and (60,20). I am also told that a new (latitude, longitude) point lies along the line joining these two, at an offset of 0.1 deg latitude from (45,50), i.e. (45.1, x). How do I find this new point? What I tried was to apply the straight-line equation
y = mx+c
m = (lat1 - lat2) / (long1 - long2)
c = lat1 - m * long1
but that seemed to give wrong results.
Your problem is the calculation of m. You have turned it around!
The normal formula is:
a = (y1 - y2) / (x1 - x2)
so in your case it is:
m = (long1 - long2) / (lat1 - lat2)
so you'll get m = -2
And you also turned the calculation of c around.
Normal is:
b = y1 - a * x1
so you should do:
c = long1 - m * lat1
So you'll get c = 140.
The formula is:
long = -2 * lat + 140
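Plugging in lat = 45.1 gives long = -2 * 45.1 + 140 = 49.8, so the new point is (45.1, 49.8).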
Another way to think about it is given below. The result is the same, of course.
The surface line between two coordinates is not a straight line. It is a line drawn on the surface of a round object, i.e. the Earth, and it will be a circle around the Earth.
However, the coordinates of all points on that line will still satisfy a straight-line equation.
That is because a coordinate pair represents the angles of a vector from the center of the Earth to the point you are looking at. The two angles are measured relative to the Equator (latitude) and to the Greenwich meridian (longitude).
So you need to setup a formula describing all coordinates for that line.
In your case the latitude goes from 45 to 60, i.e. it increases by 15.
Your longitude goes from 50 to 20, i.e. it decreases by 30.
So your formula will be:
(lat(t), long(t)) = (45, 50) + (15*t, -30*t) for t in [0:1]
Now you can calculate the value of t that will hit (45.1, x) and afterwards you can calculate x.
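For example: 45 + 15*t = 45.1 gives t = 1/150, and then x = 50 - 30/150 = 49.8, matching the slope/intercept method above.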
The equations you used describe a straight line in a 2D Cartesian coordinate system.
Longitude and latitude describe a point in a spherical coordinate system.
A spherical coordinate system is not cartesian.
A similar question was answered here.
I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny), the coordinates are no longer accurate. I commented the line I think my fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx ,float w_ny)
{
float z_b = z_p - z_m;
float x_b = x_p - x_m;
float y_b = y_p - y_m;
float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; // <- the fault must be here
x_n = cos(w_bx)*sin(w_by)*length_+x_m;
z_n = sin(w_bx)*sin(w_by)*length_+z_m;
y_n = cos(w_by)*length_+y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct: the computation you do produces two inclination angles (one relative to the x-axis, the other relative to the y-axis), not a proper inclination/azimuth pair
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore in this case using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(w_nx) * b
b = RotationMatrixAroundY(w_ny) * b
n = m + b
See basic rotation matrices for the matrix definitions.
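A minimal sketch of that sequence in plain C++ (the function name is mine; the matrices are the standard basic rotations about X and Y, written out component-wise):
#include <cmath>

// Rotate point p around pivot m: first about the X axis by w_nx,
// then about the Y axis by w_ny, i.e. n = m + Ry(w_ny) * Rx(w_nx) * (p - m).
void rotateAroundPoint(const float m[3], const float p[3],
                       float w_nx, float w_ny, float n[3])
{
    // Difference vector b = p - m.
    float b[3] = { p[0] - m[0], p[1] - m[1], p[2] - m[2] };
    // Basic rotation about the X axis: y' = c*y - s*z, z' = s*y + c*z.
    float c = std::cos(w_nx), s = std::sin(w_nx);
    float rx[3] = { b[0], c*b[1] - s*b[2], s*b[1] + c*b[2] };
    // Basic rotation about the Y axis, then translate back by m.
    c = std::cos(w_ny); s = std::sin(w_ny);
    n[0] = m[0] + c*rx[0] + s*rx[2];
    n[1] = m[1] + rx[1];
    n[2] = m[2] - s*rx[0] + c*rx[2];
}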
Try to use vector math. Decide in which order you rotate; first around x, then around y, perhaps.
If you rotate around the z-axis [z' = z]:
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for the y-axis [y'' = y']:
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
Again, rotating around the x-axis [x''' = x'']:
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally, the question of rotating around a specific point: first subtract the point from the coordinates, then apply the rotations, and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock". The angle w_ny can't be measured relative to the fixed xyz coordinate system, but relative to the coordinate system obtained by first applying the rotation by w_nx.
As kakTuZ observed, your code converts the point to spherical coordinates. There's nothing inherently wrong with that: with longitude and latitude, one can reach every place on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, that's fine with me.
The result of not rotating the next reference axis along with the first rotation is that two points that are 1 km apart at the equator move closer to each other toward the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia ...
But when you work with OpenGL, you really should use OpenGL functions like glRotatef. These functions are calculated on the GPU, not on the CPU as your function is. The doc is here.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix on top of the stack at the point of its rendering. Obtain that matrix with glGetFloatv, and then multiply it with either your own vector-matrix multiplication function or one of the many you can easily find online.
But that would be a pain! Instead, look into using the GL feedback buffer. This buffer simply stores the points where the primitive would have been drawn instead of actually drawing the primitive, and you can then read them back from there.
This is a good starting point.
I have a point in 3D space and two angles, and I want to calculate the resulting line from this information. I have found how to do this with 2D lines, but not in 3D. How can this be calculated?
If it helps: I'm using C++ & OpenGL and have the location of the user's mouse click and the angle of the camera, I want to trace this line for intersections.
In trig terms, two angles and a point are required to define a line in 3D space. Converting that to (x,y,z) is just a conversion from polar (spherical) coordinates to Cartesian coordinates; the equations are:
x = r sin(q) cos(f)
y = r sin(q) sin(f)
z = r cos(q)
Where r is the distance from the point P to the origin; the angle q (zenith) is between the line OP and the positive polar axis (which can be thought of as the z-axis); and the angle f (azimuth) is between the initial ray and the projection of OP onto the equatorial plane (usually measured from the x-axis).
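As a small sketch (the function name is mine), the unit direction vector of the line follows directly from those formulas with r = 1:
#include <cmath>

// Direction of the line from the two angles: zenith q, azimuth f.
// These are the spherical-to-Cartesian formulas above with r = 1.
void directionFromAngles(float q, float f, float dir[3])
{
    dir[0] = std::sin(q) * std::cos(f); // x = sin(q) cos(f)
    dir[1] = std::sin(q) * std::sin(f); // y = sin(q) sin(f)
    dir[2] = std::cos(q);               // z = cos(q)
}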
Edit:
Okay, that was the first part of what you asked. The rest of it, the real question after the updates, is much more complicated than just creating a line from 2 angles and a point in 3D space. It involves using a camera-to-world transformation matrix and was covered in other SO questions. For convenience, here's one: How does one convert world coordinates to camera coordinates? The answers cover converting from world-to-camera and camera-to-world.
The line can be thought of as a point moving in "time". The equation must be vectorized, or have a direction, to make sense, so time is a natural way to think of it. So an equation of a line in 3 dimensions can really be three equations of x, y, and z related to time, such as:
x = ax*t + cx
y = ay*t + cy
z = az*t + cz
To find that set of equations, assuming the camera is at the origin (0,0,0) and your point is (x1,y1,z1), then
ax = x1 - 0
ay = y1 - 0
az = z1 - 0
cx = cy = cz = 0
so
x = x1*t
y = y1*t
z = z1*t
Note: this also assumes that the "speed" of the line or vector is such that it reaches your point (x1,y1,z1) after 1 second.
So to draw that line, just fill in the points as finely as you like for as long as required; sampling every 1/1000 of a second for 10 seconds, say, will draw a "line" (really a series of points that, seen from a distance, appear as a line) covering 10 seconds' worth of distance, determined by the "speed" you chose.
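A minimal sketch of that sampling loop (plotPoint is a placeholder for whatever drawing routine you actually use, not a real API):
// Sample the parametric line x = x1*t, y = y1*t, z = z1*t,
// one point every 1/1000 "second" for 10 "seconds".
void drawLine(float x1, float y1, float z1,
              void (*plotPoint)(float, float, float))
{
    const float tMax = 10.0f;
    const float dt = 0.001f;
    for (float t = 0.0f; t <= tMax; t += dt)
        plotPoint(x1 * t, y1 * t, z1 * t);
}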