I am working on an app in which I first send the user's latitude and longitude to the server. The server also has some previously stored coordinates, and the app now has to check whether the user's coordinate is within a radius of 1 km or 2 km of any of the coordinates the server already has.
One approach I found is taking the initial coordinate as the circle's center, but it is not accurate.
Any suggestion on which algorithm I should follow, and a source that can help me understand it?
$R = 6378.1; // radius of the Earth (km)
$dlat = deg2rad($lat1 - $lat2); // latitude difference in radians
$dlon = deg2rad($lon1 - $lon2); // longitude difference in radians
The radius of the Earth is much bigger than 1-2 km, so we can use the Pythagorean theorem on these small angle differences and still get about 0.03 % accuracy (~0.3 m). One correction is needed: a degree of longitude shrinks with latitude, so scale the east-west term by the cosine of the latitude:
$dlon *= cos(deg2rad(($lat1 + $lat2) / 2)); // meridians converge away from the equator
$distance = $R * sqrt($dlat * $dlat + $dlon * $dlon);
// same thing: $distance = sqrt(($dlat * $R) ** 2 + ($dlon * $R) ** 2);
This may help you to understand:
http://en.wikipedia.org/wiki/Spherical_coordinate_system
The spherical coordinate system is not quite the same as the geographic coordinate system, but I think it is a good starting point for understanding what you need.
If you need more accuracy, see the Worked example section of http://en.wikipedia.org/wiki/Great-circle_distance
When the distances are only a few kilometres the accuracy difference is small, but the formula I wrote above is much cheaper to compute.
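If you do want the great-circle accuracy, here is a minimal haversine sketch (Python for brevity; the function name and the 6371 km mean radius are my choices, not from the links above):
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    # great-circle distance between two lat/lon points, in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# the 1-2 km check from the question then becomes:
# haversine_km(user_lat, user_lon, stored_lat, stored_lon) <= 2.0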
For a university project I am currently working on, I have to create a point cloud by reading images from this dataset. These are basically video frames; for each frame there is an RGB image along with a corresponding depth image.
I am familiar with the equation z = f*b/d; however, I am unable to figure out how the data should be interpreted. Information about the camera that was used to take the video is not provided, and the project only states the following:
"Consider a horizontal/vertical field of view of the camera 48.6/62
degrees respectively"
I have little to no experience in computer vision, and I have never encountered two fields of view being used before. Assuming I use the depth from the image as-is (for the z coordinate), how would I go about calculating the x and y coordinates of each point in the point cloud?
Yes, it's unusual to specify multiple fields of view. Given a typical camera (squarish pixels, minimal distortion, view vector through the image center), usually only one field-of-view angle is given -- horizontal or vertical -- because the other can then be derived from the image aspect ratio.
Specifying a horizontal angle of 48.6 and a vertical angle of 62 is particularly surprising here, since the image is a landscape view, where I'd expect the horizontal angle to be greater than the vertical. I'm pretty sure it's a typo:
When swapped, the ratio tan(62 * pi / 360) / tan(48.6 * pi / 360) is the 640 / 480 aspect ratio you'd expect, given the image dimensions and square pixels.
At any rate, a horizontal angle of t is basically saying that the horizontal extent of the image, from left edge to right edge, covers an arc of t radians of the visual field, so the pixel at the center of the right edge lies along a ray rotated t / 2 radians to the right from the central view ray. This "righthand" ray runs from the eye at the origin through the point (tan(t / 2), 0, -1) (assuming a right-handed space with positive x pointing right and positive y pointing up, looking down the negative z axis). To get the point in space at distance d from the eye, you can just normalize a vector along this ray and multiply it by d. Assuming the samples are linearly distributed across a flat sensor, I'd expect that for a given pixel at (x, y) you could calculate its corresponding ray point with:
p = (dx * tan(hfov / 2), dy * tan(vfov / 2), -1)
where dx is 2 * (x - width / 2) / width, dy is 2 * (y - height / 2) / height, and hfov and vfov are the field-of-view angles in radians.
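Putting those formulas together, a minimal numpy sketch (the function name is mine; it assumes the stored value is distance to the eye, hence the normalization step -- see the note on the Matlab code below for the plane-depth variant, which skips it):
import numpy as np

def depth_to_points(depth, hfov_deg=62.0, vfov_deg=48.6):
    # back-project a depth image into camera-space points
    h, w = depth.shape
    x = np.arange(w)
    y = np.arange(h)
    dx = 2.0 * (x - w / 2.0) / w   # normalized image coords in [-1, 1]
    dy = 2.0 * (y - h / 2.0) / h
    dx, dy = np.meshgrid(dx, dy)
    # ray through each pixel (right-handed space, looking down -z)
    rays = np.stack([dx * np.tan(np.radians(hfov_deg) / 2),
                     dy * np.tan(np.radians(vfov_deg) / 2),
                     -np.ones_like(dx)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)  # unit-length rays
    return rays * depth[..., None]                        # scale by distance d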
Note that the documentation accompanying your sample data links to a Matlab file that shows the recommended process for converting the depth images into a point cloud and distance field. In it, the fields of view are baked together with the image dimensions into a constant factor of 570.3, which can be used to recover the field-of-view angles that the authors believed their recording device had:
atan(320 / 570.3) * (360 / pi / 2) * 2 = 58.6
which is indeed pretty close to the 62 degrees you were given.
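The same arithmetic recovers both angles (a sketch; the 640x480 image dimensions are assumed from the aspect-ratio discussion above):
import math

f = 570.3        # the constant baked into the dataset's Matlab code, in pixels
w, h = 640, 480  # assumed image dimensions
hfov = 2 * math.degrees(math.atan(w / 2 / f))  # about 58.6 degrees
vfov = 2 * math.degrees(math.atan(h / 2 / f))  # about 45.6 degrees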
From the Matlab code, it looks like the value in the image is not distance from a given point to the eye, but instead distance along the view vector to a perpendicular plane containing the given point ("depth", or basically "z"), so the authors can just multiply it directly with the vector (dx * tan(hfov / 2), dy * tan(vfov / 2), -1) to get the point in space, skipping the normalization step mentioned earlier.
I have geometries in a PostGIS database with GeoDjango's default SRID, WGS84, and have found lookups directly in degrees to be much faster than in kilometres, presumably because the database can skip the projection step.
Basically, Place.objects.filter(location__distance__lte=(point, D(km=10))) is several orders of magnitude slower than Place.objects.filter(location__dwithin=(point, 10)), as the first query produces a full scan of the table. But sometimes I need to look up places with a distance threshold in kilometres.
Is there a somewhat precise way to convert the 10 km to degrees for the query?
Maybe another equivalent lookup with the same performance that I should be using instead?
You have several approaches to deal with your problem; here are two of them:
If you do not care much about precision, you could use dwithin with a naive metre-to-degree conversion, degrees(x metres) = x / 40000000 * 360. You would get nearly exact results near the equator, but as you move north or south the matched distance shrinks (we are living on a sphere, after all). Imagine a region that starts out as a circle and shrinks to an infinitely narrow ellipse as it approaches one of the poles.
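For example (a sketch reusing the Place model and point from your query):
max_distance = 10000  # metres
degrees = max_distance / 40000000.0 * 360.0  # naive conversion; exact only near the equator
Place.objects.filter(location__dwithin=(point, degrees))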
If you care about precision you can use:
import math
from django.contrib.gis.measure import D

max_distance = 10000  # distance in metres
# metres -> degrees, widened by 1/cos(latitude in radians) so the circle
# stays large enough as you move away from the equator
buffer_width = max_distance / 40000000.0 * 360.0 / math.cos(math.radians(point.y))
buffered_point = point.buffer(buffer_width)
Place.objects.filter(
    location__distance__lte=(point, D(m=max_distance)),
    location__overlaps=buffered_point,
)
The basic idea is to query for all points that are within a circle around your point, measured in degrees. This part is very performant, as the circle is in degrees and the geo index can be used. But the circle is sometimes a bit too big, so we keep the filter in metres to drop places that may be a bit farther away than the allowed max_distance.
A small update to the answer of frankV.
import math
from django.contrib.gis.measure import D

max_distance = 10000  # distance in metres
buffer_width = max_distance / 40000000.0 * 360.0 / math.cos(math.radians(point.y))
buffered_point = point.buffer(buffer_width)
Place.objects.filter(
    location__distance__lte=(point, D(m=max_distance)),
    location__intersects=buffered_point,
)
I found that __overlaps doesn't work with PostgreSQL and a point, but __intersects does.
To be sure it actually helps speed up your query, check the explain plan (queryset.query gives you the generated SQL).
I am trying to get the Z-axis rotation angle of a GY-521 (MPU-6050), but the angle drifts steadily even when I don't move the sensor. Is there a way to filter the angle so that it is correct for my case?
Code:
float accel_z = accel_t_gyro.value.z_accel;
float accel_angle_z = 0; // the accelerometer cannot measure rotation about the gravity axis
// integrate the gyro rate over the time step dt
float gyro_angle_z = gyro_z * dt + get_last_z_angle();
float unfiltered_gyro_angle_z = gyro_z * dt + get_last_gyro_z_angle();
// complementary filter: blend the integrated gyro angle with the accel angle
float alpha = 0.96;
float angle_z = alpha * gyro_angle_z + (1.0 - alpha) * accel_angle_z;
Output when the sensor is not moved:
x = x-rotation angle
z = z-rotation angle
x z
8.37 4.24
8.35 4.22
8.33 4.21
8.32 4.19
8.31 4.18
8.29 4.17
8.28 4.15
8.26 4.14
8.25 4.12
8.23 4.12
8.22 4.10
8.21 4.09
8.19 4.07
8.18 4.05
Timo, I saw that you found a partial solution to your problem (quote: "there is no gravity that holds the gyroscope in the right position"). I would like to elaborate, because there are ways to get reasonably good results from a gyroscope alone. I don't know if they will be good enough for your application. Here are the results I got using the same hardware as you: http://robokitchen.tumblr.com/post/68625623585/imu-with-gyroscope-only-i-tested-the-three-axes. In this video I am not using the accelerometer at all.
There are two points that could help you get better results:
1) Unless your sensor remains totally horizontal you cannot get the z-axis rotation using this formula:
float gyro_angle_z = gyro_z*dt + get_last_z_angle();
To explain why, imagine this: if your gyroscope turned 90 degrees around the x axis then 90 degrees around the z axis, you would not obtain the same result as if it only turned 90 degrees around the z axis. This means that you should compute the rotation around all the axes and then extract the z-axis rotation. If you don't use the x and y axes your results will be incorrect (unless your sensor remains totally horizontal).
One efficient way to compute the rotation around all the axes is to use quaternions. There is a lot to say about quaternions, so I'll let you do your own research and ask more questions if required.
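To make the idea concrete, here is a minimal sketch of quaternion integration and z-angle extraction (Python for readability; the function names are mine, and the body rates wx, wy, wz are assumed to be in rad/s):
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def integrate_gyro(q, wx, wy, wz, dt):
    # advance orientation q by the body rates over dt: dq/dt = 0.5 * q * (0, w)
    dq = quat_mul(q, (0.0, wx, wy, wz))
    q = tuple(qi + 0.5 * dqi * dt for qi, dqi in zip(q, dq))
    n = math.sqrt(sum(qi * qi for qi in q))  # renormalize to unit length
    return tuple(qi / n for qi in q)

def z_angle_deg(q):
    # extract the z-axis (yaw) rotation from a unit quaternion
    w, x, y, z = q
    return math.degrees(math.atan2(2 * (w*z + x*y), 1 - 2 * (y*y + z*z)))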
2) After using all 3 axes there will still be a static drift, because the gyroscope will not read a perfect 0 even when it is not moving. To improve the situation you can calibrate the gyroscope: read a large number (100+) of values while the gyroscope is not moving, calculate the average drift, and subtract it from your measurements when the gyroscope is moving. There will still be a slight drift after this, but not as bad as what you may be experiencing now.
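The calibration step is just an average (a sketch; read_gyro_z is a hypothetical stand-in for however you read the raw z rate from your sensor):
def calibrate(read_gyro_z, samples=200):
    # average the at-rest readings to estimate the zero-rate bias
    return sum(read_gyro_z() for _ in range(samples)) / samples

bias = calibrate(read_gyro_z)  # the sensor must be stationary here
rate = read_gyro_z() - bias    # use the bias-corrected rate from now on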
Here are the best results I got, using the MPU-6050 gyroscope and accelerometer: http://robokitchen.tumblr.com/post/68626551378/imu-with-gyroscope-and-accelerometer-i-ran-the. As you already know, the accelerometer helps stabilise the x and y axes but not the z axis. However, the z-axis drift is low and remains usable for many applications. For instance, several quadcopters do not use a magnetometer, suffer from z-axis drift, and yet remain usable.
NOTICE: I have edited the question below, which is more relevant to my real issue than the text right here; you can skip this part if you want, but I'll leave it here for historic reasons.
To see if I get this right: a float in C is the same as a value in radians, right? I mean, 360° = 6.28318531 radians, and I just noticed in my OpenGL app that a full rotation goes from 0.0 to 6.28, which seems to add up correctly. I just want to make sure I got that right.
I'm using a float (let's call it anglePitch) from 0.0 to 360.0 (it's easier to read in degrees, and it avoids casting int to float all the time), and all the code I see on the web uses some kind of DEG2RAD() macro, defined as DEG2RAD 3.141593f / 180. In the end it would be something like this:
anglePitch += direction * 1; // direction will be 1 or -1
refY = tan(anglePitch * DEG2RAD);
This really does a full rotation, but that full rotation happens when anglePitch = 180 and anglePitch * DEG2RAD = 3.14, whereas a full rotation should be 360 degrees or 6.28 radians. If I change the macro to either of the following:
#define DEG2RAD 3.141593f / 360
#define DEG2RAD 3.141593f / 2 / 180
It works as expected: a full rotation happens when anglePitch = 360.
What am I missing here and what should I use to properly convert angles to radians/floats?
IMPORTANT EDIT (REAL QUESTION):
I now understand the DEG2RAD code I see everywhere on the web; I'm just bad at math (yeah, I know, it's important when working with this kind of stuff). So I'm going to rephrase my question:
I have now added this to my code:
#define PI 3.141592654f
#define DEG2RAD(d) (d * PI / 180)
Now, when working with the pitch/yaw angles in degrees (which are floats, once again to avoid casting all the time), I just use the DEG2RAD macro and the degree value is correctly converted to radians. These values are passed to the sin/cos/tan functions and return the proper values to be used in the GLUT camera.
Now the real question, where I was really confused before but couldn't explain myself better:
angleYaw += direction * ROTATE_SPEED;
refX = sin(DEG2RAD(angleYaw));
refZ = -cos(DEG2RAD(angleYaw));
This code is executed when I press the LEFT/RIGHT keys, and the camera rotates around the Y axis accordingly. A full rotation goes from 0° to 360°.
anglePitch += direction * ROTATE_SPEED;
refY = tan(DEG2RAD(anglePitch));
This is similar code, executed when I press the UP/DOWN keys, and the camera rotates around the X axis. But in this case a full rotation goes from 0° to 180°, and that's what's really confusing me. I'm sure it has something to do with the tangent function, but I can't get my head around it.
Is there a way I could use sin/cos (as I do in the yaw code) to achieve the same rotation? What is the right way, the simplest fix I can make, and what makes the most sense to get a full pitch rotation from 0° to 360°?
360° = 2 * Pi, Pi = 3.141593…
Radians are defined by the arc length of an angle along a circle of radius 1. The circumference of a circle is 2*r*Pi, so one full turn on a unit circle has an arc length of 2*Pi = 6.28…
The measure of angles in degrees stems from the fact that by aligning 6 equilateral triangles you span a full turn. So we have 6 triangles, each making up a 6th of the turn; the old Babylonians divided a circle into pieces of 1/(6*6) = 1/36, and to refine it further this was subdivided by 10. That's why we ended up with 360° in a full circle. This number is arbitrarily chosen, though.
So a full turn is 360° = 2*Pi, which makes 1° = Pi/180 = 3.141593…/180; that is the conversion factor from degrees to radians. The reciprocal, 180°/Pi = 180/3.141593…, converts radians back to degrees.
Why on earth the old OpenGL function glRotate and GLU's gluPerspective used degrees instead of radians I cannot fathom. From a mathematical point of view only radians make sense, which I think is most beautifully demonstrated by Euler's identity:
e^(i*Pi) + 1 = 0
There you have it, all the important numbers of mathematics in one single equation. What's this got to do with angles? Well:
e^(i*alpha) = cos(alpha) + i * sin(alpha), alpha is in radians!
EDIT, with respect to modified question:
Your angles being floats is all fine; why you would even think degrees had to be integers I cannot understand. Normally you don't have to define PI yourself: it comes predefined in math.h, usually called M_PI, and many systems also provide M_PI_2 and M_PI_4 for Pi/2 and Pi/4. You should also change your macro; the way it's written now can create strange effects:
#define DEG2RAD(d) ( (d) * M_PI/180. )
GLUT has no camera at all. GLUT is a rather dumb OpenGL framework that I recommend not using. You are probably referring to gluLookAt.
With those obstacles out of the way, let's see what you're doing there. Remember that trigonometric functions operate on the unit circle. Let the angle 0 point forwards and let angles increase counterclockwise. Then sin(a) is defined as the amount of rightwards and cos(a) as the amount of forwards needed to reach the point at angle a on the unit circle. This is what refX and refZ are getting assigned to.
refY, however, makes no sense written that way. tan = sin/cos, so as we approach pi/2 + n*pi (i.e. ±90°) it diverges to +/- infinity. At least this explains your 180° cyclic range, because that's the period of tan.
I first thought tan might have been used to normalize the direction vector, but that didn't make sense either; the factor would have been 1/sqrt(tan²(Pitch) + 1).
I double-checked: using tan there does the right thing.
EDIT 2: I don't see where your problem is. The pitch angle goes from -90° to +90°, which makes perfect sense. Go get yourself a globe (of the Earth): the east-west coordinate (longitude) goes from -180° to +180°, while the south-north coordinate (latitude) goes from -90° to +90°. Think about it: any larger coordinate range would create ambiguities.
The only good suggestion I can offer is: grab some math textbook and bend your mind around spherical coordinates! Sorry to tell you that way. Whatever you have works perfectly fine; you just need to understand spherical geometry.
You're using the terms yaw and pitch, which are normally used with Euler angles. Unfortunately Euler angles, while compelling at first, cause serious trouble later on (like gimbal lock), so you should not use them at all. It may also be a good idea to use a pencil/sticks/whatever to act out the rotations you intend with your hands, to understand their mechanics.
And by the way: there are also non-integer degrees. Just hop over to http://maps.google.com to see them in action (select some place and let http://maps.google.com give you the link to it).
'float' is a type, like int or double; radians and degrees are units of measure, both of which can be represented with any precision you want. I.e., there's no reason you can't have 22.5 degrees and keep that value in a float.
A full rotation in radians is 2*pi, about 6.283, whereas a full rotation in degrees is 360. You can convert between them by dividing out the starting unit's full circle, then multiplying by the desired unit's full circle.
For example, to get from 90 degrees to radians, first divide out the degrees: 90 over 360 is 0.25 (note this value is in 'revolutions'). Now multiply that 0.25 by 6.283 to arrive at 1.571 radians.
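The same steps in a couple of lines (a sketch; Python's math.radians does the conversion in one call):
import math

deg = 90.0
rev = deg / 360.0          # 0.25 revolutions
rad = rev * 2.0 * math.pi  # about 1.571 radians
print(rad, math.radians(deg))  # both print 1.5707963267948966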
Follow-up:
The reason you're seeing your pitch cycle twice as fast as it should is precisely that you're using tan(pitch) to compute the Y component. What you want is for the Y component to depend on sin(pitch). I.e., try changing
refY = tan(DEG2RAD(anglePitch));
to
refY = sin(DEG2RAD(anglePitch));
A technical detail: the numbers that go into the look matrix should all be in the range -1 to +1. If you inspected the values you're feeding into refY while running your pitch outside of -45 to +45 degrees, you'd see the problem: tan() runs off to infinity at +/-90 degrees.
Also, note that casting a value from int to float in no sense converts between degrees and radians; casting just gives you the nearest equivalent value in the new storage type. For example, if you cast the integer 22 to floating point, you get 22.0f, whereas if you cast 33.3333f to int, you're left with 33. When working with angles you really should just stick with floating point, unless you're constrained by an embedded processor or something. This is especially important with radians, where whole-number increments represent leaps of (about) 57.3 degrees.
Assuming that your ref components are intended to be used as your look-at vector, I think what you need is:
refY = sin(DEG2RAD(anglePitch));
XZfactor = cos(DEG2RAD(anglePitch));
refX = XZfactor*sin(DEG2RAD(angleYaw));
refZ = -XZfactor*cos(DEG2RAD(angleYaw));
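As a quick sanity check (a sketch; the function merely wraps the four lines above), the resulting vector is unit length for any pitch/yaw, since sin² + cos² = 1 in both pairs:
import math

def look_vector(pitch_deg, yaw_deg):
    p, y = math.radians(pitch_deg), math.radians(yaw_deg)
    xz_factor = math.cos(p)
    return (xz_factor * math.sin(y), math.sin(p), -xz_factor * math.cos(y))

v = look_vector(30.0, 45.0)
print(math.sqrt(sum(c * c for c in v)))  # 1.0 for any pair of angles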
This one has been eating up my entire night, and I'm finally throwing up my hands for some assistance. Basically, it's fairly straightforward to calculate the Pitch and Yaw from the View Matrix right after you do a camera update:
D3DXMatrixLookAtLH(&m_View, &sCam.pos, &vLookAt, &sCam.up);
pDev->SetTransform(D3DTS_VIEW, &m_View);
// Set the camera axes from the view matrix
sCam.right.x = m_View._11;
sCam.right.y = m_View._21;
sCam.right.z = m_View._31;
sCam.up.x = m_View._12;
sCam.up.y = m_View._22;
sCam.up.z = m_View._32;
sCam.look.x = m_View._13;
sCam.look.y = m_View._23;
sCam.look.z = m_View._33;
// Calculate yaw and pitch and roll
float lookLengthOnXZ = sqrtf( sCam.look.z * sCam.look.z + sCam.look.x * sCam.look.x );
fPitch = atan2f( sCam.look.y, lookLengthOnXZ );
fYaw = atan2f( sCam.look.x, sCam.look.z );
So my problem is: there must be some way to obtain the roll in radians, similar to how the pitch and yaw are obtained at the end of that code. I've tried several dozen formulas that seemed to make sense to me, but none of them gave quite the desired result. I realize that many developers don't track these values; however, I am writing re-centering routines and setting clipping based on them, and manual tracking breaks down as you apply mixed rotations to the view matrix. So, does anyone have the formula for getting the roll in radians (or degrees, I don't care) from the view matrix?
Woohoo, got it! Thank you stacker for the link to the Euler angles entry on Wikipedia; the gamma formula at the end was indeed the correct solution! So, to calculate the roll in radians from a "vertical plane", I'm using the following line of C++ code:
fRoll = atan2f( sCam.up.y, sCam.right.y ) - D3DX_PI/2;
As the Euler angles article points out, you're looking for the arc tangent of the Y (up) vector's Z value against the X (right) vector's Z value. In this case the Z values did not yield the desired result, so I tried the Y values on a whim and got a value that was off by +PI/2 radians (a right angle from the desired value), which is easily compensated for. Finally I have a gyroscopically accurate and stable 3D camera platform with self-correcting balance. =)
It seems the Wikipedia article about Euler angles contains the formula you're looking for at the end.
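Pulling the three formulas from this thread together (a sketch; the function name is mine, and the arguments are the camera axes read out of the view matrix as above, given as (x, y, z) tuples):
import math

def angles_from_view(look, up, right):
    # pitch and yaw from the look vector, roll from the up/right vectors
    pitch = math.atan2(look[1], math.hypot(look[0], look[2]))
    yaw = math.atan2(look[0], look[2])
    roll = math.atan2(up[1], right[1]) - math.pi / 2
    return pitch, yaw, roll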