I have a problem in my 3D DirectX map application (similar to Google Earth).
All coordinates are geocentric (XYZ), in meters.
The maps are drawn above the Earth ellipsoid.
Everything works nicely, except for one issue.
When I go to a very close zoom (about 1 meter per pixel), the D3DXVec3Unproject function returns inaccurate results. I get an error of 10-30 meters, which makes map panning impossible.
A lot of investigation isolated the problem to the following:
D3DXVECTOR3 vScreen, vBack;
D3DXVec3Project( &vScreen, &m_vecAt, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
D3DXVec3Unproject( &vBack, &vScreen, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
m_vecAt is the look-at point.
D3DXVec3Project returns the correct vScreen - exactly the center of the window (+/- 0.5 pixel) with vScreen.z = 0.99 - but vBack differs from m_vecAt by tens of meters.
Note again that everything else works, so I assume all matrices and other data are calculated correctly.
The only reason I can think of is insufficient precision in single-precision floating-point arithmetic.
Please help.
Thank you guys.
I solved the problem. The near plane of the projection matrix was too close to the camera.
Precision of float might be one reason, but another issue you should investigate is the accuracy of the ellipsoid or geoid model you are using. Errors in the range of 10-40 meters are expected with most common models. More precise models are available, but they are significantly more complex. The Wikipedia article has some good links to follow.
It can't be the geoid, given that the example code is generic 3D projection code that doesn't know anything about it. It must be floating-point inaccuracy. Is it particularly bad for points towards the back of screen space (i.e. z ≈ 1)? Since you're projecting a frustum to a cube, the back of the projected volume covers more world space, so any inaccuracy is more pronounced.
I'm currently learning about shadow mapping from here.
I'm trying to figure out why shadow acne happens at all. I've looked at every video/resource I could find online; some attribute the issue to floating-point rounding errors, others say it's about the size of the shadow map and that if it could be infinitely large the problem wouldn't occur, but none of the explanations I've seen connect to why I see this "trippy line pattern":
Some explanations even include some sort of abstract drawing that can visually relate to the phenomenon on screen but really don't explain well the underlying reason this happens:
I've been scratching my head over this, looking everywhere for a whole day for a sufficient explanation. I'm basically looking for a new explanation of this phenomenon, hopefully a clear and friendly one.
Also, after hopefully understanding the cause, I would like to know how biasing/face culling helps solve this problem, and how they trade it for the peter-panning problem, as I think there's no clear explanation of why that happens instead.
Shadow acne can come from several causes.
The first one is a precision mismatch between your shadow map and the actual depth you compute in your shader. The depth you store in your depth map is mapped from [near, far] to [0, 1].
You go from a 32-bit floating-point value in the [near, far] range to a non-linear depth (more precision the closer you get to the near plane) in the [0, 1] range, stored in maybe 24 bits. To put it another way, some distinct depth values you compute in your fragment shader will be mapped to the same "texture color", which will cause the depth test to fail.
For example, with the standard perspective depth mapping
d(z) = far * (z - near) / (z * (far - near))
and near = 1, far = 1000:
d(700) ≈ 0.999571, d(701) ≈ 0.999573
Hence, if those two values are both rounded to the same stored value in the depth map due to limited precision, the depth test between those two surfaces becomes arbitrary, and one of them will wrongly fail.
The other problem can come from a shadow map that is too small for your actual camera point of view (perspective aliasing).
As you can see in the picture, the side of the tree takes up more space in the camera's point of view than in the shadow map. More pixels cover the side of the tree on screen than in the shadow map, hence several pixels computed from the camera's point of view will look up the same depth information from the light's point of view, and some of those tests will fail: d > ds.
Adding a bias helps in both cases: it absorbs the floating-point rounding error, and it compensates for the error when several depth tests point to the same shadow-map pixel. The bias can be seen as a margin of error - with it, you accept some tests that were rejected before.
I have a cube with known vertex positions (bottom back left vertex, top front right, etc.). I also have a known center, and a known point. I want to express that point as X/Y/Z ratios within the lattice of that cube, to use for trilinear interpolation. So, for a given point, I want the X/Y/Z ratios that can then be used to recalculate new positions when the cube is deformed.
So, my question is: what is the method to find the inverse trilinear interpolation, i.e. a point's ratios? Basically, from http://en.wikipedia.org/wiki/Trilinear_interpolation, I want a way to find xd, yd and zd for a known point and a deformed cube.
The closest I have come to a solution is http://www.grc.nasa.gov/WWW/winddocs/utilities/b4wind_guide/trilinear.html, but I am not sure I understand it properly. Reading it, it seems that they initialize a/b/g to 0, run them through their formula, and hope the result is close enough. If not, they calculate a delta between the expected and computed solution and iterate. If after 20 iterations they don't have a good enough solution, they simply give up.
This sounds like an easy problem if you know how to relate cube displacements to a strain measure.
You don't say whether the deformations are small or large.
The most general solution would be to assume a large-strain measure based on the original, undeformed geometry: the Green-Lagrange strain tensor.
You can substitute the interpolation functions into the strain-displacement relations to get what you want.
I have a bunch of points lying on a vertical plane. In reality this plane should be exactly vertical, but when I visualize the point cloud, there is a slight inclination (nearly 2 degrees) from vertical. At the moment, I can only calculate this inclination. As for other errors, I assume there are no shifts or anything like that.
So, I want to update the coordinates of my point data so that they lie on the vertical plane. I think I should apply some kind of transformation - perhaps just a rotation about the X-axis. I'm not sure what it should be.
I guess you understood my question. Honestly, I am poor at mathematics, so please let me know how to update my point coordinates to lie on the exact vertical plane.
Note: As I am implementing this in C++, and there are many programmers here with sound knowledge of these things, I am posting this question under c++.
UPDATES
To say exactly what I have done so far:
I have point cloud data representing a vertical object plus its surroundings. (The data was collected by a moving scanner and may have axis deviations from the correct world axes.) The problem is, I cannot say for certain whether there is an error in my data or not. Therefore, I checked it against a vertical planar object (which is also the dominant object in my data). In reality that plane is truly vertical, but when I fit a plane after removing outliers, the fitted plane is not truly vertical and has nearly 2 degrees of inclination. Therefore, I suspect my data contains some error. So I want to update all my points (both the points on the plane and the points representing other objects) in a way that lays those planar points exactly on the vertical plane. Then, I guess, all the points will be moved into their correct real-world positions. That is, all (x,y,z) coordinates should be updated.
As an example please refer the below figure.
The left side shows the original point cloud (as you can see, the points themselves are not vertical); the black line is the vertical plane I fitted, and the red line is the zenith line. As you can see, the fitted plane is inclined.
So, I want to update all my data as in the right figure, so that after updating, if I fit a plane again (removing outliers), it is exactly parallel to the zenith line. Please help me.
I may be able to help you out, considering I worked with planes recently. First of all, how come the points aren't coplanar from the get-go? I'd make the points coplanar in the first place instead of having them at an inclination (from what origin?) and then having to fix them. Making the points coplanar on the first pass would also be more efficient.
Sorry if this is the answer you're not looking for, but I need more information before I can help you out. Also, 3D math is hard. If you work with it enough, it starts to get pounded into your head, where you will NEVER forget it, especially if you went through the headaches I had to go through.
I did a bit of thinking on it, and since you want to rotate along the x-axis, your rotation will be done on the xz-plane, which means we can make this a 2D problem. After doing a bit of research on Wikipedia, this may be your solution.
new x = ((x - intended x) * cos(angle)) - (z * sin(angle)) + intended x
new z = ((x - intended x) * sin(angle)) + (z * cos(angle))
What I'm doing here is subtracting the intended x value from the current x value, so that (intended x, 0) becomes the origin to rotate around. After the point is rotated, I add intended x back to the x coordinate so that we get the correct result.
Depending on where you got your points from (some kind of measurement, I guess) and what you want to do with them, there are several different things you could do with your data.
The search keyword "regression plane" might help - there are several ways of finding planes approximating point clouds, and several ways to "snap" points to planes.
Edit: You want to apply a rotation around the axis defined by the cross product of the normal vector of your regression plane and the normal of your desired plane, through a point of your choice. From your illustration I take it that you probably want the bottom of your vertical planar object to be the reference point for the rotation.
So you've got your point of reference, you know the axis around which you want to rotate, and the angle. All you need to do is:
Translation (to get to your point of reference)
Rotation
I read your question again, and hopefully this answer will help you out. If there's anything else I need to know, please tell me.
Now, in order to rotate anything, there must be a center point to rotate around. You've already been able to detect the angle of inclination, so we need a formula for rotating a point by a certain angle around an origin. Since this problem occurs in a 2D plane, we can use this basic formula to readjust the points. For any two axes x and y:
x' = (x - x.origin) * cos(theta) - (y - y.origin) * sin(theta) + x.origin
y' = (x - x.origin) * sin(theta) + (y - y.origin) * cos(theta) + y.origin
Theta is the angle you rotate by, counter-clockwise. x' and y' are your new points, and x.origin and y.origin are the coordinates of the point you are rotating around. Now, I don't know if my math is 100% correct on this, but if it's not, hopefully you can change a thing or two to make it work.
Does the scale of a scene matter in OpenGL? By scale I mean, for instance, drawing a 1-unit cube with the camera 1000 units away vs. a 0.1-unit cube with the camera 100 units away. First, am I correct in thinking these would yield the same results visually? And if so, does anyone know how either choice would affect performance, if at all?
My hunch would be that it would have no effect but a more definitive answer would be nice.
It doesn't matter, except for imprecision in floating-point arithmetic. So try not to use extremely small or large numbers.
If you mean 2D, this may not matter; the image created both ways may look the same.
"Setting the camera" actually only changes the transformation matrix the vertices are multiplied by, so after the transformation is applied, the vertex positions are the same. There might be minor differences resulting from the imprecision of floating-point values.
If the camera has a constant near and far clipping distance, the resulting values in depth buffer will differ, and one of the cubes might get outside of the clipping plane range, which would make it appear different / not at all.
You should know that floats have two representations: normalized and denormalized (subnormal). The absolute spacing between representable values is smallest near zero, so you get the finest absolute precision at small magnitudes. It is therefore better to keep your values at moderate magnitudes, e.g. roughly within [0, 1].
Also, because of how the exponent/mantissa representation works, the gap between adjacent representable numbers grows as you move away from zero, so things will start popping when you move very far from the origin.
Yes, it matters. Not because of OpenGL, but because of the nature of floating-point operations: floating-point precision degrades at larger magnitudes. Try to normalize your values so they stay closer to zero. That is, don't make a planet object 1,000,000 units wide just because you want a planet that's 1,000 km in diameter.
For example, I could convert the positions of my entities to screen-relative coordinates before sending them to OpenGL - or I could give OpenGL the direct coordinates, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?
There aren't speed issues, but you will find that you get accuracy issues, as the differences between your numbers will be very small relative to the size of the numbers themselves.
For example, if you model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite sides of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
The speed of larger values will be the same as smaller values of the same data type. But if you want more than the normal 32-bit precision and enable something like GL_EXT_vertex_attrib_64bit, then there may be performance penalties for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
Depth precision is non-linearly distributed: it is finer close up and coarser far away. So if you want to use a wide range of vertex coordinate values, be aware that you may see z-fighting artifacts if you render giant objects very far away from the camera alongside very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help, but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
The floating-point pipeline runs at constant speed regardless of the magnitude of the operands.
The only danger is reaching abnormally large or small numbers, where rounding error gets out of hand or you end up with subnormal numbers.