I'm currently learning about shadow mapping from here.
I'm trying to figure out why shadow acne happens at all. I've looked at every video/resource I could find online; some relate the issue to floating-point rounding errors, others say it's about the size of the shadow map and that if it could be infinitely large the problem wouldn't occur, but none of the explanations I've seen connects to why I see this "trippy line pattern":
Some explanations even include some sort of abstract drawing that visually relates to the phenomenon on screen, but they don't really explain the underlying reason it happens:
I've been scratching my head about this, looking everywhere for a whole day for a sufficient explanation. I'm basically looking for a fresh explanation of this phenomenon, hopefully a clear and friendly one.
Also, after hopefully understanding the cause, I would like to know how biasing/face culling helps solve this problem, and how they trade it for the peter panning problem, since I haven't found a clear explanation of why one artifact appears instead of the other.
Shadow acne can come from different causes.
The first one is a mismatch between the precision of your shadow map and the actual depth you compute in your shader. The depth you store in your shadow map is mapped from [near, far] to [0, 1].
You go from a linear 32-bit floating-point value in the [near, far] range to a non-linear depth (with more precision the closer you are to the near plane) in the [0, 1] range, stored in maybe 24 bits. To put it another way, some distinct depth values you compute in your fragment shader will be mapped to the same "texture color", which will cause the depth test to fail.
For example, with the standard non-linear depth mapping
F(z) = (1/near - 1/z) / (1/near - 1/far)
and near = 1, far = 1000:
F(700) ≈ 0.999570, F(701) ≈ 0.999573
Hence, if those two values end up stored as the same number in the depth map because of limited precision, one of the depth tests will wrongly report d > ds: the depth you compute in the fragment shader comes out a hair greater than the value you read back from the shadow map, and the surface shadows itself.
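If you want to see that collision yourself, here is a minimal sketch of the same arithmetic on the CPU (the bit widths are assumptions, not necessarily what your driver uses):

#include <cmath>
#include <cstdio>

// Map eye-space depth z in [near, far] to the non-linear [0, 1] window depth.
double depthValue(double z, double zNear, double zFar) {
    return (1.0 / zNear - 1.0 / z) / (1.0 / zNear - 1.0 / zFar);
}

// Quantize a [0, 1] depth to an unsigned fixed-point value with `bits` bits,
// the way a UNORM depth buffer stores it.
unsigned quantize(double d, int bits) {
    double maxVal = std::ldexp(1.0, bits) - 1.0;   // 2^bits - 1
    return static_cast<unsigned>(std::lround(d * maxVal));
}

int main() {
    const double zNear = 1.0, zFar = 1000.0;
    double d700 = depthValue(700.0, zNear, zFar);
    double d701 = depthValue(701.0, zNear, zFar);
    std::printf("F(700) = %.7f  F(701) = %.7f\n", d700, d701);
    for (int bits : {16, 24}) {
        std::printf("%d-bit buffer: %u vs %u -> %s\n", bits,
                    quantize(d700, bits), quantize(d701, bits),
                    quantize(d700, bits) == quantize(d701, bits)
                        ? "same stored value" : "distinct stored values");
    }
}

With these numbers the two depths collide in a 16-bit buffer but stay distinct in a 24-bit one, which is one reason the amount of bias you need depends on your depth format and on your near/far range.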
The other problem can come from a shadow map that is too small for your actual camera point of view (perspective aliasing).
As you can see in these pictures, the side of the tree takes up more space in the camera's point of view than in the shadow map. More screen pixels cover the side of the tree than shadow-map texels do, hence several pixels computed from the camera's point of view will read the same depth information from the light's point of view; several tests will hit the same shadow-map texel, and some of them will fail because d > ds.
Adding a bias in both cases will either absorb the floating-point/precision error or compensate for it when several depth tests read the same shadow-map texel. The bias can be seen as a margin of error: with it, you accept some tests that were rejected before.
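To illustrate, here is a minimal CPU-side sketch of that biased comparison (plain C++ rather than shader code; the bias value is just an example):

// Returns 1.0f if the fragment is lit, 0.0f if it is in shadow.
// currentDepth : depth of the fragment as seen from the light (d)
// storedDepth  : depth fetched from the shadow map for that texel (ds)
float shadowTest(float currentDepth, float storedDepth, float bias = 0.005f) {
    // Without the bias, currentDepth ends up a hair larger than storedDepth for
    // the very surface that wrote the shadow map, so the surface shadows itself
    // (acne). Subtracting a small bias accepts those borderline cases.
    return (currentDepth - bias) > storedDepth ? 0.0f : 1.0f;
}

A slope-scaled bias (a larger bias for surfaces nearly parallel to the light direction) is the usual refinement. If the bias gets too large, or if you cull front faces during the shadow pass so that back-face depths are stored instead, the shadow can detach slightly from the object at contact points, which is the peter panning you mentioned.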
I have a cube with known vertex positions (bottom back left vertex, top front right, etc...). I also have a known center, and a known point. I want to express that point as X/Y/Z ratios within the lattice of that cube, to use for trilinear interpolation. So a given point has X/Y/Z ratios that can then be used to recalculate new positions when the cube is deformed.
So, my question is: what is the method to find the inverse trilinear interpolation, i.e. a point's ratios? Basically, from http://en.wikipedia.org/wiki/Trilinear_interpolation, I want a means to find xd, yd and zd from a known point and a deformed cube.
The closest I have come to finding a solution is http://www.grc.nasa.gov/WWW/winddocs/utilities/b4wind_guide/trilinear.html but I am not sure I understand it properly. Reading it, it seems that they initially guess a/b/g to be 0, run that through their formula, and hope it gives the proper answer. If not, they calculate a delta between the expected and the received solution and iterate. If after 20 runs they don't have a good enough solution, they simply give up.
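Here is a rough sketch of my reading of that iterative (Newton-style) idea, written as standalone C++; the corner ordering, starting guess and tolerance are my own assumptions:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

// Corner c[i] sits at parameters (i&1, (i>>1)&1, (i>>2)&1) in the undeformed cube;
// those parameters are the xd/yd/zd ratios.
Vec3 trilinear(const Vec3 c[8], double u, double v, double w) {
    Vec3 p{0, 0, 0};
    for (int i = 0; i < 8; ++i) {
        double wu = (i & 1) ? u : 1.0 - u;
        double wv = (i & 2) ? v : 1.0 - v;
        double ww = (i & 4) ? w : 1.0 - w;
        p = p + (wu * wv * ww) * c[i];
    }
    return p;
}

// Solve trilinear(c, u, v, w) == target for (u, v, w) by Newton iteration,
// starting from the cube center. Returns false if it did not converge.
bool inverseTrilinear(const Vec3 c[8], Vec3 target, double& u, double& v, double& w) {
    u = v = w = 0.5;
    for (int iter = 0; iter < 20; ++iter) {
        Vec3 r = trilinear(c, u, v, w) - target;                     // residual
        if (std::fabs(r.x) + std::fabs(r.y) + std::fabs(r.z) < 1e-9) return true;
        // Jacobian columns: dP/du, dP/dv, dP/dw.
        Vec3 du{0, 0, 0}, dv{0, 0, 0}, dw{0, 0, 0};
        for (int i = 0; i < 8; ++i) {
            double wu = (i & 1) ? u : 1.0 - u, su = (i & 1) ? 1.0 : -1.0;
            double wv = (i & 2) ? v : 1.0 - v, sv = (i & 2) ? 1.0 : -1.0;
            double ww = (i & 4) ? w : 1.0 - w, sw = (i & 4) ? 1.0 : -1.0;
            du = du + (su * wv * ww) * c[i];
            dv = dv + (wu * sv * ww) * c[i];
            dw = dw + (wu * wv * sw) * c[i];
        }
        // Solve J * delta = r by Cramer's rule, then step (u, v, w) -= delta.
        double det = du.x * (dv.y * dw.z - dw.y * dv.z)
                   - dv.x * (du.y * dw.z - dw.y * du.z)
                   + dw.x * (du.y * dv.z - dv.y * du.z);
        if (std::fabs(det) < 1e-20) return false;
        double d0 = r.x * (dv.y * dw.z - dw.y * dv.z)
                  - dv.x * (r.y * dw.z - dw.y * r.z)
                  + dw.x * (r.y * dv.z - dv.y * r.z);
        double d1 = du.x * (r.y * dw.z - dw.y * r.z)
                  - r.x * (du.y * dw.z - dw.y * du.z)
                  + dw.x * (du.y * r.z - r.y * du.z);
        double d2 = du.x * (dv.y * r.z - r.y * dv.z)
                  - dv.x * (du.y * r.z - r.y * du.z)
                  + r.x * (du.y * dv.z - dv.y * du.z);
        u -= d0 / det; v -= d1 / det; w -= d2 / det;
    }
    return false;
}

For an undeformed cube the map is linear and this converges in one step; for a deformed cube it usually takes a handful of iterations, which seems to match the 20-iteration cap in the NASA document.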
Sounds like an easy problem if you know how to relate cube displacements to a strain measure.
You don't say if the deformations are small or large.
The most general solution would be to assume a large strain measure based on the original, undeformed geometry: the Green-Lagrange large strain tensor.
You can substitute the interpolation functions into the strain displacement relations to get what you want.
I was interested in how to draw a line with a specific width (or multiple lines) using a fragment shader. I stumbled on this post, which seems to explain it.
The challenge I have is understanding the logic behind it.
A couple of questions:
Our coordinate space in this example is (0.0-1.0,0.0-1.0), correct?
If so, what is the purpose of the "uv" variable? Since thickness is 500, the "uv" variable will be very small, and so will the distances from it to point 1 and point 2 (stored in the a and b variables)?
Finally, what is the logic behind the h variable?
I will try to answer all of your questions one by one:
1) Yes, this is in fact correct.
2) It is common in 3D computer graphics to express coordinates (within certain boundaries) as floating-point values between 0 and 1 (or between -1 and 1). First of all, this makes it quite easy to decide whether a given value crosses said boundary or not, and it abstracts away from the concept of a "pixel" as a discrete image unit; furthermore, this common practice can be found pretty much everywhere else (think of device coordinates or texture coordinates).
Don't be afraid that the values you are working with are less than one; in computer graphics you usually deal with floating-point arithmetic, and float types are quite good at expressing real values that lie around 1.
3) The formula given for h consists of two parts: the square-root part and the 2/c coefficient. The square-root part should be well known from school math classes - it is Heron's formula for the area of a triangle (with sides a, b, c). The 2/c factor extracts the height of said triangle, which is stored in h and is also the distance between the point uv and the "ground line" of the triangle. This distance is then used to decide where uv is in relation to the line p1-p2.
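If it helps, here is the same computation written as plain C++ you can run outside the shader (the vec2 struct and helpers stand in for the GLSL built-ins):

#include <cmath>
#include <cstdio>

struct vec2 { float x, y; };

float length(vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }
vec2 sub(vec2 p, vec2 q) { return {p.x - q.x, p.y - q.y}; }

// Distance from uv to the line through p1 and p2, via Heron's formula.
float lineDistance(vec2 uv, vec2 p1, vec2 p2) {
    float a = length(sub(uv, p1));   // side from uv to p1
    float b = length(sub(uv, p2));   // side from uv to p2
    float c = length(sub(p2, p1));   // the "ground line" p1-p2
    float s = 0.5f * (a + b + c);    // semi-perimeter
    float area = std::sqrt(s * (s - a) * (s - b) * (s - c));   // Heron's formula
    return 2.0f * area / c;          // h = triangle height over base c
}

int main() {
    // Point (0.5, 0.6) against the horizontal line y = 0.5: prints 0.1.
    std::printf("%f\n", lineDistance({0.5f, 0.6f}, {0.0f, 0.5f}, {1.0f, 0.5f}));
}

Comparing that h against the line's half-width (in the same coordinate units) is then what decides whether the current fragment belongs to the line.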
Does the scale of a scene matter in OpenGL? By scale I mean, for instance, drawing a 1-unit cube with the camera 1000 units away vs. a 0.1-unit cube with the camera 100 units away. I guess, firstly, am I correct in thinking this would yield the same results visually? And if so, does anyone know how either would affect performance, if at all?
My hunch would be that it would have no effect but a more definitive answer would be nice.
It doesn't matter except for imprecisions using floating point arithmetic. So try not to use super small or large numbers.
If you mean 2D, this may not matter; the image created both ways may look the same.
"Setting the camera" actually only changes the transformation matrix the vertices are multiplied by, so after the transformation is applied, the vertex positions are the same. There might be minor differences resulting from the imprecision of floating-point values.
If the camera has a constant near and far clipping distance, the resulting values in depth buffer will differ, and one of the cubes might get outside of the clipping plane range, which would make it appear different / not at all.
You should know that floats have two representations: normalized and denormalized (subnormal). The absolute spacing between adjacent representable values is smaller near zero, so small magnitudes get more precision. So it is better if you keep everything between 0 and 1.
Also, looking at how the exponent/mantissa representation works, the difference between two numbers when a single bit changes gets bigger the further you move away from zero, and things will start to visibly pop when you move very far from the origin.
Yes, it matters. Not because of OpenGL, but because of the nature of floating point operations - floating point precision degrades at larger magnitudes. Try and normalize your values such that you have values closer to zero. That is, don't make a planet object 1,000,000 units wide just because you want a planet that's 1,000 KM in diameter.
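To make that concrete, here is a tiny sketch (plain C++, nothing OpenGL-specific) that prints the gap between adjacent representable floats at a few magnitudes:

#include <cmath>
#include <cstdio>

int main() {
    for (float v : {0.1f, 1.0f, 1000.0f, 1000000.0f}) {
        float ulp = std::nextafter(v, 2.0f * v) - v;   // gap to the next float up
        std::printf("around %10.1f the next representable float is %g away\n", v, ulp);
    }
}

Around 1.0 the gap is about 1.2e-7, while around 1,000,000 it is 0.0625, so a vertex a million units from the origin can only be positioned in steps of a sixteenth of a unit.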
I have a problem in my 3D Direct X Map Application (aka Google Earth)
All coordinates are geocentric (XYZ) in meters.
The maps are drawn above earth ellipsoid.
Everything works nice, except one issue.
When I go to a very close zoom (about 1 meter per pixel), the D3DXVec3Unproject function returns inaccurate results. I get an error of 10-30 meters and cannot implement map panning.
A lot of investigation isolated the problem to the following:
D3DXVECTOR3 vScreen, vBack;
D3DXVec3Project( &vScreen, &m_vecAt, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
D3DXVec3Unproject( &vBack, &vScreen, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
m_vecAt is the look-at point.
D3DXVec3Project returns a correct vScreen - exactly the center of the window (+/- 0.5 pixel) with vScreen.z = 0.99 - but vBack differs from m_vecAt by tens of meters.
Again, everything else works, so I assume that all matrices and other data are calculated correctly.
The only reason I can think of is insufficient precision of single-precision floating-point arithmetic.
Please help.
Thank you guys.
I solved the problem. The near plane of the projection matrix was too close to the camera.
Precision of float might be one reason, but another issue you should investigate is the accuracy of the ellipsoid or geoid model you are using. Errors in the range of 10-40 meters are expected in most common models. More precise models are available, but they are significantly more complex. The Wikipedia article has some good links to follow.
It can't be the geoid, given that the example code is generic 3D projection code which doesn't know anything about it. It must be floating-point inaccuracy. Is it particularly bad for points towards the back of screen space (i.e. z ~ 1)? Since you're projecting a frustum to a cube, the back of the projected volume covers more world space, so any inaccuracy is more pronounced.
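As a back-of-the-envelope check, here is a sketch of how much eye-space distance one float ulp of projected depth covers at different distances. The near/far values are only assumptions to show the trend; plug in your own:

#include <cmath>
#include <cstdio>

int main() {
    const double n = 1.0;      // assumed near plane (meters)
    const double f = 1.0e7;    // assumed far plane (meters)
    for (double zEye : {100.0, 1.0e4, 1.0e6}) {
        // D3D-style mapping of eye depth to [0, 1]: zProj = f/(f-n) * (1 - n/zEye)
        double zProj = f / (f - n) * (1.0 - n / zEye);
        // One ulp of a 32-bit float at that projected depth.
        float ulp = std::nextafter(static_cast<float>(zProj), 2.0f) - static_cast<float>(zProj);
        // dzEye/dzProj tells how that ulp maps back into eye space.
        double dzEyePerProj = (f - n) * zEye * zEye / (f * n);
        std::printf("zEye = %10.0f m -> zProj = %.8f, one ulp ~ %.4f m of eye depth\n",
                    zEye, zProj, ulp * dzEyePerProj);
    }
}

The error per ulp grows roughly with the square of the eye-space distance, so points near the back of the frustum (z close to 1) are hit hardest, and pushing the near plane out, as the accepted fix did, shrinks that factor.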
For example, I could convert the positions where I draw my entities to screen space before sending them to OpenGL - or I could give OpenGL the coordinates directly, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?
There aren't speed issues, but you will find that you get accuracy issues as the differences between your numbers will be very small in relation to the size of the numbers.
For example, if you model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite sides of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
Speed of larger values will be the same as smaller values of the same data type. But if you want more than the normal 32bit precision and enable something like GL_EXT_vertex_attrib_64bit, then there may be performance penalties for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
The depth precision is non-linearly distributed: it is more precise close up and less precise far away. So if you want to use a wide range of values for vertex coordinates, be aware that you may see z-fighting artifacts if you intend to render giant objects very far away from the camera together with very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help, but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
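Here is a tiny CPU-side sketch (no GL involved) of what happens to a coordinate like 9999999.5 once it has to live in a 32-bit float:

#include <cmath>
#include <cstdio>

int main() {
    float small = 7.5f;         // wrapping can recover the 0.5 exactly
    float huge  = 9999999.5f;   // floats near 1e7 are spaced 1.0 apart, so this rounds to 10000000.0f
    std::printf("stored value: %.1f, wrapped: %.3f\n", small, small - std::floor(small));
    std::printf("stored value: %.1f, wrapped: %.3f\n", huge,  huge  - std::floor(huge));
}

The fractional part is gone before any texture sampling happens, so the wrap produces 0.0 instead of the 0.5 you might expect.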
The floating point pipeline has constant speed on operations, regardless of the size of the operands.
The only danger is working with abnormally large or small numbers, where the error gets out of hand or you end up with subnormal numbers.