Is scale important in OpenGL? - c++

Does the scale of a scene matter in OpenGL? By scale I mean, for instance, drawing a 1-unit cube with the camera positioned 1000 units away vs. positioning the camera 100 units away from a 0.1-unit cube. Firstly, am I correct in thinking this would yield the same result visually? And if so, does anyone know how either choice would affect performance, if at all?
My hunch would be that it would have no effect but a more definitive answer would be nice.

It doesn't matter, except for imprecision in floating-point arithmetic. So try not to use extremely small or large numbers.

If you mean 2D, this may not matter; the image created both ways may look the same.

"Setting the camera" actually only changes the transformation matrix the vertices are multiplied by, so after the transformation is applied, the vertex positions are the same. There might be minor differences resulting from the imprecision of floating-point values.
If the camera has constant near and far clipping distances, the resulting values in the depth buffer will differ, and one of the cubes might fall outside the clipping range, which would make it appear differently or not at all.
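To make both points concrete, here is a minimal sketch using the GLM maths library (GLM and the specific distances are my assumptions, not something from the question; the 10-vs-1 distances are just the question's example scaled down to fit between the same near/far planes). It pushes one corner of a 1-unit cube viewed from 10 units away and a 0.1-unit cube viewed from 1 unit away through the same projection matrix: the x/y results agree to within rounding, while the depth values differ because the near/far planes were not scaled along with the scene.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::vec4 corner(0.5f, 0.5f, 0.5f, 1.0f);   // a corner of a unit cube centred at the origin
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 16.0f / 9.0f, 0.1f, 100.0f);

    // Setup A: unit cube, camera 10 units away.
    glm::mat4 viewA = glm::lookAt(glm::vec3(0, 0, 10), glm::vec3(0), glm::vec3(0, 1, 0));
    glm::vec4 clipA = proj * viewA * corner;

    // Setup B: everything scaled down to 0.1, camera 1 unit away, same projection.
    glm::mat4 viewB  = glm::lookAt(glm::vec3(0, 0, 1), glm::vec3(0), glm::vec3(0, 1, 0));
    glm::mat4 modelB = glm::scale(glm::mat4(1.0f), glm::vec3(0.1f));
    glm::vec4 clipB  = proj * viewB * modelB * corner;

    // Normalised device coordinates: x and y match (up to rounding), z does not,
    // because the near/far planes stayed the same in absolute units.
    glm::vec3 ndcA = glm::vec3(clipA) / clipA.w;
    glm::vec3 ndcB = glm::vec3(clipB) / clipB.w;
    std::printf("A: %f %f %f\nB: %f %f %f\n", ndcA.x, ndcA.y, ndcA.z, ndcB.x, ndcB.y, ndcB.z);
}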

You should know that floats have two representations: normalized and denormalized (subnormal). Roughly half of all representable float values have a magnitude below 1, so that range gives you the most precision. So it is better if you keep everything between 0 and 1.
Also, because of how the exponent/mantissa representation works, the gap between adjacent representable numbers grows as you move away from zero, so geometry will start visibly popping when you move very far from the origin.

Yes, it matters. Not because of OpenGL, but because of the nature of floating-point operations - floating-point precision degrades at larger magnitudes. Try to normalize your values so that you work with values closer to zero. That is, don't make a planet object 1,000,000 units wide just because you want a planet that's 1,000 km in diameter.
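To put a rough number on this, here is a tiny standalone program (purely illustrative) that prints the gap between adjacent representable float values at a few magnitudes - roughly 0.0000001 near 1.0, but about 0.06 near 1,000,000.

#include <cmath>
#include <cstdio>

int main() {
    for (float x : {1.0f, 100.0f, 10000.0f, 1000000.0f}) {
        float gap = std::nextafter(x, 2.0f * x) - x;   // distance to the next representable float
        std::printf("near %10.0f the smallest step is %g\n", x, gap);
    }
}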

Related

Convert arbitrary grid of subpixel points to raster image

I hope you are doing well. I am stuck on one part of a visual effects program in C++ and wanted to ask for help.
I have an array of colors at random positions on an image. There can be any number of these "subpixels" that fall within any given pixel. The subpixels that overlap a pixel can be at any position within the pixel, since they're distributed randomly throughout the image. All I have access to is their position on the image and their color, which represents what the color should be at that precise subpixel point on the image.
I need to determine what color to make each pixel of the image. In other words, I need to interpolate what the color should be at the centre of each pixel.
Here is a diagram with an example of this on a 5x5 image, going from the scattered subpixel points to the filled-in pixel grid (diagram images omitted here).
If it aids your understanding, you can think of the first image as a series of random points whose color values were calculated using bilinear interpolation on the second image.
I am writing this in C++, and ideally it will be as fast as possible, but I welcome contributions in any language or just explained with symbols or words. It should be as accurate as possible, but I also welcome solutions that are slightly inaccurate in favour of performance or simplicity.
Please let me know if you need clarification on the problem.
Thank you.
I ended up finding quite a decent solution which, while it doesn't find the absolutely 100% technically correct color for each pixel, was more than good enough and acceptably fast, especially when I added multithreading.
I first create a vector for each pixel/cell that contains pointers to subpixels (points with known colors). When I create a subpixel, I add a pointer to it to the vector representing the pixel/cell that it overlaps, and to each of the vectors representing the pixels/cells directly adjacent to the pixel/cell that it overlaps.
Then I split each pixel/cell into n sub-cells (I found 8 works well). For each sub-cell, I calculate which subpixel is closest to its centre; that subpixel's color then contributes 1/n of the color for that pixel/cell. This is not as expensive as you might imagine, because I only have to calculate and compare distances for the subpixels that are in that pixel/cell's subpixel pointer vector.
I found it was important to add the subpixel pointers to adjacent cell/pixel vectors, so that each sub-cell can take into account subpixels from adjacent pixels/cells. This even makes it produce a reasonable color when there are pixels/cells that have no subpixels overlapping them (as long as the neighboring pixels/cells do).
Thanks for all the comments so far; any ideas about how to speed this up would be appreciated as well.
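For anyone interested, here is a rough single-threaded sketch of the approach described above. The Subpixel/Color types and helper names are my own guesses at reasonable names, and I split each pixel into an n x n grid of sub-cells (one way of reading "n sub-cells"); the real program will differ in the details.

#include <cfloat>
#include <vector>

struct Color    { float r, g, b; };
struct Subpixel { float x, y; Color color; };   // position in image (pixel) coordinates

std::vector<Color> rasterize(const std::vector<Subpixel>& subpixels,
                             int width, int height, int n = 3)
{
    // Bucket a pointer to each subpixel into its own cell and the 8 adjacent cells.
    std::vector<std::vector<const Subpixel*>> buckets(width * height);
    for (const Subpixel& s : subpixels) {
        int px = static_cast<int>(s.x), py = static_cast<int>(s.y);
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int cx = px + dx, cy = py + dy;
                if (cx >= 0 && cx < width && cy >= 0 && cy < height)
                    buckets[cy * width + cx].push_back(&s);
            }
    }

    std::vector<Color> image(width * height, Color{0.0f, 0.0f, 0.0f});
    for (int py = 0; py < height; ++py)
        for (int px = 0; px < width; ++px) {
            const auto& nearby = buckets[py * width + px];
            if (nearby.empty()) continue;            // no subpixels anywhere near this pixel
            Color sum{0.0f, 0.0f, 0.0f};
            for (int sy = 0; sy < n; ++sy)
                for (int sx = 0; sx < n; ++sx) {
                    // Centre of this sub-cell in image coordinates.
                    float cx = px + (sx + 0.5f) / n;
                    float cy = py + (sy + 0.5f) / n;
                    const Subpixel* best = nullptr;
                    float bestDist = FLT_MAX;
                    for (const Subpixel* s : nearby) {
                        float d = (s->x - cx) * (s->x - cx) + (s->y - cy) * (s->y - cy);
                        if (d < bestDist) { bestDist = d; best = s; }
                    }
                    sum.r += best->color.r;
                    sum.g += best->color.g;
                    sum.b += best->color.b;
                }
            float inv = 1.0f / (n * n);              // each sub-cell contributes equally
            image[py * width + px] = Color{sum.r * inv, sum.g * inv, sum.b * inv};
        }
    return image;
}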

How to get ratios from a deformed cube for a trilinear interpolation?

I have a cube with known vertex positions (bottom back left vertex, top front right, etc.). I also have a known center and a known point. I want to express that point as X/Y/Z ratios within the lattice of that cube, to use for a trilinear interpolation. So for a given point I get X/Y/Z ratios that can then be used to recalculate new positions when the cube is deformed.
So my question is: what is the method to invert the trilinear interpolation and find a point's ratios? Basically, from http://en.wikipedia.org/wiki/Trilinear_interpolation, I want a means to find xd, yd and zd from a known point and a deformed cube.
The closest I have come to finding a solution is http://www.grc.nasa.gov/WWW/winddocs/utilities/b4wind_guide/trilinear.html but I am not sure I understand it properly. Reading it, it seems that they initially guess a/b/g to be 0, run that through their formula, and hope it gives the right answer; if not, they compute a delta between the expected and the obtained solution and correct the guess. If after 20 iterations they don't have a good enough solution, they simply give up.
Sounds like an easy problem if you know how to relate cube displacements to a strain measure.
You don't say if the deformations are small or large.
The most general solution would be to assume a large strain measure based on the original, undeformed geometry: the Green-Lagrange large strain tensor.
You can substitute the interpolation functions into the strain displacement relations to get what you want.
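For what it's worth, here is a minimal sketch of the iterative idea the NASA page seems to describe, written as a Newton iteration with the GLM library (both of which are my choices, not anything mandated by the reference): guess (xd, yd, zd), evaluate the trilinear map, and correct the guess using the Jacobian until the interpolated point matches the target.

#include <glm/glm.hpp>

// corners[i][j][k] is the cube corner at parametric coordinate (i, j, k), with i, j, k in {0, 1}.
glm::vec3 trilerp(const glm::vec3 corners[2][2][2], const glm::vec3& t)
{
    glm::vec3 p(0.0f);
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k) {
                float w = (i ? t.x : 1 - t.x) * (j ? t.y : 1 - t.y) * (k ? t.z : 1 - t.z);
                p += w * corners[i][j][k];
            }
    return p;
}

// Solve trilerp(corners, t) == p for t, starting from the cell centre.
glm::vec3 inverseTrilerp(const glm::vec3 corners[2][2][2], const glm::vec3& p)
{
    glm::vec3 t(0.5f);
    for (int iter = 0; iter < 20; ++iter) {
        glm::vec3 f = trilerp(corners, t) - p;               // residual: how far off the guess is
        if (glm::dot(f, f) < 1e-12f) break;

        // Jacobian columns: partial derivatives of the trilinear map w.r.t. t.x, t.y, t.z.
        glm::vec3 dx(0.0f), dy(0.0f), dz(0.0f);
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                for (int k = 0; k < 2; ++k) {
                    float sx = i ? 1.0f : -1.0f;
                    float sy = j ? 1.0f : -1.0f;
                    float sz = k ? 1.0f : -1.0f;
                    dx += sx * (j ? t.y : 1 - t.y) * (k ? t.z : 1 - t.z) * corners[i][j][k];
                    dy += (i ? t.x : 1 - t.x) * sy * (k ? t.z : 1 - t.z) * corners[i][j][k];
                    dz += (i ? t.x : 1 - t.x) * (j ? t.y : 1 - t.y) * sz * corners[i][j][k];
                }
        glm::mat3 J(dx, dy, dz);                             // columns are the partials
        t -= glm::inverse(J) * f;                            // Newton correction step
    }
    return t;                                                 // the (xd, yd, zd) ratios
}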

Fragment shader - drawing a line?

I was interested in how to draw a line with a specific width (or multiple lines) using a fragment shader. I stumbled on this post, which seems to explain it.
The challenge I have is understanding the logic behind it.
A couple of questions:
Our coordinate space in this example is (0.0-1.0,0.0-1.0), correct?
If so, what is the purpose of the "uv" variable? Since the thickness is 500, the "uv" variable will be very small - and therefore so will the distances from it to points 1 and 2 (stored in the a and b variables)?
Finally, what is the logic behind the h variable?
I will try to answer your questions one by one:
1) Yes, this is in fact correct.
2) It is common in 3D computer graphics to express coordinates (within certain boundaries) with floating-point values between 0 and 1 (or between -1 and 1). First of all, this makes it quite easy to decide whether a given value crosses a boundary or not, and it abstracts away from the concept of a "pixel" as a discrete image unit; furthermore, this common practice can be found pretty much everywhere else (think of device coordinates or texture coordinates).
Don't be afraid that the values you are working with are less than one; in computer graphics you usually deal with floating-point arithmetic, and float types are quite good at expressing real values around the "1" point.
3) The formula given for h consists of two parts: the square-root part and the 2/c coefficient. The square-root part should be well known from school math classes - it is Heron's formula for the area of a triangle with sides a, b, c. Multiplying the area by 2/c extracts the height of that triangle, which is stored in h and is also the distance between the point uv and the "ground line" of the triangle. This distance is then used to decide where uv lies in relation to the line p1-p2.
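For reference, here is that logic written out as plain C++ (the expressions translate almost directly back to GLSL; the names mirror the question, but the exact shader in the linked post may differ). a, b and c are the sides of the triangle formed by the fragment position uv and the endpoints p1 and p2; Heron's formula gives the triangle's area, and 2 * area / c is its height, i.e. the distance from uv to the line.

#include <cmath>
#include <glm/glm.hpp>

float distanceToLine(glm::vec2 uv, glm::vec2 p1, glm::vec2 p2)
{
    float a = glm::length(uv - p1);
    float b = glm::length(uv - p2);
    float c = glm::length(p1 - p2);
    float s = 0.5f * (a + b + c);                              // semi-perimeter
    float area = std::sqrt(s * (s - a) * (s - b) * (s - c));   // Heron's formula
    return 2.0f * area / c;                                    // triangle height = distance to the line
}

// A fragment is part of the line if its distance is below half the line width,
// with everything expressed in the same 0..1 coordinate space as uv.
bool onLine(glm::vec2 uv, glm::vec2 p1, glm::vec2 p2, float width)
{
    return distanceToLine(uv, p1, p2) <= 0.5f * width;
}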

D3DXVec3Unproject is inaccurate in C++

I have a problem in my 3D DirectX map application (similar to Google Earth).
All coordinates are geocentric (XYZ) in meters.
The maps are drawn over the Earth ellipsoid.
Everything works nicely, except for one issue.
When I go to a very close zoom (about 1 meter per pixel), the D3DXVec3Unproject function returns inaccurate results. I get an error of 10-30 meters and cannot get map panning to work properly.
A lot of investigation isolated the problem to the following:
D3DXVECTOR3 vScreen, vBack;
// Project the look-at point into screen space, then unproject it straight back.
D3DXVec3Project( &vScreen, &m_vecAt, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
D3DXVec3Unproject( &vBack, &vScreen, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
m_vecAt is the look-at point.
D3DXVec3Project returns the correct vScreen - exactly the center of the window (+/- 0.5 pixel), with vScreen.z = 0.99 - but vBack differs from m_vecAt by tens of meters.
Note again that everything else works, so I assume all the matrices and other data are calculated correctly.
The only reason I can think of is insufficient precision in single-precision floating-point arithmetic.
Please help.
Thank you guys.
I solved the problem. The near plane of the projection matrix was too close to the camera.
Float precision might be one reason, but another issue you should investigate is the accuracy of the ellipsoid or geoid model you are using. Errors in the range of 10-40 meters are expected with most common models. More precise models are available, but they are significantly more complex. The Wikipedia article has some good links to follow.
It can't be the geoid, given that the example code is generic 3D projection code which doesn't know anything about it; it must be floating-point inaccuracy. Is it particularly bad for points towards the back of screen space (i.e. z close to 1)? Since you're projecting a frustum to a cube, the back of the projected volume covers more world space, so any inaccuracy there is more pronounced.
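To make the accepted fix a bit more concrete, here is a rough standalone sketch (plain C++, with illustrative near/far/distance values that are not taken from the question) of how much one float step in the stored depth value is worth in reconstructed view-space distance, for a very close near plane versus a farther one, assuming a D3D-style 0..1 depth mapping.

#include <cmath>
#include <cstdio>

// View-space distance reconstructed from a 0..1 depth value (D3D-style projection).
double viewZ(double depth, double n, double f)
{
    return n * f / (f - depth * (f - n));
}

int main()
{
    const double f = 1.0e7;             // far plane at 10,000 km (illustrative)
    const double z = 1.0e5;             // a point 100 km from the camera
    for (double n : {1.0, 1000.0}) {    // near plane at 1 m vs 1 km
        float d  = static_cast<float>((f / (f - n)) * (1.0 - n / z));   // depth as stored in a float
        float d1 = std::nextafter(d, 1.0f);                             // one representable step later
        std::printf("near %6.0f m: depth %.8f, one depth ULP ~ %.2f m of view-space error\n",
                    n, d, viewZ(d1, n, f) - viewZ(d, n, f));
    }
}

With these illustrative numbers, the 1 m near plane leaves a single depth step worth hundreds of meters at 100 km, while the 1 km near plane brings it down to well under a meter - the same kind of sensitivity the accepted fix addresses.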

Does OpenGL care if I work with large numbers?

For example, I could convert the positions where I draw my entities into screen-relative coordinates before sending them to OpenGL - or I could give OpenGL the coordinates directly, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?
There aren't speed issues, but you will run into accuracy issues, because the differences between your numbers will be very small in relation to the size of the numbers themselves.
For example, if you model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite sides of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
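One common way to act on that advice when the world itself genuinely is large is camera-relative rendering: keep positions in double precision on the CPU, subtract a per-frame origin (typically the camera position), and only then convert to float for submission. A minimal sketch (using GLM, purely illustrative):

#include <glm/glm.hpp>

// World positions kept in 64-bit doubles; only the camera-relative offset is
// converted to 32-bit float, so the values the GPU sees stay small.
glm::vec3 toRenderSpace(const glm::dvec3& worldPos, const glm::dvec3& cameraPos)
{
    return glm::vec3(worldPos - cameraPos);   // subtraction happens in double precision
}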
The speed for larger values will be the same as for smaller values of the same data type. But if you want more than the normal 32-bit precision and enable something like GL_EXT_vertex_attrib_64bit, there may be performance penalties for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
The depth precision is non-linearly distributed: it is more precise close up and less precise for things far away. So if you want to use a wide range of values for vertex coordinates, be aware that you may see z-fighting artifacts if you intend to render giant objects very far from the camera and very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help, but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
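As a quick CPU-side illustration of that last point (ordinary IEEE single precision, no GPU involved): above 2^23, a float cannot represent any fractional part at all, so a coordinate like 9999999.5 has already lost its .5 before the wrap into the 0..1 range could even happen.

#include <cmath>
#include <cstdio>

int main() {
    float big = 9999999.5f;   // larger than 2^23: the nearest representable floats are whole numbers
    std::printf("stored value: %.1f, fractional part: %.6f\n", big, big - std::floor(big));
}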
The floating point pipeline has constant speed on operations, regardless of the size of the operands.
The only danger is getting abnormally large or small numbers, where the rounding error gets out of hand or you end up with subnormal numbers.