Does OpenGL care if I work with large numbers?

For example, I could convert the positions where I draw my entities to screen-relative coordinates before sending them to OpenGL - or I could give OpenGL the direct coordinates, relative to the game world, which are of course on a much larger scale.
Is drawing with large vertex coordinates slow?

There aren't speed issues, but you will find that you get accuracy issues as the differences between your numbers will be very small in relation to the size of the numbers.
For example, if your model is centred at (0,0,0) and is 10 x 10 x 10 then two points at opposite corners of your model are at (-5,-5,-5) and (5,5,5). If that same model is centred at (1000,1000,1000) then your points are at (995,995,995) and (1005,1005,1005).
While the absolute difference is the same, the difference relative to the coordinates isn't.
This could lead to rounding errors and display artefacts.
You should arrange the coordinates of models so that the centre of the model is at (0,0,0).
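To make the effect concrete, here is a minimal C++ sketch (reusing the 5 vs 1005 magnitudes from the example above) that prints the spacing between adjacent representable floats at each magnitude:

#include <cmath>
#include <cstdio>

int main() {
    // Gap to the next representable float, close to and far from the origin.
    float nearOrigin    = 5.0f;
    float farFromOrigin = 1005.0f;
    printf("spacing at %.1f: %g\n", nearOrigin,
           std::nextafterf(nearOrigin, 1e30f) - nearOrigin);
    printf("spacing at %.1f: %g\n", farFromOrigin,
           std::nextafterf(farFromOrigin, 1e30f) - farFromOrigin);
    // Roughly 5e-7 at 5.0 versus 6e-5 at 1005.0: the same model loses
    // absolute precision simply by being moved away from the origin.
    return 0;
}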

Speed with larger values will be the same as with smaller values of the same data type. But if you want more than the normal 32-bit precision and enable something like GL_EXT_vertex_attrib_64bit, then there may be performance penalties for that.
Large values are allowed, but one extra thing to watch out for other than the standard floating point precision issues is the precision of the depth buffer.
The depth precision is non-linearly distributed. So it is more precise close up and less precise for things far away. So if you want to be able to use a wide range of values for vertex coordinates, be aware that you may see z-fighting artifacts if you intend to render giant objects very far away from the camera and very small things close to the camera. Setting your near and far clip planes as close as possible to your actual scene extents will help but may not be enough to fix the problem. See http://www.opengl.org/resources/faq/technical/depthbuffer.htm, especially the last two points, for more details.
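As a rough illustration of how uneven that precision is, here is a small C++ sketch (the near/far values are just examples) that evaluates the standard perspective depth mapping for a 1-unit step close to the camera and far away:

#include <cstdio>

// Window-space depth in [0,1] for an eye-space distance d, using the standard
// GL perspective mapping with near plane n and far plane f (a sketch, not tied
// to any particular projection helper).
double windowDepth(double d, double n, double f) {
    double ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d);
    return 0.5 * ndc + 0.5;
}

int main() {
    double n = 0.1, f = 100000.0;
    // Depth-buffer range consumed by moving 1 unit, near versus far:
    printf("1-unit step near the camera: %.3e\n",
           windowDepth(2.0, n, f) - windowDepth(1.0, n, f));
    printf("1-unit step far away:        %.3e\n",
           windowDepth(90001.0, n, f) - windowDepth(90000.0, n, f));
    // Roughly 5e-2 versus 1e-11: distant geometry gets almost no depth
    // resolution, which is where the z-fighting comes from.
    return 0;
}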
Also, I know you asked about vertex positions, but texture coordinates may have additional precision limits beyond normal floating point rules. Assigning a texture coordinate of (9999999.5, 9999999.5) and expecting it to wrap properly to the (0..1, 0..1) range may not work as you expect (depending on implementation/hardware).
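As a quick CPU-side illustration (the GPU's interpolators have even less precision than a full float), the coordinate above already cannot be stored exactly in a 32-bit float:

#include <cmath>
#include <cstdio>

int main() {
    // The spacing between adjacent floats near 1e7 is already 1.0, so the
    // intended .5 fraction is rounded away before any wrapping happens.
    float u = 9999999.5f;
    printf("stored value: %.1f\n", (double)u);         // 10000000.0
    printf("wrapped u:    %.3f\n", u - std::floor(u)); // 0.000, not 0.5
    return 0;
}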

The floating-point pipeline runs at constant speed regardless of the size of the operands.
The only danger is drifting into abnormally large or small numbers, where the error gets out of hand or intermediate results underflow into subnormal numbers.
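If you are curious where the subnormal range starts for 32-bit floats, here is a tiny, purely illustrative C++ check:

#include <cmath>
#include <cstdio>

int main() {
    // Repeatedly halving a value eventually drops it below the smallest
    // normal float into the subnormal range, where precision (and on many
    // CPUs, speed) degrades.
    float x = 1.0f;
    int halvings = 0;
    while (std::fpclassify(x) == FP_NORMAL) {
        x *= 0.5f;
        ++halvings;
    }
    printf("after %d halvings: %g (%s)\n", halvings, (double)x,
           std::fpclassify(x) == FP_SUBNORMAL ? "subnormal" : "zero");
    return 0;
}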

Related

GPU-Computation (CUDA) tex2d/tex3d - How to deal with anisotropic pixel/voxel

I am quite new to CUDA programming and I have a question about the tex2D/tex3D functions. My goal is to implement a simple GPU-based ray tracer using the optimized CUDA texture functionality.
See the CUDA texture API provided by NVIDIA.
In my research I have to deal with images that have a different resolution in every dimension (like CT images, where (x,y) have a different resolution than (z)). Resampling to an isotropic pixel/voxel size might bring up some problems (especially for medical diagnosis).
For example, I have an image of size (100 px x 50 px) with a resolution of (2 px/mm x 1 px/mm). The ray enters the image at an arbitrary point and leaves it somewhere else. The ray is sampled in the direction from the entrance point to the exit point. At each sample point (pos.x, pos.y) the tex2D function carries out a (bilinear) interpolation, taking the neighbouring pixel values into account, weighted by their distance from the sample point.
Example image (not shown): in both shapes the corner points are named the same way, (x1,y1), .... The only difference is the physical spacing between the corner points. The interpolation point is (x,y). I computed an example using the formula for rectangular grids and got different results for the two grids. But if I use the ratio of the areas of the numbered rectangles, I get a different result.
My question: Will CUDA take care of the different resolutions of the dimensions, or does CUDA see all pixels at the same distance (and therefore as a square grid)?
The formula used by CUDA seems to be the one for a square grid (search for "CUDA Texture Fetching").
Or can I resample the image to a square grid before using tex2D without substantial information loss?
Any suggestions are welcome. If you need more clarification, feel free to ask and I will specify my question.
I don't believe that what (I think) you are trying to do can be achieved using textures. The sole filtering mode supported by textures is described here.
Some salient points:
Textures don't have resolution. They just have dimensions.
Texture data is implicitly uniformly spaced in all dimensions.
Texture interpolation is done in a reduced-accuracy fixed-point arithmetic format which gives 8 bits of representational accuracy.
None of this seems like anything that would be useful for the interpolation on a non-uniform grid which you are describing. At a minimum you would need to perform a coordinate transformation before you could use the uniform filtering mode. The amount of effort and expense would be about the same as just writing an interpolation routine yourself in user code.
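If you do go the texture route anyway, the coordinate transformation amounts to rescaling each axis by its own spacing before the lookup. A C++ sketch of that idea (the struct names and spacings are made up, and the half-texel offset assumes texel-centred addressing):

// Convert a physical position in millimetres into the pixel-based coordinate
// that a uniformly spaced 2D texture expects.
struct Spacing2D {
    float mmPerPixelX;   // e.g. 0.5 mm per pixel in x (2 px/mm)
    float mmPerPixelY;   // e.g. 1.0 mm per pixel in y (1 px/mm)
};

struct TexCoord2D {
    float x;
    float y;
};

TexCoord2D physicalToTexel(float xMm, float yMm, const Spacing2D& s) {
    // Dividing by the spacing makes one texture unit correspond to one pixel
    // in each dimension, so the hardware interpolates on its uniform grid
    // with distance-correct weights.
    TexCoord2D t;
    t.x = xMm / s.mmPerPixelX + 0.5f;
    t.y = yMm / s.mmPerPixelY + 0.5f;
    return t;
}

// In device code, t.x and t.y are what you would pass to tex2D(); the fetch
// itself still assumes uniform spacing, which is exactly why the rescaling
// has to happen first.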

Fragment shader - drawing a line?

I was interested in how to draw a line with a specific width (or multiple lines) using a fragment shader. I stumbled on this post which seems to explain it.
The challenge I have is understanding the logic behind it.
A couple of questions:
Our coordinate space in this example is (0.0-1.0,0.0-1.0), correct?
If so, what is the purpose of the "uv" variable? Since thickness is 500, the "uv" variable will be very small, and therefore so are the distances from it to points 1 and 2 (stored in the a and b variables)?
Finally, what is the logic behind the h variable?
I will try to answer all of your questions one by one:
1) Yes, this is in fact correct.
2) It is common in 3D computer graphics to express coordinates (within certain boundaries) with floating-point values between 0 and 1 (or between -1 and 1). First of all, this makes it quite easy to decide whether a given value crosses said boundary or not, and it abstracts away from the concept of a "pixel" being a discrete image unit; furthermore, this common practise can be found pretty much everywhere else (think of device coordinates or texture coordinates).
Don't be afraid that the values you are working with are less than one; in fact, in computer graphics you usually deal with floating-point arithmetic, and float types are quite good at expressing real values lying around the "1" point.
3) The formula given for h consists of two parts: the square-root part and the 2/c coefficient. The square-root part should be well known from school math classes - it is Heron's formula for the area of a triangle (with sides a, b, c). The 2/c factor extracts the height of said triangle, which is stored in h and is also the distance between point uv and the "ground line" of the triangle. This distance is then used to decide where uv is in relation to the line p1-p2.
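Spelled out in plain C++ (not the shader code from the linked post, just the same algebra), the computation of h looks roughly like this:

#include <cmath>

struct Vec2 { float x, y; };

static float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

// Distance from point p to the line through p1 and p2: Heron's formula gives
// the triangle area from the side lengths a, b, c, and 2*area/c is the height
// over the side p1-p2.
float distanceToLine(Vec2 p, Vec2 p1, Vec2 p2) {
    float a = length({p.x - p1.x, p.y - p1.y});   // |p  - p1|
    float b = length({p.x - p2.x, p.y - p2.y});   // |p  - p2|
    float c = length({p2.x - p1.x, p2.y - p1.y}); // |p2 - p1|
    float s = 0.5f * (a + b + c);                 // semi-perimeter
    float area = std::sqrt(s * (s - a) * (s - b) * (s - c)); // Heron's formula
    return 2.0f * area / c;                       // height = 2 * area / base
}

That h is then used, as described above, to decide where uv sits in relation to the line p1-p2.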

Is scale important in OpenGL?

Does the scale of a scene matter in OpenGL? By scale I mean, for instance, drawing a 1 unit cube but setting the camera position 1000 pixels away vs setting the camera 100 pixels away from a 0.1 unit cube? I guess, firstly, am I correct in thinking this would yield the same results visually? And if so, does anyone know how either would affect performance, if at all?
My hunch would be that it would have no effect but a more definitive answer would be nice.
It doesn't matter except for imprecisions using floating point arithmetic. So try not to use super small or large numbers.
If you mean 2D, this may not matter; the image created both ways may look the same.
"Setting the camera" actually only changes the transformation matrix the vertices are multiplied by, so after the transformation is applied, the vertex positions are the same. There might be minor differences resulting from the imprecision of floating-point values.
If the camera has a constant near and far clipping distance, the resulting values in depth buffer will differ, and one of the cubes might get outside of the clipping plane range, which would make it appear different / not at all.
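To see why the two setups from the question land in the same place on screen, here is a back-of-the-envelope C++ sketch of just the perspective divide (ignoring the full projection matrix):

#include <cstdio>

int main() {
    // Perspective projection only sees the ratio size/distance, so scaling the
    // object and its distance by the same factor projects to the same spot.
    // Numbers mirror the question: a 1-unit cube 1000 units away versus a
    // 0.1-unit cube 100 units away.
    double halfSizeBig = 0.5,    distBig = 1000.0;
    double halfSizeSmall = 0.05, distSmall = 100.0;
    printf("projected half-extent, big scene:   %g\n", halfSizeBig / distBig);
    printf("projected half-extent, small scene: %g\n", halfSizeSmall / distSmall);
    // Both print 0.0005 - identical apart from floating-point rounding.
    return 0;
}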
You should know that floats have two modes: normalized and denormalized (subnormal). For magnitudes below 1 you get more absolute precision, so it is better if you keep everything roughly between 0 and 1.
Also, looking at how the exponent/mantissa representation works, the difference between adjacent representable numbers grows as you move away from zero, and things will start to visibly pop when you move very far from the origin.
Yes, it matters. Not because of OpenGL, but because of the nature of floating point operations - floating point precision degrades at larger magnitudes. Try to normalize your values such that you have values closer to zero. That is, don't make a planet object 1,000,000 units wide just because you want a planet that's 1,000 km in diameter.

How does OpenGL render its triangles?

I need an algorithm to render pixel perfect triangles without using OpenGL.
Where can I get such an algorithm, like the one OpenGL uses? Preferably single-threaded.
It must do it like OpenGL does it, because OpenGL doesn't leave gaps or overlapping pixels on two triangles that share an edge. The scanline method makes those errors: http://joshbeam.com/articles/triangle_rasterization/ . I tried this half-space algorithm too: http://devmaster.net/forums/topic/1145-advanced-rasterization/ but I couldn't make it work (it renders gibberish).
How does OpenGL do it? Is it such an intensive job that I should just forget about it and use the (buggy) scanline method instead?
Where can I get the algorithm?
I would prefer a C++ solution but the language isn't the problem.
OpenGL uses a scanline-based method, as Alan and the article you linked mention. However, managing gaps is not a matter of math accuracy. You need an additional convention for the pixels on the edges of the triangle. The convention OpenGL uses is the same as the convention used by Direct3D. It's called "top-left" and the best explanation I've seen is in this MSDN rasterization rules article.
A typical OpenGL implementation will use the scanline/rasterization method more or less the same as your first link. The key to avoiding the gaps and overlaps is to be very careful with your math to avoid uncertainty in rounding errors.
If you think of pixels as squares of the screen with some finite area, then you can get overlaps because adjoining triangles will often both overlap the pixel. But instead, if you think of the pixel as the infinitely small point in the center of the square, then you can guarantee no gaps or overlaps.
You also need the arithmetic certainty of fixed-point integer math instead of the rounding errors of floating-point math. The left end of each rasterized scanline starts at the first pixel whose center is greater than or equal to the starting scan value. The right end stops just before the first pixel whose center is greater than or equal to the end scan value, i.e. on the last pixel whose center is strictly less than it.
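As a minimal sketch of that fill convention for one scanline (floating point here for brevity - a production rasterizer would use fixed point as noted above - with pixel centres assumed at integer + 0.5, and setPixel as a placeholder):

#include <cmath>

// Left end inclusive (first pixel whose centre >= xLeft), right end exclusive
// (strictly less than xRight). Two triangles sharing an edge therefore fill it
// exactly once: one span ends exactly where the other begins.
void fillSpan(float xLeft, float xRight, int y, void (*setPixel)(int, int)) {
    int first = (int)std::ceil(xLeft  - 0.5f); // first centre >= xLeft
    int end   = (int)std::ceil(xRight - 0.5f); // one past last centre < xRight
    for (int x = first; x < end; ++x)
        setPixel(x, y);
}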
Take a look at Mesa, an open-source implementation of the OpenGL specification. It currently supports up to OpenGL 3.0.

D3DXVec3Unproject is inaccurate in C++

I have a problem in my 3D DirectX map application (similar to Google Earth).
All coordinates are geocentric (XYZ) in meters.
The maps are drawn above earth ellipsoid.
Everything works nice, except one issue.
When I go to a very close zoom (about 1 meter per pixel), the D3DXVec3Unproject function returns inaccurate results. I get an error of 10-30 meters and cannot pan the map properly.
A lot of investigation isolated the problem to the following:
D3DXVECTOR3 vScreen, vBack;
D3DXVec3Project( &vScreen, &m_vecAt, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
D3DXVec3Unproject( &vBack, &vScreen, &m_viewPort, &m_matProj, &m_matView, &m_matWorld );
m_vecAt is a looking point.
D3DXVec3Project returns the correct vScreen - exactly the center of the window (+/- 0.5 pixel) with vScreen.z = 0.99 - but vBack differs from m_vecAt by tens of meters.
To say it again: everything else works, so I assume that all matrices and other data are calculated correctly.
The only reason I can think of is insufficient precision in single-precision floating-point arithmetic.
Please help.
Thank you guys.
I solved the problem. The near plane of the projection matrix was too close to the camera.
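For what it's worth, here is a rough C++ sketch of why the near plane mattered (plain math, not D3DX, and the scene dimensions are invented): with the standard perspective mapping, a tiny near plane squeezes almost the entire [0,1] depth range into the first stretch of the scene, so distant points all project to nearly identical depth values and unproject poorly.

#include <cstdio>

// D3D-style depth in [0,1] for an eye-space distance z, near plane n, far plane f.
double depth01(double z, double n, double f) {
    return (f / (f - n)) * (1.0 - n / z);
}

int main() {
    double f = 1.0e7;                       // far plane, e.g. planetary scale in meters
    double nearPlanes[] = { 0.1, 100.0 };
    for (double n : nearPlanes) {
        // Fraction of the depth range already used up 1 km from the camera:
        printf("near = %6.1f -> depth at 1 km = %.9f\n", n, depth01(1000.0, n, f));
    }
    // With near = 0.1 about 99.99% of the depth range is spent on the first
    // kilometre; pushing the near plane out spreads the precision far more
    // evenly, which matches the fix above.
    return 0;
}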
Precision of float might be one reason, but another issue you should investigate is the accuracy of the ellipsoid or geoid model you are using. Errors in the range of 10-40 meters are expected with most common models. More precise models are available but they are significantly more complex. The Wikipedia article has some good links to follow.
It can't be the geoid given that the example code is generic 3D projection code which doesn't know anything about it. It must be floating point inaccuracy. Is it particularly bad for points towards the back of screen space (ie z ~ 1)? Since you're projecting a frustum to a cube, the back of the projected volume covers more world space, so any inaccuracy is more pronounced.