I am having trouble finding the right coordinates in OpenGL.
For example: if h and w are the height and width of the window, I want to draw a line of length w/2 at a distance h/4 from the bottom. How would I do this in OpenGL?
I can't find any reference that states the maximum and minimum values of the coordinates.
My computer screen is 1024×768, so technically I assumed the limits should be:
x coordinate: -512 to 512
y coordinate: -384 to 384
z coordinate: -inf to 0
But this doesn't work. Why? I need to know how the coordinate system works in OpenGL.
In OpenGL you can redefine the coordinate system to whatever you need. By default the transformation from model space to clip space is the identity, and clip space maps directly to normalized device coordinates (NDC). What this means is that the xy coordinate range [-1,1]² maps to the viewport you set with glViewport. However, by applying the right transformations you can alter that mapping to whatever you want or need.
So I strongly suggest you read some tutorial on the OpenGL transformation pipeline and how to use it.
Fixed Function pipeline approach http://www.opengl.org/wiki/Vertex_Transformation
And the modern approach http://arcsynthesis.org/gltut/Positioning/Tut03%20A%20Better%20Way.html
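For the concrete example in the question, one fixed-function way is to set up an orthographic projection that matches the window in pixels. This is only a minimal sketch, assuming variables w and h hold the window size you already have:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, w, 0.0, h, -1.0, 1.0);   // now 1 unit = 1 pixel, origin at the bottom-left
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBegin(GL_LINES);                     // a horizontal line of length w/2, h/4 above the bottom
glVertex2f(w / 4.0f,        h / 4.0f);
glVertex2f(w * 3.0f / 4.0f, h / 4.0f);
glEnd();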
I was having a difficult time with the generic coordinate system at first too.
What I wound up doing, which makes more sense to me, is treating the OpenGL coordinate system as if I were working with real objects in a real-world coordinate system.
What I did was: I took a blueprint of a TARDIS (From the TV Show Doctor Who), which had dimensions for building the blue box in inches and feet.
From there, I took the GL coordinate system, and for every "1" GL unit, I made that equivalent to "1" foot, or 12 inches.
Then, based on (0,0,0) being the center point of my TARDIS, I just hand-drew the TARDIS in code, translating the precise dimensions from what I saw on the blueprint.
Here's a SMALL example of what I did:
glBegin( GL_QUADS );
glNormal3f( 1.0f, 0.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(15.0f/12.0f, myTop, 2.0f-(ONEINCH*0.25f));
glTexCoord2f(0.0f, 1.0f); glVertex3f(3.25f/12.0f, myTop, 2.0f-(ONEINCH*0.25f));
glTexCoord2f(1.0f, 1.0f); glVertex3f(3.25f/12.0f, myTop-(14.5f/12.0f), 2.0f - (ONEINCH*0.25f));
glTexCoord2f(1.0f, 0.0f); glVertex3f(15.0f/12.0f, myTop-(14.5f/12.0f), 2.0f - (ONEINCH*0.25f));
glEnd();
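The snippet above refers to a couple of names that aren't shown in the excerpt; hypothetically they might be defined along these lines (the values are illustrative, not from the original code):

const float ONEINCH = 1.0f / 12.0f;   // one inch in GL units, given 1 GL unit = 1 foot
float myTop = 7.0f;                   // assumed y coordinate (in feet) of the top of the panel being drawn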
The first thing I learned in this exercise is that GL units are generic units. So applying a system I was more familiar with, feet and inches, made it SO much easier to focus on GL rather than on what the heck the units mean.
Once I started drawing, I was able to work with gluPerspective much more effectively. It helped me understand that the viewport resolution (not to be confused with screen resolution) should ONLY have to be dealt with once, when setting up the projection with gluPerspective, as follows:
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,250.0f);
So to answer your question: there are no maximums for the floating-point values, only the levels of accuracy of the floats you're dealing with.
For instance, in my perspective example above, I put the near plane extremely close (0.1 GL units, which is 1.2 inches) and set the far plane to 250 GL units, or 250 feet, in the distance.
I think that's in part what was intended with OpenGL to begin with. The units are generic because three-dimensional design makes so much more sense if you have a real-world unit system to measure it against.
My advice is: do not think of GL units as having bounds or limitations other than the degrees of error associated with floating-point inaccuracies at small and large scales.
In fact, I advise comparing it to real world units. I work in Feet and Inches. Metric system and meters are quite common. If those don't work for you, make one up that does.
Here's the TARDIS I built:
https://universalbri.wordpress.com/2015/05/24/creators-journal-holodeck-management-system-progress-10/
I am working on a game with all of this, a Star Trek Online without combat, and am now doing database work (SQL Server). The OpenGL part answered the question: can I roll my own engine rather than leverage someone else's?
The answer is Yes, I can, and in fact it's preferable.
Good luck
I'm working on a project that makes movies from a simulation. The simulation is passed in from another program that defines the projection matrix.
The issue I am running into is that the other program has a sort of 'fake' orthographic view, what I mean by this is that its projection matrix is as follows:
PerspectiveMatrix = glm::perspective(3.5, 1, 1.0f, 50.0f);
And it uses the LookAt function:
ViewMatrix = glm::lookAt(
    glm::vec3(2000, -3000, 2000), // eye
    glm::vec3(0, 0, 0),           // center
    glm::vec3(0, 0, 1)            // up
);
So what I mean by a 'fake' orthographic view is that they have positioned the camera far enough away (with a small field-of-view angle to zoom back in on the scene) that the view lines, for lack of a better term, are almost parallel, like in a real orthographic projection.
This is all fine and well, but what I've run into, and it is an issue in the other program as well, is that all of the high-precision depth testing happens close to the camera, and in my case that is empty space. This means that there is quite a lot of z-fighting, as shown in the link below:
So my question is: in what ways can I change my depth testing to bias the precision towards the far plane, or something along those lines? I have tried moving the near plane farther out, which has the effect of zooming out the scene, so I compensate with a narrower angle in the perspective. But doing this enough times makes the problem worse: there isn't z-fighting, but things aren't drawn at the right depth. The spheres end up on top of everything.
I did find some info at Outerra:
http://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
They had some ideas for reversing the depth buffer, but they were Nvidia-specific and I need to be compatible with both ATI and Nvidia.
Both logarithmic depth and reversed depth mapping described in that blog post will work for you.
Reversed floating-point depth is better, and it works normally in DirectX. In OpenGL it won't bring you any extra precision due to a design flaw, unless the driver exposes the NV_depth_buffer_float extension, through which you can effectively turn off the bias that makes it unusable normally.
AMD supports that extension since their 13.12 Catalyst drivers, so the technique is usable on all 5000+ series AMD GPUs (older series aren't supported by the drivers).
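For reference, here is a minimal sketch of a reversed-Z setup using glClipControl (core in OpenGL 4.5 / ARB_clip_control), a later-standardized alternative to the NV extension path described above; it assumes you render into an FBO with a GL_DEPTH_COMPONENT32F depth attachment:

// Map clip-space z to [0,1] instead of the default [-1,1], so the float
// depth buffer keeps its precision near 0, where the far geometry now lands.
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
// With reversed Z, "far" is 0 and "near" is 1, so clear to 0 and invert the test.
glClearDepth(0.0);
glClear(GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_GREATER);
// Finally, use a projection matrix whose depth output runs from 1 at the near
// plane to 0 at the far plane (e.g. build a [0,1]-depth projection with near
// and far swapped); the exact construction depends on your math library.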
Simpler than any of the above: move your znear further from the camera. It looks like it's the third parameter of glm::perspective(), set to 1.0 in your example. Set it as large as you can before it starts clipping away stuff in the foreground of your scene, and your z-buffer precision problems will probably go away.
Reverse-float-z is great but only really needed for scenes with wider field of view and deeper geometry. For a near-orthographic scene like yours, just set your znear/zfar appropriately.
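As a minimal sketch of that suggestion, keeping the question's own projection call (the numbers here are placeholders to illustrate the idea, not tuned values):

float zNear = 10.0f;  // as large as possible without clipping foreground geometry
float zFar  = 50.0f;  // as small as possible while still containing the scene
PerspectiveMatrix = glm::perspective(3.5f, 1.0f, zNear, zFar);
// Depth precision roughly scales with the ratio zFar / zNear, so shrinking that
// ratio directly reduces z-fighting far from the camera.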
Suppose I have the following code:
glRotatef(angle, 1.0f, 1.0f, 0.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
Is this less efficient than utilizing one's own custom matrix via the glLoadMatrix function that accomplishes the same functionality?
Also, I know that when multiplying matrices to form a custom linear transformation, the last matrix multiplied is the first transformation to take place. Likewise, is this the case if I utilize the above code? Will it translate, then rotate about the Z axis, followed by rotations about the y and x axes?
In general, if you assemble your matrix on your own and load it via glLoadMatrix or glMultMatrix, your program will run faster. Unless you make stupid mistakes in your own matrix routines that ruin the performance, of course.
This is because the glRotate, glTranslate, etc. functions do quite a bit more than the pure math. They have to check the matrix mode, glRotate has to deal with cases where the axis is not passed as a unit vector, and so on.
But unless you do this tens of thousands of times per frame I wouldn't worry about the lost performance. It adds up, but it's not that much.
My personal way of dealing with OpenGL transformations is to build the matrices in my own code and only upload them to OpenGL via glLoadMatrix. This allows me to take lots of shortcuts, like reversing the order of multiplications (faster to calculate than the way OpenGL does it). It also gives me instant access to the matrix, which is required if you want to do bounding-box checks before rendering.
Needless to say code written with such an approach is also easier to port onto a different graphics API (think OpenGL|ES2, DirectX, Game-Consoles...)
According to the OpenGL specs, glRotate and glTranslate use their parameters to produce a 4x4 matrix; the current matrix is then multiplied by the produced matrix, with the product replacing the current matrix.
This roughly means that in your listed code you have 4 matrix multiplications! On top of that you have 4 API calls and a few other calculations that convert the angle of glRotate into a 4x4 matrix.
By using glLoadMatrix you have to produce the transformation matrix yourself. Given the angles and the translation, there are far more efficient ways to produce that transformation matrix and thus speed up the whole thing.
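As a rough sketch of that idea (column-major order, as glLoadMatrixf expects; only the Z rotation and the translation from the question are shown to keep it short):

// Build modelview = Rz(angle) * T(0, 0, -5) on the CPU, then upload it once.
float c = cosf(angle * 0.0174533f);   // angle is in degrees, as with glRotatef
float s = sinf(angle * 0.0174533f);
float m[16] = {
    c,    s,    0.0f, 0.0f,   // column 0
   -s,    c,    0.0f, 0.0f,   // column 1
    0.0f, 0.0f, 1.0f, 0.0f,   // column 2
    0.0f, 0.0f, -5.0f, 1.0f   // column 3: translation
};
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);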
Is this less efficient than utilizing one's own custom matrix via the glLoadMatrix function that accomplishes the same functionality?
Very likely. However, if you're running into a situation where setting the transformation matrices has become a bottleneck, you're doing something fundamentally wrong. In a sanely written realtime graphics program, calculating the transformation matrices should make up only a very small fraction of the total work.
An example of very bad programming would be something like this (pseudocode):
glMatrixMode(GL_MODELVIEW)
for q in quads:
    glPushMatrix()
    glTranslatef(q.x, q.y, q.z)
    glBindTexture(GL_TEXTURE_2D, q.texture)
    glBegin(GL_QUADS)
    for v in [(0,0), (1,0), (1,1), (0,1)]:
        glVertex2f(v[0], v[1])
    glEnd()
    glPopMatrix()
Code like this will perform very poorly. First you're spending an awful lot of time calculating a new transformation matrix for each quad, then you restart a primitive batch for each quad, the texture switches kill the caches, and last but not least it's using immediate mode. Indeed, the above code combines the worst OpenGL anti-patterns in one single example.
Your best bet for increasing rendering performance is to avoid any of the patterns you can see in above example.
Those matrix functions are implemented in the driver, so they might be optimized. You will have to spend some time writing your own code and testing whether the performance is better than the original OpenGL code.
On the other hand, in newer versions of OpenGL all of those functions are removed or marked as deprecated, so under the new standard you are forced to use custom math functions (assuming you are using the core profile).
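In core-profile style that means building the matrix yourself or with a math library and uploading it to a shader. A minimal sketch with glm, where the program handle and the uniform name "u_modelview" are assumptions standing in for your own shader setup:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Same transform sequence as the fixed-function snippet in the question.
glm::mat4 mv(1.0f);
mv = glm::rotate(mv, glm::radians(angle), glm::normalize(glm::vec3(1.0f, 1.0f, 0.0f)));
mv = glm::rotate(mv, glm::radians(angle), glm::vec3(0.0f, 1.0f, 0.0f));
mv = glm::rotate(mv, glm::radians(angle), glm::vec3(0.0f, 0.0f, 1.0f));
mv = glm::translate(mv, glm::vec3(0.0f, 0.0f, -5.0f));

glUseProgram(program);  // "program" is your own shader program object
glUniformMatrix4fv(glGetUniformLocation(program, "u_modelview"), 1, GL_FALSE, glm::value_ptr(mv));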
I'm writing a space exploration application. I've decided on light years being the units and have accurately modeled the distances between stars. After tinkering and a lot of arduous work (mostly learning the ropes) I have got the camera working correctly from the point of view of a starship traversing through the cosmos.
Initially I paid no attention to the zNear parameter of gluPerspective() until I worked on planetary objects. Since my scale is in light-year units, I soon realized that with zNear being 1.0f I would not be able to see such objects. After experimentation I arrived at these figures:
#define POV 45
#define zNear 0.0000001f
#define zFar 100000000.0f
gluPerspective(POV, WinWidth/WinHeight, zNear, zFar);
This works exceptionally well in that I was able to cruise my solar system (position 0,0,0) and move up close to the planets which look great lit and texture mapped. However other systems (not at position 0,0,0) were much harder to cruise through because the objects moved away from the camera in unusual ways.
I had noticed, however, that strange visual glitches started to take place when cruising through the universe. Objects behind me would 'wrap around' and show up ahead; if I swing 180 degrees in the Y direction they also appear in their original place. So when warping through space, most of the stars parallax correctly, but some appear and travel in the opposite direction (which is disturbing, to say the least).
Changing zNear to 0.1f immediately corrects ALL of these glitches (but then solar-system objects can no longer be resolved). So I'm stuck. I've also tried working with glFrustum and it produces exactly the same results.
I use the following to view the world:
glTranslatef(pos_x, pos_y, pos_z);
With relevant camera code to orientate as required. Even disabling camera functionality does not change anything. I've even tried gluLookAt() and again it produces the same results.
Does gluPerspective() have limits when extreme zNear / zFar values are used? I tried to reduce the range but to no avail. I even changed my world units from light years to kilometers by scaling everything up and using a bigger zNear value - nothing. HELP!
The problem is that you want to resolve too much at the same time. You want to view things on the scale of the solar system, while also having semi-galactic scale. That is simply not possible. Not with a real-time renderer.
There is only so much floating-point precision to go around. And with your zNear being incredibly close, you've basically destroyed your depth buffer for anything that is more than about 0.0001 away from your camera.
What you need to do is draw things based on distance. Near objects (within a solar system's scale) are drawn with one perspective matrix, using one depth range (say, 0 to 0.8). Then more distant objects are drawn with a different perspective matrix and a different depth range (0.8 to 1). That's really the only way you're going to make this work.
Also, you may need to compute the matrices for objects on the CPU in double-precision math, then translate them back to single-precision for OpenGL to use.
OpenGL should not be drawing anything farther from the camera than zFar, or closer to the camera than zNear.
But for things in between, OpenGL computes a depth value that is stored in the depth buffer which it uses to tell whether one object is blocking another. Unfortunately, the depth buffer has limited precision (generally 16 or 24 bits) and according to this, roughly log2(zFar/zNear) bits of precision are lost. Thus, a zFar/zNear ratio of 10^15 (~50 bits lost) is bound to cause problems. One option would be to slightly increase zNear (if you can). Otherwise, you will need to look into Split Depth Buffers or Logarithmic Depth Buffers
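As a quick back-of-the-envelope check of that formula with the values from the question:

double zNear = 0.0000001;                    // 1e-7, from the question's #define
double zFar  = 100000000.0;                  // 1e+8
double bitsLost = std::log2(zFar / zNear);   // log2(1e15) ≈ 49.8 bits (needs <cmath>)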
Nicol Bolas already told you one piece of the story. The other is that you should start thinking about a structured way to store the coordinates: store the position of each object relative to the object that dominates it gravitationally, and use appropriate units for each level.
So you have stars. Distances between stars are measured in light-years. Stars are orbited by planets. Distances within a star system are measured in light-minutes to light-hours. Planets are orbited by moons. Distances in a planetary system are measured in light-seconds.
To display such scales you need to render in multiple passes. The objects with their scales form a tree. First you sort the branches from distant to close, then you traverse the tree depth-first. For each branching level you use appropriate projection parameters so that the near and far clip planes snugly fit the objects to be rendered. After rendering each level, clear the depth buffer.
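A minimal sketch of that multi-pass idea, reusing the constants from the question; the two drawing functions are placeholders for your own rendering code and the near/far values are assumptions you would tune per level:

// Pass 1: distant objects, projection sized for interstellar distances (light-years).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(POV, WinWidth/WinHeight, 0.1, 1000.0);
glMatrixMode(GL_MODELVIEW);
drawDistantStars();                  // placeholder for your own code

// Clear depth between levels so the ranges don't fight each other.
glClear(GL_DEPTH_BUFFER_BIT);

// Pass 2: nearby objects, projection sized for planetary scales.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(POV, WinWidth/WinHeight, 0.000001, 0.1);
glMatrixMode(GL_MODELVIEW);
drawSolarSystem();                   // placeholder for your own code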
And this time I have loaded a model successfully! Yay!
But there's a slight problem, one that I had with another obj loader...
Here's what it looks like:
http://img132.imageshack.us/i/newglitch2.jpg/
Here's another angle if you can't see it right away:
http://img42.imageshack.us/i/newglitch3.jpg/
Now, this is supposed to look like a cube, but as you can see, the edges of the faces on the cube are very choppy.
Is anyone else having this problem? If anyone knows how to solve it, let me know.
Also, comment if there's any code that needs to be shown; I'll be happy to post it.
Hey, I played around with the code (changed some stuff) and this is what I have come up with.
ORIGINAL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -10.0f);
Result: choppy image (see the images above).
CURRENT:
glMatrixMode(GL_MODELVIEW);
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -50.0f);
glLoadIdentity();
Result: the model is not choppy, but I cannot move the camera (the model is right in front of me).
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
That's your problem right there: the near clip distance of 0.
The near clip distance must be greater than 0 for perspective projections. In fact, you should choose the near plane to be as far away as possible and the far clip plane to be as near as possible.
Say your depth buffer is 16 bits wide; then the scene is sliced into 65536 depth slices, and the slice distribution follows a 1/x law. With a near distance of 0 you are technically dividing by zero.
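A sketch of the fix, keeping the rest of the original setup (the exact near value of 0.1 is an assumption; push it out as far as your scene allows):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, (double)800 / (double)600, 0.1, 200.0);  // near must be > 0
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.f, 0.f, -10.0f);   // move the model away from the camera, as in the original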
Well, this looks like a projection-settings issue. Some parts of your cube, when transformed into clip space, exceed the near/far planes.
From what I see, you are using an orthographic projection matrix, which is standard for 2D UI. Please review the nearVal and farVal of your glOrtho call. For 2D UI they are usually set to -1 and 1 respectively (or 0 and 1), so you may want to either scale down the cube or increase the view-frustum depth by modifying those parameters.
I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
But the result is not a triangle rotated 90 degrees.
Edit
Hmm, thanks to Mike Haboustak: it turned out my code was calling a SetCamera function that uses glOrtho. I'm too new to OpenGL to have any idea what this meant, but disabling it and rotating about the Z axis produced the desired result.
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
glMatrixMode(GL_MODELVIEW);
Otherwise, you may be modifying either the projection or a texture matrix instead.
Do you get a 1-unit straight line? It seems that a 90-degree rotation around Y is going to have you looking at the edge of a triangle with no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The "accepted answer" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.
Two major problems with the code as written:
Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookAt() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work. A small sketch follows this list.
You are drawing the triangle in a left handed, or clockwise method, whereas the default for OpenGL is a right handed, or counterclockwise coordinate system. This means that, if you are culling backfaces (which you are probably not, but will likely move onto as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices (1,1) to (3,2) to (3,1). When you do this, your thumb is facing away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right handed method, since that is the common way it is done in OpenGL.
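Here is a minimal sketch of both fixes together, assuming a perspective projection has already been set up (e.g. with gluPerspective); the eye position is an arbitrary assumption, and any position that looks back at the triangle works:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// 1. Pull the camera back along +Z so the triangle sits in front of it.
gluLookAt(2.0, 1.5, 10.0,   // eye (arbitrary choice)
          2.0, 1.5, 0.0,    // center: roughly the middle of the triangle
          0.0, 1.0, 0.0);   // up

glBegin(GL_TRIANGLES);
// 2. Counterclockwise winding as seen from the camera (OpenGL's default front face).
glVertex3f(1.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 2.0f, 0.0f);
glEnd();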
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow.
Regarding the projection matrix, you can find a good source to start with here:
http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx
It explains a bit about how to construct one type of projection matrix. Orthographic projection is the most basic and primitive form of such a matrix: essentially it takes 2 of the 3 axis coordinates and projects them to the screen (you can still flip axes and scale them, but there is no warp or perspective effect).
Matrix transformation is most likely one of the most important things when rendering in 3D, and it basically involves 3 matrix stages:
Transform1 = Object coordinate system to World (for example, object rotation and scale)
Transform2 = World coordinate system to Camera (placing the object in the right place relative to the camera)
Transform3 = Camera coordinate system to Screen space (projecting to the screen)
Usually the result of multiplying the 3 matrices is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms coordinates from model space through world space, then to camera space, and finally to the screen representation.
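A small sketch of how those three stages might be composed with a math library such as glm (all of the specific values are made-up placeholders):

// Transform1: object -> world (this object's placement in the world)
glm::mat4 world = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f));
// Transform2: world -> camera
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 10.0f),   // eye
                             glm::vec3(0.0f, 0.0f, 0.0f),    // center
                             glm::vec3(0.0f, 1.0f, 0.0f));   // up
// Transform3: camera -> screen (projection)
glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

// The combined WorldViewProjection matrix; applied right-to-left to each vertex.
glm::mat4 wvp = proj * view * world;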
Have fun