How does the coordinate system work in OpenGL?

I'm learning OpenGL and I've been told there are different coordinate systems: one for the model and another one for the application, but I'm not sure when I should use each one.
Can someone help me?

The model's coordinate system is used when creating the different objects in a scene. The application's (world) coordinate system is used to represent the entire scene. By multiplying the vertices of an object by its transformation matrix, you place the object in the scene you want to represent, along with all the other objects.
I hope this answers your question.
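For illustration, here is a minimal sketch with the legacy fixed-function pipeline: the cube's vertices are defined once in its own model coordinates, and the modelview matrix places each instance in the scene. drawCube() is a hypothetical helper that just submits the vertex data.

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// ... apply the camera/view transform here first ...

glPushMatrix();
glTranslatef(2.0f, 0.0f, -5.0f);    // place one cube at (2, 0, -5) in the scene
glRotatef(45.0f, 0.0f, 1.0f, 0.0f); // and give it its own orientation
drawCube();                         // same model-space vertices every time
glPopMatrix();

glPushMatrix();
glTranslatef(-1.0f, 0.0f, -8.0f);   // a second cube, somewhere else in the scene
drawCube();
glPopMatrix();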

Related

How to render CGAL objects in OpenGL properly?

I am quite new to CGAL as well as OpenGL. I know that CGAL provides a Qt interface to display objects, but I want to use only OpenGL, and I am able to render polyhedrons and nef polyhedrons in OpenGL (I referred to the polyhedron demo). The question is, how do I display polyhedrons of different sizes efficiently in OpenGL? I apply a translation in my program using glTranslatef to view the objects properly. The problem is that it may not work for each and every object because of the difference in size, so I need to apply translations based on the size of the object. If I could find the longest diagonal of the object, this might be possible by adjusting the parameters that I pass to glTranslatef(). Is there any way to do this in CGAL?
Treat your objects as a collection of points, and create a bounding volume from it. The size of the bounding volume should give you the scaling required. For example, you might wish to center the view around the center of the bounding sphere, and scale the view based on its radius.
See the chapter on bounding volumes.
Also, you probably want to use glScalef to scale the view in addition to glTranslatef to center it.
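A minimal sketch of that idea, assuming you already have the object's axis-aligned bounding box (CGAL can compute one from the polyhedron's vertex points); the bbox field names below are hypothetical:

// Center of the bounding box, and half of its longest diagonal as a rough
// bounding-sphere radius.
float cx = 0.5f * (bbox.xmin + bbox.xmax);
float cy = 0.5f * (bbox.ymin + bbox.ymax);
float cz = 0.5f * (bbox.zmin + bbox.zmax);
float dx = bbox.xmax - bbox.xmin;
float dy = bbox.ymax - bbox.ymin;
float dz = bbox.zmax - bbox.zmin;
float radius = 0.5f * sqrtf(dx * dx + dy * dy + dz * dz);

// Scale the object to roughly unit size and move its center to the origin.
// The transform specified last is applied to the vertices first, so the
// object is centered, then scaled.
float s = 1.0f / radius;
glScalef(s, s, s);
glTranslatef(-cx, -cy, -cz);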

In a big OpenGL game with lots of 3D moving objects, how are the points typically updated?

1. Points are calculated with your own physics engine and then sent to OpenGL every time it has to display, e.g. with glBufferSubDataARB, with the updated coordinates of a flying barrel.
2. There are lots of barrels sharing the same model (local) coordinates, but for each one you tell OpenGL to use a different matrix transformation. When a barrel moves you update its transformation matrix to reflect which way it rotated/translated in the world.
3. Some other way.
Also, if the answer is #2, is there an easy way to do it, e.g. with abstracted code rather than manipulating the matrices yourself?
OpenGL is not a scene graph, it's a drawing API. Recent versions of OpenGL (OpenGL 3 core and above) reflect this by not managing matrix state at all. Indeed, the answer is #2, more or less, and you are expected to deal with the matrix math yourself; OpenGL 3 no longer provides any primitives for that.
Usually a physics engine sees an object as a rigid body with a convex hull. The natural way to represent such a body is a 4×3 matrix (a 3×3 rotation matrix plus a translation vector), so if you're using a physics engine you're presented with such matrices anyway.
Also, you must understand that OpenGL doesn't maintain a scene, so there is nothing to "update". You just draw your data using OpenGL, loading matrices as they are needed.
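As a rough sketch of option #2 with the legacy matrix stack (the Barrel type, its worldMatrix field and drawBarrelMesh() are made-up names): one shared VBO holds the barrel mesh once, and each instance is placed by the 4×4 matrix your physics engine gives you.

for (const Barrel &barrel : barrels) {
    glPushMatrix();
    glMultMatrixf(barrel.worldMatrix); // 16 floats, column-major, from the physics engine
    drawBarrelMesh();                  // draws the shared, unchanged VBO
    glPopMatrix();
}

In a modern (OpenGL 3+ core) setup the idea is the same, except you multiply the matrices yourself and pass the result to your shader as a uniform instead of calling glMultMatrixf.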

The purpose of Model View Projection Matrix

For what purposes are we using Model View Projection Matrix?
Why do shaders require Model View Projection Matrix?
The model, view and projection matrices are three separate matrices. Model maps from an object's local coordinate space into world space, view from world space to camera space, and projection from camera space to the screen.
If you compose all three, you can use the single result to map all the way from object space to screen space, which lets you work out what you need to pass on to the next stage of a programmable pipeline from the incoming vertex positions.
In the fixed-function pipelines of old, you'd apply model and view together, then work out lighting using another result derived from them (with some fixes so that e.g. normals stay unit length even if you've applied some scaling to the object), then apply projection. You can see that reflected in OpenGL, which never separates the model and view matrices, keeping them as a single modelview matrix stack. You therefore also sometimes see that reflected in shaders.
So: the composed model-view-projection matrix is often used by shaders to map the vertices you loaded for each model to the screen. It isn't required; there are lots of ways of achieving the same thing. It's just usual because it allows all possible linear transforms, and for the same reason a lesser composed version of it was also the norm in ye olde fixed-pipeline world.
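For a concrete sketch of how the composition usually looks on the CPU side, assuming GLM is available; objectPosition, cameraPos, cameraTarget, aspect, program and the uniform name "uMVP" are all placeholders, not anything the answer above requires:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 model = glm::translate(glm::mat4(1.0f), objectPosition);                   // object -> world
glm::mat4 view = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0.0f, 1.0f, 0.0f));  // world -> camera
glm::mat4 projection = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);  // camera -> clip
glm::mat4 mvp = projection * view * model;

glUniformMatrix4fv(glGetUniformLocation(program, "uMVP"), 1, GL_FALSE, glm::value_ptr(mvp));
// In the vertex shader: gl_Position = uMVP * vec4(position, 1.0);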
Because matrices are convenient. Matrices help to convert locations/directions between different spaces (a space can be defined by three perpendicular axes and an origin).
Here is an example from a book mentioned by @legends2k in the comments.
The residents of Cartesia use a map of their city with the origin centered quite sensibly at the center of town and axes directed along the cardinal points of the compass. The residents of Dyslexia use a map of their city with the coordinates centered at an arbitrary point and the axes running in some arbitrary directions that probably seemed a good idea at the time. The citizens of both cities are quite happy with their respective maps, but the State Transportation Engineer assigned a task of running up a budget for the first highway between Cartesia and Dyslexia needs a map showing the details of both cities, which therefore introduces a third coordinate system that is superior to him, though not necessarily to anybody else.
Here is another example:
Assume that you have created a car object in a game, with its vertex positions defined using the world's coordinates. Suppose you have to use this same car in some other game in an entirely different world; you would have to define the positions again, and the calculations would get complex. This is because you would again have to calculate the positions of the window, hood, headlights, wheels, etc. of the car with respect to the new world.
See this video to understand the concepts of model, view and projection (highly recommended).
Then see this to understand how the vertices in the world are represented as matrices and how they are transformed.

OpenGL Mouse Picking Strategy

I am using OpenGL to render a model of an object that is rotationally-symmetric in a given plane, and I want the user to be able to scroll over the model (possibly after rotations, zooms, etc.) and determine what world coordinate on the model the mouse is currently pointing to.
The reason I mentioned the symmetry is that I'm building the model using VBOs of individual components for ease of use. An analogy to what I'm doing would be a bicycle wheel - I'd have one VBO for a spoke, one for the hub, and one for the wheel/tire, and I'd reuse the spoke VBO a number of times (after suitable translations and rotations). My first question is: is this arrangement conducive to the kind of picking that I'm trying to do? I want each individual spoke in the resulting model to be "pickable", for example. Do I need a separate VBO for each quad/triangle in the mesh to do the kind of selection that I'm trying to do? I really hope that's not the case...
Also, what would be the best picking algorithm to use? I've heard nothing but negative things about OpenGL's built-in selection mode. Thanks in advance!
Regarding your question about VBOs, there is nothing wrong with having a few VBOs and reusing them. In fact, you can have everything in one VBO and only index into it. For your bicycle-wheel analogy, the spoke vertices could be followed by the tire vertices, and you could draw the spoke geometry several times (batching the draws with glMultiDrawElements, or even using instancing via glDrawElementsInstanced), followed by drawing the tire from the same VBO, starting at a different index.
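A small sketch of that layout with the fixed-function API (the buffer names, counts and offsets below are placeholders): spoke indices first, tire indices after them, all in one vertex buffer and one index buffer.

glBindBuffer(GL_ARRAY_BUFFER, wheelVbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, wheelIbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);

// Draw the same spoke geometry several times, each with its own transform.
for (int i = 0; i < numSpokes; ++i) {
    glPushMatrix();
    glRotatef(i * 360.0f / numSpokes, 0.0f, 0.0f, 1.0f);
    glDrawElements(GL_TRIANGLES, spokeIndexCount, GL_UNSIGNED_INT, (void*)0);
    glPopMatrix();
}

// Then draw the tire from the same buffers, starting at a different index offset.
glDrawElements(GL_TRIANGLES, tireIndexCount, GL_UNSIGNED_INT,
               (void*)(spokeIndexCount * sizeof(GLuint)));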
Regarding your picking question, one easy way of getting the world coordinate of the point on the model under the mouse would be to read back the depth value (glReadPixels on a 1x1 rectangle, preferably with a pixel buffer object on the last frame's data; that way you hide the transfer latency). Then call gluUnProject to get world coordinates.
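A minimal sketch of that approach (mouseX/mouseY are whatever your windowing toolkit reports, and the matrices must be the ones the frame was rendered with):

GLint viewport[4];
GLdouble modelview[16], projection[16];
GLfloat depth;
GLdouble x, y, z;

glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);

// Window coordinates have their origin at the bottom-left corner.
int winX = mouseX;
int winY = viewport[3] - mouseY;

glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
gluUnProject(winX, winY, depth, modelview, projection, viewport, &x, &y, &z);
// (x, y, z) is the picked point in whatever space the modelview maps from;
// if the modelview holds only the camera transform, that is the world-space point.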

OpenGL Collision Detection

I am currently working on designing my first FPS game using JOGL. (Java bindings for OpenGL).
So far I have been able to generate the 'world' (a series of cubes), and a player model. I have the collision detection between the player and the cubes working great.
Now I am trying to add in the guns. I have the gun models drawn correctly and loading onto the player model. The first gun I'm trying to implement is a laser gun, which shoots an instantaneous line-of-sight laser at whatever you're aiming at. Before I work on implementing the enemy models, I would like to get the collision detection between the laser and the walls working.
My laser, currently, is drawn by a series of small cubes, one after the other. The first cube is drawn at the end of the player's gun, and the rest are drawn continuously from there. The idea was to continue drawing the cubes of the laser until a collision is detected with something, namely the cubes in the world.
I know the locations of the cubes in the world. The problem is that I have to call glPushMatrix to draw my character model, and the laser is then drawn within this modelview. This means I have lost my old coordinate system - so I'm drawing the world in one system, then the laser in another. Within this player matrix, I have to call glRotate and glTranslate several times in order to sync everything up with the way the camera is rotating. The laser is then built by translating along the z-axis of this new system.
My problem is that through all of these transformations, I no longer have any idea where my laser exists in the map coordinate system, primarily due to the rotations involving the camera.
Does anyone know of a method - or have any ideas, for how to solve this problem? I believe I need a way to convert the new coordinates of the laser into the old coordinates of the map, but I'm not sure how to go about undoing all of the transformations that have been done to it. There may also be some functionality provided by OpenGL to handle this sort of problem that I'm just unaware of.
You shouldn't be considering the laser as a spatial child of the character that fires it. Once it's been fired, the laser is an entity of its own, so you should render as follows:
glPushMatrix();
glMultMatrixf(viewMatrix);      // camera/view transform

glPushMatrix();
glMultMatrixf(playerMatrix);    // player's own world transform
DrawPlayer();
glPopMatrix();

glPushMatrix();
glMultMatrixf(laserMatrix);     // laser's world transform, independent of the player
DrawLaser();
glPopMatrix();

glPopMatrix();
Also, be sure that you don't mix your rendering transformation logic with the game logic. You should always store the world-space position of your objects to be able to test for intersections regardless of your current OpenGL matrix stack.
Remember to be careful with spatial parent/child relationships. In practice, they aren't that frequent. For more information, google the problems of scene graphs.
The point that was being made in the first answer is that you should never depend on the matrix to position the object in the first place. You should be keeping track of the position and rotation of the laser before you even think about drawing it. Then you use the translate and rotate commands to put it where you know it should be.
You're trying to do things backwards, and yes, that does mean you'll have to do the matrix math. OpenGL doesn't keep track of that, because the modelview matrix is the ONLY thing OpenGL does keep track of in regards to object positions. OpenGL has no concept of "world space" or "camera space"; there is only the matrix that all input is multiplied by. It's elegantly simple... but in some cases I do prefer the way DirectX has a separate view matrix and model matrix.
So, if you don't know where an object is located without matrix math, then I would consider that a fundamental design problem. If you don't need to know the object's position, then matrix-transform to your heart's content, but if you do need its position, start with the position.
(pretty much what the first answer says, just in a different way...)
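To make both answers concrete, a rough sketch (all names here are invented): keep the laser's world-space origin and direction in your game state, do the collision test there, and only then use the matrix stack to draw it.

struct Laser {
    float origin[3]; // world-space start point, e.g. the gun muzzle
    float dir[3];    // normalized world-space firing direction
};

// Game logic, no OpenGL involved: find how far the beam travels before it
// hits a world cube (findFirstCubeHit is your own ray-vs-cube test).
float hitDistance = findFirstCubeHit(laser.origin, laser.dir);

// Rendering: rebuild the transform from that stored state instead of relying
// on whatever is left on the matrix stack after drawing the player.
glPushMatrix();
glTranslatef(laser.origin[0], laser.origin[1], laser.origin[2]);
// rotate so the local +z axis lines up with laser.dir, then draw a beam of
// length hitDistance along it
drawLaserBeam(hitDistance);
glPopMatrix();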