How do you get the modelview and projection matrices in OpenGL?

I am trying to use the OpenGL Shading Language (GLSL) version 1.5 to make vertex and geometry shaders.
I have learned that in GLSL version 1.5, the built-in variables like gl_ModelViewProjectionMatrix are deprecated, so you have to pass them in manually. If I have already set the modelview and projection matrices (using gluLookAt and gluPerspective, for example), how do I get the matrices to pass into the vertex and geometry shaders? I've done some searching, and some sites seem to mention a function glGetMatrix(), but I can't find it in any official documentation and it doesn't seem to exist in the implementation I am using (I get a compilation error, "unknown identifier: glGetMatrix", when I try to use it).

Hey, let's slow down a bit here :) Yes, it's true that you can retrieve the matrix with glGetFloatv(GL_MODELVIEW_MATRIX, ptr)... but that's definitely not what you should be doing here!
Let me explain:
In GLSL, built-in variables like gl_ModelViewProjectionMatrix or functions like ftransform() are deprecated - that's right, but that's only because the whole matrix stack is deprecated in GL 3.x and you're supposed to use your own matrix stack (or use any other solution, a matrix stack is helpful but isn't obligatory!).
If you're still using the matrix stack, then you're relying on functionality from OpenGL 2.x or 1.x. That's okay since all of this is still supported on modern graphics cards because of the GL compatibility profile - it's good to switch to a new GL version, but you can stay with this for now.
But if you are using an older version of OpenGL (with the matrix stack), you should also use an older version of GLSL. Try 1.2, because higher versions (including your 1.5) are designed to match OpenGL 3, where things such as the projection or modelview matrices no longer exist in OpenGL and are expected to be passed explicitly as custom, user-defined uniform variables if needed.
The correspondence between OpenGL and GLSL versions used to be a bit tricky (before they cleaned up the version numbering to match), but it should be more or less similar to:
GL             GLSL
4.1            4.1
4.0            4.0
3.3            3.3
3.2            1.5
3.1            1.4
3.0            1.3
2.x and lower  1.2 and lower
So, long story short - the shader builtin uniforms are deprecated because the corresponding functionality in OpenGL is also deprecated; either go for a higher version of OpenGL or a lower version of GLSL.
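To make the contrast concrete, here is an illustrative sketch of the same trivial vertex shader written both ways; the uniform and attribute names in the 1.50 version (u_modelViewProjection, position) are placeholders you would choose yourself:
// GLSL 1.20: the matrix stack still exists, so the built-in uniform works.
const char* vs120 =
    "#version 120\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

// GLSL 1.50: no built-in matrices; the application supplies one as a uniform.
const char* vs150 =
    "#version 150\n"
    "uniform mat4 u_modelViewProjection;\n"
    "in vec4 position;\n"
    "void main() {\n"
    "    gl_Position = u_modelViewProjection * position;\n"
    "}\n";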

To get either matrix you use the constant GL_MODELVIEW_MATRIX or GL_PROJECTION_MATRIX with one of the glGet* query functions, e.g. glGetFloatv:
GLfloat modelview[16];
GLfloat projection[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview);
glGetFloatv(GL_PROJECTION_MATRIX, projection);
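To actually get those values into the shaders (which is what the question asks), you then upload them as uniforms. A minimal sketch, where "program" is your linked shader program and "u_modelView"/"u_projection" are hypothetical uniform names that must match what your GLSL declares:
// Hand the fixed-function matrices retrieved above to the shader as uniforms.
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "u_modelView"),  1, GL_FALSE, modelview);
glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, projection);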

Related

Need Minimum Textures required for OpenGL

Quick question: what is the minimum number of textures that can be bound for the fragment shader that an OpenGL implementation is required to support?
Note:
I would like to know this for OpenGL 1.5, for OpenGL 2.0, and OpenGL 2.1
OpenGL 1.x and 2.x require at least 2 texture units. OpenGL 3.x and 4.x require at least 16. Most current GPUs have 32.
You can find those values fairly easily in the OpenGL specification itself, in the "Implementation Dependent Values" table. This specific value is called MAX_TEXTURE_UNITS in 1.x and 2.x and MAX_TEXTURE_IMAGE_UNITS in 3.x and 4.x.
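If you'd rather not dig through the spec, you can also just query the limits at run time; the spec values are only minimums, so real drivers usually report more. A small sketch:
GLint fixedFunctionUnits = 0;    // GL_MAX_TEXTURE_UNITS: fixed-function limit (GL 1.3 - 2.x)
GLint fragmentTextureUnits = 0;  // GL_MAX_TEXTURE_IMAGE_UNITS: fragment-shader limit (GL 2.0+)
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &fixedFunctionUnits);
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentTextureUnits);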

Porting C++ OpenGL from windows to mac. What to do with GLKMatrix4MakePerspective?

I am trying to port a c++ program that uses SFML and OpenGL I wrote from Windows to Mac OS X. I am using g++ to compile on both platforms.
Here is my current code for calculating the projection matrix:
void perspectiveCalculate (int width, int height) {
    // Prevent a divide by zero
    if (height < 1) height = 1;
    if (width  < 1) width  = 1;

    glViewport(0, 0, width, height);   // Reset the current viewport

    glMatrixMode(GL_PROJECTION);       // Select the projection matrix
    glLoadIdentity();                  // Reset the projection matrix
    // Field of view, aspect ratio of the window, near clipping, far clipping
    gluPerspective(45.0f, (GLfloat)width/(GLfloat)height, 0.1f, 1000.0f);

    glMatrixMode(GL_MODELVIEW);        // Select the modelview matrix
    glLoadIdentity();                  // Reset the modelview matrix
}
However, g++ is saying that gluPerspective is deprecated, and I should be using GLKMatrix4MakePerspective instead. I found the manual page on the function but I'm not sure how to integrate it with the rest of my code. What should I do?
All your matrix code there is deprecated in modern OpenGL. You can keep using it on the Mac (warnings and all) if you have a GL 2.x context, but if you want to use GL 3.2 or newer you only get a Core Profile context, and for that you need to move to modern OpenGL code. (Ditto if you want to use OpenGL ES 2.0 or newer on mobile platforms.)
In modern OpenGL, the matrix setup functions you're using are all gone. The fixed-function pipeline they go with is replaced by a programmable pipeline... instead of telling OpenGL what model, view, and projection matrices you want and having it do the transformations for you, you do the transformations yourself in a GLSL vertex shader. The good news is that you can now leverage programmable shaders to do all kinds of effects not possible with the fixed-function pipeline. The bad news is you have to do the basic stuff yourself.
Part of doing that basic stuff is finding (or writing) a math library to generate all the matrices you'll be handing off to the vertex shader for doing your transformations. GLKMatrix4 is part of Apple's library for such things. GLKMatrix4MakePerspective takes almost the same set of arguments as gluPerspective (vertical FOV, aspect ratio, near clipping distance, far clipping distance; the main difference is that the FOV is given in radians rather than degrees), but instead of setting fixed-function matrix state it returns a GLKMatrix4 data structure.
You then pass this data structure to a vertex shader via a uniform variable. Or, if you're working with GLKit anyway, you can look into GLKBaseEffect, which provides an Objective-C interface roughly analogous to the old fixed-function pipeline.
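As an example, here is a hedged sketch of what the function above might become on the Mac, assuming GLKit is available (link against the GLKit framework) and assuming your vertex shader declares a uniform named "u_projection" (a hypothetical name):
#include <OpenGL/gl3.h>      // core-profile OpenGL on OS X
#include <GLKit/GLKMath.h>   // GLKMatrix4, GLKMatrix4MakePerspective, ...

void perspectiveCalculate(GLuint program, int width, int height) {
    // Prevent a divide by zero
    if (height < 1) height = 1;
    if (width  < 1) width  = 1;
    glViewport(0, 0, width, height);

    // Same parameters as gluPerspective, except the FOV is in radians.
    GLKMatrix4 projection = GLKMatrix4MakePerspective(
        GLKMathDegreesToRadians(45.0f),
        (float)width / (float)height,
        0.1f, 1000.0f);

    // Instead of glMatrixMode/glLoadIdentity, hand the matrix to the shader.
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"),
                       1, GL_FALSE, projection.m);
}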
Dealing with the entire process of moving to modern OpenGL is beyond the scope of one SO answer. Here's a few resources:
http://www.opengl-tutorial.org — Good overall tutorial. They use GLM for matrix math, but you can use GLKit math just as well
Migrating to OpenGL Core Profile — Apple Developer video
"OpenGL Game" Xcode template — this is for OpenGL ES on iOS, but you can run it in the simulator, and it provides a nice simple example of how to use GLKit to generate matrices and either hand them off to shaders or pass them to GLKBaseEffect. The GL part of this is pretty much the same on both iOS and OS X.

OpenGL 3.+ glsl compatibility mess?

So, I have googled a lot of OpenGL 3.+ tutorials, all incorporating shaders (GLSL 330 core). However, I do not have a graphics card supporting these newer GLSL implementations; maybe I just have to update my driver, but I'm still not sure whether my card can support them at all.
Currently my OpenGL version is 3.1, and on Windows with C++ I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 -> #version 130.
The problem is: version 130 still seems to be based on the legacy OpenGL pipeline, because it contains things like the modelview matrix, the projection matrix, etc. Then how am I supposed to use them when I am using core functions in my client app (OpenGL 3+)?
This is really confusing, give me concrete examples.
Furthermore, I want my app to be able to run on most OpenGL implementations, so could you tell me where the border is between legacy GLSL and modern GLSL? Is GLSL 330 the modern GLSL, and is OpenGL 3.+ compatible with older GLSL versions?
I would say OpenGL 3.1 is modern OpenGL.
Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. Whether the driver supports it is another matter. Updating your graphics driver will probably bump you up to OpenGL 3.3.
Just to clear this up OpenGL 3.1 is not legacy OpenGL.
legacy OpenGL would be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(90.0, 0.0, 1.0, 0.0);
glTranslatef(0.0, 0.0, -5.0);
OpenGL 3.1 with a compatibility context still supports this, but that doesn't mean it should be used. If you are developing for OpenGL 3 capable hardware you should most definitely not be using it. You can disable the legacy functionality by requesting a core context.
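For instance, a minimal sketch of requesting a core context, assuming GLFW as the windowing library (the question doesn't say which one is used, so treat this as just one example):
#include <GLFW/glfw3.h>

GLFWwindow* createCoreProfileWindow() {
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    // Core profile: legacy calls like glMatrixMode now generate errors instead of working.
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    return glfwCreateWindow(800, 600, "core profile", NULL, NULL);
}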
If you are using shaders then you have already moved away from the legacy fixed-function pipeline, so GLSL 130 is not legacy :P.
Working on my Linux laptop with its Intel integrated graphics, where the latest stable drivers only provide OpenGL 3.1 (yes, the OpenGL 3.3 commits are in place, but I'm waiting for Mesa 10 ;) ), I have, without much effort, been able to get the OpenGL 3.3 tutorials to run on my machine without touching legacy OpenGL.
One of the wonderful things about OpenGL is that you can extend its functionality with OpenGL extensions. Even if your hardware isn't capable of handling OpenGL 4.4, you can still use the extensions that don't require OpenGL 4 hardware, provided you have updated drivers!
See https://developer.nvidia.com/opengl-driver and http://developer.amd.com/resources/documentation-articles/opengl-zone/ for info on what features are added to older HW, but if you are uncertain all you have to do is test it on your HW.
And I'll finish off by saying that legacy OpenGL also has its place.
In my opinion legacy OpenGL might be easier to learn than modern OpenGL, since you don't need knowledge of shaders and OpenGL buffers to draw your first triangle, but I don't think you should be using it in a modern production application.
If you need support for old hardware you might need to use an older OpenGL version. But even the integrated GPUs in modern CPUs support OpenGL 3, so I would not worry about this too much.
Converting from OpenGL 3.3 to OpenGL 3.0
I tested it on the tutorials from http://www.opengl-tutorial.org/. I cannot put up the code I converted, as most of it is taken as-is from the tutorials and I don't have permission to post it here.
The author talked about OpenGL 3.1, but since he is capped at GLSL 130 (OpenGL 3.0), I am converting to 3.0.
1. First of all, change the context version to OpenGL 3.0 (just change the minor version to 0 if you're working from the tutorials). Also, don't set it to use a core context if you're using OpenGL 3.0, since as far as I know ARB_compatibility is only available from OpenGL 3.1.
2. Change the shader version to
   #version 130
3. Remove all layout bindings in shaders, i.e. change
   layout(location = #) in vec2 #myVarName;
   to
   in vec2 #myVarName;
4. Use glBindAttribLocation to bind the in layouts as they were specified (see 3), e.g.
   glBindAttribLocation(#myProgramName, #, "#myVarName");
5. Use glBindFragDataLocation to bind the out layouts as they were specified (see 3), e.g.
   glBindFragDataLocation(#myProgramName, #, "#myVarName");
   (There is a short sketch of how steps 4 and 5 fit together right after this list.)
6. glFramebufferTexture doesn't work in OpenGL 3.0 (it's used for shadow mapping, deferred rendering, etc.). Instead you need to use glFramebufferTexture2D, which has an extra parameter, but the documentation is sufficient.
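For reference, here is a minimal sketch of how the calls from steps 4 and 5 fit into shader setup; myProgram, vertexShader, fragmentShader, "position" and "fragColor" are placeholder names, and the location numbers must match whatever the original layout qualifiers said:
// The bindings must be made BEFORE glLinkProgram for them to take effect.
GLuint myProgram = glCreateProgram();
glAttachShader(myProgram, vertexShader);    // previously compiled vertex shader
glAttachShader(myProgram, fragmentShader);  // previously compiled fragment shader
glBindAttribLocation(myProgram, 0, "position");     // was: layout(location = 0) in ...
glBindFragDataLocation(myProgram, 0, "fragColor");  // was: layout(location = 0) out ...
glLinkProgram(myProgram);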
I tested this on tutorial16 (I thought this one covered the most areas and used it as a test to see if that's all that's needed).
There is a mistake in the source of tutorial16 (at the time of writing). The FBO is set to have no color output, but the fragment shader still outputs a color value, causing a segfault (trying to write to nothing usually does that). Simply changing the depth fragment shader to output nothing fixes it. (It doesn't produce a segfault on more tolerant drivers, but that's not something you should bargain on.)

How do I retrieve the ModelView matrix in GLSL 1.5?

I have been reading through the specification of GLSL 1.5, and saw that any reference to what used to be a variable holding the ModelView matrix (like gl_ModelViewMatrix) has been deprecated and is only available in some kind of compatibility mode (which happens not to be supported on my GPU).
I have seen a few examples first retrieving the ModelView matrix or creating one, then sending it back to the GPU as a uniform variable.
Now this all seems just backward to me; even for a simple Vertex Shader you will in many cases want to use some kind of transformation on the geometry.
So I am really wondering now; is there any way to get the current ModelView matrix from within a vertex shader, using GLSL 1.5?
OpenGL 3 core completely dropped the matrix stack, i.e. the built-in modelview, projection, texture and color matrices. You are now expected to implement the matrix math yourself and supply the matrices through uniforms of your own choosing.
There is no built-in matrix system/library in core OpenGL since version 3.+.
A lot of people had similarly negative opinions about that "huge change" in OpenGL.
You have to use your own set of functions to perform matrix calculations. See libraries like GLM, or the math code from Lighthouse3D.
All in all it was very useful to have matrix functions in OpenGL when learning. Now you have to look for other solutions...
On the other hand, it is not a problem for game engines or game frameworks, which usually have their own math libraries. So for them the "new" OpenGL is even easier to use.
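As a concrete illustration, here is a minimal sketch using GLM to build the matrices on the CPU and upload them as uniforms; "u_modelView" and "u_projection" are hypothetical names that must match the uniforms declared in your GLSL 1.5 vertex shader:
// Replaces gluLookAt/gluPerspective and gl_ModelViewMatrix with GLM + uniforms.
// (Plus your usual OpenGL/loader headers for glUseProgram etc.)
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void uploadMatrices(GLuint program, float aspect) {
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 1000.0f);
    glm::mat4 modelView  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                                       glm::vec3(0.0f, 0.0f, 0.0f),   // center
                                       glm::vec3(0.0f, 1.0f, 0.0f));  // up

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_modelView"),
                       1, GL_FALSE, glm::value_ptr(modelView));
    glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"),
                       1, GL_FALSE, glm::value_ptr(projection));
}

// In the GLSL 1.50 vertex shader the matrices then simply arrive as:
//   uniform mat4 u_modelView;
//   uniform mat4 u_projection;
//   ...
//   gl_Position = u_projection * u_modelView * vec4(position, 1.0);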

Orthographic Projection in Modern OpenGL

I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
Some people are saying that it's still OK to call glMatrixMode(GL_PROJECTION) and then glOrtho. This has always worked for me in the past, but I'm wondering if this has been deprecated in modern OpenGL.
If so, are vertex shaders the standard way to do an orthographic projection nowadays? Does GLSL provide a handy built-in function to set up an orthographic projection, or do I need to write that math myself in the vertex shader?
If so, are vertex shaders the standard way to do an orthographic projection nowadays?
Not quite. Vertex shaders perform the calculations, but the transformation matrices are usually fed into the shader through a uniform. A shader should only evaluate things that vary with each vertex. Implementing an "ortho" function that returns a projection matrix in GLSL would be counterproductive.
I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
The matrix stack of OpenGL before version 3 has nothing to do with immediate mode. Immediate mode was glBegin(…); for(…){ …; glVertex(…); } glEnd(). And up to OpenGL version 3 it was rather common to use the matrices specified through the matrix stack in a vertex shader.
With OpenGL 3 the aim was to do what had originally been planned for OpenGL 2: a unification of the API and removal of old cruft. One of the things removed was the matrix stack. Many shaders already used more than the standard matrices anyway (for example for skeletal animation), so matrices were already being passed as uniforms and the programs already contained all the matrix math code. Removing the matrix math from OpenGL 3 was just the next logical step.
In general, you do not compute matrices in shaders. You compute them on the CPU and upload them as uniforms to the shaders. So your vertex shader would neither know nor care if it is for an orthographic projection or a perspective one. It just transforms by whatever projection matrix it is given.
When you target an OpenGL version > 3.0, glOrtho has been removed from the official specs (but is still present in the compatibility profile), so you shouldn't use it anymore. Therefore, you will have to calculate the projection matrix you want to use yourself and supply it to the vertex shader (for OpenGL up to 3.0, glOrtho is perfectly OK ;).
And no, there is no "handy" function to build the projection matrix in GLSL, so you have to specify it yourself. This is, however, no problem, as there is a) plenty of example code on the web and b) the equations are right in the OpenGL spec, so you can simply take them from there if you want to.
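For illustration, a minimal sketch that builds the same column-major matrix glOrtho would have produced and uploads it as a uniform ("u_projection" is a hypothetical uniform name that must match your vertex shader):
// The glOrtho matrix built by hand, in the column-major layout OpenGL expects.
void setOrtho(GLuint program, float l, float r, float b, float t, float n, float f) {
    GLfloat m[16] = {
        2.0f / (r - l),      0.0f,                0.0f,               0.0f,
        0.0f,                2.0f / (t - b),      0.0f,               0.0f,
        0.0f,                0.0f,               -2.0f / (f - n),     0.0f,
        -(r + l) / (r - l), -(t + b) / (t - b),  -(f + n) / (f - n),  1.0f
    };
    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "u_projection"), 1, GL_FALSE, m);
}

// The vertex shader then just does: gl_Position = u_projection * vec4(position, 1.0);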