Orthographic Projection in Modern OpenGL

I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
Some people are saying that it's still OK to call glMatrixMode(GL_PROJECTION) and then glOrtho. This has always worked for me in the past, but I'm wondering if this has been deprecated in modern OpenGL.
If so, are vertex shaders the standard way to do an orthographic projection nowadays? Does GLSL provide a handy built-in function to set up an orthographic projection, or do I need to write that math myself in the vertex shader?

If so, are vertex shaders the standard way to do an orthographic projection nowadays?
Not quite. Vertex shaders perform the calculations, but the transformation matrices are usually fed into the shader through a uniform. A shader should only evaluate things that vary with each vertex. Implementing an "ortho" function that returns a projection matrix in GLSL is counterproductive.
I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
The matrix stack of OpenGL before version 3 has nothing to do with immediate mode. Immediate mode was glBegin(…); for(…){ ...; glVertex(…);} glEnd(). And up to OpenGL version 3 it was rather common to use the matrices specified through the matrix stack in a vertex shader.
With OpenGL-3 the aim was to do what had originally been planned for OpenGL-2: a unification of the API and removal of old cruft. One of the things removed was the matrix stack. Many shaders already used more than the standard matrices anyway (for example for skeletal animation), so matrices were already being passed as uniforms and the programs already contained the whole matrix math code. Removing the matrix math from OpenGL-3 was just the next logical step.

In general, you do not compute matrices in shaders. You compute them on the CPU and upload them as uniforms to the shaders. So your vertex shader would neither know nor care if it is for an orthographic projection or a perspective one. It just transforms by whatever projection matrix it is given.

When you target an OpenGL version above 3.0, glOrtho has been removed from the official specs (but is still present in the compatibility profile), so you shouldn't use it anymore. Therefore, you will have to calculate the projection you want to use yourself and apply it in the vertex shader
(for OpenGL up to 3.0 glOrtho is perfectly OK ;).
And no, there is no "handy" function to set up the projection matrix in GLSL, so you have to specify it yourself. This is, however, no problem, as there is a) plenty of example code on the web and b) the equations are right in the OpenGL spec, so you can simply take them from there if you want to.
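For completeness, here is a minimal sketch (my own, not part of the answers above) of how that can look in practice: the glOrtho equations from the spec evaluated on the CPU and uploaded as a uniform. The uniform name "Projection", the helper name setOrtho, and the loader header are assumptions.

#include <GL/glew.h>   // or whatever loader/header you already use

// Builds the same matrix glOrtho(l, r, b, t, n, f) would have produced and
// uploads it to the (assumed) "Projection" uniform of a linked shader program.
void setOrtho(GLuint program, float l, float r, float b, float t, float n, float f)
{
    // Column-major order, as glUniformMatrix4fv expects without transposing.
    const GLfloat m[16] = {
        2.0f / (r - l),      0.0f,                0.0f,                0.0f,
        0.0f,                2.0f / (t - b),      0.0f,                0.0f,
        0.0f,                0.0f,               -2.0f / (f - n),      0.0f,
        -(r + l) / (r - l), -(t + b) / (t - b),  -(f + n) / (f - n),   1.0f
    };
    GLint loc = glGetUniformLocation(program, "Projection"); // assumed uniform name
    glUseProgram(program);
    glUniformMatrix4fv(loc, 1, GL_FALSE, m);
}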

Related

Adjusting the distortion caused by a projector in OpenGL ES

I'm getting started with an OpenGL ES app (C++/SDL) running on a Raspberry Pi, that I'll want to output through a pocket projector.
I want to give the user the ability to correct the distortion caused by aiming the projector from a direction that is non-normal to the surface. The surface will also be smaller than the projected area. To do this, the user will be able to "move" the 4 corners of the projection window independently to match the surface.
I was planning to do this by solving a simple linear system of equations that transforms the original corners (in the current projection matrix coordinates) to the corners set by the user, and just push this resulting matrix on top of the GL_PROJECTION matrix stack.
However... I found out that I can't "read" the projection matrix in OpenGL ES:
float matrix[16];
glGetFloatv(GL_PROJECTION_MATRIX, matrix);
In particular, the GL_PROJECTION_MATRIX symbol doesn't exist... and I've even read that there's no such thing as a projection matrix in OpenGL ES (something I find hard to believe, but I'm totally new to it, so...).
Any idea of a workaround for reading this matrix, or maybe an alternative approach to do what I'm trying to do?
Thanks!
OpenGL ES 1.x has a projection matrix, and you should be able to get the current value with exactly the code you have in your question. See http://www.khronos.org/opengles/sdk/1.1/docs/man/glGet.xml for the documentation confirming this, or the header file at http://www.khronos.org/registry/gles/api/GLES/gl.h.
ES 2.0 and higher are a different story. ES 2.0 eliminates most of the fixed pipeline that was present in ES 1.x, and replaces it with shader based rendering.
Therefore, concepts like a built-in projection matrix don't exist anymore in ES 2.0 and are replaced with the much more flexible concept of shader programs written in GLSL. If you want to use a projection matrix in ES 2.0, you have a matrix variable in your vertex shader code, pass a value for the matrix into the shader program (using the glUniformMatrix4fv API call), and the GLSL code you provide for the vertex shader performs the multiplication of vectors with the projection matrix.
So in the case of ES 2.0, there is no need for a glGet*() call for the projection matrix. If you have one, you calculated it, and passed it into the shader program. Which means that you already have the matrix in your code. Well, you could get any matrix value back with the glGetUniformfv() API call, but it's much easier and cleaner to just keep it around in your code.
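As a rough sketch of that ES 2.0 workflow (the uniform name u_projection and the identity placeholder for the matrix values are assumptions, and `program` stands for your linked shader program object):

// ES 2.0 style: the application owns the projection matrix.
GLfloat proj[16] = { 1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1 }; // identity placeholder,
                                                              // fill in your own math
GLint loc = glGetUniformLocation(program, "u_projection");    // assumed uniform name
glUseProgram(program);
glUniformMatrix4fv(loc, 1, GL_FALSE, proj);   // transpose must be GL_FALSE in ES 2.0

// There is no glGet for a "current projection matrix" anymore, but if you really
// wanted it back from the GL you could read the uniform:
GLfloat readBack[16];
glGetUniformfv(program, loc, readBack);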

How does gl_ClipVertex work relative to gl_ClipDistance?

I was planning on using gl_ClipDistance in my vertex shader until I realized my project is OpenGL 2.1/GLSL 1.2 which means gl_ClipDistance is not available.
gl_ClipVertex is the predecessor to gl_ClipDistance but I can find very little information about how it works and, especially relative to gl_ClipDistance, how it is used.
My goal is to clip the intersection of clipping planes without the need for multiple rendering passes. In the above referenced question, it was suggested that I use gl_ClipDistance. Examples like this one are clear to me, but I don't see how to apply it to gl_ClipVertex.
How can I use gl_ClipVertex for the same purpose?
When in doubt, you should always examine the formal GLSL specification. In particular, since this pre-declared variable was introduced in GLSL 1.3 you know (or should assume) that there will be a discussion of the deprecation of the old technique and the implementation of the new one.
In fact, there is if you look here:
The OpenGL® Shading Language 1.3 - 7.1 Vertex Shader Special Variables - pp. 60-61
The variable gl_ClipVertex is deprecated. It is available only in the vertex language and provides a place for vertex shaders to write the coordinate to be used with the user clipping planes. The user must ensure the clip vertex and user clipping planes are defined in the same coordinate space. User clip planes work properly only under linear transform. It is undefined what happens under non-linear transform.
Further investigation of the actual types used for both should also give a major hint as to the difference between the two:
out float gl_ClipDistance[]; // may be written to
out vec4 gl_ClipVertex; // may be written to, deprecated
You will notice that gl_ClipVertex is a full-blown positional (4-component) vector, whereas gl_ClipDistance[] is simply an array of floating-point numbers. What you may not notice is that gl_ClipDistance[] is an input/output for geometry shaders and an input to fragment shaders, whereas gl_ClipVertex only exists in vertex shaders.
The clip vertex is the position used for clipping, whereas the clip distance is the distance from each clipping plane (which you are allowed to calculate yourself). The ability to specify the distance arbitrarily for each clipping plane allows for the non-linear transformations discussed above; prior to this, all you could do was set the location used to compute the distance from each clipping plane.
To put this all in perspective:
The calculation of clipping from the clip vertex used to occur as part of the fixed-function pipeline, between vertex transformation and fragment shading. When GLSL 1.3 was introduced, Shader Model 4.0 had already been formally defined by DX10 for a number of years, which exposed programmable primitive assembly and logically more flexible computation of clipping. We did not get geometry shaders until GLSL 1.5, but many other parts of Shader Model 4.0 were gradually introduced between 1.3 and 1.5.
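To make the practical difference concrete, here is a hedged client-side sketch (the plane values are placeholders, and the shader lines are shown only as comments): with gl_ClipVertex the fixed pipeline tests the position you write against planes set via glClipPlane, while with gl_ClipDistance you compute the signed distances yourself and only enable the corresponding GL_CLIP_DISTANCEi.

// OpenGL 2.1 / gl_ClipVertex path: the plane equation is given in object space
// and transformed by the current modelview matrix when glClipPlane is called.
GLdouble plane0[4] = { 0.0, 1.0, 0.0, 0.0 };   // keep the y >= 0 half-space (placeholder)
glClipPlane(GL_CLIP_PLANE0, plane0);
glEnable(GL_CLIP_PLANE0);
// The vertex shader then writes the eye-space position to be tested:
//   gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;

// OpenGL 3.0+ / gl_ClipDistance path: you compute the distance per plane yourself.
glEnable(GL_CLIP_DISTANCE0);
// In the vertex shader (plane passed in as a uniform, name assumed):
//   gl_ClipDistance[0] = dot(ModelView * position, clipPlane0);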

Matrix stacks in OpenGL deprecated?

I just read this:
"OpenGL provided support
for managing coordinate transformations and projections using the standard matrix stacks
(GL_MODELVIEW and GL_PROJECTION). In core OpenGL 4.0, however, all of the functionality
supporting the matrix stacks has been removed. Therefore, it is up to us to provide our own
support for the usual transformation and projection matrices, and then to pass them into our
shaders."
This is strange, so how do I set the modelview and projection matrices now? I should create them in the opengl app and then multiply the vertices in the vertex shader with the matrices?
This is strange
Nope. Fixed function was replaced by programmable pipeline that lets you design your transformations however you want.
I should create them in the opengl app and then multiply the vertices in the vertex shader with the matrices?
If you want to have something that would work just like the old OpenGL pair of matrix stacks, then you'd want to make your vertex shader look, for instance, like:
in vec4 vertexPosition;
// ...
uniform mat4 ModelView, Projection;
void main() {
    gl_Position = Projection * ModelView * vertexPosition;
    // ...
}
(You can optimise that a bit, of course)
And the corresponding client-side code (shown as C++ here) would be like:
std::stack<Matrix4x4> modelViewStack;
std::stack<Matrix4x4> projectionStack;

// Initialize them with identity matrices:
modelViewStack.push(Matrix4x4::Identity());
projectionStack.push(Matrix4x4::Identity());

// glPushMatrix:
stack.push(stack.top());
// `stack` is either one stack or the other;
// in old OpenGL you switched the affected stack with glMatrixMode

// glPopMatrix:
stack.pop();

// glTranslate and family:
stack.top().translate(1, 0, 0);

// And in order to pass the topmost ModelView matrix to a shader program
// (assuming the Matrix4x4 class exposes its 16 floats in column-major order):
GLint modelViewLocation = glGetUniformLocation(aLinkedProgramObject, "ModelView");
glUniformMatrix4fv(modelViewLocation, 1, GL_FALSE, modelViewStack.top().data());
I've assumed here that you have a Matrix4x4 class that supports operations like .translate(). A library like GLM can provide you with client-side implementations of matrices and vectors that behave like corresponding GLSL types, as well as implementations of functions like gluPerspective.
You can also keep using the OpenGL 1 functionality through the OpenGL compatibility profile, but that's not recommended (you won't be using OpenGL's full potential then).
OpenGL 3 (and 4)'s interface is more low level than OpenGL 1; If you consider the above to be too much code, then chances are you're better off with a rendering engine, like Irrlicht.
The matrix stack is part of the fixed-function pipeline, which is deprecated. You can still access the old functionality through the compatibility extension, but you should avoid doing this.
There are some good tutorials on matrices and cameras, but I prefer this one. Send your matrix to the shader and multiply the vertices with the matrix, as you said.
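If you go the GLM route mentioned above, a minimal sketch of building the two matrices on the CPU and sending them to the shader shown earlier could look like this (the uniform names match that shader; the transform values are placeholders, and `program` is your linked shader program object):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::perspective, glm::translate
#include <glm/gtc/type_ptr.hpp>          // glm::value_ptr

// Build the matrices on the CPU (placeholder values):
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 modelView  = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));

// Upload them to the shader program:
glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "Projection"), 1, GL_FALSE,
                   glm::value_ptr(projection));
glUniformMatrix4fv(glGetUniformLocation(program, "ModelView"), 1, GL_FALSE,
                   glm::value_ptr(modelView));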

How do I retrieve the ModelView matrix in GLSL 1.5?

I have been reading through the GLSL 1.5 specification, and saw that any reference to what used to be a built-in variable holding the ModelView matrix (like gl_ModelViewMatrix) has been deprecated and is only available in some kind of compatibility mode (which happens not to be supported on my GPU).
I have seen a few examples first retrieving the ModelView matrix or creating one, then sending it back to the GPU as a uniform variable.
Now this all seems just backward to me; even for a simple Vertex Shader you will in many cases want to use some kind of transformation on the geometry.
So I am really wondering now; is there any way to get the current ModelView matrix from within a vertex shader, using GLSL 1.5?
OpenGL-3 core completely dropped the matrix stack, i.e. the built-in modelview, projection, texture and color matrices. It is now expected that you implement the matrix math yourself and supply the matrices through uniforms of your own choosing.
There is no built-in matrix system/library in core OpenGL since version 3.x.
A lot of people had similar (bad) opinions about that "huge change" in OpenGL.
You have to use your own set of functions to perform matrix calculations. See libraries like GLM, or the one in Lighthouse3D.
All in all, it was very useful to have matrix functions in OpenGL when learning. Now you have to look for other solutions...
On the other hand, it is not a problem for game engines or game frameworks, which usually have their own math libraries. So for them the "new" OpenGL is even easier to use.

Using both programmable and fixed pipeline functionality in OpenGL

I have a vertex shader that transforms vertices to create a fisheye effect. Is it possible to use just the vertex shader and use the fixed pipeline for the fragment portion?
So basically I have an application that doesn't use shaders. I want to apply a fisheye effect using a vertex shader to transform all vertices, and then leave it to the application to take care of lighting, texturing, etc.?
If this is not possible, is it possible to get a fisheye effect by messing with the contents of the GL back buffer?
Thanks
If your code uses the fixed-function pipeline, then what you describe is a problem; that's why having your graphics code in shaders is good: they let you change anything easily. Remember to use them in your next project. :)
OK, but in this particular case I assume that you don't want to rewrite your whole rendering from scratch with shaders now...
You mentioned you want a "fisheye effect". It seems like you're lucky, because I believe you don't need shaders for that effect! If we're talking about the same effect, then you can achieve it just by replacing the GL_PROJECTION matrix of OpenGL's fixed-function pipeline with a perspective matrix that has a wider angle of vision.
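A hedged sketch of that fixed-function approach (it needs GLU, and the field of view, aspect ratio, and clip distances are placeholders you would tune):

// Widen the fixed-function projection; a very large vertical FOV approximates
// the fisheye look. Values are placeholders.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(120.0, 4.0 / 3.0, 0.1, 100.0);  // fovy (deg), aspect, near, far
glMatrixMode(GL_MODELVIEW);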
Yes, it's possible, although some cards (notably ATI) don't support using a vertex shader without a fragment shader.