GLSL + OpenGL: Moving away from the state machine

I started moving one of my projects away from the fixed-function pipeline. To try things out, I wrote a shader that simply takes the OpenGL matrices and transforms the vertex with them; I planned to start calculating my own matrices once I knew that worked. I thought this would be a simple task, but even this will not work.
I started out with this shader for normal fixed pipeline:
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then changed it to this:
uniform mat4 model_matrix;
uniform mat4 projection_matrix;

void main(void)
{
    gl_Position = model_matrix * projection_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
I then retrieve the OpenGL matrices and pass them to the shader with this code:
[material.shader bindShader];
GLfloat modelmat[16];
GLfloat projectionmat[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelmat);
glGetFloatv(GL_PROJECTION_MATRIX, projectionmat);
glUniformMatrix4fv([material.shader getUniformLocation:"model_matrix"], 1, GL_FALSE, modelmat);
glUniformMatrix4fv([material.shader getUniformLocation:"projection_matrix"], 1, GL_FALSE, projectionmat );
... Draw Stuff
For some reason this does not draw anything (I am 95% positive those matrices are correct before I pass them, by the way). Any ideas?

The problem was that my order of matrix multiplication was wrong; I was not aware that matrix multiplication is not commutative.
The correct order should be:
projection * modelview * vertex
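Applied to the shader above (keeping the original uniform names, where model_matrix actually holds the modelview matrix retrieved from GL_MODELVIEW_MATRIX), the corrected version looks like this:
uniform mat4 model_matrix;      // GL_MODELVIEW_MATRIX
uniform mat4 projection_matrix; // GL_PROJECTION_MATRIX

void main(void)
{
    // projection * modelview * vertex
    gl_Position = projection_matrix * model_matrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}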
Thanks to ltjax and doug65536

For the matrix math, try using an external library such as GLM. It also ships some basic examples of how to create the necessary matrices and do the projection * view * model transform.
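For instance, a minimal GLM sketch might look like the following (the program handle and the uniform name "mvp" are placeholders rather than anything from the question; recent GLM versions expect angles in radians):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Build the three matrices on the CPU and upload their product once per draw call.
glm::mat4 projection = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 view       = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // camera position (example values)
                                   glm::vec3(0.0f, 0.0f, 0.0f),  // look-at target
                                   glm::vec3(0.0f, 1.0f, 0.0f)); // up vector
glm::mat4 model      = glm::mat4(1.0f);                          // identity model matrix
glm::mat4 mvp        = projection * view * model;

glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));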

Use OpenGL 3.3's shading language. OpenGL 3.3 is roughly comparable to DirectX 10, hardware-wise.
Don't use the deprecated functionality. Almost everything in your first void main example is deprecated. You must explicitly declare your inputs and outputs if you expect to use the high-performance code path of the drivers. Deprecated functionality is also far more likely to be full of driver bugs.
Use the newer, more explicit style of declaring inputs and outputs and set them in your code. It really isn't bad. I thought this would be ugly but it actually was pretty easy (I wish I had just done it earlier).
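As a rough sketch of that explicit style (the attribute and uniform names here, such as in_Position and u_MVP, are illustrative rather than taken from the question):
#version 330 core

layout(location = 0) in vec3 in_Position;  // replaces gl_Vertex
layout(location = 1) in vec2 in_TexCoord;  // replaces gl_MultiTexCoord0
uniform mat4 u_MVP;                        // projection * view * model, built on the CPU
out vec2 v_TexCoord;                       // replaces gl_TexCoord[0]

void main()
{
    gl_Position = u_MVP * vec4(in_Position, 1.0);
    v_TexCoord  = in_TexCoord;
}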
FYI, the last time I looked at a lowest common denominator for OpenGL (2012), it was OpenGL 3.3. Practically all video cards from AMD and NVidia that have any gaming capability will have OpenGL 3.3. And they have for a while, so any code you write now for OpenGL 3.3 will work on a typical low-end or better GPU.

Related

In OpenGL, should model coordinates be calculated on my CPU or on the GPU with OpenGL calls?

I am currently trying to understand OpenGL. I have a good understanding of the math behind matrix transformations.
I want to write a small 3D application in which I can render many, many vertices. There would be different objects, each with its own set of vertices and world coordinates.
To get the actual coordinates of my vertices, I need to multiply them by the transformation matrix corresponding to the position/rotation of my object.
Here is my problem: I don't understand how to make the GPU do these transformations for all these vertices in OpenGL. From my understanding it would be much faster, but I don't see how to do it.
Or should I calculate each of those coordinates on the CPU and draw the transformed vertices with OpenGL?
There are a couple of different ways to solve this problem, depending on your circumstances.
The major draw model that people use looks like this (I haven't checked the exact syntax, but I'm pretty sure this is correct):
// Host code, draw loop
for (Drawable_Object const& object : objects) {
    glm::mat4 projection = /*...*/;
    glm::mat4 view = /*...*/;
    glm::mat4 model = glm::translate(glm::mat4(1), glm::vec3(object.position)); // position might be stored as vec3 or vec4
    glm::mat4 mvp = projection * view * model;
    glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
    object.draw(); // glDrawArrays, glDrawElements, etc...
}
// GLSL vertex shader
layout(location = 0) in vec3 vertex;
uniform mat4 mvp;
/*...*/
void main() {
    gl_Position = mvp * vec4(vertex, 1);
    /*...*/
}
In this model, the matrices are calculated on the host, and then applied on the GPU. This minimizes the amount of data that needs to be passed on the CPU<-->GPU bus (which, while not often a limitation in graphics, can be a consideration to keep in mind), and is generally the cleanest in terms of reading/parsing the code.
There's a variety of other techniques you can use (and, if you do instanced rendering, have to use), but for most applications, it's not necessary to deviate from this model in any significant way.
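For reference, a rough sketch of the instanced case mentioned above, where a per-instance model matrix is streamed as an instanced vertex attribute instead of a per-draw uniform (a bound VAO and the usual GL/GLM headers are assumed; the attribute locations 4..7 and the buildModelMatrices/indexCount names are illustrative):
// One model matrix per instance, uploaded to its own buffer.
std::vector<glm::mat4> modelMatrices = buildModelMatrices(); // hypothetical helper

GLuint instanceVbo;
glGenBuffers(1, &instanceVbo);
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, modelMatrices.size() * sizeof(glm::mat4),
             modelMatrices.data(), GL_DYNAMIC_DRAW);

// A mat4 attribute occupies four consecutive attribute locations (here 4..7).
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(4 + i);
    glVertexAttribPointer(4 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
                          (const void*)(sizeof(glm::vec4) * i));
    glVertexAttribDivisor(4 + i, 1); // advance this attribute once per instance
}

glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr,
                        (GLsizei)modelMatrices.size());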

Perspective-correct shader rendering

I want to put a texture on a rectangle which has been transformed by a non-affine transform (more specifically a perspective transform).
I have a very complex implementation based on OpenSceneGraph that loads my own vertex and fragment shaders.
The problem starts with the fact that the shaders were written quite a long time ago and use GLSL 120.
The OpenGL side is written in C++ and, in its simplest form, loads a texture and applies it to a quad. Until recently everything was working fine, because the quad was at most affine-transformed (rotation + translation), so the rendering of the texture on it was correct.
Now however we want to support quads of any shape, including something like this:
http://ibin.co/1dbsGPpzbkOX
As you can see in the picture above, the texture on it is incorrect in the middle (shown by the arrows).
After hours of research I found out that this is due to OpenGL splitting quads into triangles and rendering each triangle independently. This is of course incorrect if my quad is as shown, because the 4th point influences the texture stretch.
I then even found that this issue has a name: it's a "perspectively incorrect interpolation of texture coordinates", as explained here:
[1]
Looking for solutions to this, I came across this article which mentions the use of the "smooth" attribute in later GLSL versions: [2]
but this means updating my shaders to a newer version.
An alternative I found was to use OpenGL hints (glHint), as described here: [3]
but the disadvantage here is that it is only a hint, and there is no way to make sure it is used.
Now that I have shown my research, here is my question:
Updating my (complex) shaders and all the OpenGL code that goes with them to follow the new OpenGL pipeline paradigm would be too time-consuming. So I tried using GLSL "version 330 compatibility", changing the "varying" declarations to "smooth out" and "smooth in", and adding the GL_NICEST hint on the C++ side, but these changes did not solve my problem. Is this normal, because the compatibility mode somehow doesn't support correct perspective interpolation? Or is there something more that I need to do?
Or is there a better way for me to get this functionality without needing to refactor everything?
Here is my vertex shader:
#version 330 compatibility
smooth out vec4 texel;
void main(void) {
    gl_Position = ftransform();
    texel = gl_TextureMatrix[0] * gl_MultiTexCoord0;
}
and the fragment shader is much too complex, but it starts with
#version 330 compatibility
smooth in vec4 texel;
Using derhass's hint, I solved the problem in quite a different way.
It is true that the "smooth" keyword was not the problem; the issue was the projective texture mapping itself.
To solve it, I pass the perspective transform matrix directly from my C++ code to the fragment shader and calculate the "correct" texture coordinate there myself, without relying on GLSL's built-in interpolation of the texture coordinates.
To help anyone with the same problem, here is a cut-down version of my shaders:
.vert
#version 330 compatibility
smooth out vec4 inQuadPos; // used by the fragment shader to know where each pixel is to be drawn
void main(void) {
    gl_Position = ftransform();
    inQuadPos = gl_Vertex;
}
.frag
#version 330 compatibility
uniform mat3 transformMat; // the transformation between texture coordinates and final quad coordinates (passed in from C++)
uniform sampler2DRect source;
smooth in vec4 inQuadPos;
void main(void)
{
    // Calculate the correct texel coordinate using the transformation matrix
    vec3 real_texel = transformMat * vec3(inQuadPos.x / inQuadPos.w, inQuadPos.y / inQuadPos.w, 1);
    vec2 tex = vec2(real_texel.x / real_texel.z, real_texel.y / real_texel.z);
    gl_FragColor = texture2DRect(source, tex).rgba;
}
Note that the fragment shader code above has not been tested exactly like that so I cannot guarantee it will work out-of-the-box, but it should be mostly there.

Weird noise on rendered objects - OpenGL

To be more specific, here's the screenshot:
https://drive.google.com/file/d/0B_o-Ym0jhIqmY2JJNmhSeGpyanM/edit?usp=sharing
After debugging for about 3 days, I really have no idea. Those black lines and strange fractal black segments just drive me nuts. The geometry is rendered with forward rendering, blending layer by layer for each light I add.
My first guess was to install the newest graphics card driver (I'm using a GTX 660M), but that didn't solve it. Could VSync be an issue here? (I'm rendering in a window rather than in full-screen mode.) Or what is the most likely cause of this kind of trouble?
My code is like this:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(false);
glDepthFunc(GL_EQUAL);
/*loop here*/
/*draw for each light I had*/
glDepthFunc(GL_LESS);
glDepthMask(true);
glDisable(GL_BLEND);
One thing I've noticed looking at your lighting vertex shader code:
void main()
{
    gl_Position = projectionMatrix * vec4(position, 1.0);
    texCoord0 = texCoord;
    normal0 = (normalMatrix * vec4(normal, 0)).xyz;
    modelViewPos0 = (modelViewMatrix * vec4(position, 1)).xyz;
}
You are applying the projection matrix directly to the vertex position, which I'm assuming is in object space.
Try setting it to:
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
And we can work from there.
This answer is slightly speculative, but based on the symptoms and the code you posted, I suspect a precision problem. The rendering code you linked looks like this in shortened form:
useShader(FWD_AMBIENT);
part.render();
glDepthMask(GL_FALSE);
glDepthFunc(GL_EQUAL);
for (Light light : lights) {
    useShader(light.getShaderType());
    part.render();
}
So you're rendering the same thing multiple times, with different shaders, and rely on the resulting pixels to end up with the same depth value (depth comparison function is GL_EQUAL). This is not a safe assumption. Quote from the GLSL spec:
In this section, variance refers to the possibility of getting different values from the same expression in different programs. For example, say two vertex shaders, in different programs, each set gl_Position with the same expression in both shaders, and the input values into that expression are the same when both shaders run. It is possible, due to independent compilation of the two shaders, that the values assigned to gl_Position are not exactly the same when the two shaders run. In this example, this can cause problems with alignment of geometry in a multi-pass algorithm.
I copied the whole paragraph because the example they are using sounds like an exact description of what you are doing.
To prevent this from happening, you can declare your out variables as invariant. In each of your vertex shaders that you use for the multi-pass rendering, add this line:
invariant gl_Position;
This guarantees that the outputs are identical if all the inputs are the same. To meet this condition, you should also make sure that you pass exactly the same transformation matrix into both shaders, and of course use the same vertex coordinates.
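As a sketch, each vertex shader involved in the multi-pass rendering would then contain something like the following (the uniform and attribute names are borrowed from the shader shown earlier and should be treated as placeholders):
#version 330

invariant gl_Position; // request bit-exact gl_Position across the programs

layout(location = 0) in vec3 position;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

void main()
{
    // The same expression, with the same uniform values, in every pass.
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}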

Does GLSL really do unnecessary computations with uniform (not per-vertex) values?

For example, if I use a vertex shader like the following:
#version 400 core
uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;
in vec4 in_Position;
out vec4 pass_position_model;
void main(void) {
    gl_Position = projM * viewM * modelM * in_Position;
    pass_position_model = modelM * in_Position;
}
Will it do the projM * viewM * modelM matrix multiplication for each vertex, or is it smart enough to calculate it once and not recalculate it until the uniform variables change?
If it isn't "smart enough", is there a way to optimize this other than computing all uniform-dependent values on the CPU and sending them to the GPU as uniform variables?
Also, I'm interested in solutions that can later be ported to OpenGL ES 2.0 without problems.
So there is no general answer, as I understand it. I did some tests on my hardware, though. I have two GPUs available: an Intel HD Graphics 3000 and an NVIDIA GeForce GT 555M. I tested my program (written in Java/Scala) with the matrix multiplication in the vertex shader, then moved the multiplication to the CPU side and tested again.
(sphereN is a continuously rotating sphere with 2*N^2 quads, drawn with glDrawElements(GL_QUADS, ...), with one texture and without any lighting or other effects.)
matrix multiplication in vertex shader:
intel:
sphere400: 57.17552887364208 fps
sphere40: 128.1394156842645 fps
nvidia:
sphere400: 134.9527665317139 fps
sphere40: 242.0135527589545 fps
matrix multiplication on cpu:
intel:
sphere400: 57.37234652897303 fps
sphere40: 128.2051282051282 fps
nvidia:
sphere400: 142.28799089356858 fps
sphere40: 247.1576866040534 fps
The tests show that multiplying (uniform) matrices in the vertex shader is a bad idea, at least on this hardware. So, in general, one may not rely on the GLSL compiler to perform this optimization.
Will it do the projM * viewM * modelM matrix multiplication for each vertex, or is it smart enough to calculate it once and not recalculate it until the uniform variables change?
Ask the developer of the OpenGL implementation in question. The OpenGL specification has nothing to say about this, but driver and GLSL compiler writers may have implemented optimizations for this.
If it isn't "smart enough", then is there a way to optimize it other than computing all uniform-dependent values on CPU and send them as uniform variables to GPU?
No. You have to do the legwork yourself.
All OpenGL and GLSL optimizations are vendor-specific. It is quite hard to tell what the final output of the GLSL compiler is.
You can look here for vendor specific information:
http://renderingpipeline.com/graphics-literature/low-level-gpu-documentation/
For your code, you can always 'pack' the matrices into a new uniform, matModelViewProjection, multiply them in the application, and send the result to the vertex shader.
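As a rough sketch of that packing, reusing the names from the shader in the question (using GLM on the application side is an assumption, as are the projM/viewM/modelM host variables):
// Application side: multiply the uniform matrices once per draw call.
glm::mat4 matModelViewProjection = projM * viewM * modelM;
glUniformMatrix4fv(glGetUniformLocation(program, "matModelViewProjection"),
                   1, GL_FALSE, glm::value_ptr(matModelViewProjection));

// Vertex shader: the pre-multiplied uniform replaces the per-vertex product.
#version 400 core
uniform mat4 matModelViewProjection;
uniform mat4 modelM; // still needed on its own for pass_position_model
in vec4 in_Position;
out vec4 pass_position_model;
void main(void) {
    gl_Position = matModelViewProjection * in_Position;
    pass_position_model = modelM * in_Position;
}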
That depends entirely on the driver. OpenGL is a specification; if you pay for the rights to make an implementation, you get a sample implementation to work from, but that's it.
Aside from that, keep in mind the rules of matrix multiplication: it is not commutative, so projM * viewM * modelM * vertex is not the same as vertex * projM * viewM * modelM, and the order matters. It is associative, however, so the product projM * viewM * modelM could in principle be computed once and shared between vertices without changing the result; whether the shader compiler actually does that is up to the implementation, so you cannot count on it.

Matrix stacks in OpenGL deprecated?

I just read this:
"OpenGL provided support
for managing coordinate transformations and projections using the standard matrix stacks
(GL_MODELVIEW and GL_PROJECTION). In core OpenGL 4.0, however, all of the functionality
supporting the matrix stacks has been removed. Therefore, it is up to us to provide our own
support for the usual transformation and projection matrices, and then to pass them into our
shaders."
This is strange, so how do I set the modelview and projection matrices now? Should I create them in the OpenGL application and then multiply the vertices in the vertex shader by the matrices?
This is strange
Nope. The fixed-function pipeline was replaced by a programmable pipeline that lets you design your transformations however you want.
Should I create them in the OpenGL application and then multiply the vertices in the vertex shader by the matrices?
If you want to have something that would work just like the old OpenGL pair of matrix stacks, then you'd want to make your vertex shader look, for instance, like:
in vec4 vertexPosition;
// ...
uniform mat4 ModelView, Projection;

void main() {
    gl_Position = Projection * ModelView * vertexPosition;
    // ...
}
(You can optimise that a bit, of course)
And the corresponding client-side code (shown as C++ here) would look like:
std::stack<Matrix4x4> modelViewStack;
std::stack<Matrix4x4> projectionStack;

// Initialize them with identity matrices:
modelViewStack.push(Matrix4x4::Identity());
projectionStack.push(Matrix4x4::Identity());

// glPushMatrix:
stack.push(stack.top());
// `stack` is either one stack or the other;
// in old OpenGL you switched the affected stack with glMatrixMode

// glPopMatrix:
stack.pop();

// glTranslate and family:
stack.top().translate(1, 0, 0);

// And in order to pass the topmost ModelView matrix to a shader program
// (assuming Matrix4x4::data() returns a pointer to 16 floats in column-major order):
GLint modelViewLocation = glGetUniformLocation(aLinkedProgramObject, "ModelView");
glUniformMatrix4fv(modelViewLocation, 1, GL_FALSE, modelViewStack.top().data());
I've assumed here that you have a Matrix4x4 class that supports operations like .translate(). A library like GLM can provide you with client-side implementations of matrices and vectors that behave like corresponding GLSL types, as well as implementations of functions like gluPerspective.
You can also keep using the OpenGL 1 functionality through the OpenGL compatibility profile, but that's not recommended (you won't be using OpenGL's full potential then).
OpenGL 3 (and 4)'s interface is lower level than OpenGL 1's. If you consider the above to be too much code, then chances are you're better off with a rendering engine such as Irrlicht.
The matrix stack is part of the fixed-function pipeline, which is deprecated. You can still access the old functionality through the compatibility profile, but you should avoid doing so.
There are some good tutorials on matrices and cameras, but I prefer this one. Send your matrix to the shader and multiply, as you said, the vertices by the matrix.