OpenGL gluLookAt not working with shaders on - C++

When I use gluLookAt with the shaders on, it doesn't move the "camera"; when I turn the shaders off, it works correctly.
Is there something missing in my shaders? I can't figure out what.
void display(void)
{
    glClearColor(0.8, 0.8, 0.8, 0);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.5, 0, 1, 0.5, 0, 0, 0, 1, 0);

    glColor3f(0, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glDrawArrays(GL_LINES, 6, 6);
    glDrawArrays(GL_TRIANGLES, 12, 6);
    glDrawArrays(GL_LINES, 18, 4);
    glDrawArrays(GL_TRIANGLES, 22, 6);
    glColor3f(1, 0.7, 0);
    glDrawArrays(GL_TRIANGLES, 28, 6);

    glFlush();
}
Vertex Shader:
#version 450 core // also tried 420, 330 core, compatibility

in vec4 position;
out vec4 Color;

void main()
{
    gl_Position = position;
}
Fragment Shader:
#version 450 core // also tried 420, 330 core, compatibility

in vec4 Color;
layout(location = 0) out vec4 fColor;

void main()
{
    fColor = vec4(0, 0, 0, 0);
}
Move the "camera" to where i want it to be with shaders on

When you use a shader program, the vertex attributes are not magically transformed by the current fixed-function matrices; the shader program itself has to apply the transformations to the vertex coordinates.
You have two possibilities. Either you use a compatibility profile context and a lower GLSL version (e.g. 1.10).
Then you can use the built-in uniform gl_ModelViewProjectionMatrix (see the GLSL 1.10 specification), and the fixed-function matrix stack will work:
#version 110

attribute vec4 position;
// varying vec4 Color;

void main()
{
    // ...
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
But note that this has been deprecated for over a decade. See Fixed Function Pipeline and Legacy OpenGL.
Instead, I recommend using a library like OpenGL Mathematics (GLM) to calculate the view matrix with glm::lookAt() and passing it to the shader in a uniform variable:
#version 450 core

in vec4 position;
// out vec4 Color;

layout(location = 7) uniform mat4 view_matrix;

void main()
{
    gl_Position = view_matrix * position;
}
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// [...]
{
    // [...]
    glUseProgram(program);

    glm::mat4 view = glm::lookAt(
        glm::vec3(0.5f, 0.0f, 1.0f),   // eye
        glm::vec3(0.5f, 0.0f, 0.0f),   // center
        glm::vec3(0.0f, 1.0f, 0.0f));  // up
    glUniformMatrix4fv(7, 1, GL_FALSE, glm::value_ptr(view));

    // [...]
}
The uniform location is set explicitly by a layout qualifier (location = 7).
glUniformMatrix4fv sets the value of the uniform at the specified location in the default uniform block. This has to be done after the program is installed by glUseProgram.
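Putting the pieces together, a sketch of how the original display() could look with this approach; it assumes program holds the linked shader program object and that the vertex arrays are set up as before:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void display(void)
{
    glClearColor(0.8f, 0.8f, 0.8f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(program);

    // Same eye, center and up as the original gluLookAt call.
    glm::mat4 view = glm::lookAt(
        glm::vec3(0.5f, 0.0f, 1.0f),
        glm::vec3(0.5f, 0.0f, 0.0f),
        glm::vec3(0.0f, 1.0f, 0.0f));
    glUniformMatrix4fv(7, 1, GL_FALSE, glm::value_ptr(view));

    glDrawArrays(GL_TRIANGLES, 0, 6);
    // ... remaining draw calls as before ...
    glFlush();
}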

Related

Simple GLSL render chain doesn't draw reliably

I have a simple compositing system which is supposed to render different textures and a background texture into an FBO. It also renders some primitives.
Here's an example:
I'm rendering with a simple GLSL shader for the textures and another one for the primitives. I also wait for each draw to finish by calling glFinish after each glDrawArrays call.
So basically:
tex shader (background tex)
tex shader (tex 1)
primitive shader
tex shader (tex 2)
tex shader (tex 3)
When I only do this once, it works. But if I do another render pass directly after the first one finished, some textures just aren't rendered.
The primitive however is always rendered.
This doesn't happen always, but the more textures I draw, the more often this occurs.
Thus, I'm assuming that this is a timing problem.
I tried to troubleshoot for the last two days and I just can't find the reason for this.
I'm 100% sure that the textures are always valid (I downloaded them using glGetTexImage to verify).
Here are my texture shaders.
Vertex shader:
#version 150
uniform mat4 mvp;
in vec2 inPosition;
in vec2 inTexCoord;
out vec2 texCoordV;
void main(void)
{
texCoordV = inTexCoord;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment shader:
#version 150
uniform sampler2D tex;
in vec2 texCoordV;
out vec4 fragColor;
void main(void)
{
fragColor = texture(tex, texCoordV);
}
And here's my invocation:
NSRect drawDestRect = NSMakeRect(xPos, yPos, str.texSize.width, str.texSize.height);
NLA_VertexRect rect = NLA_VertexRectFromNSRect(drawDestRect);
int texID = 0;
NLA_VertexRect texCoords = NLA_VertexRectFromNSRect(NSMakeRect(0.0f, 0.0f, 1.0f, 1.0f));
NLA_VertexRectFlipY(&texCoords);
[self.texApplyShader.arguments[#"inTexCoord"] setValue:&texCoords forNumberOfVertices:4];
[self.texApplyShader.arguments[#"inPosition"] setValue:&rect forNumberOfVertices:4];
[self.texApplyShader.arguments[#"tex"] setValue:&texID forNumberOfVertices:1];
GetError();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, str.texName);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glFinish();
The setValue:forNumberOfVertices: method is an object-based wrapper around OpenGL's attribute-setup functions. It basically does this:
glBindVertexArray(_vertexArrayObject);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBufferObject);
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, numVectorElementsForType, GL_FLOAT, GL_FALSE, 0, 0);
Here are two screenshots of what it should look like (taken after first render pass) and what it actually looks like (taken after second render pass):
https://www.dropbox.com/s/0nmquelzo83ekf6/GLRendering_issues_correct.png?dl=0
https://www.dropbox.com/s/7aztfba5mbeq5sj/GLRendering_issues_wrong.png?dl=0
(in this example, the background texture is just black)
The primitive shader is as simple as it gets:
Vertex:
#version 150
uniform mat4 mvp;
uniform vec4 inColor;
in vec2 inPosition;
out vec4 colorV;
void main (void)
{
colorV = inColor;
gl_Position = mvp * vec4(inPosition, 0.0, 1.0);
}
Fragment:
#version 150
in vec4 colorV;
out vec4 fragColor;
void main(void)
{
fragColor = colorV;
}
Found the issue: I didn't realize that the FBO was already being drawn to the screen after the first render pass. This happens on a different thread, and access wasn't locked properly.
Apparently the context was switched while the compositing took place, which explains why it caused different issues at random, depending on when the second thread switched the context.
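For reference, a minimal sketch of the kind of locking that prevents this on macOS, assuming both threads use the same context and ctx is its underlying CGLContextObj (e.g. obtained from the NSOpenGLContext):
#include <OpenGL/OpenGL.h>

void composite(CGLContextObj ctx)
{
    CGLLockContext(ctx);       // keep the display thread from switching contexts mid-pass
    CGLSetCurrentContext(ctx); // make the context current on this thread
    // ... render the background texture, textures 1-3 and the primitive into the FBO ...
    CGLUnlockContext(ctx);
}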

Position vector in vertex shader conflicts with glTranslate

I'm trying to render an image and offset it by using glTranslate:
glPushMatrix();
glTranslatef(x, y, 0.0f);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glPopMatrix();
I'm also using a shader, and in the vertex shader I set the position of the vertices:
in vec2 position;
in vec3 color;

out vec3 Color;

void main() {
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
However, this always renders the square at the same position. I'm thinking this is because the position vector is always the same. How can I use this shader but still be able to move the image around with glTranslate? I suspect I have to change my shader input, but how?
glTranslatef changes the fixed-function modelview matrix, which gets passed into compatibility-profile vertex shaders as a built-in uniform. In GLSL versions before 150 there is a shortcut:
gl_Position = ftransform();
which applies the fixed-function transformation matrices to the input position exactly as if it had been passed in with glVertex*.
However, GLSL 150 core allows neither those built-in uniforms nor that function. Instead, create a matrix uniform and pass it in yourself:
#version 150 core

in vec2 position;
in vec3 color;

out vec3 Color;

uniform mat4 mvp;

void main() {
    Color = color;
    gl_Position = mvp * vec4(position, 0.0, 1.0);
}
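On the application side, a sketch of the matching setup with GLM (program is assumed to be the linked shader program; x and y are the offsets previously passed to glTranslatef):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Replace glTranslatef(x, y, 0.0f) with a translation matrix and upload it.
glUseProgram(program);
glm::mat4 mvp = glm::translate(glm::mat4(1.0f), glm::vec3(x, y, 0.0f));
GLint mvpLoc = glGetUniformLocation(program, "mvp");
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);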

OpenGL - GL_LINE_STRIP acts like GL_LINE_LOOP

I'm using OpenGL 3.3 with GLFW.
The problem is that GL_LINE_STRIP and GL_LINE_LOOP give the same result.
Here is the array of 2D coordinates:
GLfloat vertices[] =
{
    0, 0,
    1, 1,
    1, 2,
    2, 2,
    3, 1,
};
The attribute pointer:
// Position attribute 2D
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
And finally:
glDrawArrays(GL_LINE_STRIP, 0, sizeof(vertices)/4);
Vertex shader:
#version 330 core

layout (location = 0) in vec2 position;
layout (location = 1) in vec3 color;

out vec3 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(position, 0.0f, 1.0f);
    ourColor = color;
}
Fragment shader:
#version 330 core

in vec3 ourColor;
out vec3 color;

void main()
{
    color = vec3(ourColor);
}
The color attribute is disabled (the lines are black and visible).
Any idea?
You have only 5 pairs of floats, so 5 vertices. The total size of your array is 10 floats of 4 bytes each, so 40 bytes.
Your expression for the count, sizeof(vertices)/4, therefore gives 10, so glDrawArrays reads 5 extra vertices past the end of the array; whatever it picks up there (quite possibly zeros, i.e. a point back at the origin) is likely what makes the strip appear to close like a loop. sizeof(array) / (sizeof(array[0]) * dimensionality) would be the correct expression there.
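In code, a corrected draw call along those lines (2 floats per 2D vertex):
const GLsizei vertexCount = sizeof(vertices) / (sizeof(vertices[0]) * 2);
glDrawArrays(GL_LINE_STRIP, 0, vertexCount); // 5 vertices, as intended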

OpenGL color transform

I'm using OpenGL to draw a large array of 2D points with their colors. Each point (vertex) also has its alpha channel defined in the MX.c array. I'd like to be able to increase or decrease the alpha value of the whole array (of every vertex displayed). Is there a clever way to do it using OpenGL functions? Here's my drawing method:
void PointsMX::drawMX()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glColorPointer(4, GL_UNSIGNED_BYTE, 0, MX.c);
    glVertexPointer(2, GL_DOUBLE, 0, MX.p);

    glPushMatrix();
    glTranslated(position[X], position[Y], 0.0);
    glScaled(scale, scale, 1.0);
    glDrawArrays(GL_POINTS, 0, MX.size);
    glPopMatrix();

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
}
As datenwolf points out in his comments, you can do this pretty simply using a shader, but not using the fixed-function pipeline (which is what you're using if you never call glUseProgram()).
If you're not using lighting, reproducing the fixed-function shaders isn't very hard, and a little googling will help you get to that point.
The key here is that you want to change something that is normally a vertex attribute (the alpha channel of the color) to a configurable value for the entire drawing operation. In shader terms this means overriding the vertex attribute with a uniform. A uniform is simply a value you pass into an OpenGL program which then has the same value for every vertex or fragment processed (depending on whether you put it into the vertex or fragment shader).
Here's an example of a very basic vertex shader:
#version 330

uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);

layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;

out vec4 vColor;

void main() {
    gl_Position = Projection * ModelView * vec4(Position, 1);
    vColor = Color;
}
And a corresponding fragment shader
#version 330

in vec4 vColor;
out vec4 FragColor;

void main()
{
    FragColor = vColor;
}
In order to accomplish what you're trying to do, you'd want to change the vertex shader to add an additional uniform representing your alpha override:
#version 330

uniform mat4 Projection = mat4(1);
uniform mat4 ModelView = mat4(1);
uniform float AlphaOverride = -1.0;

layout(location = 0) in vec3 Position;
layout(location = 3) in vec4 Color;

out vec4 vColor;

void main() {
    gl_Position = Projection * ModelView * vec4(Position, 1);
    vColor = Color;
    if (AlphaOverride > 0.0) {
        vColor.a = AlphaOverride;
    }
}
If you fail to set the AlphaOverride uniform it will be -1, and will therefore be ignored by the vertex shader. But if you set it to a value between 0 and 1, then it will be applied to the alpha channel of your vertex.
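Setting the override from the application could then look something like this (a sketch; program is assumed to be the linked program object):
GLint alphaLoc = glGetUniformLocation(program, "AlphaOverride");
glUseProgram(program);
glUniform1f(alphaLoc, 0.5f);  // draw the whole array at 50% alpha
// ... glDrawArrays(GL_POINTS, 0, MX.size); ...
glUniform1f(alphaLoc, -1.0f); // back to the per-vertex alpha values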

GLSL geometry shader requires glProgramParameteriEXT regardless of layout

I created a basic quad drawing shader using a single point and a geometry shader.
I've read many posts and articles suggesting that I would not need to use glProgramParameteriEXT and could use the layout keyword so long as I was using a shader #version 150 or higher. Some suggested #version 400 or #version 420. My computer will not support #version 420 or higher.
If I use only layout and #version 150 or higher, nothing draws. If I remove layout (or even keep it; it does not seem to care because it will compile) and use glProgramParameteriEXT, it renders.
In code, this does nothing:
layout (points) in;
layout (triangle_strip, max_vertices=4) out;
This is the only code that works:
glProgramParameteriEXT( id, GL_GEOMETRY_INPUT_TYPE_EXT, GL_POINTS );
glProgramParameteriEXT( id, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP );
glProgramParameteriEXT( id, GL_GEOMETRY_VERTICES_OUT_EXT, 4 );
The alternative would be to write a parser that extracts these parameters from the shader source and sets them via glProgramParameteriEXT.
Source for quad rendering via geometry shader:
#version 330

#ifdef VERTEX_SHADER
in vec4 aTexture0;
in vec4 aColor;
in mat4 aMatrix;

out vec4 gvTex0;
out vec4 gvColor;
out mat4 gvMatrix;

void main()
{
    // Texture color
    gvTex0 = aTexture0;
    // Vertex color
    gvColor = aColor;
    // Matrix
    gvMatrix = aMatrix;
}
#endif

#ifdef GEOMETRY_SHADER
layout (points) in;
layout (triangle_strip, max_vertices=4) out;

in vec4 gvTex0[1];
in vec4 gvColor[1];
in mat4 gvMatrix[1];

out vec2 vTex0;
out vec4 vColor;

void main()
{
    vColor = gvColor[0];

    // Top right.
    gl_Position = gvMatrix[0] * vec4(1, 1, 0, 1);
    vTex0 = vec2(gvTex0[0].z, gvTex0[0].y);
    EmitVertex();

    // Top left.
    gl_Position = gvMatrix[0] * vec4(-1, 1, 0, 1);
    vTex0 = vec2(gvTex0[0].x, gvTex0[0].y);
    EmitVertex();

    // Bottom right.
    gl_Position = gvMatrix[0] * vec4(1, -1, 0, 1);
    vTex0 = vec2(gvTex0[0].z, gvTex0[0].w);
    EmitVertex();

    // Bottom left.
    gl_Position = gvMatrix[0] * vec4(-1, -1, 0, 1);
    vTex0 = vec2(gvTex0[0].x, gvTex0[0].w);
    EmitVertex();

    EndPrimitive();
}
#endif

#ifdef FRAGMENT_SHADER
uniform sampler2D tex0;

in vec2 vTex0;
in vec4 vColor;

out vec4 vFragColor;

void main()
{
    vFragColor = clamp(texture2D(tex0, vTex0) * vColor, 0.0, 1.0);
}
#endif
I am looking for suggestions as to why something like this might happen.