I have the following extremely simple vertex shader; when I render with it, I get a blank screen:
#version 110
layout(location = 1) attribute vec3 position;
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;
void main() {
    vec4 eye = modelview_matrix * vec4(position, 1.0);
    gl_Position = projection_matrix * eye;
}
However, changing
layout(location = 1) attribute vec3 position; to
layout(location = 0) attribute vec3 position;
allows me to render correctly. Here's my rendering function:
glUseProgram(program);
GLenum error;
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUniformMatrix4fv(
    modelview_uniform, 1, GL_FALSE, glm::value_ptr(modelview));
glUniformMatrix4fv(
    projection_uniform, 1, GL_FALSE, glm::value_ptr(projection));
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glVertexAttribPointer(
    position_attribute,
    3,
    GL_FLOAT,
    GL_FALSE,
    0,
    (void*)0);
glEnableVertexAttribArray(position_attribute);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer);
glDrawElements(
    GL_TRIANGLES,
    monkey_mesh.indices.size(),
    GL_UNSIGNED_INT,
    (void*)0);
glDisableVertexAttribArray(position_attribute);
glutSwapBuffers();
I obtain position_attribute through a call to glGetAttribLocation(program, "position"). It contains the correct value in both cases (1 in the first case, 0 in the second).
Is there something I'm doing wrong? I suspect I'm only able to render when location == 0 by sheer luck, because the data happens to be written there, but I can't figure out for the life of me what step I'm missing.
What you are seeing is not possible. GLSL version 1.10 does not support layout syntax at all, so your compiler should have rejected the shader. Either your compiler is failing to reject it and is therefore broken, or you are not loading the shader you think you are.
If it still doesn't work when using GLSL version 3.30 or higher (the first core version to support the layout(location = #) syntax for attribute indices), then what you're seeing is the result of a different bug. Namely, the compatibility profile effectively requires that, to render with vertex arrays, you use either attribute zero or gl_Vertex. The core profile has no such restriction. However, this restriction was in GL for a while, so some implementations will still enforce it, even on the core profile where it doesn't exist.
So just use attribute zero. Or switch to the core profile if you're not already using it (though I'd be surprised if an implementation actually gets the distinction right; generally it will either be too permissive in compatibility or too restrictive in core).
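For reference, a minimal sketch of the two conforming options (identifiers taken from the question, everything else assumed). Either stay with GLSL 1.10, drop the layout qualifier, and bind the location from the application before linking:
#version 110
attribute vec3 position;
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;
void main() {
    vec4 eye = modelview_matrix * vec4(position, 1.0);
    gl_Position = projection_matrix * eye;
}

// Application side, before glLinkProgram(program):
glBindAttribLocation(program, 0, "position"); // pin "position" to attribute zero
glLinkProgram(program);
Or move to GLSL 3.30, where the layout qualifier is actually legal:
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 modelview_matrix;
uniform mat4 projection_matrix;
void main() {
    gl_Position = projection_matrix * (modelview_matrix * vec4(position, 1.0));
}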
I am tearing my hair out at this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader, as if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations like you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
Vertex Shader to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
    Color = color;
    gl_Position = proj * view * vec4(position, 1.0);
    gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, but really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
    vec4 t = texture(tex, gl_PointCoord);
    outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
    outColor = Color;
}
The main program has a loop processing SFML window events and calling two functions, draw and update. Update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    sf::Texture::bind(&particleTexture);
    for (ParticleEmitter* emitter : emitters)
    {
        emitter->useShader();
        camera.applyMatrix(shaderProgram, window);
        emitter->draw();
    }
}
emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
camera.applyMatrix():
GLint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
    Particle* particle = particles[particleIndex];
    particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute,
                      3,
                      GL_FLOAT,
                      GL_FALSE,
                      7 * sizeof(float),
                      0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute,
                      4,
                      GL_FLOAT,
                      GL_FALSE,
                      7 * sizeof(float),
                      (void*)(3 * sizeof(float)));
GLint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo():
// position
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
// color (RGBA)
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code
As it turned out in the comments, the shaderProgram variable used for setting the camera-related uniforms did not depend on the actual program in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is totally implementation-specific; NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is actually present (and actively used) or not. If the other implementation assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
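A minimal sketch of the fix, in the terms of the code above (getProgram() is a hypothetical accessor for the program handle each emitter already stores): query the uniform locations from the program that is actually about to draw, and set them while that program is in use:
for (ParticleEmitter* emitter : emitters)
{
    GLuint prog = emitter->getProgram(); // hypothetical accessor
    glUseProgram(prog);
    // Uniform locations are per-program state, so query them from *this* program
    GLint viewUniform = glGetUniformLocation(prog, "view");
    GLint projUniform = glGetUniformLocation(prog, "proj");
    glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
    glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
    emitter->draw();
}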
I want to change the color of objects in my program with shaders, the fragment shader to be precise.
I have two shader programs: box and triangle (the names are arbitrary, just for easier reference). For both programs I use the same vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
vec3 pos;
void main()
{
    gl_Position = projection * view * model * vec4(position, 1.0f);
}
and then I am using my box_shader program:
box_shader.Use();
// Create camera transformation
view = camera.GetViewMatrix();
glm::mat4 projection;
projection = glm::perspective(camera.Zoom, (float)WIDTH/(float)HEIGHT, 0.1f, 100.0f);
// Get the uniform locations
GLint modelLoc = glGetUniformLocation(box_shader.Program, "model");
GLint viewLoc = glGetUniformLocation(box_shader.Program, "view");
GLint projLoc = glGetUniformLocation(box_shader.Program, "projection");
// Pass the matrices to the shader
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
Later in the program I'd like to use the triangle_shader program. What I was trying is:
triangle_shader.Use();
DrawTriangles();
So I don't call glGetUniformLocation again; instead I use the locations obtained earlier. Unfortunately, with this I don't see the triangles drawn by DrawTriangles(), although when I don't switch shader programs they appear.
For loading and using my shaders I use this class: learnopengl, so everything regarding the Use() function is there.
Can someone tell me what I should do to make use of different shaders?
EDIT:
What I've figured out is to add the glGetUniformLocation calls after packet_shader.Use(), so it now looks like this:
packet_shader.Use();
// Get the uniform locations
modelLoc = glGetUniformLocation(packet_shader.Program, "model");
viewLoc = glGetUniformLocation(packet_shader.Program, "view");
projLoc = glGetUniformLocation(packet_shader.Program, "projection");
// Pass the matrices to the shader
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
I am not sure if this is the best idea in terms of performance, though. Can someone tell me if it is OK?
Uniforms are per-program state in the GL. So there are two issues here:
The uniform locations might be completely different in each program. For each program, you'll have to query the uniform locations and store them separately for later use.
The uniform values have to be set for each program separately. Even if you consider two programs to be "sharing" the same uniform, that is not the case. All the glUniform*() setters affect only the program currently in use. Since each of your programs has its own model, view and projection uniforms, you have to set these for each program, every time they change. Currently, it looks like you never set those for the second program, so they are left at their initial defaults of all zeros.
If you want to share uniforms between different programs, you might consider looking into Uniform Buffer Objects (UBOs).
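For illustration, a minimal UBO sketch (the block name Matrices and binding point 0 are made up; this requires GL 3.1+, which a #version 330 context already implies). In each vertex shader, replace the separate view/projection uniforms with a named block (model would typically stay a per-object uniform):
layout (std140) uniform Matrices {
    mat4 view;
    mat4 projection;
};
Then create one buffer, attach it to a binding point, and tie each program's block to that binding; after that, a single buffer update is visible to every program:
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), NULL, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // binding point 0

// Once per program (box_shader, triangle_shader, ...):
GLuint blockIndex = glGetUniformBlockIndex(box_shader.Program, "Matrices");
glUniformBlockBinding(box_shader.Program, blockIndex, 0);

// Once per frame, instead of per-program glUniformMatrix4fv calls
// (std140 packs the two mat4s at offsets 0 and 64):
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(view));
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(projection));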
I have a file from which I read vertex positions/UVs/normals and also indices. I want to render them in the most efficient way possible; that's not the problem. I also want to use a vertex shader to displace the vertices, from bones and animations.
Of course I want to achieve this in the most efficient way possible. The texture is bound externally, so that's nothing I need to care about.
My first idea was to use glVertexAttrib* and glBindBuffer* etc., but I can't figure out a way to get my normals through; when I use glNormal and glTexCoord, they get processed by OpenGL automatically.
Like I said, I can ONLY use vertex shaders; fragment etc. is already "blocked".
What version of GLSL are you using?
This probably will not answer your question, but it shows how to properly set up generic vertex attributes without relying on non-standard attribute aliasing.
The general idea is the same for all versions (you use generic vertex attributes), but the syntax for declaring them in GLSL differs. Regardless of what version you are using, you need to tie the named attributes in your vertex shader to the same index as you pass to glVertexAttribPointer (...).
Pre-GLSL 1.30 (GL 2.0/2.1):
#version 110
attribute vec4 vtx_pos_NDC;
attribute vec2 vtx_tex;
attribute vec3 vtx_norm;
varying vec2 texcoords;
varying vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
GLSL 1.30 (GL 3.0):
#version 130
in vec4 vtx_pos_NDC;
in vec2 vtx_tex;
in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
For both of these shaders, you can set the attribute location for each of the inputs (before linking) like so:
glBindAttribLocation (<GLSL_PROGRAM>, 0, "vtx_pos_NDC");
glBindAttribLocation (<GLSL_PROGRAM>, 1, "vtx_tex");
glBindAttribLocation (<GLSL_PROGRAM>, 2, "vtx_norm");
If you are lucky enough to be using an implementation that supports GL_ARB_explicit_attrib_location (or GLSL 3.30), you can also do this:
GLSL 3.30 (GL 3.3):
#version 330
layout (location = 0) in vec4 vtx_pos_NDC;
layout (location = 1) in vec2 vtx_tex;
layout (location = 2) in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
Here's an example of how to set up a vertex buffer in GL 3 with packed vertex data: position, color, normal, and one set of texture coords:
typedef struct ccqv {
    GLfloat Pos[3];
    unsigned int Col;
    GLfloat Norm[3];
    GLfloat Tex2[4];
} Vertex;
...
glGenVertexArrays( 1, _vertexArray );
glBindVertexArray(_vertexArray);
glGenBuffers( 1, _vertexBuffer );
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, _arraySize*sizeof(Vertex), NULL, GL_DYNAMIC_DRAW );
glEnableVertexAttribArray(0); // vertex
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Pos));
glEnableVertexAttribArray(3); // primary color
glVertexAttribPointer( 3, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Col));
glEnableVertexAttribArray(2); // normal
glVertexAttribPointer( 2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Norm));
glEnableVertexAttribArray(8); // texcoord0
glVertexAttribPointer( 8, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Tex2));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
The first parameter to the Attrib functions is the index of the attribute. For simplicity, I'm using aliases defined for NVIDIA Cg: http://http.developer.nvidia.com/Cg/gp4gp.html
If you're using GLSL shaders, you'll need to use glBindAttribLocation() to define these indices, as explained in Andon M. Coleman's answer.
I have used https://github.com/akrinke/Font-Stash.git for some desktop applications. Now I want to use it on a Raspberry Pi, which uses GLES2. I looked into the code and the only code path that doesn't work on GLES is the flush_draw function:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts);
glTexCoordPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts+2);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I'm trying to port it to GLES like this:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
GLint position_index = get_attrib(stash->program, "position");
glEnableVertexAttribArray(position_index);
glVertexAttribPointer (position_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts);
GLint texture_coord_index = get_attrib(stash->program, "texCoord");
glEnableVertexAttribArray(texture_coord_index);
glVertexAttribPointer (texture_coord_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts + 2);
GLint texture_index = get_uniform(stash->program, "texture");
glUniform1i(texture_index, 0);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
with this vertex shader:
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
    gl_Position = position;
    texCoordVar = texCoord;
}
and this fragment shader:
precision mediump float; // set default precision for floats to medium
uniform sampler2D texture; // shader texture uniform
varying vec2 texCoordVar; // fragment texture coordinate varying
void main() {
    // sample the texture at the interpolated texture coordinate
    // and write it to gl_FragColor
    gl_FragColor = texture2D( texture, texCoordVar);
}
but I can't get anything; nothing shows on screen.
Can anybody show me what's wrong with my code?
You should set up the transformations in your vertex shader. The best way to port a fixed-function OpenGL app is to write a vertex and pixel shader that replicate the fixed pipeline, with the transformations set as uniforms, and to set those uniforms every time a transform changes.
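As a minimal sketch of that idea (u_mvp and the mvp array are assumptions, not part of Font-Stash): the vertex shader applies a matrix uniform where the fixed-function pipeline used its matrix stacks, and the app re-uploads it whenever the transform changes:
uniform mat4 u_mvp; // replaces the fixed-function modelview/projection stacks
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
    gl_Position = u_mvp * position;
    texCoordVar = texCoord;
}

// Application side, after glUseProgram(stash->program):
GLint mvp_index = glGetUniformLocation(stash->program, "u_mvp");
glUniformMatrix4fv(mvp_index, 1, GL_FALSE, mvp); // mvp: 16 floats, column-major
For text rendering this would typically be an orthographic projection mapping pixel coordinates to clip space.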
glEnable(GL_TEXTURE_2D) is not valid in GLES2, by the way. Also, you're not doing any manipulation of the position in your vertex shader, so unless the coordinates are guaranteed to sit within the frustum and you're just passing them through to the rasterizer, you are leaving it to luck whether or not they end up in the frustum. Are you sure you've accounted for everything the fixed-function pipe used to handle regarding transforms?
I have the following vertex shader:
uniform mat4 uMVP;
attribute vec4 aPosition;
attribute vec4 aNormal;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
    gl_Position = uMVP * aPosition;
    vPrimaryColor = vec4(1.0, 1.0, 1.0, 1.0);
    vTexCoord = aTexCoord;
}
And the following fragment shader:
uniform sampler2D sTex;
varying vec2 vTexCoord;
varying vec4 vPrimaryColor;
void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
Note that I have a vTexCoord and a vPrimaryColor, neither of which is used in the fragment shader. (The reason they are there is that they eventually will be.)
I also set uMVP to the identity matrix for now, and draw using the following code:
// Load the matrix
glUniformMatrix4fv(gvMVPHandle, 1, false, &mvPMatrix.matrix[0][0]);
// Draw the square to be textured
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glDrawArrays(GL_QUADS, 0, 4);
where the square is:
const GLfloat PlotWidget::gFullScreenQuad[] = { -1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f};
So when I run this program, I get a black screen, which is not what you would expect. However, when I change this line in the shader:
vTexCoord = aTexCoord;
To
vTexCoord = vec2(1.0, 1.0);
It works perfectly. So I would assume the problem is with that line, but I can't think of anything in OpenGL that would cause this. Also, I'm using Qt for this project, which means this class uses QGLWidget. I've never had this issue with OpenGL ES 2.0.
Any suggestions?
I'm sorry for the vague title, but I don't even know what class of problem this would be.
Are you checking glGetShaderInfoLog and glGetProgramInfoLog during your shader compilation? If not, then I would recommend that as the first port of call.
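For example, a minimal compile-status check (the same pattern works with glGetProgramiv and glGetProgramInfoLog after linking):
GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    char log[1024];
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    fprintf(stderr, "Shader compile failed: %s\n", log);
}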
The next thing to check would be the binding for the texture coordinates. Are the attributes being set up correctly? Is the data valid?
Finally, start stepping through your code with a liberal spraying of glGetError calls. It will almost certainly fail on glDrawArrays, which won't help you much, but that's usually when the desperation sets in for me!
OR
You could try gDEBugger. I use it mainly to look for bottlenecks and to make sure I'm releasing OpenGL resources properly, so I can't vouch for the debugger itself, but it's worth a shot.
Apparently you need to actually call glEnableVertexAttribArray for an attribute if it's getting passed into the fragment shader. I have no idea why, though. But changing the drawing code to this:
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvPositionHandle);
glVertexAttribPointer(gvTexCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, gFullScreenQuad);
glEnableVertexAttribArray(gvTexCoordHandle);
glDrawArrays(GL_QUADS, 0, 4);
made it work.
Same problem, different cause.
For some devices the automatic variable linking in glLinkProgram does not work as specified.
Make sure things are done in the following order:
1. glCreateProgram
2. glCreateShader && glCompileShader for both shaders
3. glBindAttribLocation for all attributes
4. glLinkProgram
Step 3 can be repeated later at any time to rebind variables to different attribute slots; however, the changes only become effective after another call to glLinkProgram.
Or in short: whenever you call glBindAttribLocation, make sure a glLinkProgram call comes after.
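A minimal sketch of that order (source loading and error checks omitted; vs_src and fs_src are placeholder strings):
GLuint prog = glCreateProgram();
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vs, 1, &vs_src, NULL);
glCompileShader(vs);
glShaderSource(fs, 1, &fs_src, NULL);
glCompileShader(fs);
glAttachShader(prog, vs);
glAttachShader(prog, fs);
// Bind attribute names to indices BEFORE linking...
glBindAttribLocation(prog, 0, "position");
glBindAttribLocation(prog, 1, "texCoord");
// ...because the bindings only take effect when the program is (re)linked
glLinkProgram(prog);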