Simple GL fragment shader behaves strangely on newer GPU - opengl

I am tearing my hair out at this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader, as if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations like you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
The vertex shader, included to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
Color = color;
gl_Position = proj * view * vec4(position, 1.0);
gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, though I really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
vec4 t = texture(tex, gl_PointCoord);
outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
outColor = Color;
}
The main program has a loop processing SFML window events and calling two functions, draw and update. Update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
sf::Texture::bind(&particleTexture);
for (ParticleEmitter* emitter : emitters)
{
emitter->useShader();
camera.applyMatrix(shaderProgram, window);
emitter->draw();
}
}
emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
camera.applyMatrix() :
GLuint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
Particle* particle = particles[particleIndex];
particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute,
3,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute,
4,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
(void*)(3 * sizeof(float)));
GLuint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo()
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code

As it turned out in the comments, the shaderProgram variable used for setting the camera-related uniforms did not depend on the program actually in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is entirely implementation specific. Nvidia, for example, tends to assign locations in alphabetical order of the uniform names, so the location of view would change depending on whether tex is present (and actively used). If another implementation assigns locations in the order they appear in the code, or by some other scheme, things might work by accident.
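The effect can be illustrated with a toy model (purely hypothetical, for illustration only) of a driver that hands out locations in alphabetical order of uniform names:

```cpp
#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Toy model of a driver that assigns uniform locations alphabetically.
// Real drivers are free to use any scheme; this only shows why "view"
// can land at a different location once "tex" enters the program.
std::map<std::string, int> assignLocations(std::vector<std::string> names) {
    std::sort(names.begin(), names.end());
    std::map<std::string, int> locations;
    for (int i = 0; i < static_cast<int>(names.size()); ++i)
        locations[names[i]] = i;
    return locations;
}
```

In the untextured program {pointSize, proj, view}, "view" would get location 2; in the textured program {pointSize, proj, tex, view} it moves to 3. Querying locations against the wrong program therefore writes the matrices into the wrong uniforms.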

Related

OpenGL: changing texture coordinates on the fly

I am currently trying to render the value of an integer using a bitmap (think scoreboard for invaders) but I'm having trouble changing texture coordinates while the game is running.
I link the shader and data like so:
GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE,
4 * sizeof(float), (void*)(2 * sizeof(float)));
And in my shaders I do the following:
Vertex Shader:
#version 150
uniform mat4 mvp;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}
Fragment Shader:
#version 150 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texture2D(tex, Texcoord);
}
How would I change this code/implement a function to be able to change the texcoord variable?
If you need to modify the texture coordinates frequently, but the other vertex attributes remain unchanged, it can be beneficial to keep the texture coordinates in a separate VBO. While it's generally preferable to use interleaved attributes, this is one case where that's not necessarily the most efficient solution.
So you would have two VBOs, one for the positions, and one for the texture coordinates. Your setup code will look something like this:
GLuint vboIds[2];
glGenBuffers(2, vboIds);
// Load positions.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// Load texture coordinates.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_DYNAMIC_DRAW);
Note the different last argument to glBufferData(), which is a usage hint. GL_STATIC_DRAW suggests to the OpenGL implementation that the data will not be modified on a regular basis, while GL_DYNAMIC_DRAW suggests that it will be modified frequently.
Then, anytime your texture data changes, you can modify it with glBufferSubData():
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(texCoords), texCoords);
Of course if only part of them change, you would only make the call for the part that changes.
You did not specify how exactly the texture coordinates change. If it's just something like a simple transformation, it would be much more efficient to apply that transformation in the shader code, instead of modifying the original texture coordinates.
For example, say you only wanted to shift the texture coordinates. You could have a uniform variable for the shift in your vertex shader, and then add it to the incoming texture coordinate attribute:
uniform vec2 TexCoordShift;
in vec2 TexCoord;
out vec2 FragTexCoord;
...
FragTexCoord = TexCoord + TexCoordShift;
and then in your C++ code:
// Once during setup, after linking program.
TexCoordShiftLoc = glGetUniformLocation(program, "TexCoordShift");
// To change transformation, after glUseProgram(), before glDraw*().
glUniform2f(TexCoordShiftLoc, xShift, yShift);
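For the scoreboard case the shift itself is simple arithmetic. Assuming (purely hypothetically) a bitmap with the ten digits laid out left to right in one row, each digit occupying a tenth of the texture's width, the per-digit shift would be:

```cpp
// Hypothetical atlas: digits 0-9 in a single row, each 1/10 of the
// texture wide. If the quad's base texcoords cover digit 0, shifting
// x by digit/10 selects any other digit without touching the VBO.
float digitShiftX(int digit) {
    return static_cast<float>(digit) * (1.0f / 10.0f);
}
```

Then per frame it's just glUniform2f(TexCoordShiftLoc, digitShiftX(digit), 0.0f), with no buffer updates at all.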
So I make no promises on the efficiency of this technique, but it's what I do and I'll be damned if text rendering is what slows down my program.
I have a dedicated class to store a mesh, which consists of a few vectors of data and a few GLuints holding handles to my uploaded data. I upload data to OpenGL like this:
glBindBuffer(GL_ARRAY_BUFFER, position);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.position.size(), &data.position[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.normal.size(), &data.normal[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2) * data.uv.size(), &data.uv[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * data.index.size(), &data.index[0], GL_DYNAMIC_DRAW);
Then, to draw it I go like this:
glEnableVertexAttribArray(positionBinding);
glBindBuffer(GL_ARRAY_BUFFER, position);
glVertexAttribPointer(positionBinding, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(normalBinding);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glVertexAttribPointer(normalBinding, 3, GL_FLOAT, GL_TRUE, 0, NULL);
glEnableVertexAttribArray(uvBinding);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glVertexAttribPointer(uvBinding, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
glDisableVertexAttribArray(positionBinding);
glDisableVertexAttribArray(normalBinding);
glDisableVertexAttribArray(uvBinding);
This setup is designed for a full fledged 3D engine, so you can definitely tone it down a little. Basically, I have 4 buffers, position, uv, normal, and index. You probably only need the first two, so just ignore the others.
Anyway, each time I want to draw some text, I upload my data using the first code chunk I showed, then draw it using the second chunk. It works pretty well, and it's very elegant. This is my code to draw text using it:
vbo(genTextMesh("some string")).draw(); //vbo is my mesh containing class
I hope this helps, if you have any questions feel free to ask.
I use a uniform vec2 to pass the texture offset into the vertex shader.
I am not sure how efficient that is, but if your texture coordinates are the same shape, and just moved around, then this is an option.
#version 150
uniform mat4 mvp;
uniform vec2 texOffset;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
Texcoord = texcoord + texOffset;
gl_Position = mvp * vec4(position, 0.0, 1.0) ;
}

How to use glDrawElementsInstanced + Texture Buffer Objects?

My use case is a bunch of textured quads that I want to draw. I'm trying to use the same indexed array of a quad to draw it a bunch of times, and to use gl_InstanceID and gl_VertexID in GLSL to retrieve texture and position info from a Texture Buffer.
The way I understand a Texture Buffer is that I create it and my actual buffer, link them, and then whatever I put in the actual buffer magically appears in my texture buffer?
So I have my vertex data and index data:
struct Vertex
{
GLfloat position[4];
GLfloat uv[2];
};
Vertex m_vertices[4] =
{
{{-1,1,0,1},{0,1}},
{{1,1,0,1},{1,1}},
{{-1,-1,0,1},{0,0}},
{{1,-1,0,1},{1,0}}
};
GLuint m_indices[6] = {0,2,1,1,2,3};
Then I create my VAO, VBO and IBO for the quads:
glGenBuffers(1,&m_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER,m_vertexBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(Vertex)*4,&m_vertices,GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,0);
glGenVertexArrays(1,&m_vao);
glBindVertexArray(m_vao);
glBindBuffer(GL_ARRAY_BUFFER,m_vertexBuffer);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0,4,GL_FLOAT, GL_FALSE, sizeof(struct Vertex),(const GLvoid*)offsetof(struct Vertex, position));
glEnableVertexAttribArray(1);
glVertexAttribPointer(0,2,GL_FLOAT, GL_FALSE, sizeof(struct Vertex),(const GLvoid*)offsetof(struct Vertex, uv));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindVertexArray(m_vao);
glGenBuffers(1, &m_ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(GLuint)*6,&m_indices,GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glBindVertexArray(0);
I'm pretty sure that I've done the above correctly. My quads have 4 vertices, with six indices to draw triangles.
Next I create my buffer and texture for the Texture Buffer:
glGenBuffers(1,&m_xywhuvBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, m_xywhuvBuffer);
glBufferData(GL_TEXTURE_BUFFER, sizeof(GLfloat)*8*100, nullptr, GL_DYNAMIC_DRAW); // 8 floats
glGenTextures(1,&m_xywhuvTexture);
glBindTexture(GL_TEXTURE_BUFFER, m_xywhuvTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RG32F, m_xywhuvBuffer); // they're in pairs of 2, in r,g of each texel.
glBindBuffer(GL_TEXTURE_BUFFER,0);
So, the idea is that every four texels belongs to one quad, or gl_InstanceID.
When I'm drawing my quads, they execute the below:
glActiveTexture(GL_TEXTURE0);
glBindBuffer(GL_TEXTURE_BUFFER, m_xywhuvBuffer);
std::vector<GLfloat> xywhuz =
{
-1.0f + position.x / screenDimensions.x * 2.0f,
1.0f - position.y / screenDimensions.y * 2.0f,
dimensions.x / screenDimensions.x,
dimensions.y / screenDimensions.y,
m_region.x,
m_region.y,
m_region.w,
m_region.h
};
glBufferSubData(GL_TEXTURE_BUFFER, sizeof(GLfloat)*8*m_rectsDrawnThisFrame, sizeof(GLfloat)*8, xywhuz.data());
m_rectsDrawnThisFrame++;
So I increase m_rectsDrawnThisFrame for each quad. You'll notice that the data I'm passing is 8 GLfloats, so the 4 texels belonging to each gl_InstanceID are the x,y position, the width and height, and then the same details for the region of the real texture that I'm going to texture my quads with.
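To make the layout concrete, the offsets work out like this (the helper names are just for illustration; they mirror the glBufferSubData call above and the texelFetch calls in the vertex shader):

```cpp
#include <cstddef>

// Each instance owns 8 floats = 4 RG32F texels:
// texel 0: position.xy, texel 1: dimensions.xy,
// texel 2: uv origin,   texel 3: uv extent.
constexpr std::size_t kFloatsPerInstance = 8;

// Byte offset passed to glBufferSubData for a given instance.
constexpr std::size_t instanceByteOffset(std::size_t instance) {
    return sizeof(float) * kFloatsPerInstance * instance;
}

// Texel index used by texelFetch in the vertex shader.
constexpr int instanceTexel(int instanceId, int slot) {
    return instanceId * 4 + slot;
}
```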
Finally once all of my rects have updated their section of the GL_TEXTURE_BUFFER I run this:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D,texture); // this is my actual texture that the quads take a section from to texture themselves.
glUniform1i(m_program->GetUniformLocation("tex"),1);
glUniform4f(m_program->GetUniformLocation("color"),1,0,1,1);
glBindVertexArray(m_vao);
glDrawElementsInstanced(GL_TRIANGLES,4,GL_UNSIGNED_INT,0,m_rectsDrawnThisFrame);
m_rectsDrawnThisFrame = 0;
I reset the draw count. I also noticed that I had to activate the texture in the second slot. Does the Texture Buffer Object use up one?
Finally my Vert shader
#version 410
layout (location = 0) in vec4 in_Position;
layout (location = 1) in vec2 in_UV;
out vec2 ex_texcoord;
uniform samplerBuffer buf;
void main(void)
{
vec2 position = texelFetch(buf,gl_InstanceID*4).xy;
vec2 dimensions = texelFetch(buf,gl_InstanceID*4+1).xy;
vec2 uvXY = texelFetch(buf,gl_InstanceID*4+2).xy;
vec2 uvWH = texelFetch(buf,gl_InstanceID*4+3).xy;
if(gl_VertexID == 0)
{
gl_Position = vec4(position.xy,0,1);
ex_texcoord = uvXY;
}
else if(gl_VertexID == 1)
{
gl_Position = vec4(position.x + dimensions.x, position.y,0,1);
ex_texcoord = vec2(uvXY.x + uvWH.x, uvXY.y);
}
else if(gl_VertexID == 2)
{
gl_Position = vec4(position.x, position.y + dimensions.y, 0,1);
ex_texcoord = vec2(uvXY.x, uvXY.y + uvWH.y);
}
else if(gl_VertexID == 3)
{
gl_Position = vec4(position.x + dimensions.x, position.y + dimensions.y, 0,1);
ex_texcoord = vec2(uvXY.x + uvWH.x, uvXY.y + uvWH.y );
}
}
And my Frag shader
#version 410
in vec2 ex_texcoord;
uniform sampler2D tex;
uniform vec4 color = vec4(1,1,1,1);
layout (location = 0) out vec4 FragColor;
void main()
{
FragColor = texture(tex,ex_texcoord) * color;
}
Now the problem: even though GLIntercept reports no errors, nothing gets drawn on the screen.
Any help?
There is one subtle issue in your code that would certainly stop it from working. At the end of the VAO/VBO setup code, you have this:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glBindVertexArray(0);
The GL_ELEMENT_ARRAY_BUFFER binding is part of the VAO state. If you unbind it while the VAO is bound, this VAO will not have an element array buffer binding. Which means that you don't have indices when you draw later.
You should simply remove this call:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
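The rule that bites here can be sketched as a tiny simulation (not real GL, just a model of the binding behavior): the GL_ELEMENT_ARRAY_BUFFER binding is recorded inside whichever VAO is bound at the time of the glBindBuffer call.

```cpp
#include <map>

// Minimal model of the rule: the GL_ELEMENT_ARRAY_BUFFER binding
// is VAO state, captured by whichever VAO is currently bound.
struct GLStateModel {
    unsigned boundVao = 0;                         // 0 = no VAO bound
    std::map<unsigned, unsigned> vaoElementBuffer; // VAO -> element buffer

    void bindVertexArray(unsigned vao) { boundVao = vao; }
    void bindElementBuffer(unsigned buf) {
        if (boundVao != 0) vaoElementBuffer[boundVao] = buf;
    }
};
```

Replaying the question's sequence (bind VAO, bind IBO, unbind IBO, unbind VAO) leaves the VAO with element buffer 0, which is why dropping the unbind fixes the draw.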
Also, since you have 6 indices, the second argument to the draw call should be 6:
glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0, m_rectsDrawnThisFrame);
Apart from that, it all looks reasonable to me. But there's quite a lot of code, so I can't guarantee that I would have spotted all problems.
I also noticed that I had to activate the texture in the second slot. Does the Texture Buffer Object use up one?
Yes. The buffer texture needs to be bound, and the value of the sampler variable set to the corresponding texture unit. Since you bind the buffer texture during setup, never unbind it, and the default value of the sampler variable is 0, you're probably fine there. But I think it would be cleaner to set it up more explicitly. Where you prepare for drawing:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, m_xywhuvTexture);
glUniform1i(m_program->GetUniformLocation("buf"), 0);

Port from OpenGL to GLES 2.0

I have used https://github.com/akrinke/Font-Stash.git for some desktop applications. Now I want to use it on a Raspberry Pi, which uses GLES2. I looked into the code and the only part that doesn't work on GLES is the flush_draw function:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts);
glTexCoordPointer(2, GL_FLOAT, VERT_STRIDE, texture->verts+2);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
I'm trying to port it to GLES like this:
glBindTexture(GL_TEXTURE_2D, texture->id);
glEnable(GL_TEXTURE_2D);
GLint position_index = get_attrib(stash->program, "position");
glEnableVertexAttribArray(position_index);
glVertexAttribPointer (position_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts);
GLint texture_coord_index = get_attrib(stash->program, "texCoord");
glEnableVertexAttribArray(texture_coord_index);
glVertexAttribPointer (texture_coord_index, 2, GL_FLOAT, GL_FALSE, VERT_STRIDE, texture->verts + 2);
GLint texture_index = get_uniform(stash->program, "texture");
glUniform1i(texture_index, 0);
glDrawArrays(GL_TRIANGLES, 0, texture->nverts);
glDisable(GL_TEXTURE_2D);
with this vertex shader:
attribute vec4 position;
attribute vec2 texCoord;
varying vec2 texCoordVar;
void main() {
gl_Position = position;
texCoordVar = texCoord;
}
and this fragment shader:
precision mediump float; // set default precision for floats to medium
uniform sampler2D texture; // shader texture uniform
varying vec2 texCoordVar; // fragment texture coordinate varying
void main() {
// sample the texture at the interpolated texture coordinate
// and write it to gl_FragColor
gl_FragColor = texture2D( texture, texCoordVar);
}
but I can't get anything, nothing on screen.
Can anybody show me what's wrong with my code?
You should set up transformations in your vertex shader. The best way to port a fixed-function OpenGL app is to write a vertex and fragment shader pair that replicates the fixed pipeline, with the transformations set as uniforms that you update every time a transform changes.
glEnable(GL_TEXTURE_2D) is not valid in GLES2, by the way. Also, you're not doing any manipulation of the position in your vertex shader, so unless the coordinates are guaranteed to sit within the frustum and you're just passing them through to the rasterizer, you are leaving it to luck whether they end up in the frustum. Are you sure you've accounted for everything the fixed-function pipe used to handle regarding transforms?
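For a 2D case like text, "replicating the fixed pipeline" can be as small as one orthographic matrix uploaded as a uniform and applied in the vertex shader. A minimal column-major sketch (a library such as glm provides glm::ortho for the same job):

```cpp
#include <array>
#include <utility>

// Column-major 4x4 orthographic matrix mapping pixel coordinates
// x in [0, w], y in [0, h] to clip space [-1, 1], with y flipped so
// that (0, 0) is the top-left pixel. z is simply negated, which is
// irrelevant for flat z = 0 geometry.
std::array<float, 16> ortho2D(float w, float h) {
    std::array<float, 16> m{};  // zero-initialized
    m[0]  =  2.0f / w;          // x scale
    m[5]  = -2.0f / h;          // y scale (flipped)
    m[10] = -1.0f;
    m[12] = -1.0f;              // x translation
    m[13] =  1.0f;              // y translation
    m[15] =  1.0f;
    return m;
}

// Apply the matrix to (x, y, 0, 1) and return clip-space (x, y).
std::pair<float, float> transform2D(const std::array<float, 16>& m,
                                    float x, float y) {
    return { m[0] * x + m[12], m[5] * y + m[13] };
}
```

Upload it with glUniformMatrix4fv and multiply by it in the vertex shader; then screen-pixel vertex coordinates are guaranteed to land in the frustum.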

OpenGL ES 2.0 Texture loading visual glitch

I have been successful in rendering primitives with a colour component via the shader and also translating them. However, upon attempting to load a texture and render it for the primitive via the shader, the primitives glitch, they should be squares:
As you can see, it successfully loads and applies the texture with the colour component to the single primitive in the scene.
If I then remove the color component, I again have primitives, but oddly, they are scaled when I change the UVs - this should not be the case; only the UVs should scale! (Their origin is also offset.)
My shader init code:
void renderer::initRendererGfx()
{
shader->compilerShaders();
shader->loadAttribute(@"Position");
shader->loadAttribute(@"SourceColor");
shader->loadAttribute(@"TexCoordIn");
}
Here is my object handler rendering function code:
void renderer::drawRender(glm::mat4 &view, glm::mat4 &projection)
{
//Loop through all objects of base type OBJECT
for(int i=0;i<SceneObjects.size();i++){
if(SceneObjects.size()>0){
shader->bind();//Bind the shader for the rendering of this object
SceneObjects[i]->mv = view * SceneObjects[i]->model;
shader->setUniform(@"modelViewMatrix", SceneObjects[i]->mv);//Calculate object model view
shader->setUniform(@"MVP", projection * SceneObjects[i]->mv);//apply projection transforms to object
glActiveTexture(GL_TEXTURE0); // unnecessary in practice
glBindTexture(GL_TEXTURE_2D, SceneObjects[i]->_texture);
shader->setUniform(@"Texture", 0);//Apply the uniform for this instance
SceneObjects[i]->draw();//Draw this object
shader->unbind();//Release the shader for the next object
}
}
}
Here is my sprite buffer initialisation and draw code:
void spriteObject::draw()
{
glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex), NULL);
glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex) , (GLvoid*) (sizeof(GL_FLOAT) * 3));
glVertexAttribPointer((GLuint)2, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex) , (GLvoid*)(sizeof(GL_FLOAT) * 7));
glDrawElements(GL_TRIANGLE_STRIP, sizeof(SpriteIndices)/sizeof(SpriteIndices[0]), GL_UNSIGNED_BYTE, 0);
}
void spriteObject::initBuffers()
{
glGenBuffers(1, &vertexBufferID);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
glBufferData(GL_ARRAY_BUFFER, sizeof(SpriteVertices), SpriteVertices, GL_STATIC_DRAW);
glGenBuffers(1, &indexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(SpriteIndices), SpriteIndices, GL_STATIC_DRAW);
}
Here is the vertex shader:
attribute vec3 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 MVP;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
void main(void) {
DestinationColor = SourceColor;
gl_Position = MVP * vec4(Position,1.0);
TexCoordOut = TexCoordIn;
}
And finally the fragment shader:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
void main(void) {
gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}
If you want to see any more specifics of certain elements, just ask.
Many thanks.
Are you sure your triangles have the same winding? The winding is the order in which the triangle's points are listed (either clockwise or counter-clockwise). The winding is used in face culling to determine whether a triangle is front-facing or back-facing.
You can easily check whether your triangles are wrongly wound by disabling face culling:
glDisable( GL_CULL_FACE );
More information here: http://db-in.com/blog/2011/02/all-about-opengl-es-2-x-part-23/#face_culling
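Winding can also be verified on the CPU. For 2D or screen-space triangles, the sign of the cross product of two edges gives the order (a sketch; by OpenGL's default convention, counter-clockwise is front-facing):

```cpp
// Signed twice-area of triangle (a, b, c) in 2D, via the z component
// of the cross product of edges (b - a) and (c - a).
// Positive -> counter-clockwise (GL's default front face),
// negative -> clockwise.
float signedArea2(float ax, float ay, float bx, float by,
                  float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

bool isCounterClockwise(float ax, float ay, float bx, float by,
                        float cx, float cy) {
    return signedArea2(ax, ay, bx, by, cx, cy) > 0.0f;
}
```

Running your vertex/index data through a check like this tells you which triangles would be culled before you ever touch the GPU.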

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen, however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
glClearColor(.25f, 0.35f, 0.15f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE, (const GLfloat*)resources.modelviewProjection.modelViewProjection);
glEnableVertexAttribArray(resources.attributes.vTexCoord);
glEnableVertexAttribArray(resources.attributes.vVertex);
//deal with vTexCoord first
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,resources.hiBuffer);
glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
glVertexAttribPointer(resources.attributes.vTexCoord,2,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*2,(void*)0);
//now the other one
glBindBuffer(GL_ARRAY_BUFFER,resources.hvBuffer);
glVertexAttribPointer(resources.attributes.vVertex,3,GL_FLOAT,GL_FALSE,sizeof(GLfloat)*3,(void*)0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
glUniform1i(resources.uniforms.colorMap, 0);
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
//clean up a bit
};
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;
void main(void) {
vVarryingTexCoord = vTexCoord;
gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
};
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;
void main(void) {
vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
vVaryingFragColor = vec4(1.0,1.0,1.0,1.0);
};
The vertex array buffer for the position coordinates makes a simple cube (with all coordinates at ±0.25), while the modelview projection is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything onscreen. Originally, I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture coordinate data) are the same length and in order. The code itself is derived from the Durian Software tutorial and the latest OpenGL Superbible. The rest of the code is here.
By this point, I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render onscreen?
You're looking pretty good so far.
The only thing that I see right now is that you've got DEPTH_TEST enabled, but you don't clear the depth buffer. Even if the buffer were initialized to a good value, you would be drawing empty scenes on every frame after the first, because the depth buffer is never cleared.
If that does not help, can you make sure you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the errors clean, but that would be my next step.
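A small helper along these lines makes the glGetError check systematic (a sketch; the function is passed in as a callable only so the loop can be shown and exercised without a live GL context). OpenGL can queue several error flags, so one call isn't always enough:

```cpp
#include <functional>
#include <vector>

// OpenGL keeps a set of pending error flags; each glGetError call
// pops one. Loop until GL_NO_ERROR (0) so no stale flag hides a
// newer failure.
std::vector<unsigned> drainErrors(const std::function<unsigned()>& getError) {
    std::vector<unsigned> errors;
    for (unsigned e = getError(); e != 0 /* GL_NO_ERROR */; e = getError())
        errors.push_back(e);
    return errors;
}
```

In the real program you would call drainErrors(glGetError) after the suspect section and log anything it returns.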