I am currently trying to render the value of an integer using a bitmap (think scoreboard for Invaders), but I'm having trouble changing texture coordinates while the game is running.
I link the shader and data like so:
GLint texAttrib = glGetAttribLocation(shaderProgram, "texcoord");
glEnableVertexAttribArray(texAttrib);
glVertexAttribPointer(texAttrib, 2, GL_FLOAT, GL_FALSE,
                      4 * sizeof(float), (void*)(2 * sizeof(float)));
And in my shaders I do the following:
Vertex Shader:
#version 150
uniform mat4 mvp;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
    Texcoord = texcoord;
    gl_Position = mvp * vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150 core
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
    outColor = texture(tex, Texcoord);
}
How would I change this code/implement a function to be able to change the texcoord variable?
If you need to modify the texture coordinates frequently, but the other vertex attributes remain unchanged, it can be beneficial to keep the texture coordinates in a separate VBO. While it's generally preferable to use interleaved attributes, this is one case where that's not necessarily the most efficient solution.
So you would have two VBOs, one for the positions, and one for the texture coordinates. Your setup code will look something like this:
GLuint vboIds[2];
glGenBuffers(2, vboIds);
// Load positions.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);
// Load texture coordinates.
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_DYNAMIC_DRAW);
Note the different last argument to glBufferData(), which is a usage hint. GL_STATIC_DRAW suggests to the OpenGL implementation that the data will not be modified on a regular basis, while GL_DYNAMIC_DRAW suggests that it will be modified frequently.
Then, any time your texture coordinates change, you can modify them with glBufferSubData():
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(texCoords), texCoords);
Of course, if only part of them changes, you would make the call only for the part that changes.
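For example, if each digit of the scoreboard is a quad with its own four texture coordinate pairs, a partial update of a single digit could look like the sketch below (digitIndex and newCoords are hypothetical names, not from your code):
// Each digit quad owns 4 vec2 texture coordinates = 8 floats.
// digitIndex and newCoords are placeholders; adapt them to your layout.
GLintptr offset = digitIndex * 8 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, vboIds[1]);
glBufferSubData(GL_ARRAY_BUFFER, offset, 8 * sizeof(float), newCoords);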
You did not specify how exactly the texture coordinates change. If it's just something like a simple transformation, it would be much more efficient to apply that transformation in the shader code, instead of modifying the original texture coordinates.
For example, say you only wanted to shift the texture coordinates. You could have a uniform variable for the shift in your vertex shader, and then add it to the incoming texture coordinate attribute:
uniform vec2 TexCoordShift;
in vec2 TexCoord;
out vec2 FragTexCoord;
...
FragTexCoord = TexCoord + TexCoordShift;
and then in your C++ code:
// Once during setup, after linking program.
TexCoordShiftLoc = glGetUniformLocation(program, "TexCoordShift");
// To change transformation, after glUseProgram(), before glDraw*().
glUniform2f(TexCoordShiftLoc, xShift, yShift);
So I make no promises about the efficiency of this technique, but it's what I do, and I'll be damned if text rendering is what slows down my program.
I have a dedicated class to store a mesh, which consists of a few vectors of data and a few GLuints that store handles to my uploaded data. I upload data to OpenGL like this:
glBindBuffer(GL_ARRAY_BUFFER, position);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.position.size(), &data.position[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec3) * data.normal.size(), &data.normal[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glBufferData(GL_ARRAY_BUFFER, sizeof(vec2) * data.uv.size(), &data.uv[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * data.index.size(), &data.index[0], GL_DYNAMIC_DRAW);
Then, to draw it I go like this:
glEnableVertexAttribArray(positionBinding);
glBindBuffer(GL_ARRAY_BUFFER, position);
glVertexAttribPointer(positionBinding, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(normalBinding);
glBindBuffer(GL_ARRAY_BUFFER, normal);
glVertexAttribPointer(normalBinding, 3, GL_FLOAT, GL_TRUE, 0, NULL);
glEnableVertexAttribArray(uvBinding);
glBindBuffer(GL_ARRAY_BUFFER, uv);
glVertexAttribPointer(uvBinding, 2, GL_FLOAT, GL_FALSE, 0, NULL);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, index);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, NULL);
glDisableVertexAttribArray(positionBinding);
glDisableVertexAttribArray(normalBinding);
glDisableVertexAttribArray(uvBinding);
This setup is designed for a full-fledged 3D engine, so you can definitely tone it down a little. Basically, I have 4 buffers: position, uv, normal, and index. You probably only need the first two, so just ignore the others.
Anyway, each time I want to draw some text, I upload my data using the first code chunk I showed, then draw it using the second chunk. It works pretty well, and it's very elegant. This is my code to draw text using it:
vbo(genTextMesh("some string")).draw(); // vbo is my mesh-containing class
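In case you're curious, the UV math inside something like genTextMesh is simple if your bitmap font is laid out in a grid. A minimal sketch (glyphsPerRow, glyphRows, and the assumption that the glyphs are stored in order starting at '0' are mine, not part of the original class):
// Sketch: compute the UV rectangle of one character in a grid atlas.
// Depending on your texture origin you may need to flip v.
void glyphUVs(char c, int glyphsPerRow, int glyphRows, float uv[8])
{
    int index = c - '0';              // assumes digits stored in order
    float w = 1.0f / glyphsPerRow;    // width of one glyph in UV space
    float h = 1.0f / glyphRows;       // height of one glyph in UV space
    float u = (index % glyphsPerRow) * w;
    float v = (index / glyphsPerRow) * h;
    float quad[8] = { u, v,  u + w, v,  u + w, v + h,  u, v + h };
    for (int i = 0; i < 8; ++i) uv[i] = quad[i];
}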
I hope this helps; if you have any questions, feel free to ask.
I use a uniform vec2 to pass the texture offset into the vertex shader.
I am not sure how efficient that is, but if your texture coordinates keep the same shape and are just moved around, then this is an option.
#version 150
uniform mat4 mvp;
uniform vec2 texOffset;
in vec2 position;
in vec2 texcoord;
out vec2 Texcoord;
void main() {
    Texcoord = texcoord + texOffset;
    gl_Position = mvp * vec4(position, 0.0, 1.0);
}
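For completeness, the C++ side is just the standard uniform calls; a sketch (texOffsetLoc, uOffset and vOffset are placeholder names):
// Once, after linking the program:
GLint texOffsetLoc = glGetUniformLocation(shaderProgram, "texOffset");
// Whenever the sprite changes, with the program bound:
glUniform2f(texOffsetLoc, uOffset, vOffset);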
I'm trying to get some basic shaders working in OpenGL, and I seem to have hit a roadblock at the very first step. I'm trying to enable some vertex attributes, but I'm getting weird results. I've brought up the draw call in RenderDoc, and only vertex attribute 0 is being enabled. Here is my VAO creation code, followed by my render loop. I'm probably overlooking something really obvious. Thanks!
std::vector<float> positions;
std::vector<float> normals;
std::vector<float> texCoords;
for (auto x : model->positions)
{
    positions.push_back(x.x);
    positions.push_back(x.y);
    positions.push_back(x.z);
}
for (auto x : model->normals)
{
    normals.push_back(x.x);
    normals.push_back(x.y);
    normals.push_back(x.z);
}
for (auto x : model->texCoords)
{
    texCoords.push_back(x.x);
    texCoords.push_back(x.y);
}
GLuint indicesVBO = 0;
GLuint texCoordsVBO = 0;
GLuint vertsVBO = 0;
GLuint normsVBO = 0;
glGenVertexArrays(1, &model->vao);
glBindVertexArray(model->vao);
glGenBuffers(1, &vertsVBO);
glBindBuffer(GL_ARRAY_BUFFER, vertsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * positions.size(), positions.data(), GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(0);
glGenBuffers(1, &normsVBO);
glBindBuffer(GL_ARRAY_BUFFER, normsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * normals.size(), normals.data(), GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(1);
glGenBuffers(1, &texCoordsVBO);
glBindBuffer(GL_ARRAY_BUFFER, texCoordsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * texCoords.size(), texCoords.data(), GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)0);
glEnableVertexAttribArray(2);
glGenBuffers(1, &indicesVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, model->indices.size() * sizeof(uint32_t), model->indices.data(), GL_STATIC_DRAW);
glBindVertexArray(0);
My render loop is this:
// I'm aware this isn't usually needed, but I'm just trying to make sure
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
for (GamePiece * x : gamePieces)
{
    glUseProgram(x->program->programID);
    glBindVertexArray(x->model->vao);
    glBindTexture(GL_TEXTURE_2D, x->texture->texID);
    glDrawElements(GL_TRIANGLES, x->model->indices.size(), GL_UNSIGNED_INT, (void*)0);
}
And my vertex shader:
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texCoord;
out vec2 outUV;
out vec3 outNormal;
void main()
{
    outUV = texCoord;
    outNormal = normal;
    gl_Position = vec4(position, 1.0f);
}
And my fragment shader:
#version 330
in vec2 inUV;
in vec3 normal;
out vec4 outFragcolor;
uniform sampler2D colourTexture;
void main()
{
    outFragcolor = texture(colourTexture, inUV);
}
See OpenGL 4.5 Core Profile Specification - 7.3.1 Program Interfaces, page 96:
[...] When a program is linked, the GL builds a list of active resources for each interface. [...] For example, variables might be considered inactive if they are declared but not used in executable code, [...] The set of active resources for any interface is implementation-dependent because it depends on various analysis and optimizations performed by the compiler and linker
This means that if the compiler and linker determine that an attribute variable is "not used" when the executable code is executed, then the attribute is inactive.
Inactive attributes are not active program resources and are thus not visible in RenderDoc.
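If you want to double-check this outside of RenderDoc, you can ask GL directly which attributes survived linking; a minimal sketch (program is your linked program handle):
// List the attributes that are still active after linking.
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i)
{
    GLchar name[64];
    GLint size;
    GLenum type;
    glGetActiveAttrib(program, i, sizeof(name), NULL, &size, &type, name);
    printf("active attribute %d: %s\n", i, name);
}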
Furthermore, the output variables of a shader stage are linked to the input variables of the next shader stage by name.
texCoord is not an active program resource because it is only assigned to the output variable outUV, and the fragment shader has no input variable named outUV.
Vertex shader:
out vec2 outUV;
out vec3 outNormal;
Fragment shader:
in vec2 inUV;
in vec3 normal;
See Program separation linkage:
Either use the same names for the outputs of the vertex shader and the inputs of the fragment shader, or use layout locations to link the interface variables:
Vertex shader:
layout(location = 0) out vec2 outUV;
layout(location = 1) out vec3 outNormal;
Fragment shader:
layout(location = 0) in vec2 inUV;
layout(location = 1) in vec3 normal;
I wanted to load a model into OpenGL with Assimp.
I am using Qt as my framework, which provides me with the functions needed.
Basically, my program crashes in Mesh::DrawMesh at gl.glDrawElements()...
I bet it has to do with one of my allocations, but I don't know.
I am sure the model is loaded correctly because I compared the loaded results ;)
So here I post the initialize function that basically sets up the buffers etc. for that mesh; I think that maybe something went wrong there:
void Mesh::initialize()
{
    vao->create();
    vbo->create();
    ebo->create();
    vao->bind();
    vbo->bind(); //glBindBuffer(GL_ARRAY_BUFFER, vbo); // Bind vbo
    vbo->setUsagePattern(QOpenGLBuffer::StaticDraw);
    vbo->allocate(vertices.data(), vertices.size() * sizeof(Vertex)); //glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW); // Allocates space in bytes
    ebo->bind(); //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    ebo->setUsagePattern(QOpenGLBuffer::StaticDraw);
    ebo->allocate(indices.data(), indices.size() * sizeof(GLuint)); //glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint), indices.data(), GL_STATIC_DRAW); // Allocates space in bytes
    program->enableAttributeArray(0); //glEnableVertexAttribArray(0); // On layout = 0
    program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(Vertex)); //glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0); // Tuple size 3, stride sizeof(Vertex); offset is 0 because we want to access Position
    program->enableAttributeArray(1); //glEnableVertexAttribArray(1);
    program->setAttributeBuffer(1, GL_FLOAT, offsetof(Vertex, Normal), 3, sizeof(Vertex)); //glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) offsetof(Vertex, Normal));
    program->enableAttributeArray(2);
    program->setAttributeBuffer(2, GL_FLOAT, offsetof(Vertex, TextCoords), 2, sizeof(Vertex)); //glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) offsetof(Vertex, TextCoords));
    vao->release();
}
This is my Draw method that gets called when the program is bound:
void Mesh::DrawMesh(QOpenGLFunctions_3_3_Core& gl)
{
    vao->bind();
    qDebug() << vertices.size();
    gl.glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);
    vao->release();
}
Vertex Shader:
#version 330 core
layout(location=0) in vec3 position;
layout(location=1) in vec3 normal;
layout(location=2) in vec2 textCoords;
uniform mat4 MVP;
out vec4 color;
void main(void)
{
    gl_Position = MVP * vec4(position, 1);
    color = vec4(0.5, 0.5, 0.5, 0.5);
}
The initializeGL function of my QOpenGLWidget:
void RenderingWindow::initializeGL()
{
    this->initializeOpenGLFunctions();
    program.create();
    program.addShaderFromSourceFile(QOpenGLShader::Vertex, ":/testvert.vert");
    program.addShaderFromSourceFile(QOpenGLShader::Fragment, ":/testfrag.frag");
    program.link();
    program.bind();
    this->glEnable(GL_DEPTH_TEST);
    Camera::instance().LookAt(0,0,10, 0,0,0, 0,1,0);
    model.SetProgram(&program);
    model.LoadModel(*this, "C:/Users/TestCube.fbx");
    program.release();
}
This is my Vertex struct:
struct Vertex
{
    QVector3D Position;
    QVector3D Normal;
    QVector2D TextCoords;
};
I am tearing my hair out over this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader, as if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations like you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
Vertex shader, to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
    Color = color;
    gl_Position = proj * view * vec4(position, 1.0);
    gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, but really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
    vec4 t = texture(tex, gl_PointCoord);
    outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
    outColor = Color;
}
The main program has a loop that processes SFML window events and calls two functions, draw and update. update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    sf::Texture::bind(&particleTexture);
    for (ParticleEmitter* emitter : emitters)
    {
        emitter->useShader();
        camera.applyMatrix(shaderProgram, window);
        emitter->draw();
    }
}
emitter->useShader() is just a call to glUseProgram() with a GLuint handle to a shader program that is stored in the emitter object on creation.
camera.applyMatrix():
GLuint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
    Particle* particle = particles[particleIndex];
    particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute,
                      3,
                      GL_FLOAT,
                      GL_FALSE,
                      7 * sizeof(float),
                      0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute,
                      4,
                      GL_FLOAT,
                      GL_FALSE,
                      7 * sizeof(float),
                      (void*)(3 * sizeof(float)));
GLint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo():
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code
As it turned out in the comments, the shaderProgram variable that was used for setting the camera-related uniforms did not depend on the actual program in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is totally implementation-specific; NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is actually present (and actively used) or not. If the other implementation just assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
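In other words, the fix is to query (and set) the uniform locations on the program that is actually bound for the current emitter. A sketch (the program member on the emitter is a hypothetical name, not from the original code):
// Use the emitter's own program, and query locations from that same program.
GLuint prog = emitter->program; // hypothetical member holding the handle
glUseProgram(prog);
glUniformMatrix4fv(glGetUniformLocation(prog, "proj"), 1, GL_FALSE,
                   glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(glGetUniformLocation(prog, "view"), 1, GL_FALSE,
                   glm::value_ptr(viewMatrix));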
I have a file from which I read vertex positions/UVs/normals and also indices. I want to render them in the most efficient way possible. That's not the problem. I also want to use a vertex shader to displace the vertices, for bones and animations.
Of course I want to achieve this in the most efficient way possible. The texture is bound externally; that's nothing I need to care about.
My first idea was to use glVertexAttribPointer and glBindBuffer etc., but I can't figure out a way to get my normals through; when I use glNormal and glTexCoord they get processed by OpenGL automatically.
Like I said, I can ONLY use vertex shaders; fragment etc. are already "blocked".
What version of GLSL are you using?
This probably will not answer your question, but it shows how to properly set up generic vertex attributes without relying on non-standard attribute aliasing.
The general idea is the same for all versions (you use generic vertex attributes), but the syntax for declaring them in GLSL differs. Regardless of which version you are using, you need to tie the named attributes in your vertex shader to the same index you pass to glVertexAttribPointer (...).
Pre-GLSL 1.30 (GL 2.0/2.1):
#version 110
attribute vec4 vtx_pos_NDC;
attribute vec2 vtx_tex;
attribute vec3 vtx_norm;
varying vec2 texcoords;
varying vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
GLSL 1.30 (GL 3.0):
#version 130
in vec4 vtx_pos_NDC;
in vec2 vtx_tex;
in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
For both of these shaders, you can set the attribute location for each of the inputs (before linking) like so:
glBindAttribLocation (<GLSL_PROGRAM>, 0, "vtx_pos_NDC");
glBindAttribLocation (<GLSL_PROGRAM>, 1, "vtx_tex");
glBindAttribLocation (<GLSL_PROGRAM>, 2, "vtx_norm");
If you are lucky enough to be using an implementation that supports GL_ARB_explicit_attrib_location (or GLSL 3.30), you can also do this:
GLSL 3.30 (GL 3.3)
#version 330
layout (location = 0) in vec4 vtx_pos_NDC;
layout (location = 1) in vec2 vtx_tex;
layout (location = 2) in vec3 vtx_norm;
out vec2 texcoords;
out vec3 normal;
void main (void)
{
    gl_Position = vtx_pos_NDC;
    texcoords = vtx_tex;
    normal = vtx_norm;
}
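With any of these versions, the client-side setup then points each generic attribute at the index you bound. A sketch assuming three separate tightly packed VBOs (posVBO, texVBO, and normVBO are placeholder names):
// Attribute indices 0/1/2 must match the locations bound above.
glBindBuffer(GL_ARRAY_BUFFER, posVBO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, NULL); // vtx_pos_NDC
glBindBuffer(GL_ARRAY_BUFFER, texVBO);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, NULL); // vtx_tex
glBindBuffer(GL_ARRAY_BUFFER, normVBO);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, NULL); // vtx_norm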
Here's an example of how to set up a vertex buffer in GL 3 with packed vertex data: position, color, normal, and one set of texture coords.
typedef struct ccqv {
    GLfloat Pos[3];
    unsigned int Col;
    GLfloat Norm[3];
    GLfloat Tex2[4];
} Vertex;
...
glGenVertexArrays( 1, _vertexArray );
glBindVertexArray(_vertexArray);
glGenBuffers( 1, _vertexBuffer );
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, _arraySize*sizeof(Vertex), NULL, GL_DYNAMIC_DRAW );
glEnableVertexAttribArray(0); // vertex
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Pos));
glEnableVertexAttribArray(3); // primary color
glVertexAttribPointer( 3, 4, GL_UNSIGNED_BYTE, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Col));
glEnableVertexAttribArray(2); // normal
glVertexAttribPointer( 2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Norm));
glEnableVertexAttribArray(8); // texcoord0
glVertexAttribPointer( 8, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex,Tex2));
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
The first parameter to the Attrib functions is the index of the attribute. For simplicity, I'm using aliases defined for NVIDIA Cg: http://http.developer.nvidia.com/Cg/gp4gp.html
If you're using GLSL shaders, you'll need to use glBindAttribLocation() to define these indices, as explained in Andon M. Coleman's answer.
I have been successful in rendering primitives with a colour component via the shader and also translating them. However, upon attempting to load a texture and render it for the primitive via the shader, the primitives glitch; they should be squares:
As you can see, it successfully loads and applies the texture with the colour component to the single primitive in the scene.
If I then remove the colour component, I again have primitives, but oddly, they are scaled when I change the UVs; this should not be the case, as only the texture mapping should change! (Also, their origin is offset.)
My shader init code:
void renderer::initRendererGfx()
{
    shader->compilerShaders();
    shader->loadAttribute(@"Position");
    shader->loadAttribute(@"SourceColor");
    shader->loadAttribute(@"TexCoordIn");
}
Here is my object handler rendering function code:
void renderer::drawRender(glm::mat4 &view, glm::mat4 &projection)
{
    // Loop through all objects of base type OBJECT
    for (int i = 0; i < SceneObjects.size(); i++) {
        if (SceneObjects.size() > 0) {
            shader->bind(); // Bind the shader for the rendering of this object
            SceneObjects[i]->mv = view * SceneObjects[i]->model;
            shader->setUniform(@"modelViewMatrix", SceneObjects[i]->mv); // Calculate object model-view
            shader->setUniform(@"MVP", projection * SceneObjects[i]->mv); // Apply projection transforms to object
            glActiveTexture(GL_TEXTURE0); // unnecessary in practice
            glBindTexture(GL_TEXTURE_2D, SceneObjects[i]->_texture);
            shader->setUniform(@"Texture", 0); // Apply the uniform for this instance
            SceneObjects[i]->draw(); // Draw this object
            shader->unbind(); // Release the shader for the next object
        }
    }
}
Here is my sprite buffer initialisation and draw code:
void spriteObject::draw()
{
    glVertexAttribPointer((GLuint)0, 3, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex), NULL);
    glVertexAttribPointer((GLuint)1, 4, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex), (GLvoid*)(sizeof(GLfloat) * 3));
    glVertexAttribPointer((GLuint)2, 2, GL_FLOAT, GL_FALSE, sizeof(SpriteVertex), (GLvoid*)(sizeof(GLfloat) * 7));
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(SpriteIndices)/sizeof(SpriteIndices[0]), GL_UNSIGNED_BYTE, 0);
}
void spriteObject::initBuffers()
{
    glGenBuffers(1, &vertexBufferID);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
    glBufferData(GL_ARRAY_BUFFER, sizeof(SpriteVertices), SpriteVertices, GL_STATIC_DRAW);
    glGenBuffers(1, &indexBufferID);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(SpriteIndices), SpriteIndices, GL_STATIC_DRAW);
}
Here is the vertex shader:
attribute vec3 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 MVP;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
void main(void) {
    DestinationColor = SourceColor;
    gl_Position = MVP * vec4(Position, 1.0);
    TexCoordOut = TexCoordIn;
}
And finally the fragment shader:
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
void main(void) {
    gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}
If you want to see any more specifics of certain elements, just ask.
Many thanks.
Are you sure your triangles have the same winding? The winding is the order in which the triangle's points are listed (either clockwise or counter-clockwise). It is used in face culling to determine whether a triangle is front-facing or back-facing.
You can easily check whether your triangles are wrongly wound by disabling face culling:
glDisable( GL_CULL_FACE );
More information here: http://db-in.com/blog/2011/02/all-about-opengl-es-2-x-part-23/#face_culling
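If disabling culling makes the quads appear, fix the index order so both triangles of each quad are wound the same way. With GL_TRIANGLES that could look like the sketch below (counter-clockwise, GL's default front face); note that with GL_TRIANGLE_STRIP, as in your draw call, GL automatically flips the expected winding of every second triangle, so only the strip's first triangle matters:
// Both triangles wound counter-clockwise (GL's default front face).
// Vertex order assumed: 0 = bottom-left, 1 = bottom-right,
// 2 = top-left, 3 = top-right.
GLubyte quadIndices[] = {
    0, 1, 2, // bottom-left, bottom-right, top-left
    2, 1, 3  // top-left, bottom-right, top-right
};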