OpenGL can only bind to GL_TEXTURE0 - C++

I'm trying to bind multiple textures to my shader, but when I use any enum other than GL_TEXTURE0, the shader shows unpredictable behaviour (e.g. all in/out variables read as zero).
This is how I bind the textures (in other projects this works perfectly):
glActiveTexture(GL_TEXTURE0);
sceneTex->bind();
glActiveTexture(GL_TEXTURE1);
depthTex->bind();
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
VBO->bind(GL_ARRAY_BUFFER);
unsigned int stride = 2 * sizeof(float) + 2 * sizeof(float);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, stride, (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride, (void*)(2 * sizeof(float)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
VBO->unbind(GL_ARRAY_BUFFER);
glActiveTexture(GL_TEXTURE0);
sceneTex->unbind();
glActiveTexture(GL_TEXTURE1);
depthTex->unbind();
When I remove the glActiveTexture(GL_TEXTURE1) call, everything works fine. When I remove glActiveTexture(GL_TEXTURE0) instead, I still get the problem. The same problem occurs if I try to use GL_TEXTURE2 or any other enum.
This is my fragment shader:
#version 440
out vec4 fragColor;
layout(binding = 0) uniform sampler2D colorMap;
layout(binding = 1) uniform sampler2D depthMap;
in vec2 texUV;
void main()
{
fragColor = texture(colorMap, texUV) / 2.0 + texture(depthMap, texUV) / 2.0;
}
When I check the states with glGetIntegerv everything seems fine. But it isn't.

I finally found the error, and I have to admit that I made a real beginner's mistake. Hard to believe that no one spotted it.
I thought that one call to glEnable(GL_TEXTURE_2D) would be enough for all texture slots, but as I found out, you have to call glEnable(GL_TEXTURE_2D) for every active texture unit you use. So the correct code looks like this:
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
sceneTex->bind();
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
depthTex->bind();
// draw some stuff
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_2D);
depthTex->unbind();
glActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);
sceneTex->unbind();

Related

Repeat UV for multiple cubes in one vertex buffer object (VBO)?

I'm making a little voxel engine using a chunk system (like in Minecraft). I decided to use one VBO per chunk, so each VBO contains multiple cubes that will use different textures.
I currently have the UVs for a single cube, and I would like to apply them to all cubes in a VBO, so that the texture wraps each cube the same way it would if the cubes were in separate VBOs.
Here is what I'm actually getting (screenshot omitted).
How to tell OpenGL to do the same thing as the first cube on all cubes?
EDIT:
here are my shaders:
vertex shader
#version 400
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
fragment shader
#version 400
in vec2 UV;
out vec3 color;
uniform sampler2D textureSampler;
void main(){
color = texture(textureSampler, UV).rgb;
}
my glfw loop:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID);
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &chunkHandler.player.mvp[0][0]);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, grass);
glUniform1i(textureID, 0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, chunkHandler.loaded_chunks[0]->vboID);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, tboID);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, chunkHandler.loaded_chunks[0]->nbVertices);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glfwSwapBuffers(window);
glfwPollEvents();
tboID: "tbo" stands for texture buffer object.
This is how I create the TBO:
glGenBuffers(1, &tboID);
glBindBuffer(GL_ARRAY_BUFFER, tboID);
glBufferData(GL_ARRAY_BUFFER, 36 * 2 * sizeof(float), uvcube, GL_STATIC_DRAW);
While not a complete answer, I can help with debugging.
It seems to me that the texture coordinates are wrong (you don't show the code that fills them).
In your fragment shader, I would output the U and V coordinates as colors:
#version 400
in vec2 UV;
out vec3 color;
uniform sampler2D textureSampler;
void main(){
color = vec3(UV.x, UV.y, 0.0); // note: GLSL has no .u/.v swizzle; use .xy (or .st)
}
If the coordinates are correct, you should see a gradient on each cube face (each cube vertex is colored by its UV, so a (0,0) vertex is black, a (0,1) vertex is green, and so on).
If that's not the case, fix the texture coordinates until they display correctly.
This also looks suspicious to me: glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

Rendering Textures with Modern OpenGL

So I'm currently making a game using SDL2 with OpenGL (GLEW), using SOIL to load images, and I'm using modern OpenGL techniques with vertex array objects, shaders, and so on. Right now I'm trying to render just a texture to the window, but I can't seem to do it. I've looked up many tutorials and solutions, but I can't understand how I'm supposed to do it properly. I'm unsure whether the problem is the shader or my code itself. I'll post all the necessary code below; if anything more is needed, I'd be happy to supply it. Any answers and solutions are welcome. For context, I have a class I use to store VAO data for convenience. Code:
Here's the code to load the texture:
void PXSprite::loadSprite() {
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
int imagew, imageh;
//The path is a class variable, and I haven't received any errors from this function, so I can only assume it's getting the texture file correctly.
unsigned char* image = SOIL_load_image(path.c_str(), &imagew, &imageh, 0, SOIL_LOAD_RGB);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imagew, imageh, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
glBindTexture(GL_TEXTURE_2D, 0);
//Don't worry about this code. It's just to keep the buffer object data.
//It works properly when rendering polygons.
spriteVAO.clear();
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addTextureCoordinate(0, 0);
spriteVAO.addTextureCoordinate(1, 0);
spriteVAO.addTextureCoordinate(1, 1);
spriteVAO.addTextureCoordinate(0, 1);
glGenVertexArrays(1, &spriteVAO.vaoID);
glGenBuffers(1, &spriteVAO.posVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*12, nullptr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &spriteVAO.colVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.colVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 16, &spriteVAO.colors[0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &spriteVAO.texVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.texVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 8, &spriteVAO.texCoords[0], GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindTexture(GL_TEXTURE_2D, 0);
}
And Here's my code for rendering the texture:
void PXSprite::render(int x, int y) {
spriteVAO.clear(PXVertexArrayObject::positionAttributeIndex);
spriteVAO.addPosition(x, y);
spriteVAO.addPosition(x+width, y);
spriteVAO.addPosition(x, y+height);
spriteVAO.addPosition(x+width, y+height);
glBindTexture(GL_TEXTURE_2D, textureID);
glBindVertexArray(spriteVAO.vaoID);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat)*12, &spriteVAO.positions[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindTexture(GL_TEXTURE_2D, 0);
}
Here's my vertex shader:
#version 330 core
in vec3 in_Position;
in vec4 in_Color;
in vec2 in_TexCoord;
out vec4 outPosition;
out vec4 outColor;
out vec2 outTexCoord;
void main() {
gl_Position = vec4(in_Position.x, in_Position.y*-1.0, in_Position.z, 1.0);
outTexCoord = in_TexCoord;
outColor = in_Color;
}
Here's my fragment shader:
#version 330 core
in vec2 outTexCoord;
in vec4 outColor;
out vec4 glFragColor;
out vec4 glTexColor;
uniform sampler2D pxsampler;
void main() {
vec4 texColor = texture(pxsampler, outTexCoord);
//This outputs the color of polygons if I don't multiply outColor by texColor, but once I add texColor, no colors show up at all.
glFragColor = texColor*outColor;
}
And lastly here's a bit of code to give reference to the VAO Attribute Pointers that I use at the right time when loading the shaders:
glBindAttribLocation(ProgramID, 0, "in_Position");
glBindAttribLocation(ProgramID, 1, "in_Color");
glBindAttribLocation(ProgramID, 2, "in_TexCoord");
Any help or suggestions are welcome. If any extra code is needed, I'll add it. I'm sorry if this question seems redundant, but I couldn't find anything to help me. If someone could explain how the texture-rendering code actually works, that would be great; the tutorials I've read usually don't explain what does what, so I'm not clear on what I'm doing. Again, any help is appreciated. Thanks!
I spotted a few problems in this code. Partly already mentioned in other answers/comments, partly not.
VAO binding
The main problem is that your VAO is not bound while you set up the vertex attributes. See this code sequence:
glGenVertexArrays(1, &spriteVAO.vaoID);
glGenBuffers(1, &spriteVAO.posVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*12, nullptr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Here, you create a VAO (or more precisely, the id for a VAO), but don't bind it. The state set up by the glVertexAttribPointer() call is stored in the currently bound VAO. This call should actually give you a GL_INVALID_OPERATION error in a core profile context, since you need to have a VAO bound when making it.
To fix this, bind the VAO after creating the id:
glGenVertexArrays(1, &spriteVAO.vaoID);
glBindVertexArray(spriteVAO.vaoID);
...
glGenerateMipmap() in wrong place
As pointed out in a comment, this call belongs after the glTexImage2D() call. It generates mipmaps based on the current texture content, so you need to specify the texture data first.
This error does not cause immediate harm in your current code, since you're not actually using mipmaps. But if you ever set the GL_TEXTURE_MIN_FILTER value to a mipmapping mode, it will matter.
glEnableVertexAttribArray() in wrong place
This needs to happen before the glDrawArrays() call, because the attributes obviously have to be enabled for drawing.
Instead of just moving these calls before the draw call, it's even better to place them in the attribute setup code, alongside the glVertexAttribPointer() calls. The enabled/disabled state is tracked in the VAO, so there's no need to make these calls every frame. Just set up all of this state once during setup; then simply binding the VAO before the draw call restores all the necessary state.
Unnecessary glVertexAttribPointer() call
Harmless, but this call in the render() method is redundant:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
Again, this state is tracked in the VAO, so making this call once during setup is enough, once you fix the problems listed above.
It's best to simplify your problem to track down the issue. For example, instead of loading an actual image, you could just allocate a width*height*3 buffer and fill it with 127 to get a grey image (or with pink to make it more obvious). Also try coloring your fragments with the UV coordinates instead of sampling the texture, to check whether those values are set correctly.
I think you should enable the vertex attribute arrays before the draw call:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Also, before binding a texture to a texture unit, you should specify what unit to use instead of relying on the default:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
And make sure you bind your sampler to that texture unit:
glUniform1i(glGetUniformLocation(ProgramID, "pxsampler"), 0);
For the sampler uniform, we set a plain index rather than a GL_TEXTURE* enum value: texture unit GL_TEXTURE0 corresponds to index 0 in glUniform1i, not to the enum constant GL_TEXTURE0.

Initialize GLSL layout with glEnableVertexAttribArray

I want to pass arrays of vertices, UV's and normals to the shader and transform them using MVP matrix, so I wrote a simple shader program:
#version 330 core
//Vertex shader
layout(location=0)in vec3 vertexPosition_modelspace;
layout(location=1)in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main(){
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
UV = vertexUV;
}
#version 330
//Fragment Shader
in vec2 UV;
out vec3 color;
uniform sampler2D color_texture;
void main(void) {
color = texture(color_texture, UV).rgb;
}
Then I needed to pass an array of vertices, which is being initialized like that:
glGenBuffers(1, &vertex_buffer);
glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], model_usage);
Same with UV's and normals, the type is still GL_ARRAY_BUFFER for them.
Then a draw loop:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for (Model* mdl : baseShader->getModels()) {
glUseProgram(baseShader->getShaderProgram());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mdl->getTextureBuffer());
glUniform1i(texture_location, 0);
glm::mat4 mvp = RootEngine->projection_matrix * RootEngine->view_matrix * mdl->getModelMatrix();
glUniformMatrix4fv(baseShader->getMVPlocation(), 1, GL_FALSE, &mvp[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mdl->getVertexBuffer());
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(1); // Matches layout (location = 1)
glBindBuffer(GL_ARRAY_BUFFER, mdl->getUVsBuffer());
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, mdl->getVertices()->size());
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
}
SDL_GL_SwapWindow(RootEngine->getMainWindow());
BaseShader and Model are my own classes which do a simple initialization and VBO handling.
The problem is that nothing is actually being rendered. I tried to add glEnableClientState(GL_VERTEX_ARRAY);
and
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
...
glDeleteVertexArrays(1, &VertexArrayID);
But still nothing. When I don't use layout and pass the data with glVertexPointer, everything seems to work fine.
UPDATE 1: I found out what prevents the vertices from being rendered: the uniform variable in the vertex shader. If it is removed, the vertices are rendered, but then there is no way to pass a matrix to the shader.
It turned out I had just passed the wrong texture, so the model was colored black and thus invisible. Thanks everybody for your answers.

Passing texture to shader in OpenGL

I'm working on some examples, and I was trying to pass a texture to a shader.
To build a VAO, I have this piece of code:
void PlaneShaderProgram::BuildVAO()
{
// Generate and bind the vertex array object
glGenVertexArrays(1, &_vao);
glBindVertexArray(_vao);
// Generate and bind the vertex buffer object
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, 12 * sizeof(GLfloat), _coordinates, GL_STATIC_DRAW);
// Generate and bind the index buffer object
glGenBuffers(1, &_ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * sizeof(GLuint), _indexes, GL_STATIC_DRAW);
// Generate and bind texture
_texture = LoadTexture("floor.bmp");
LoadAttributeVariables();
glBindVertexArray(0);
}
This is how I load the shader attributes:
void PlaneShaderProgram::LoadAttributeVariables()
{
GLuint VertexPosition_location = glGetAttribLocation(GetProgramID(), "vPosition");
glEnableVertexAttribArray(VertexPosition_location);
glVertexAttribPointer(VertexPosition_location, 3, GL_FLOAT, GL_FALSE, 0, 0);
}
void PlaneShaderProgram::LoadUniformVariables()
{
// OpenGL Matrices
GLuint ModelViewProjection_location = glGetUniformLocation(GetProgramID(), "mvpMatrix");
glUniformMatrix4fv(ModelViewProjection_location, 1, GL_FALSE, glm::value_ptr(_ModelViewProjection));
// Floor texture
// glActiveTexture(GL_TEXTURE0);
// glBindTexture(GL_TEXTURE_2D, _texture);
// GLint Texture_location = glGetUniformLocation(GetProgramID(), "texture");
// glUniform1i(Texture_location, 0);
}
And my LoadTexture:
GLuint ProgramManager::LoadTexture(const char* imagepath)
{
unsigned char * data = LoadBMP(imagepath, &width, &height);
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
return textureID;
}
Finally, my draw function, which is called in the OpenGL main loop, is the following:
void PlaneShaderProgram::DrawPlane(const glm::mat4 &Projection, const glm::mat4 &ModelView)
{
_ModelViewProjection = Projection * ModelView;
_ModelView = ModelView;
Bind();
glBindVertexArray(_vao);
LoadUniformVariables();
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
UnBind();
}
What I don't get is that even though I never set the texture uniform (commented out in the code) used by my shader, the plane is still drawn with the texture. This makes no sense to me. Since the shader requires a sampler2D, I would expect this not to work and to return some error.
Vertex Shader:
uniform mat4 mvpMatrix;
in vec4 vPosition;
smooth out vec2 uvCoord;
void main()
{
uvCoord = vPosition.xz;
gl_Position = mvpMatrix * vPosition;
}
Frag Shader:
uniform sampler2D texture;
in vec2 uvCoord;
out vec4 fragColor;
void main()
{
fragColor.rgb = texture(texture, uvCoord).rgb;
};
Am I missing something? Somehow this works; I don't understand why, but I'd really like to.
The sampler data types in GLSL reference a texture unit, not a texture object. By default, uniforms are initialized to 0, so if you don't set the sampler uniform, it samples from texture unit 0 (which is also the default active unit). In your ProgramManager::LoadTexture() method, you bind the newly created texture, and very likely GL_TEXTURE0 is still the currently active texture unit. You never seem to unbind the texture, so it is still bound at the time of the draw call, and the shader can access it.

OpenGL texture not rendering through GLSL

So I have a very simple GLSL shader that renders an object with a texture and a directional light.
I'm having a really tough time getting the texture to display; everything else works except for that.
When I disable the shader (glUseProgram(0)), the texture renders in black and white, but when I enable it the whole mesh is a single color and not textured, and the color changes when I try different textures.
This is how I load my texture:
data = CImg<unsigned char>(src);
glGenTextures(1, &m_textureObj);
glBindTexture(GL_TEXTURE_2D, m_textureObj);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, data.width(), data.height(), 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This is how I bind my texture:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_textureObj);
And this is my vertex shader:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
uniform mat4 gWVP;
uniform mat4 gWorld;
out vec2 TexCoord0;
out vec3 Normal0;
void main()
{
gl_Position = (gWVP * gWorld) * vec4(Position, 1.0);
TexCoord0 = TexCoord;
Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
}
And this is my fragment shader:
#version 330
in vec2 TexCoord0;
in vec3 Normal0;
out vec4 FragColor;
struct DirectionalLight
{
vec3 Color;
float AmbientIntensity;
float DiffuseIntensity;
vec3 Direction;
};
uniform DirectionalLight gDirectionalLight;
uniform sampler2D gSampler;
void main()
{
vec4 AmbientColor = vec4(gDirectionalLight.Color, 1.0f) *
gDirectionalLight.AmbientIntensity;
float DiffuseFactor = dot(normalize(Normal0), -gDirectionalLight.Direction);
vec4 DiffuseColor;
if (DiffuseFactor > 0) {
DiffuseColor = vec4(gDirectionalLight.Color, 1.0f) *
gDirectionalLight.DiffuseIntensity *
DiffuseFactor;
}
else {
DiffuseColor = vec4(0, 0, 0, 0);
}
FragColor = texture2D(gSampler, TexCoord0.xy) *
(AmbientColor + DiffuseColor);
}
And last but not least, I tell my shader to use the correct texture:
glUniform1i(GetUniformLocation("gSampler"), 0);
I have done a ton of reading online on how to do this simple task properly, and I have run it in a bunch of different configurations, only to become frustrated.
If anyone can point out what I'm doing wrong, I would be thrilled; I'm sure I'm missing one stupid detail, but I can't seem to find it. Please don't just refer me to https://www.opengl.org/wiki/Common_Mistakes#Creating_a_complete_texture. Thanks for reading.
This is how I create my mesh and upload it to the GPU.
This is the vertex constructor: Vertex(x, y, z, u, v)
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
unsigned int Indices[] = {
0, 3, 1,
1, 3, 2,
2, 3, 0,
1, 2, 0
};
Vertex Vertices[4] = {
Vertex(-1.0f, -1.0f, 0.5773f, 0.0f, 0.0f),
Vertex(0.0f, -1.0f, -1.15475f, 0.5f, 0.0f),
Vertex(1.0f, -1.0f, 0.5773f, 1.0f, 0.0f),
Vertex(0.0f, 1.0f, 0.0f, 0.5f, 1.0f)
};
Mesh *data = new Mesh(Vertices, 4, Indices, 12);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(0));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(12));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(24));
glBufferData(GL_ARRAY_BUFFER, data->count*Vertex::Size(), data->verts, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, data->indsCount*sizeof(unsigned int), data->inds, GL_STATIC_DRAW);
unsigned int Vertex::Size() {
return 32;
}
/*In the update function*/
glDrawElements(GL_TRIANGLES, data->indsCount, GL_UNSIGNED_INT, 0);
this is my vertex class http://pastebin.com/iYwjEdTQ
Edit
I HAVE SOLVED IT! I was binding the vertex attributes the old way; after ratchet freak showed me how to bind them correctly, I noticed that I was binding the texture coordinates to location 2, but in the shader I was looking for them at location 1.
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(24));
(the vertex shader)
layout (location = 1) in vec2 TexCoord;
so I changed the vertex shader to
layout (location = 2) in vec2 TexCoord;
and now it works beautifully. Thanks ratchet freak and birdypme for helping I'm gonna accept ratchet freak's answer because it lead me to the solution but you both steered me in the right direction.
This is your problem:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(0));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(12));
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(24));
This uses the old, deprecated fixed-function style of attribute specification; instead, you want to use glVertexAttribPointer:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, Vertex::Size(), BUFFER_OFFSET(0));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, Vertex::Size(), BUFFER_OFFSET(12));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, Vertex::Size(), BUFFER_OFFSET(24));
The other problem I see is that your Vertex class doesn't take the normal in the constructor. Also, Vertex::Size() should be equal to sizeof(Vertex).
Generally speaking, you should use sizeof() instead of hard-coded offsets, to avoid bad surprises.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(0));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(3*sizeof(float)));
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, Vertex::Size(), BUFFER_OFFSET(6*sizeof(float)));
and in vertex.h
static unsigned int Size() {
return 8*sizeof(float);
}
However, this is unlikely to properly answer your question. I am not deleting my answer, to keep the vertex.h reference, which can be important, but I'll probably delete it once a valid answer has been posted.