The basic foundation for creating a VAO and a VBO and applying a texture goes like this:
unsigned int VBO, VAO, EBO; //#1
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glGenBuffers(1, &EBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// position attribute
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// color attribute
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
// texture coord attribute
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2); //#2
And for creating a texture:
unsigned int texture1, texture2;
// texture 1
// ---------
glGenTextures(1, &texture1); //#3
glBindTexture(GL_TEXTURE_2D, texture1); //#4
// set the texture wrapping parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // set texture wrapping to GL_REPEAT (default wrapping method)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// set texture filtering parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// load image, create texture and generate mipmaps
int width, height, nrChannels;
stbi_set_flip_vertically_on_load(true); // tell stb_image.h to flip loaded textures on the y-axis.
// The FileSystem::getPath(...) is part of the GitHub repository so we can find files on any IDE/platform; replace it with your own image path.
unsigned char *data = stbi_load(FileSystem::getPath("resources/textures/container.jpg").c_str(), &width, &height, &nrChannels, 0);
if (data)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
}
else
{
std::cout << "Failed to load texture" << std::endl;
}
stbi_image_free(data);
// some more code, inside the render loop
glActiveTexture(GL_TEXTURE0); //#5
glBindTexture(GL_TEXTURE_2D, texture1); //#6
glDrawElements(...);
glfwSwapBuffers(...);
//end of render loop
According to what I've understood from learnopengl, the calls from #1 to #2 are stored inside the VAO, which is why we don't have to write that code over and over again; we just switch VAOs for drawing.
But is the code from #3 to #6 also stored in the VAO? If it is, why don't we write line #5 directly after line #3? And if it isn't, how do we link a specific texture unit to a specific VAO when using multiple VAOs?
EDIT:
After glBindTexture(GL_TEXTURE_2D, texture1), the texture calls that follow affect the currently bound texture, don't they? Does that mean glActiveTexture(...) also affects the currently bound texture? And why do we bind the texture again with glBindTexture(...) after activating the unit?
The reason the calls between #1 and #2 are "stored inside the VAO" is that that's what those functions do: they set state within the currently bound vertex array object.
A vertex array object, as the name suggests, is an object that deals with vertex arrays. Texture management, texture storage, and using textures for rendering have nothing to do with vertex arrays. As such, none of those functions in any way affects VAO state, in much the same way that modifying the contents of a vector<int> won't modify the contents of some list<float> object.
how do we link a specific texture unit to a specific VAO if we are using multiple VAOs?
Again, VAOs deal with vertex data. Textures aren't vertex data, so VAOs don't care about them and vice-versa. You don't link textures and VAOs.
You use VAOs and textures (among other things) to perform rendering. VAOs and textures each do different things within the process of rendering and thus have no direct relationship to one another.
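A minimal sketch of how the two kinds of state come together only at draw time (the shaderProgram, cubeVAO/crateTexture and floorVAO/floorTexture names here are hypothetical, not from the question):
glUseProgram(shaderProgram); // hypothetical program object
glActiveTexture(GL_TEXTURE0); // select texture unit 0 (texture-unit state, not VAO state)
glBindTexture(GL_TEXTURE_2D, crateTexture); // attach the texture to unit 0
glBindVertexArray(cubeVAO); // restores all vertex attribute state recorded at setup
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0);
// drawing a second object just means binding a different texture/VAO pair:
glBindTexture(GL_TEXTURE_2D, floorTexture);
glBindVertexArray(floorVAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);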
Related
I'm trying to implement shadows in my custom renderer through dynamic shadow mapping and forward rendering (deferred rendering will be implemented later). Everything renders correctly to the framebuffer used to generate the shadow map. However, when the default framebuffer is then used to render the scene normally, only the skybox gets rendered (which shows the default framebuffer is in use). My only hypothesis is that the problem is related to the depth buffer, since disabling the call to DrawActors(...) (in ForwardRenderShadows) appears to solve the problem, but then no depth map can be generated. Any suggestions on the matter?
Code:
void Scene::DrawActors(const graphics::Shader& shader)
{
for(const auto& actor : actors_)
actor->Draw(shader);
}
template <typename T>
void ForwardRenderShadows(const graphics::Shader& shadow_shader, const std::vector<T>& lights)
{
for(const auto& light : lights)
{
if(light->Shadow())
{
light->DrawShadows(shadow_shader);
DrawActors(shadow_shader); //removing this line "solves the problem"
glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
}
}
/*
shadow mapping is only implemented on directional lights for the moment, and that is the
relevant code that gets called when the process starts, more code details at the end of code
snippet.
*/
void DirectionalLight::SetupShadows()
{
glGenFramebuffers(1, &framebuffer_);
glGenTextures(1, &shadow_map_);
glBindTexture(GL_TEXTURE_2D, shadow_map_);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, constants::SHADOW_WIDTH, constants::SHADOW_HEIGHT, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadow_map_, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
throw std::runtime_error("Directional light framebuffer is not complete \n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);
ShadowSetup(true);
}
void DirectionalLight::DrawShadows(const graphics::Shader& shader)
{
if(!ShadowSetup())
SetupShadows();
glViewport(0, 0, constants::SHADOW_WIDTH, constants::SHADOW_HEIGHT);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer_);
glClear(GL_DEPTH_BUFFER_BIT);
shader.Use();
projection_ = clutch::Orthopraphic(-10.0f, 10.0f, -10.0f, 10.0f, 1.0f, 100.0f);
transform_ = clutch::LookAt(direction_ * -1.0f, {0.0f, 0.0f, 0.0f, 0.0f}, {0.0f, 1.0f, 0.0f, 0.0f});
shader.SetUniformMat4("light_transform", projection_ * transform_);
}
void DirectionalLight::Draw(const graphics::Shader& shader)
{
shader.SetUniform4f("light_dir", direction_);
shader.SetUniform4f("light_color", color_);
shader.SetUniformMat4("light_transform", transform_);
shader.SetUniformMat4("light_projection", projection_);
shader.SetUniformInt("cast_shadow", shadows_);
glActiveTexture(GL_TEXTURE12);
shader.SetUniformInt("shadow_map", 12);
glBindTexture(GL_TEXTURE_2D, shadow_map_);
}
Code the repo: https://github.com/rxwp5657/Nitro
Relevant Files for the problem:
include/core/scene.hpp
include/core/directional_light.hpp
include/core/light_shadow.hpp
include/graphics/mesh.hpp
src/core/scene.cpp
src/core/directional_light.cpp
src/core/light_shadow.cpp
src/graphics/mesh.cpp
Finally, what I have tried so far is:
Deactivating depth testing with glDepthMask(GL_FALSE) and glDisable(GL_DEPTH_TEST) // same problem.
Changing the depth function to glDepthFunc(GL_ALWAYS); // no desired results.
If you have an NVIDIA graphics card, you could have a look at Nsight. You can capture a frame and view all the GL calls that occurred.
Then you can select an event, for instance event 22 in my example, and inspect all textures, the color buffer, the depth buffer and the stencil buffer.
Furthermore, you can have a look at all the GL state parameters at a given event.
OK, after using apitrace I found out that the VBO changes when switching from a custom framebuffer to the default one. Because of this, the solution to the problem is to set the VBO again after switching to the default framebuffer.
Based on the code of the project, that means calling the Setup function again after switching to the default framebuffer (see the usage sketch after the function below).
Setup function of the Mesh class:
void Mesh::Setup(const Shader& shader)
{
glGenVertexArrays(1, &vao_);
glBindVertexArray(vao_);
glGenBuffers(1, &elementbuffer_);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer_);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices_.size() * sizeof(unsigned int), &indices_[0], GL_STATIC_DRAW);
glGenBuffers(1, &vbo_);
glBindBuffer(GL_ARRAY_BUFFER, vbo_);
glBufferData(GL_ARRAY_BUFFER, vertices_.size() * sizeof(Vertex), &vertices_[0], GL_STATIC_DRAW);
shader.PosAttrib("aPosition", 3, GL_FLOAT, sizeof(Vertex), 0);
shader.PosAttrib("aNormal", 3, GL_FLOAT, sizeof(Vertex), offsetof(Vertex, normal));
shader.PosAttrib("aTexCoord", 2, GL_FLOAT, sizeof(Vertex), offsetof(Vertex, tex_coord));
shader.PosAttrib("aTangent", 3, GL_FLOAT, sizeof(Vertex), offsetof(Vertex, tangent));
shader.PosAttrib("aBitangent",3, GL_FLOAT, sizeof(Vertex), offsetof(Vertex, bitangent));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
loaded_ = true;
}
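A hedged sketch of that fix in use, assuming the caller can reach each actor's mesh and the forward-pass shader (the screen_width, screen_height and forward_shader names and the Mesh() accessor are illustrative, not from the repo):
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
glViewport(0, 0, screen_width, screen_height); // restore from SHADOW_WIDTH/SHADOW_HEIGHT
for (const auto& actor : actors_)
    actor->Mesh()->Setup(forward_shader); // re-creates and re-binds the vertex state
Note that Setup calls glGenVertexArrays/glGenBuffers each time, so calling it every frame leaks GL objects; simply re-binding the existing vao_ before drawing would restore the same recorded state without re-creating the buffers.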
(Screenshot: vertices buffer after switching to the default framebuffer.)
(Screenshot: vertices buffer with no framebuffer switching, i.e. no shadow map generated.)
So I'm currently making a game using SDL2 with OpenGL (GLEW), using SOIL to load images, along with modern OpenGL techniques: vertex array objects, shaders and whatnot. I'm currently trying to render just a texture to the window, but I can't seem to do it. I've looked up many tutorials and solutions, but I can't understand how to do it properly. I'm unsure whether the problem is the shader or my code itself. I'll post all the necessary code below; if more is needed, I'd be happy to supply it. Any answers and solutions are welcome. For context, I have a class I use to store data for VAOs for convenience purposes. Code:
Here's the code to load the texture:
void PXSprite::loadSprite() {
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
int imagew, imageh;
//The path is a class variable, and I haven't received any errors from this function, so I can only assume it's getting the texture file correctly.
unsigned char* image = SOIL_load_image(path.c_str(), &imagew, &imageh, 0, SOIL_LOAD_RGB);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, imagew, imageh, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
SOIL_free_image_data(image);
glBindTexture(GL_TEXTURE_2D, 0);
//Don't worry about this code. It's just to keep the buffer object data.
//It works properly when rendering polygons.
spriteVAO.clear();
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addColor(PXColor::WHITE());
spriteVAO.addTextureCoordinate(0, 0);
spriteVAO.addTextureCoordinate(1, 0);
spriteVAO.addTextureCoordinate(1, 1);
spriteVAO.addTextureCoordinate(0, 1);
glGenVertexArrays(1, &spriteVAO.vaoID);
glGenBuffers(1, &spriteVAO.posVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*12, nullptr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &spriteVAO.colVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.colVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 16, &spriteVAO.colors[0], GL_STATIC_DRAW);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
glGenBuffers(1, &spriteVAO.texVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.texVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 8, &spriteVAO.texCoords[0], GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindTexture(GL_TEXTURE_2D, 0);
}
And Here's my code for rendering the texture:
void PXSprite::render(int x, int y) {
spriteVAO.clear(PXVertexArrayObject::positionAttributeIndex);
spriteVAO.addPosition(x, y);
spriteVAO.addPosition(x+width, y);
spriteVAO.addPosition(x, y+height);
spriteVAO.addPosition(x+width, y+height);
glBindTexture(GL_TEXTURE_2D, textureID);
glBindVertexArray(spriteVAO.vaoID);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat)*12, &spriteVAO.positions[0]);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindTexture(GL_TEXTURE_2D, 0);
}
Here's my vertex shader:
#version 330 core
in vec3 in_Position;
in vec4 in_Color;
in vec2 in_TexCoord;
out vec4 outPosition;
out vec4 outColor;
out vec2 outTexCoord;
void main() {
gl_Position = vec4(in_Position.x, in_Position.y*-1.0, in_Position.z, 1.0);
outTexCoord = in_TexCoord;
outColor = in_Color;
}
Here's my fragment shader:
#version 330 core
in vec2 outTexCoord;
in vec4 outColor;
out vec4 glFragColor;
out vec4 glTexColor;
uniform sampler2D pxsampler;
void main() {
vec4 texColor = texture(pxsampler, outTexCoord);
//This outputs the color of polygons if I don't multiply outColor by texColor, but once I add texColor, no colors show up at all.
glFragColor = texColor*outColor;
}
And lastly here's a bit of code to give reference to the VAO Attribute Pointers that I use at the right time when loading the shaders:
glBindAttribLocation(ProgramID, 0, "in_Position");
glBindAttribLocation(ProgramID, 1, "in_Color");
glBindAttribLocation(ProgramID, 2, "in_TexCoord");
Any help or suggestions are welcome; if any extra code is needed, I'll add it. I'm sorry if this question seems redundant, but I couldn't find anything to help me. If someone could explain how the texture-rendering code works, that would be great: the tutorials I've read usually don't explain what does what, so I'm not clear on what I'm doing exactly. Again, any help is appreciated. Thanks!
I spotted a few problems in this code. Partly already mentioned in other answers/comments, partly not.
VAO binding
The main problem is that your VAO is not bound while you set up the vertex attributes. See this code sequence:
glGenVertexArrays(1, &spriteVAO.vaoID);
glGenBuffers(1, &spriteVAO.posVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*12, nullptr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
Here, you create a VAO (or more precisely, the id for a VAO), but don't bind it. The state set up by the glVertexAttribPointer() call is stored in the currently bound VAO. This call should actually give you a GL_INVALID_OPERATION error in a core profile context, since you need to have a VAO bound when making it.
To fix this, bind the VAO after creating the id:
glGenVertexArrays(1, &spriteVAO.vaoID);
glBindVertexArray(spriteVAO.vaoID);
...
glGenerateMipmap() in wrong place
As pointed out in a comment, this call belongs after the glTexImage2D() call. It generates mipmaps based on the current texture content, meaning that you need to specify the texture data first.
This error does not cause immediate harm in your current code since you're not actually using mipmaps. But if you ever set the GL_TEXTURE_MIN_FILTER value to use mipmapping, this will matter.
glEnableVertexAttribArray() in wrong place
This needs to happen before the glDrawArrays() call, because the attributes obviously have to be enabled for drawing.
Instead of just moving them before the draw call, it's even better to place them in the attribute setup code, alongside the glVertexAttribPointer() calls. The enabled/disabled state is tracked in the VAO, so there's no need to make these calls every time. Just set up all of this state once during setup, and then simply binding the VAO before the draw call will set all the necessary state again.
Unnecessary glVertexAttribPointer() call
Harmless, but this call in the render() method is redundant:
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
Again, this state is tracked in the VAO, so making this call once during setup is enough, once you fix the problems listed above.
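Putting those fixes together, here is a sketch of the corrected split between one-time setup and per-frame rendering, reusing the names from the question (only the position attribute is shown; the color and texcoord buffers follow the same pattern):
// one-time setup: bind the VAO first, then record attribute state in it
glGenVertexArrays(1, &spriteVAO.vaoID);
glBindVertexArray(spriteVAO.vaoID); // must be bound before the attrib calls
glGenBuffers(1, &spriteVAO.posVBOid);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 12, nullptr, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0); // the enable flag is stored in the VAO
// ... attributes 1 (color) and 2 (texcoords) set up the same way ...
glBindVertexArray(0);
// per frame: binding the VAO restores everything recorded above
glBindTexture(GL_TEXTURE_2D, textureID);
glBindVertexArray(spriteVAO.vaoID);
glBindBuffer(GL_ARRAY_BUFFER, spriteVAO.posVBOid);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(GLfloat) * 12, &spriteVAO.positions[0]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);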
It's best to simplify your problem to track down the issue. For example, instead of loading an actual image, you could just allocate a width*height*3 buffer and fill it with 127 to get a grey image (or with pink to make it more obvious). Try coloring your fragments with the uv coordinates instead of using the sampler to see whether those values are set correctly.
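For instance, a small sketch of such a placeholder texture (magenta here so it is unmistakable on screen; width and height are whatever test size you choose):
// fill a dummy RGB image instead of loading a file, to rule out image-loading issues
std::vector<unsigned char> dummy(width * height * 3);
for (size_t i = 0; i < dummy.size(); i += 3) {
    dummy[i + 0] = 255; // R
    dummy[i + 1] = 0;   // G
    dummy[i + 2] = 255; // B -> magenta
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, dummy.data());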
I think you should enable the vertex attribute arrays before the draw call:
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Also, before binding a texture to a texture unit, you should specify what unit to use instead of relying on the default:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
And make sure you bind your sampler to that texture unit:
glUniform1i(glGetUniformLocation(ProgramID, "pxsampler"), 0);
For the sampler uniform, we set the texture unit's index directly, not the GL_TEXTURE* enum value: for unit GL_TEXTURE0 you pass the index 0 when setting the uniform, not GL_TEXTURE0 itself.
So in my program I have a number of textures that I am trying to display. Earlier in my code I generate the VAO for the textures and the ibo (index buffer) for each texture. But when I run my code, it crashes at the glDrawElements() call, inside nvoglv32.dll. I've read that a bug in the NVIDIA driver might be causing it, but I doubt it. Something is probably wrong when I generate or bind the VAO or ibo, but I have no idea where. Here's the section of code where the error happens:
for (int i = 0; i < NUM_TEXTURES; i++){
glBindVertexArray(VAO_T[i]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo[i]);
glBindTexture(GL_TEXTURE_2D, texture[i]);
//error here
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, BUFFER_OFFSET(0));//error right here
}
This is the error I get when running in debug:
Unhandled exception at 0x0263FE4A in Comp465.exe: 0xC0000005: Access violation reading location 0x00000000.
Here's the code where I generate the VAO, ibo, and textures:
glGenVertexArrays(NUM_TEXTURES, VAO_T);
glGenBuffers(NUM_TEXTURES, VBO_T);
glGenBuffers(NUM_TEXTURES, ibo);
glGenTextures(NUM_TEXTURES, texture);
...
for (int i = 0; i < NUM_TEXTURES; i++){
//Tell GL which VAO we are using
glBindVertexArray(VAO_T[i]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo[i]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[i]), indices[i], GL_STATIC_DRAW);
//initialize a buffer object
glEnableVertexAttribArray(VBO_T[i]);
glBindBuffer(GL_ARRAY_BUFFER, VBO_T[i]);
glBufferData(GL_ARRAY_BUFFER, sizeof(point[i]) + sizeof(texCoords), NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(point[i]), point[i]);
glBufferSubData(GL_ARRAY_BUFFER, sizeof(point[i]), sizeof(texCoords), texCoords);
GLuint vPosition = glGetAttribLocation(textureShader, "vPosition");
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
glEnableVertexAttribArray(vPosition);
GLuint vTexCoord = glGetAttribLocation(textureShader, "vTexCoord");
glVertexAttribPointer(vTexCoord, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(sizeof(point[i])));
glEnableVertexAttribArray(vTexCoord);
//Get handles for the uniform structures in the texture shader program
VP = glGetUniformLocation(textureShader, "ViewProjection");
//Bind the texture that we want to use
glBindTexture(GL_TEXTURE_2D, texture[i]);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
// set texture parameters
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
//Load texture
texture[i] = loadRawTexture(texture[i], TEX_FILE_NAME[i], PixelSizes[i][0], PixelSizes[i][1]);
if (texture[i] != 0) {
printf("texture loaded \n");
}
else
printf("Error loading texture \n");
}
This statement certainly looks wrong:
glEnableVertexAttribArray(VBO_T[i]);
glEnableVertexAttribArray() takes an attribute location as its argument, not a buffer id. You actually use it correctly later:
GLuint vPosition = glGetAttribLocation(textureShader, "vPosition");
...
glEnableVertexAttribArray(vPosition);
GLuint vTexCoord = glGetAttribLocation(textureShader, "vTexCoord");
...
glEnableVertexAttribArray(vTexCoord);
So you should be able to simply delete that extra call with the invalid argument.
Apart from that, I noticed a couple of things that look slightly off, or at least suspicious:
The following call is meaningless if you use the programmable pipeline, which you are, based on what's shown in the rest of the code. It can be deleted.
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
This is probably just a naming issue, but textureShader needs to be a program object, i.e. the return value of glCreateProgram(), not a shader object.
While inconclusive without seeing the declaration, I have a bad feeling about this and a couple of other similar calls:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices[i]), indices[i], GL_STATIC_DRAW);
If indices[i] is an array, i.e. the declaration looks something like this:
unsigned int indices[NUM_TEXTURES][INDEX_COUNT];
then this is ok. But if indices[i] is a pointer, or decayed to a pointer when it was passed as a function argument, sizeof(indices[i]) will be the size of a pointer. You may want to double check that it gives the actual size of the index array, as illustrated below. The same applies to the other similar calls.
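A quick illustration of the trap (hypothetical declarations, not from the question's code):
unsigned int indices[NUM_TEXTURES][6];
// here sizeof(indices[i]) == 6 * sizeof(unsigned int): the size of a full row

void upload(unsigned int* row)
{
    // 'row' has decayed to a pointer, so sizeof(row) is sizeof(unsigned int*),
    // i.e. 4 or 8 bytes -- far smaller than the index data being uploaded
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(row), row, GL_STATIC_DRAW); // BUG
}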
Recently I found an OpenGL example that applies a single texture without specifying a texture image unit and without sending the corresponding unit integer uniform to the shader. Is it possible to apply textures without using texture units, or does it simply use a default value for both the active texture unit and its shader sampler value?
my code block (texture related):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
GLuint TextureID = 0;
SDL_Surface* Surface = IMG_Load("Abrams_Tank.jpg");
glGenTextures(1, &TextureID);
glBindTexture(GL_TEXTURE_2D, TextureID);
int Mode = GL_RGB;
if(Surface->format->BytesPerPixel == 4) {
Mode = GL_RGBA;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Surface->w, Surface->h, 0, GL_BGRA, GL_UNSIGNED_BYTE, Surface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLuint vbo;
GLuint vbo_Text;
GLuint vao;
glGenBuffers(1, &vbo);
glGenBuffers(1, &vbo_Text);
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData) * 30,vertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (GLvoid*)(sizeof(float) * 18));
glDrawArrays(GL_TRIANGLES, 0,6);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
my fragment shader:
#version 410
in vec2 texCoordout;
uniform sampler2D mysampler;
out vec4 Fcolor;
void main()
{
Fcolor = texture (mysampler, texCoordout);
}
This will work, and is within specs:
The default texture unit is GL_TEXTURE0, so there is no need to ever call glActiveTexture() if only texture unit 0 is used.
Uniforms have a default value of 0, which is the correct value for a sampler that references texture unit 0.
I would still always set the uniform values just to make it explicit. If you don't set the uniform in this case, it is also much more likely that you will forget to set it if you ever use a texture unit other than 0.
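A sketch of making both defaults explicit, assuming a linked program object named program (the question's code doesn't show one):
glUseProgram(program);
glActiveTexture(GL_TEXTURE0); // explicitly select unit 0, even though it is the default
glBindTexture(GL_TEXTURE_2D, TextureID);
glUniform1i(glGetUniformLocation(program, "mysampler"), 0); // the unit index, not GL_TEXTURE0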
I'm doing a bachelor project with some friends, and we've run into a pretty confusing bug with OpenGL textures. I'm wondering if there's any knowledge about such things here...
The problem we're having is that on our laptops (running Intel HD graphics) we can see the textures fine. But if we switch to the dedicated graphics cards, we can't see the textures. We can't see them on our desktops with dedicated graphics cards either (both AMD and Nvidia).
So, what is up with that? Any ideas?
EDIT: Added texture & render code. It wasn't written by me, so I don't know 100% how it works, but I think I found everything.
Texture load:
Texture::Texture(const char* imagepath)
{
textureImage = IMG_Load(imagepath);
if (!textureImage)
{
fprintf(stderr, "Couldn't load %s.\n", imagepath);
}
else{
textureID = 0;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D,
0, GL_RGB,
textureImage->w,
textureImage->h,
0,
GL_RGB,
GL_UNSIGNED_BYTE,
textureImage->pixels);
SDL_FreeSurface(textureImage);
}
}
Platform VBO:
PlatformVBO::PlatformVBO() {
glGenBuffers(1, &vboID);
glGenBuffers(1, &colorID);
glGenBuffers(1, &texID);
//glGenBuffers(1, &indID);
texCoords.push_back(0.0f);
texCoords.push_back(0.0f);
texCoords.push_back(1.0f);
texCoords.push_back(0.0f);
texCoords.push_back(1.0f);
texCoords.push_back(1.0f);
texCoords.push_back(0.0f);
texCoords.push_back(1.0f);
//
indices.push_back(0);
indices.push_back(1);
indices.push_back(2);
indices.push_back(0);
indices.push_back(2);
indices.push_back(3);
bgTexture = new Texture("./Texture/shiphull.png");
platform = new Texture("./Texture/Abstract_Vector_Background.png");
// Give the image to OpenGL
glBindBuffer(GL_ARRAY_BUFFER, texID);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) *texCoords.size() / 2, &texCoords, GL_STATIC_DRAW);
//glBindBuffer(GL_ARRAY_BUFFER, texID);
//glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) *texCoords.size() / 2, &texCoords, GL_STATIC_DRAW);
}
Update VBO:
void PlatformVBO::setVBO(){
// Vertices:
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) *vertices.size(), &vertices.front(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, colorID);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) *colors.size(), &colors.front(), GL_STATIC_DRAW);
}
Draw:
void PlatformVBO::drawTexture(){
if (vertices.size() > 0){
setVBO();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indID);
//Enable states, and render (as if using vertex arrays directly)
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vboID);
glVertexPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, texID);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
if (bgTexture->GetTexture() >= 0) {
glEnable(GL_TEXTURE_2D); // Turn on Texturing
// glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glBindTexture(GL_TEXTURE_2D, bgTexture->GetTexture());
}
//Draw the thing!
glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, &indices[0]);
//restore the GL state back
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
if (bgTexture->GetTexture() >= 0) {
glDisable(GL_TEXTURE_2D); // Turn off Texturing
glBindTexture(GL_TEXTURE_2D, bgTexture->GetTexture());
}
glBindBuffer(GL_ARRAY_BUFFER, 0); //Restore non VBO mode
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
}
Do not divide your texture coordinate size by 2 (that will cut the coordinates in half), and do not pass the address of the vector (that will give undefined results as there is no guarantee about the memory layout of std::vector).
Use this instead:
glBufferData (GL_ARRAY_BUFFER, sizeof(GLfloat) * texCoords.size(), &texCoords [0], GL_STATIC_DRAW);
Notice how this uses the address of the reference returned by std::vector's operator[] instead? C++03 guarantees that the memory used internally for element storage is a contiguous array. Many earlier implementations of the C++ standard library worked this way, but the spec did not guarantee it. C++11 adds std::vector::data(), which effectively does the same thing but makes your code require a newer compiler.
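For instance, with a C++11 compiler the same upload can be written with data(), which is equivalent to &texCoords[0] but also well-defined for an empty vector:
glBufferData(GL_ARRAY_BUFFER,
             sizeof(GLfloat) * texCoords.size(), // full size, no division by 2
             texCoords.data(),                   // pointer to contiguous element storage
             GL_STATIC_DRAW);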