I want to draw a cube and a sphere and apply a different texture to each.
I use Blender to create the scene and then export it to an OBJ file, which includes the vertices, normals, UVs and faces for both objects, as well as the textures.
I have written a routine that loads all the data from the OBJ file. This works: I can load and display the objects, but only with a single texture. I have gone through pages and pages of code and posts, and almost all of them deal with one texture for one object; the few that deal with multiple textures either use only one object or target a very old version of OpenGL.
The one thing I haven't tried is a uniform sampler2D array in the fragment shader, but I haven't found a clear explanation of it, so I haven't attempted that.
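(For reference, a sampler2D array in GLSL, e.g. uniform sampler2D samplers[2];, is just an array of ordinary sampler uniforms: each element is pointed at a texture unit from the host side, and the shader indexes the array with a dynamically uniform value. A minimal host-side sketch, assuming that declaration and the TextureObjects array from the code below:
GLint loc = glGetUniformLocation(program, "samplers");
GLint units[2] = { 0, 1 };            // element 0 -> texture unit 0, element 1 -> unit 1
glUniform1iv(loc, 2, units);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, TextureObjects[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, TextureObjects[1]);
The shader then still needs a uniform to select which element to sample for a given draw, so for two separate objects this ends up equivalent to simply switching the bound texture per draw call.)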
My code so far:
ObjLoader *obj = new ObjLoader();
string _filepath = "objects\\" + _filename;
//bool res = obj->loadObjWithStaticColor(_filepath.c_str(), _vertices, _normals, vertex_colors, _colors, 1.0);
bool res = obj->loadObjWithTextures(_filepath.c_str(), _objects, _textures);
program = InitShader("shaders\\vshader.glsl", "shaders\\fshader.glsl");
glUseProgram(program);
GLuint vao_world_objects;
glGenVertexArrays(1, &vao_world_objects);
glBindVertexArray(vao_world_objects);
//GLuint vbo_world_objects;
//glGenBuffers(1, &vbo_world_objects);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_world_objects);
NumVertices = _objects[_objects.size() - 1]._stop + 1;
for (size_t i = 0; i < _objects.size(); i++)
{
_vertices.insert(_vertices.end(), _objects[i]._vertices.begin(), _objects[i]._vertices.end());
_normals.insert(_normals.end(), _objects[i]._normals.begin(), _objects[i]._normals.end());
_uvs.insert(_uvs.end(), _objects[i]._uvs.begin(), _objects[i]._uvs.end());
}
GLuint _vSize = _vertices.size() * sizeof(point4);
GLuint _nSize = _normals.size() * sizeof(point4);
GLuint _uSize = _uvs.size() * sizeof(point2);
GLuint _totalSize = _vSize + _uSize; // normals + vertices + uvs
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, _vSize, &_vertices[0], GL_STATIC_DRAW);
GLuint uvbuffer;
glGenBuffers(1, &uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, _uSize, &_uvs[0], GL_STATIC_DRAW);
TextureID = glGetUniformLocation(program, "myTextureSampler");
TextureObjects = new GLuint[_textures.size()];
glGenTextures(_textures.size(), TextureObjects);
for (size_t i = 0; i < _textures.size(); i++)
{
// "Bind" the newly created texture : all future texture functions will modify this texture
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
// Give the image to OpenGL
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _textures[i].width, _textures[i].height, 0, GL_BGR, GL_UNSIGNED_BYTE, _textures[i]._tex_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
for (size_t i = 0; i < _objects.size(); i++)
{
if (i == 0)
{
glActiveTexture(GL_TEXTURE0);
}
else
{
glActiveTexture(GL_TEXTURE1);
}
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
GLuint _v_size = _objects[i]._vertices.size() * sizeof(point4);
GLuint _u_size = _objects[i]._uvs.size() * sizeof(point2);
GLuint vPosition = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(vPosition);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
if (i == 0)
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_v_size));
}
GLuint vUV = glGetAttribLocation(program, "vUV");
glEnableVertexAttribArray(vUV);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
if (i == 0)
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_u_size));
}
if (i == 0)
{
glUniform1i(TextureID, 0);
}
else
{
glUniform1i(TextureID, 1);
}
}
_scale = Scale(zoom, zoom, zoom);
_projection = Perspective(45.0, 4.0 / 3.0, 0.1, 100.0);
_view = LookAt(point4(Camera.x, Camera.y, Camera.z, 0), point4(0, 0, 0, 0), point4(0, 1, 0, 0));
_model = mat4(1.0); // identity matrix
_mvp = _projection * _view * _model;
MVP = glGetUniformLocation(program, "MVP");
theta = glGetUniformLocation(program, "theta");
Zoom = glGetUniformLocation(program, "Zoom");
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
glClearColor(1.0, 1.0, 1.0, 1.0);
I understand that I have to switch between the active textures when drawing each object, but I can't figure out how.
UPDATE
@immibis OK, I tried that yesterday but it didn't work; it was late and I was highly frustrated. So, just to get my thinking straight: do I have to create a buffer (glGenBuffers) every time and then fill it, activate the texture and call glDrawArrays, or do I create the buffer once and then fill it each time with the different vertices and UVs for each object, set the offsets, and call glDrawArrays for each object?
When I tried this originally I didn't know where the glGetAttribLocation / glEnableVertexAttribArray / glBindBuffer calls should go. So, if I understand correctly, every time I apply a transformation such as rotating around the x axis, the buffers have to be refilled and so on, which means the code needs to go in the display function. Is that correct?
SOLVED
OK, so thanks to immibis's comments, I started looking in a different direction. I was so focused on how the data was pumped into the arrays that I never even looked at glDrawArrays. Searching the web again, I came across a piece of tutorial code where the author explained glDrawArrays, and I saw that you can tell it exactly what range to draw.
After that it became easy, as I originally thought it should be. I changed my code back to pumping everything into the buffers, and since my loader returns a start and stop property for each object, it was simple to tell glDrawArrays what to draw.
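For reference, a minimal sketch of that per-object draw (the _start member is assumed here to mirror the _stop property mentioned above; exact names may differ):
for (size_t i = 0; i < _objects.size(); i++)
{
    glActiveTexture(GL_TEXTURE0);                      // one unit is enough when rebinding per object
    glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);   // this object's texture
    glUniform1i(TextureID, 0);                         // sampler reads texture unit 0

    GLint   first = _objects[i]._start;                // first vertex of this object in the shared buffers
    GLsizei count = _objects[i]._stop - _objects[i]._start + 1;
    glDrawArrays(GL_TRIANGLES, first, count);          // draw only this object's range
}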
Thank you.
The Description
I'm currently learning OpenGL and want to try out the direct state access (DSA) functionality from OpenGL 4.5, so I set up a simple example that should render a triangle in 3D. The code below is executed each frame and draws the triangle.
void Geometry::draw(glm::vec3 pos, glm::vec3 pos1, glm::vec3 pos2) {
float vertices[9] = {
pos.x, pos.y, pos.z,
pos1.x, pos1.y, pos1.z,
pos2.x, pos2.y, pos2.z
};
unsigned int m_VaoID;
glGenVertexArrays(1, &m_VaoID);
glBindVertexArray(m_VaoID);
unsigned int m_VboID;
glGenBuffers(1, &m_VboID);
glBindBuffer(GL_ARRAY_BUFFER, m_VboID);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 3, 0);
Shader shader = Shader("../../shader.glsl");
shader.compileShader();
shader.useShader();
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteVertexArrays(1, &m_VaoID);
glDeleteBuffers(1, &m_VboID);
}
However, as soon as I change the code to make use of the direct state access extension, it no longer renders a triangle and instead produces error messages.
void Geometry::drawDSA(glm::vec3 pos, glm::vec3 pos1, glm::vec3 pos2) {
float vertices[9] = {
pos.x, pos.y, pos.z,
pos1.x, pos1.y, pos1.z,
pos2.x, pos2.y, pos2.z
};
unsigned int m_VaoID;
glCreateVertexArrays(1, &m_VaoID);
glVertexArrayAttribFormat(m_VaoID, 0, 3, GL_FLOAT, GL_FALSE, 0);
unsigned int m_VboID;
glCreateBuffers(1, &m_VboID);
glNamedBufferData(m_VboID, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexArrayAttribBinding(m_VaoID, 0, m_VboID);
Shader shader = Shader("../../shader.glsl");
shader.compileShader();
shader.useShader();
glBindVertexArray(m_VaoID);
glBindBuffer(GL_ARRAY_BUFFER, m_VboID);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteVertexArrays(1, &m_VaoID);
glDeleteBuffers(1, &m_VboID);
}
Both methods get called in my update loop with the same values (between -1.f and 0.f).
The error output:
Here is the console output of the error I'm getting:
[OpenGL Debug HIGH] GL_INVALID_VALUE in glVertexArrayAttribBinding(bindingindex=16 >= GL_MAX_VERTEX_ATTRIB_BINDINGS)
[OpenGL Debug HIGH] GL_INVALID_VALUE in glVertexArrayAttribBinding(bindingindex=17 >= GL_MAX_VERTEX_ATTRIB_BINDINGS)
[OpenGL Debug HIGH] GL_INVALID_VALUE in glVertexArrayAttribBinding(bindingindex=18 >= GL_MAX_VERTEX_ATTRIB_BINDINGS)
[OpenGL Debug HIGH] GL_INVALID_VALUE in glVertexArrayAttribBinding(bindingindex=19 >= GL_MAX_VERTEX_ATTRIB_BINDINGS)
What I tried so far
I looked through the Khronos Wiki and through the docs.gl documentation, but I could not find out why my code produces the above error instead of rendering my triangle.
The 3rd parameter of glVertexArrayAttribBinding is not the vertex buffer object; it is a binding index, an arbitrary id which you have to choose.
The following instruction associates a binding index with an attribute index (0) of the specified VAO. So instead of
glVertexArrayAttribBinding(m_VaoID, 0, m_VboID);
it has to be
glVertexArrayAttribBinding(m_VaoID, 0, binding_index);
Furthermore, you have to use glVertexArrayVertexBuffer to associate a buffer object with a binding index of the specified VAO:
glVertexArrayVertexBuffer(m_VaoID, binding_index, m_VboID, 0, sizeof(float) * 3);
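Putting it together, a minimal sketch of the corrected DSA setup (binding_index is an arbitrary value, 0 here; note the attribute also still has to be enabled, e.g. with glEnableVertexArrayAttrib):
const GLuint binding_index = 0;                    // arbitrary binding point within the VAO

unsigned int m_VboID;
glCreateBuffers(1, &m_VboID);
glNamedBufferData(m_VboID, sizeof(vertices), vertices, GL_STATIC_DRAW);

unsigned int m_VaoID;
glCreateVertexArrays(1, &m_VaoID);
// attach the buffer to the binding point: offset 0, stride of one 3-float vertex
glVertexArrayVertexBuffer(m_VaoID, binding_index, m_VboID, 0, sizeof(float) * 3);
// enable attribute 0 and describe its layout relative to the binding point
glEnableVertexArrayAttrib(m_VaoID, 0);
glVertexArrayAttribFormat(m_VaoID, 0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(m_VaoID, 0, binding_index);

glBindVertexArray(m_VaoID);
glDrawArrays(GL_TRIANGLES, 0, 3);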
I am trying to render textured 3D models using Assimp. The conversion goes perfectly; all the texture positions and so on get loaded. I have tested the texture images by drawing them to the screen in 2D.
For some reason, though, the textures are not rendered onto the model.
I am a beginner in OpenGL, so forgive me if I don't explain it right.
The tutorial I based the code on is from here, but I stripped out a big part since I have my own camera/movement system.
The model renders like this: http://i.stack.imgur.com/5sK9K.png
whilest the texture in use looks like this: http://i.stack.imgur.com/sWGp7.jpg
The relevant rendering code is the following:
Generating textures from data file:
int Mesh::LoadGLTextures(const aiScene* scene){
if (scene->HasTextures()) return -1; //yes this is correct
/* getTexture Filenames and Numb of Textures */
for (unsigned int m = 0; m<scene->mNumMaterials; m++){
int texIndex = 0;
aiReturn texFound;
aiString path; // filename
while ((texFound = scene->mMaterials[m]->GetTexture(aiTextureType_DIFFUSE, texIndex, &path)) == AI_SUCCESS){
textureIdMap[path.data] = NULL; //fill map with textures, pointers still NULL yet
texIndex++;
}
}
int numTextures = textureIdMap.size();
/* create and fill array with GL texture ids */
GLuint* textureIds = new GLuint[numTextures];
/* get iterator */
std::map<std::string, GLuint>::iterator itr = textureIdMap.begin();
std::string basepath = getBasePath(path);
ALLEGRO_BITMAP *image;
for (int i = 0; i<numTextures; i++){
std::string filename = (*itr).first; // get filename
(*itr).second = textureIds[i]; // save texture id for filename in map
itr++; // next texture
std::string fileloc = basepath + filename; /* Loading of image */
image = al_load_bitmap(fileloc.c_str());
if (image) /* If no error occured: */{
GLuint texId = al_get_opengl_texture(image);
//glGenTextures(numTextures, &textureIds[i]); /* Texture name generation */
glBindTexture(GL_TEXTURE_2D, texId); /* Binding of texture name */
//redefine standard texture values
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); /* We will use linear
interpolation for magnification filter */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* We will use linear
interpolation for minifying filter */
textureIdMap[filename] = texId;
} else {
/* Error occured */
std::cout << "Couldn't load Image: " << fileloc.c_str() << "\n";
}
}
//Cleanup
delete[] textureIds;
//return success
return true;
}
Generating VBO/VAO:
void Mesh::genVAOsAndUniformBuffer(const aiScene *sc) {
struct MyMesh aMesh;
struct MyMaterial aMat;
GLuint buffer;
// For each mesh
for (unsigned int n = 0; n < sc->mNumMeshes; ++n){
const aiMesh* mesh = sc->mMeshes[n];
// create array with faces
// have to convert from Assimp format to array
unsigned int *faceArray;
faceArray = (unsigned int *)malloc(sizeof(unsigned int) * mesh->mNumFaces * 3);
unsigned int faceIndex = 0;
for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
const aiFace* face = &mesh->mFaces[t];
memcpy(&faceArray[faceIndex], face->mIndices, 3 * sizeof(unsigned int));
faceIndex += 3;
}
aMesh.numFaces = sc->mMeshes[n]->mNumFaces;
// generate Vertex Array for mesh
glGenVertexArrays(1, &(aMesh.vao));
glBindVertexArray(aMesh.vao);
// buffer for faces
glGenBuffers(1, &buffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * mesh->mNumFaces * 3, faceArray, GL_STATIC_DRAW);
// buffer for vertex positions
if (mesh->HasPositions()) {
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * mesh->mNumVertices, mesh->mVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(vertexLoc);
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, 0, 0, 0);
}
// buffer for vertex normals
if (mesh->HasNormals()) {
glGenBuffers(1, &buffer);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * mesh->mNumVertices, mesh->mNormals, GL_STATIC_DRAW);
glEnableVertexAttribArray(normalLoc);
glVertexAttribPointer(normalLoc, 3, GL_FLOAT, 0, 0, 0);
}
// buffer for vertex texture coordinates
if (mesh->HasTextureCoords(0)) {
float *texCoords = (float *)malloc(sizeof(float) * 2 * mesh->mNumVertices);
for (unsigned int k = 0; k < mesh->mNumVertices; ++k) {
texCoords[k * 2] = mesh->mTextureCoords[0][k].x;
texCoords[k * 2 + 1] = mesh->mTextureCoords[0][k].y;
}
glGenBuffers(1, &buffer);
glEnableVertexAttribArray(texCoordLoc);
glBindBuffer(GL_ARRAY_BUFFER, buffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 2 * mesh->mNumVertices, texCoords, GL_STATIC_DRAW);
glVertexAttribPointer(texCoordLoc, 2, GL_FLOAT, GL_FALSE, 0, 0);
}
// unbind buffers
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// create material uniform buffer
aiMaterial *mtl = sc->mMaterials[mesh->mMaterialIndex];
aiString texPath; //contains filename of texture
if (AI_SUCCESS == mtl->GetTexture(aiTextureType_DIFFUSE, 0, &texPath)){
//bind texture
unsigned int texId = textureIdMap[texPath.data];
aMesh.texIndex = texId;
aMat.texCount = 1;
} else {
aMat.texCount = 0;
}
float c[4];
set_float4(c, 0.8f, 0.8f, 0.8f, 1.0f);
aiColor4D diffuse;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse))
color4_to_float4(&diffuse, c);
memcpy(aMat.diffuse, c, sizeof(c));
set_float4(c, 0.2f, 0.2f, 0.2f, 1.0f);
aiColor4D ambient;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_AMBIENT, &ambient))
color4_to_float4(&ambient, c);
memcpy(aMat.ambient, c, sizeof(c));
set_float4(c, 0.0f, 0.0f, 0.0f, 1.0f);
aiColor4D specular;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_SPECULAR, &specular))
color4_to_float4(&specular, c);
memcpy(aMat.specular, c, sizeof(c));
set_float4(c, 0.0f, 0.0f, 0.0f, 1.0f);
aiColor4D emission;
if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_EMISSIVE, &emission))
color4_to_float4(&emission, c);
memcpy(aMat.emissive, c, sizeof(c));
float shininess = 0.0;
unsigned int max;
aiGetMaterialFloatArray(mtl, AI_MATKEY_SHININESS, &shininess, &max);
aMat.shininess = shininess;
glGenBuffers(1, &(aMesh.uniformBlockIndex));
glBindBuffer(GL_UNIFORM_BUFFER, aMesh.uniformBlockIndex);
glBufferData(GL_UNIFORM_BUFFER, sizeof(aMat), (void *)(&aMat), GL_STATIC_DRAW);
myMeshes.push_back(aMesh);
}
}
Rendering model:
void Mesh::recursive_render(const aiScene *sc, const aiNode* nd){
// draw all meshes assigned to this node
for (unsigned int n = 0; n < nd->mNumMeshes; ++n){
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, myMeshes[nd->mMeshes[n]].texIndex);
// bind VAO
glBindVertexArray(myMeshes[nd->mMeshes[n]].vao);
// draw
glDrawElements(GL_TRIANGLES, myMeshes[nd->mMeshes[n]].numFaces * 3, GL_UNSIGNED_INT, 0);
}
// draw all children
for (unsigned int n = 0; n < nd->mNumChildren; ++n){
recursive_render(sc, nd->mChildren[n]);
}
}
Any other relevant code parts can be found in my open github project https://github.com/kwek20/StrategyGame/tree/master/Strategy
Mesh.cpp is relevant, as well as main.cpp and Camera.cpp.
As far as I understand, I followed the guidelines well: created a VAO, created VBOs, added the data and enabled the proper vertex array attributes to render the scene with.
I have checked all the data variables and everything is filled according to plan.
Could anyone here spot the mistake I have made and/or explain it?
Some links are typed strangely because of the link limit I have :(
It would help if you posted your shaders also.
I can post some rendering code with textures if that helps you out:
Generating the texture for OpenGL and loading a grayscale (UC8) image with the given width and height onto the GPU:
void GLRenderer::getTexture(unsigned char * image, int width, int height)
{
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &mTextureID);
glBindTexture(GL_TEXTURE_2D, mTextureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGB8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, image);
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
glBindTexture(GL_TEXTURE_2D, 0);
}
Loading the vertices from Assimp onto the GPU:
//** buffer a obj file-style model, initialize the VAO
void GLRenderer::bufferModel(float* aVertexArray, int aNumberOfVertices, float* aNormalArray, int aNumberOfNormals, float* aUVList, int aNumberOfUVs, unsigned int* aIndexList, int aNumberOfIndices)
{
//** just to be sure we are current
glfwMakeContextCurrent(mWin);
//** Buffer all data in VBOs
glGenBuffers(1, &mVertex_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mVertex_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfVertices * 3, aVertexArray, GL_STATIC_DRAW);
glGenBuffers(1, &mNormal_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mNormal_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfNormals * 3, aNormalArray, GL_STATIC_DRAW);
glGenBuffers(1, &mUV_buffer_object);
glBindBuffer(GL_ARRAY_BUFFER, mUV_buffer_object);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * aNumberOfUVs * 2, aUVList, GL_STATIC_DRAW);
glGenBuffers(1, &mIndex_buffer_object);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndex_buffer_object);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * aNumberOfIndices, aIndexList, GL_STATIC_DRAW);
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
//** VAO tells our shaders how to match up data from buffer to shader input variables
glGenVertexArrays(1, &mVertex_array_object);
glBindVertexArray(mVertex_array_object);
//** vertices first
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mVertex_buffer_object);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
//** normals next
if (aNumberOfNormals > 0){
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, mNormal_buffer_object);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);
}
//** UVs last
if (aNumberOfUVs > 0){
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, mUV_buffer_object);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, NULL);
}
//** indexing for reusing vertices in triangle-meshes
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndex_buffer_object);
//** check errors and store the number of vertices
if (aux::checkGlErrors(__LINE__, __FILE__))assert(false);
mNumVert = aNumberOfVertices;
mNumNormals = aNumberOfNormals;
mNumUVs = aNumberOfUVs;
mNumIndices = aNumberOfIndices;
}
The code above is called like:
//read vertices from file
std::vector<float> vertex, normal, uv;
std::vector<unsigned int> index;
//assimp-wrapping function to load obj to vectors
aux::loadObjToVectors("Resources\\vertices\\model.obj", vertex, normal, index, uv);
mPtr->bufferModel(&vertex[0], static_cast<int>(vertex.size()) / 3, &normal[0], static_cast<int>(normal.size()) / 3, &uv[0], static_cast<int>(uv.size()) / 2, &index[0], static_cast<int>(index.size()));
Then comes the shader part.
In the vertex shader you just pass the UV coordinates through:
#version 400 core
layout (location = 0) in vec3 vertexPosition_modelspace;
layout (location = 1) in vec3 vertexNormal_modelspace;
layout (location = 2) in vec2 vertexUV;
out vec2 UV;
[... in main then ...]
UV = vertexUV;
While in the fragment shader you assign the value to the pixel:
#version 400 core
in vec2 UV;
uniform sampler2D textureSampler;
layout(location = 0) out vec4 outColor;
[... in main then ...]
// you probably want to calculate lighting here too, so this is just the simplest way to get the texture in
outColor = vec4(texture2D(textureSampler, UV).rgb, cosAngle);
// you can also check whether the UV coords are correctly bound by using:
outColor = vec4(UV.x, UV.y, 1, 1);
// and then checking the pixel values in the resulting image (e.g. render it to a PBO and then download it onto the CPU for inspection)
In the rendering loop also make sure that all the uniforms are correctly bound (especially texture related ones) and that the texture is active and bound
if (mTextureID != -1) {
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTextureID);
}
GLint textureLocation = glGetUniformLocation(mShaderProgram, "textureSampler");
glUniform1i(textureLocation, 0);
//** set the polygon mode
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
//**drawElements because of indexing
glDrawElements(GL_TRIANGLES, mNumIndices, GL_UNSIGNED_INT, 0);
I hope I could help you!
Kind regards,
VdoP
I've been having problems storing texture coordinate points in a VBO and then telling OpenGL to use them when it's time to render. In the code below, what I should be getting is a nice 16x16 texture on a square I am making using quads. However, what I get instead is the top-left pixel of the image, which is red, so I get a big red square. Please tell me what I am doing wrong in great detail.
public void start() {
try {
Display.setDisplayMode(new DisplayMode(800,600));
Display.create();
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(0);
}
// init OpenGL
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 0, 600, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
//loadTextures();
TextureManager.init();
makeCube();
// init OpenGL here
while (!Display.isCloseRequested()) {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
// render OpenGL here
renderCube();
Display.update();
}
Display.destroy();
}
public static void main(String[] argv) {
Screen screen = new Screen();
screen.start();
}
int cube;
int texture;
private void makeCube() {
FloatBuffer cubeBuffer;
FloatBuffer textureBuffer;
//Tried using 0,0,16,0,16,16,0,16 for textureData did not work.
float[] textureData = new float[]{
0,0,
1,0,
1,1,
0,1};
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texture);
glBufferData(GL_ARRAY_BUFFER, textureBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
float[] cubeData = new float[]{
/*Front Face*/
100, 100,
100 + 200, 100,
100 + 200, 100 + 200,
100, 100 + 200};
cubeBuffer = BufferUtils.createFloatBuffer(cubeData.length);
cubeBuffer.put(cubeData);
cubeBuffer.flip();
cube = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, cube);
glBufferData(GL_ARRAY_BUFFER, cubeBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
private void renderCube(){
TextureManager.texture.bind();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, texture);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, cube);
glVertexPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I believe your problem is in the argument to textureBuffer.put() in this code fragment:
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture is a variable of type int, which has not even been initialized yet. You later use it as a buffer name. The argument should be textureData instead:
textureBuffer.put(textureData);
I normally try to focus on functionality over style when answering questions here, but I can't help it this time: IMHO, texture is a very unfortunate name for a buffer name. It's not only a style and readability question. If you used descriptive names for the variables, you most likely would have spotted this problem immediately.
Say you named the variable for the buffer name bufferId (I call object identifiers "id", even though the official OpenGL terminology is "name"), and the buffer holding the texture coordinates textureCoordBuf. The statement in question would then become:
textureCoordBuf.put(bufferId);
which would jump out as highly suspicious from even a very superficial look at the code.
It seems every time I try to get texturing working in my OpenGL apps I miss something obvious, and I end up spending many hours debugging. Well it happened again and even though I've been trying to compare my code with an older working app for quite some time, I'm still missing something because the texture just isn't displaying at all. The mesh shows up black. I'm not getting any OpenGL errors.
I would be very grateful for some help narrowing down the problem.
Here is part of my code, I'm pretty sure my mistake is in these areas of the code:
OpenGL.cpp:
bool OpenGL::enable_attribs_and_uniforms(GLuint program_id)
{
glGenVertexArrays(1, &this->vao_id);
glBindVertexArray(this->vao_id);
this->pos_a_loc = glGetAttribLocation(program_id, "pos_a");
this->uv_a_loc = glGetAttribLocation(program_id, "uv_a");
if (this->pos_a_loc == -1 || /*this->color_a_loc == -1 ||*/ this->uv_a_loc == -1)
{
std::cerr << "OpenGL Warning: Failed to get attribute locations, probably because at least one attribute isn't being used in the shader program. Attributes get optimized out if they don't affect the output of the shader program in any way.\n\n";
}
glEnableVertexAttribArray(this->pos_a_loc);
glEnableVertexAttribArray(this->uv_a_loc);
this->p_mat_u_loc = glGetUniformLocation(program_id, "p_mat_u");
this->v_mat_u_loc = glGetUniformLocation(program_id, "v_mat_u");
this->m_mat_u_loc = glGetUniformLocation(program_id, "m_mat_u");
this->mvp_mat_u_loc = glGetUniformLocation(program_id, "mvp_mat_u");
this->tex_samp_u_loc = glGetUniformLocation(program_id, "tex_samp_u");
if (this->p_mat_u_loc == -1 || this->v_mat_u_loc == -1 ||
this->m_mat_u_loc == -1 || this->mvp_mat_u_loc == -1 ||
this->tex_samp_u_loc == -1)
{
std::cerr << "OpenGL Warning: Failed to get all uniform locations, probably because at least one uniform isn't being used in the shader program. Uniforms get optimized out if they don't affect the output of the shader program in any way.\n\n";
}
check_gl_error();
return true;
}
void OpenGL::fill_buffers(std::vector<Object*>& objs)
{
std::vector<Object*>::iterator it;
for (it = objs.begin(); it != objs.end(); ++it)
{
GLuint vb_pos_id;
glGenBuffers(1, &vb_pos_id);
glBindBuffer(GL_ARRAY_BUFFER, vb_pos_id);
glBufferData(GL_ARRAY_BUFFER, (*it)->v_pos.size() * sizeof(glm::vec3), &(*it)->v_pos[0], GL_STATIC_DRAW);
this->vb_pos_ids.push_back(vb_pos_id);
(*it)->tex_filenames[0] = "some_texture.png";
SDL_Surface* tex_surface = IMG_Load((*it)->tex_filenames[0].c_str());
if (!tex_surface)
{
std::cerr << "Texture Error: Couldn't open texture file: " << (*it)->tex_filenames[0] << "\n\n";
return;
}
GLuint tex_samp_id;
glGenTextures(1, &tex_samp_id);
glBindTexture(GL_TEXTURE_2D, tex_samp_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_surface->w, tex_surface->h, 0,
GL_RGBA, GL_UNSIGNED_BYTE, tex_surface->pixels);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
this->tex_samp_ids.push_back(tex_samp_id);
GLuint vb_uv_id;
glGenBuffers(1, &vb_uv_id);
glBindBuffer(GL_ARRAY_BUFFER, vb_uv_id);
glBufferData(GL_ARRAY_BUFFER, (*it)->v_uv.size() * sizeof(glm::vec2), &(*it)->v_uv[0], GL_STATIC_DRAW);
this->vb_uv_ids.push_back(vb_uv_id);
}
check_gl_error();
return;
}
void OpenGL::draw(std::vector<Object*>& objs) // draws GL_TRIANGLES
{
std::vector<Object*>::iterator objs_it;
std::vector<GLuint>::iterator vb_pos_it;
std::vector<GLuint>::iterator vb_col_it;
std::vector<GLuint>::iterator vb_uv_it;
unsigned int i = 0;
for (objs_it = objs.begin(), vb_pos_it = this->vb_pos_ids.begin(),
vb_col_it = this->vb_col_ids.begin(), vb_uv_it = this->vb_uv_ids.begin();
objs_it != objs.end(); ++objs_it, ++vb_pos_it, ++vb_col_it, ++vb_uv_it, ++i)
{
(*objs_it)->m_mat = glm::rotate((*objs_it)->m_mat, 1.0f, glm::vec3(0.0f, 1.0f, 0.0f));
this->mvp_mat = this->p_mat * this->v_mat * (*objs_it)->m_mat;
// send attribs to vertex shader
glBindBuffer(GL_ARRAY_BUFFER, (*vb_pos_it));
glVertexAttribPointer(this->pos_a_loc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->tex_samp_ids[i]);
glBindBuffer(GL_ARRAY_BUFFER, (*vb_uv_it));
glVertexAttribPointer(this->uv_a_loc, 2, GL_FLOAT, GL_FALSE, 0, 0);
// send uniforms to vertex shader
glUniformMatrix4fv(this->p_mat_u_loc, 1, GL_FALSE, &this->p_mat[0][0]);
glUniformMatrix4fv(this->v_mat_u_loc, 1, GL_FALSE, &this->v_mat[0][0]);
glUniformMatrix4fv(this->m_mat_u_loc, 1, GL_FALSE, &(*objs_it)->m_mat[0][0]);
glUniformMatrix4fv(this->mvp_mat_u_loc, 1, GL_FALSE, &this->mvp_mat[0][0]);
glUniform1i(this->tex_samp_u_loc, 0);
// draw
glDrawArrays(GL_TRIANGLES, 0, (*objs_it)->v_pos.size());
check_gl_error();
glGetError(); // clear GL error before next iteration
}
return;
}
simple.vsh:
#version 330 core
in vec3 pos_a;
in vec2 uv_a;
uniform mat4 p_mat_u;
uniform mat4 v_mat_u;
uniform mat4 m_mat_u;
uniform mat4 mvp_mat_u;
out vec2 uv_v;
void main()
{
gl_Position = mvp_mat_u * vec4(pos_a, 1.0);
uv_v = uv_a;
}
simple.fsh:
#version 330 core
smooth in vec2 uv_v;
uniform sampler2D tex_samp_u;
out vec4 color_o;
void main()
{
color_o = texture2D(tex_samp_u, uv_v);
}
Well, it turned out the code wasn't the problem; I was exporting from Blender incorrectly. I guess the downvote was appropriate.
This code works to display the texture, but it shows up flipped the wrong way, so I'll have to flip it after loading the image, and I still have to figure out how to get Blender's Collada exporter to export with a -Z forward, Y up orientation... but that's not for this question.
I'm currently working on a procedural planet generation tool that works by taking a cube, mapping it to a sphere and then applying a heightmap to each face to generate terrain.
I'm using a VBO for each face which is created using the following method:
void Planet::setVertexBufferObject()
{
Vertex* vertices;
int currentVertex;
Vertex* vertex;
for(int i = 0; i < 6; i++)
{
// bottom face
if(i == 0)
{
glBindBuffer(GL_ARRAY_BUFFER, bottomVBO);
}
// top face
else if(i == 1)
{
glBindBuffer(GL_ARRAY_BUFFER, topVBO);
}
// front face
else if(i == 2)
{
glBindBuffer(GL_ARRAY_BUFFER, frontVBO);
}
// back face
else if(i == 3)
{
glBindBuffer(GL_ARRAY_BUFFER, backVBO);
}
// left face
else if(i == 4)
{
glBindBuffer(GL_ARRAY_BUFFER, leftVBO);
}
// right face
else
{
glBindBuffer(GL_ARRAY_BUFFER, rightVBO);
}
vertices = (Vertex*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
currentVertex = 0;
for(int x = 0; x < size; x++)
{
for(int z = 0; z < size; z++)
{
currentVertex = z * size + x;
vertex = &vertices[currentVertex];
vertex->xTextureCoord = (x * 1.0f) / 512.0f;
vertex->zTextureCoord = (z * 1.0f) / 512.0f;
Vector3 normal;
vertex->xNormal = normal.x;
vertex->yNormal = normal.y;
vertex->zNormal = normal.z;
vertex->x = heightMapCubeFace[i][x][z][0];
vertex->y = heightMapCubeFace[i][x][z][1];
vertex->z = heightMapCubeFace[i][x][z][2];
vertex->x *= (1.0f +((heightMaps[i][z][x]/256.0f) * 0.1));
vertex->y *= (1.0f +((heightMaps[i][z][x]/256.0f) * 0.1));
vertex->z *= (1.0f +((heightMaps[i][z][x]/256.0f) * 0.1));
}
}
glUnmapBuffer(GL_ARRAY_BUFFER);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
}
I have left out the setIndexBufferObject() method as that is working OK.
I am then rendering the sphere using this method:
void Planet::render()
{
// bottom face
glBindBuffer(GL_ARRAY_BUFFER, bottomVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bottomIBO);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(6 * sizeof(float)));
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(3 * sizeof(float)));
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0));
TextureManager::Inst()->BindTexture(textueIDs[0]);
glClientActiveTexture(GL_TEXTURE0+textueIDs[0]);
glDrawElements(GL_TRIANGLE_STRIP, numberOfIndices, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
// next face and so on...
The textures are being loaded using FreeImage; as you can see from the above code, I am just using the example texture manager that came with FreeImage.
Why is binding the texture not working?
Why is binding the texture not working?
Describing very specifically the behavior you're seeing, or better, attaching a URL to a screenshot, goes a long way when trying to get through graphics questions.
Next, you need to include the GL error state, or at least demonstrate within the code that you're checking it and failing if glGetError() doesn't return GL_NO_ERROR.
Your VBO / draw element state looks reasonable, though you've deliberately omitted the definition of the index buffer object, so we'll take it on faith that you're sending real indices into your glDrawElements() call. For a sanity check, do the glDrawElements() call with the raw indices pointer rather than binding a VBO for the elements.
You've also omitted the type definition of Vertex, which is required to know whether the offsets you're supplying to glTexCoordPointer() are consistent with the struct definition.
Lastly, I'm guessing most people on the forum don't know FreeImage, which I believe is what you've stated you're using to load textures. If there's a texturing problem, it's impossible to see because of the opaque nature of using this third-party library to set up texturing on your behalf.
If it has confused texture IDs, set up a wrap mode that isn't supported, not set the minification filter consistent with the expected mipmapping (on/off), not enabled texturing, or set the texture environment mode such that the modulation with the base geometry color is not what you expect, any of these would make texturing not work or appear not to be working.
To troubleshoot, just use the library for the texture load and use the texture ID it provides. Then set up the filter modes and texture environment yourself. Turn off mipmapping while troubleshooting; incomplete mipmap chains are a very common error when texturing.
You can also turn off per-fragment operations to simplify your troubleshooting. Disable blending, depth testing, and scissoring.
Assuming your texture has power-of-two dimensions or your implementation supports NPOT textures, try these settings immediately before drawing:
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glDisable(GL_SCISSOR_TEST);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBindTexture(GL_TEXTURE_2D, texID);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // turn off modulation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // turn off mipmapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // turn off repeats
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glClientActiveTexture(GL_TEXTURE0+textueIDs[0]);
What are you trying to do here?
The active texture unit has nothing to do with a random texture object.
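For reference, a minimal sketch of how the two calls are meant to be used in this fixed-function path (texID stands for whatever id your texture manager returned; it is not added to GL_TEXTURE0):
glActiveTexture(GL_TEXTURE0);             // select texture unit 0 for texture state
glBindTexture(GL_TEXTURE_2D, texID);      // bind the texture object to that unit
glEnable(GL_TEXTURE_2D);

glClientActiveTexture(GL_TEXTURE0);       // select unit 0 for the texcoord array as well
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(6 * sizeof(float)));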