I'm very new to OpenGL and I am doing a mini project where I experiment with the depth buffer. I got to the stage of displaying it on the screen. However, I want to draw it using screen coordinates instead of converting to floats. I read somewhere that I need to use a projection matrix. I have looked for ages and tested loads of different options, but I can't seem to get it right.
Can anyone point me to a useful resource or explain how I would go about doing this?
EDIT
At the moment my matrix looks like this:
projectionMat = glm::ortho(0.0f, (float)_cols, 0.0f, (float)_rows, 0.0f, (float)_maxDepthVal);
projection = glGetUniformLocation(_program, "Projection");
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 2
With some fiddling I found that cols had to be negative, for some strange reason, before it would display. It will now display correctly on the screen, but for some reason it has a gap around the sides opposite the origin. Why is this? Even a small move in the camera position and target causes all of it to vanish, so I don't think that would be the problem.
Pixel Art Representation!!
OOOO!!
OOOO!!
OOOO!!
!!!!!!!!!!!!!!
New code
glm::mat4 Projection = glm::ortho(0.0f, -static_cast<float>(_cols), 0.0f, static_cast<float>(_rows), 0.0f, static_cast<float>(_maxDepthVal));
projection = glGetUniformLocation(_program, "Projection");
glm::mat4 View = glm::lookAt(
glm::vec3(0.0f, 0.0f, -0.1f),
glm::vec3(0.0f , 0.0f, 0.0f), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
projectionMat = Projection * View * Model;
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 3
I can translate it using the Model matrix, but it has a gap of 5 pixels around it that I can't get rid of. Any help on that would be appreciated, and thanks for taking an interest.
UPDATE
As per request, my draw code:
glUseProgram(_program);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
SDL_GL_SwapWindow(_window);
glPointSize(1);
glEnableVertexAttribArray(0);
//Insert matrix here
glVertexAttribPointer(0, 3, GL_UNSIGNED_INT, GL_FALSE, 0, 0);
glDrawArrays(GL_POINTS, 0, _dataCount);
glDisableVertexAttribArray(0);
my vbo:
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _dataCount * 4 * sizeof(unsigned int), NULL, GL_STATIC_DRAW);
if(_vbo == 0 || glGetError() != GL_NO_ERROR)
{
_errorMessage = "VBO COULD NOT BE CREATED";
error();
}
checkCudaErrors(cudaGraphicsGLRegisterBuffer(&vbo, _vbo, cudaGraphicsMapFlagsNone));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);
I'm also having issues with the write: when the data is converted to floats (for drawing) it loses precision, so if I read a value back out it comes back rounded to values like 0, 256, 512, etc. Is there another way to do it that stores the data as unsigned ints? (I realize this is getting slightly off topic, but any help would be appreciated.)
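If the goal is to keep the raw unsigned integer values instead of letting glVertexAttribPointer convert them to floats, one option (a sketch, not tested against your setup) is the integer attribute path: glVertexAttribIPointer on the C++ side, together with a uvec3 input in the vertex shader that you convert to float yourself when building gl_Position. Reusing the attribute index and buffer from the draw code above:
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableVertexAttribArray(0);
// the 'I' variant has no normalize parameter: values stay integers all the way to the shader
glVertexAttribIPointer(0, 3, GL_UNSIGNED_INT, 0, (void*)0);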
The issue appeared to be with the cols variable: it needed to be inverted to work, otherwise everything was off the screen.
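For what it's worth, the negation is likely needed because that view matrix mirrors the x axis: with the eye at (0, 0, -0.1) looking towards the origin and up = (0, 1, 0), the camera's screen-right axis ends up pointing along world -x. A sketch (untested) that keeps the ortho bounds positive is to place the eye on the +z side instead, looking back towards the origin:
glm::mat4 Projection = glm::ortho(0.0f, static_cast<float>(_cols), 0.0f, static_cast<float>(_rows), 0.0f, static_cast<float>(_maxDepthVal) + 2.0f);
glm::mat4 View = glm::lookAt(
glm::vec3(0.0f, 0.0f, static_cast<float>(_maxDepthVal) + 1.0f), // eye just beyond the deepest value, on the +z side
glm::vec3(0.0f, 0.0f, 0.0f), // looking back towards the origin
glm::vec3(0.0f, 1.0f, 0.0f));
projectionMat = Projection * View;
Note this flips which depth values end up nearer the camera, which should not matter while glDepthFunc(GL_ALWAYS) is in use.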
Related
I'm trying to visualize normals of triangles.
I have created a triangle to use as the visual representation of the normal but I'm having trouble aligning it to the normal.
I have tried using glm::lookAt but the triangle ends up in some weird position and rotation after that. I am able to move the triangle in the right place with glm::translate though.
Here is my code to create the triangle which is used for the visualization:
// xyz rgb
float vertex_data[] =
{
0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, 0.025f, 0.0f, 1.0f, 1.0f,
0.25f, 0.0f, -0.025f, 0.0f, 1.0f, 1.0f,
};
unsigned int index_data[] = {0, 1, 2};
glGenVertexArrays(1, &nrmGizmoVAO);
glGenBuffers(1, &nrmGizmoVBO);
glGenBuffers(1, &nrmGizmoEBO);
glBindVertexArray(nrmGizmoVAO);
glBindBuffer(GL_ARRAY_BUFFER, nrmGizmoVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertex_data), vertex_data, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, nrmGizmoEBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(index_data), index_data, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glBindVertexArray(0);
and here is the code to draw the visualizations:
for(unsigned int i = 0; i < worldTriangles->size(); i++)
{
Triangle *tri = &worldTriangles->at(i);
glm::vec3 wp = tri->worldPosition;
glm::vec3 nrm = tri->normal;
nrmGizmoMatrix = glm::mat4(1.0f);
//nrmGizmoMatrix = glm::translate(nrmGizmoMatrix, wp);
nrmGizmoMatrix = glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f));
gizmoShader.setMatrix(projectionMatrix, viewMatrix, nrmGizmoMatrix);
glBindVertexArray(nrmGizmoVAO);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
}
When using only glm::translate, the triangles appear in right positions but all point in the same direction. How can I rotate them so that they point in the direction of the normal vector?
Your code doesn't work because lookAt is intended to be used as the view matrix, so it returns the transform from world space to local (camera) space. In your case you want the reverse: from local (triangle) space to world space. Taking the inverse of lookAt should solve that.
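Mechanically, that one-line change in your loop would look like this (a sketch of the inverse fix only; the orientation caveats discussed below still apply):
nrmGizmoMatrix = glm::inverse(glm::lookAt(wp, wp + nrm, glm::vec3(0.0f, 1.0f, 0.0f)));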
However, I'd take a step back and look at (haha) the bigger picture. What I notice about your approach:
It's very inefficient -- you issue a separate call with a different model matrix for every single normal.
You don't even need the entire model matrix. A triangle is a 2-d shape, so all you need is two basis vectors.
I'd instead generate all the vertices for the normals in a single array, and then use glDrawArrays to draw that. For the actual calculation, observe that we have one degree of freedom when it comes to aligning the triangle along the normal. Your lookAt code resolves that DoF rather arbitrarily. A better way to resolve it is to constrain it by requiring that the triangle face towards the camera, thus maximizing the visible area. The calculation is straightforward:
// inputs: vertices output array, normal position, normal direction, camera position
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n, const vec3 &c) {
static const float length = 0.25f, width = 0.025f;
vec3 t = normalize(cross(n, c - p)); // tangent
v.push_back(p);
v.push_back(p + length*n + width*t);
v.push_back(p + length*n - width*t);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
Triangle *tri = &worldTriangles->at(i);
emit_normal(normals, tri->worldPosition, tri->normal, camera_position);
}
// ... create VAO for normals ...
glDrawArrays(GL_TRIANGLES, 0, normals.size());
Note, however, that this would make the normal mesh camera-dependent -- which is desirable when rendering normals with triangles. Most CAD software draws normals with lines instead, which is much simpler and avoids many problems:
void emit_normal(std::vector<vec3> &v, const vec3 &p, const vec3 &n) {
static const float length = 0.25f;
v.push_back(p);
v.push_back(p + length*n);
}
// ... in your code, generate normals through:
std::vector<vec3> normals;
for(unsigned int i = 0; i < worldTriangles->size(); i++) {
Triangle *tri = &worldTriangles->at(i);
emit_normal(normals, tri->worldPosition, tri->normal);
}
// ... create VAO for normals ...
glDrawArrays(GL_LINES, 0, normals.size());
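For completeness, the elided VAO setup for the normals could look something like this (a sketch; the names normalsVAO and normalsVBO are made up, and the buffer needs re-uploading whenever the normals are regenerated):
GLuint normalsVAO = 0, normalsVBO = 0;
glGenVertexArrays(1, &normalsVAO);
glGenBuffers(1, &normalsVBO);
glBindVertexArray(normalsVAO);
glBindBuffer(GL_ARRAY_BUFFER, normalsVBO);
glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(glm::vec3), normals.data(), GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
// when drawing:
glBindVertexArray(normalsVAO);
glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(normals.size()));
glBindVertexArray(0);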
I’m trying to do an ortho projection onto a plane, which represents a map – think “floor plan”. I’m running into trouble because OpenGL 4 is new to me (I last used 1.1, and the world has changed) and because what I’m trying to do isn’t much like common examples online. My problem is scaling and translating.
The data that describes the map is a series of lines whose endpoints are in what I’ll call “dungeon coordinate units”. When I render the image I want a fixed rule of “1 unit is 1 pixel”.
My coordinates are all in the first quadrant, with (0,0) representing the lower left of the map. I’d like (0,0) to show up in the lower left of the screen.
Now for the tricky bits. When I render the “floor” in the fragment shader, I’m being handed gl_FragCoord, which is ideal. It’s effectively a pixel location, which means for my purposes it is equivalent to a dungeon coordinate. I can look up all the information I passed to the shader (also in dungeon coordinates) and figure out how to paint (or discard) that pixel. It works, except… it draws (0,0) in the center of the screen, not the lower left.
Worse, there are some things, like lines (“walls”), that I render with skinny triangles in dungeon coordinates in a second pass. They don’t show up where I want them. (In fact I’m pretty sure that the triangles I’m using to tile the floor are also wrong and are only covering the screen by coincidence.)
I really, really need OpenGL to use a coordinate system that puts (0,0) at the lower left of the image and lets me specify triangle vertices in my units, which happen to map straight to pixels.
This seems like a simple case of scaling and translating. But I’m obviously applying the scale and translate incorrectly.
The vertex code is simple:
#version 430
layout (location = 0) in vec3 Position;
uniform mat4 gWorld;
out vec4 Color; //unused; the fragment shader calculates all colors
void main()
{
gl_Position = gWorld * vec4(Position, 1.0);
}
Building the 2 triangles for the map floor (a simple rectangle for now) seems simple:
Vector3f Vertices[4];
Vertices[0] = Vector3f(0.f, 0.f, 0.0f);
Vertices[1] = Vector3f(0.f, mapEdges.maxs.y, 0.0f);
Vertices[2] = Vector3f(mapEdges.maxs.x, 0.f, 0.0f);
Vertices[3] = Vector3f(mapEdges.maxs.x, mapEdges.maxs.y, 0.0f);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
unsigned int Indices[] = { 0, 1, 2,
1, 2, 3 };
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
and I use an indexed draw for them.
The C++ code (using glm) sets up the world matrix:
glUseProgram(ShaderProgram); //this selects the shader
gWorldLocation = glGetUniformLocation(ShaderProgram, "gWorld");
assert(gWorldLocation != 0xFFFFFFFF);
...and when rendering…
//try to fix openGL’s desire to think my buffer is -1 to 1 across
float scale = 1/1024.f; //test map is about 1024 units across
glm::mat4 sm = glm::scale(
glm::mat4( 1.0f ),
glm::vec3( scale, scale, 1.0f )
);
glm::mat4 ts = glm::translate(
sm,
glm::vec3( -512.0f, -512.0f, 0.0f ) //shove left and down
);
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);
Since my test map is about 1024 units across, I’d have thought this would have shoved things into position. But no. The floor (which, remember, is using gl_FragCoord to decide where and what to draw) is painted from screen center and up and right, though it otherwise looks as I’d expect. The walls, which are painted by skinny triangles in dungeon coordinates, are nowhere to be seen, probably scaled off into the aether somewhere.
Basically I’m not convincing OpenGL that I want x=0 to be the left edge of the image, and my scaling is obviously completely wrong. Sadly I had one version that (incorrectly) drew some walls on the screen at one point, but I don’t have that code anymore. Still, it tells me that I’m not completely off in generating the walls, just in laying them down.
How do I get OpenGL to use my units?
You transpose the matrix when you set the matrix uniform. Since the vector is multiplied by the matrix from the right in your shader program, this is wrong. See GLSL Programming/Vector and Matrix Operations.
Change
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);
to
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, &ts[0][0]);
Instead of scaling and translating the vertices, you can set an orthographic projection matrix with glm::ortho:
glm::mat4 projection = glm::ortho(0.0f, 1024.0f, 0.0f, 1024.0f, -1.0f, 1.0f);
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, glm::value_ptr(projection));
Hi, I am trying to display two objects using OpenGL, viz., 1) a rotating cube with a mix of two textures (a wooden crate pattern and a smiley) in the foreground and 2) a rectangular plate with just one texture (dark grey wood) as a background. When I comment out the part of the code governing the display of the rectangular plate, the rotating cube displays both textures (wooden crate and smiley). Otherwise, the cube displays only the wooden crate texture while the dark grey wood texture is displayed on the rectangular plate, i.e. the smiley texture disappears from the rotating cube. Please find the images 1) http://oi68.tinypic.com/2la4r3c.jpg (with the rectangular plate portion of the code commented out) and 2) http://i67.tinypic.com/9u9rpf.jpg (without the rectangular plate portion of the code commented out). The relevant portion of the code is pasted below.
// Rotating Cube ===================================================
// Texture of wooden crate
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture1"), 0);
// Texture of a smiley
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture2"), 1);
// lets use the box shader for the cube
ourShader_box.Use();
// transformations for the rotating cube ---------------------------------
glm::mat4 model_box, model1, model2;
glm::mat4 view_box;
glm::mat4 perspective;
perspective = glm::perspective(45.0f, (GLfloat)width_screen/(GLfloat)height_screen, 0.1f, 200.0f);
model1 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.5f, 1.0f, 0.0f));
model2 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.0f, 1.0f, 0.5f));
model_box = model1 * model2;
view_box= glm::translate(view_box, glm::vec3(1.0f, 0.0f, -3.0f));
GLint modelLoc_box = glGetUniformLocation(ourShader_box.Program, "model");
GLint viewLoc_box = glGetUniformLocation(ourShader_box.Program, "view");
GLint projLoc_box = glGetUniformLocation(ourShader_box.Program, "perspective");
glUniformMatrix4fv(modelLoc_box, 1, GL_FALSE, glm::value_ptr(model_box));
glUniformMatrix4fv(viewLoc_box, 1, GL_FALSE, glm::value_ptr(view_box));
glUniformMatrix4fv(projLoc_box, 1, GL_FALSE, glm::value_ptr(perspective));
// --------------------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_box);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
// Rectangular Plate =====================================================
// Background Shader
ourShader_bg.Use();
// Texture of dark grey wood
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texture_wood);
glUniform1i(glGetUniformLocation(ourShader_bg.Program, "ourTexture3"), 2);
// Transformations -------------------------------------------
glm::mat4 model_bg;
glm::mat4 view_bg;
GLint modelLoc_bg = glGetUniformLocation(ourShader_bg.Program, "model");
GLint viewLoc_bg= glGetUniformLocation(ourShader_bg.Program, "view");
GLint projLoc_bg = glGetUniformLocation(ourShader_bg.Program, "perspective");
glUniformMatrix4fv(modelLoc_bg, 1, GL_FALSE, glm::value_ptr(model_bg));
glUniformMatrix4fv(viewLoc_bg, 1, GL_FALSE, glm::value_ptr(view_bg));
glUniformMatrix4fv(projLoc_bg, 1, GL_FALSE, glm::value_ptr(perspective));
// -----------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_bg);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
// =================================================================
I have two questions regarding this code.
Why is the smiley disappearing?
Is this how multiple objects are supposed to be rendered? I know OpenGL does not care about objects, only about vertices, but in this case these are separate, disjoint objects. So, should I be organizing them as two VBOs bound to a single VAO, or as separate VBOs each bound to its own VAO? Or is either way fine, depending on the coder's choice and the elegance of the code?
You are using the same shader, the same matrices, and the same geometry type (triangles) for the two objects, so why set the shader twice?
Did you try to:
Set shader
Bind buffer #1
Bind texture #1
Draw object #1
Bind buffer #2
Bind texture #2
Draw object #2
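In code, that ordering might look roughly like this (a sketch reusing the names from the question, assuming one shader can draw both objects; matrix and sampler uniforms are omitted and would be set as in your code before the corresponding draw call):
ourShader_box.Use(); // set the shader once
// object #1: the rotating cube
glBindVertexArray(VAO_box); // bind buffer #1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1); // bind texture #1 (texture2 goes on unit 1 as before)
glDrawArrays(GL_TRIANGLES, 0, 36); // draw object #1
// object #2: the background plate
glBindVertexArray(VAO_bg); // bind buffer #2
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture_wood); // bind texture #2
glDrawArrays(GL_TRIANGLES, 0, 6); // draw object #2
glBindVertexArray(0);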
I am trying to do some simple graphics processing with OpenGL, but I am having trouble with a scene of two objects where one of them is static and the other moves. The objects are a simple cube and a square that represents the floor. I want the cube to move down until it touches the floor (as if it were falling). I can render the falling cube on its own, and I can get the floor on its own. But when I want to have them both in the same scene I run into issues, as either they both fall down (the cube's behaviour) or they both stay in the same place (the floor's behaviour). Which of these two options occurs depends on whether I push and pop my model matrix: when I do push and pop, they stay static; when I don't, they fall down (I guess this makes sense, as I draw the floor and then the cube).
This is my code in the draw phase of the program:
//Clear the screen to the colour specified earlier, as well as the depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(programID); // Use our shader
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
//RENDER THE FLOOR
pushMat(Model); //PUSH - WHEN ACTIVE, CUBE AND FLOOR FALL. WHEN COMMENTED OUT, BOTH FLOOR AND CUBE ARE STATIC
MVP = Projection * View * Model; //These are all matrices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, floorVertexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
//FLOOR COLOURS
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer2);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 2*3); //2 triangles, of 3 vertices each
Model = popMat(); //POP - WHEN ACTIVE, CUBE AND FLOOR FALL. WHEN COMMENTED OUT, BOTH FLOOR AND CUBE ARE STATIC
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
//RENDER THE CUBE
pushMat(Model); //PUSH - WHEN ACTIVE, CUBE AND FLOOR FALL. WHEN COMMENTED OUT, BOTH FLOOR AND CUBE ARE STATIC
Model = translate(Model, vec3(0.0f, deltaY, 0.0f)); //deltaY is the change in the y position of the cube, it is calculated earlier in this draw loop
MVP = Projection * View * Model;
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, cubeVertexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
//CUBE COLOURS
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, colorbuffer);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
//DRAW CUBE
glDrawArrays(GL_TRIANGLES, 0, 12 * 3); //12 triangles, of 3 vertices each
Model = popMat(); //FINAL POP - WHEN ACTIVE, CUBE AND FLOOR FALL. WHEN COMMENTED OUT, BOTH FLOOR AND CUBE ARE STATIC
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
Code for my push and pop functions:
stack<mat4> modelViewStack; //This is initialised with the identity matrix in the main function
void pushMat(mat4 m)
{
modelViewStack.push(m);
}
mat4 popMat()
{
mat4 temp = modelViewStack.top();
modelViewStack.pop();
return temp;
}
Any clues as to how I get it so the floor stays in one place and the cube moves down? I'm happy to help explain any code, provide more of my code, or answer any questions in general. Thanks for any help.
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
This call needs to appear before each call to glDrawArrays. Right now, it's only being called once, before you render everything, which means both objects are receiving the same MVP matrix.
Also, I would reconsider implementing this logic using a matrix stack. That was how it worked in legacy OpenGL (because everything depended on global state, among other reasons), but it is not obviously the best solution today, when we can simply associate matrices with individual objects and bind them as needed.
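Concretely, a sketch of the relevant part of the draw phase with the MVP uniform re-uploaded per object (reusing the names from your code; buffer binding and attribute setup are omitted, and a hypothetical cubeModel variable stands in for the push/pop):
// floor: its model matrix stays as-is
MVP = Projection * View * Model;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 2 * 3);
// cube: build its own model matrix, then upload its own MVP before drawing
mat4 cubeModel = translate(Model, vec3(0.0f, deltaY, 0.0f));
MVP = Projection * View * cubeModel;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 12 * 3);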
I have been looking at this problem for days and I can't figure out why my model takes my skybox's texture. It has just been bugging me forever. Here is what it looks like: this is what my model looks like after the skybox is loaded in, compared with what my model looks like before the skybox is loaded into the scene.
core.cpp main loop
while (!m_Window.closed())
{
m_Window.varUpdate();
m_Window.Do_Movement();
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
m_Window.clear();
m_ModelShader.enable();
glm::mat4 model;
model = glm::scale(model, glm::vec3(0.2f));
//model = glm::translate(model, glm::vec3(5.0f));
glm::mat4 view = m_Window.m_Camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(m_Window.m_Camera.Zoom, (float)m_Window.getWindowX() / (float)m_Window.getWindowY(), 0.1f, 100.0f);
glUniformMatrix4fv(glGetUniformLocation(m_ModelShader.m_ProgramID, "model"), 1, GL_FALSE, glm::value_ptr(model));
glUniformMatrix4fv(glGetUniformLocation(m_ModelShader.m_ProgramID, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(m_ModelShader.m_ProgramID, "projection"), 1, GL_FALSE, glm::value_ptr(projection));
glUniform3f(glGetUniformLocation(m_ModelShader.m_ProgramID, "cameraPos"), m_Window.m_Camera.Position.x, m_Window.m_Camera.Position.y, m_Window.m_Camera.Position.z);
m_Model.Draw(m_ModelShader);
glDepthFunc(GL_LEQUAL);
m_SkyboxShader.enable();
glm::mat4 projectionS = glm::perspective(m_Window.m_Camera.Zoom, (float)m_Window.getWindowX() / (float)m_Window.getWindowY(), 0.1f, 100.0f);
glm::mat4 viewS = m_Window.m_Camera.GetViewMatrix(); //This is usually set to glm::mat4(glm::mat3(m_Window.m_Camera.GetViewMatrix())); to center on the camera.
glUniformMatrix4fv(glGetUniformLocation(m_SkyboxShader.m_ProgramID, "projection"), 1, GL_FALSE, glm::value_ptr(projectionS));
glUniformMatrix4fv(glGetUniformLocation(m_SkyboxShader.m_ProgramID, "view"), 1, GL_FALSE, glm::value_ptr(viewS));
glBindVertexArray(sVAO);
glActiveTexture(GL_TEXTURE0);
glUniform1i(glGetUniformLocation(m_SkyboxShader.m_ProgramID, "skybox"), 0);
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTex);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
glDepthFunc(GL_LESS);
m_Window.update();
}
OpenGL is a state machine, i.e. it remembers everything you do and keeps using the very last configuration you set. When you draw your model (m_Model.Draw) OpenGL still has the skybox texture bound and active from the previous drawing iteration… and hence applies it. It's good practice to either
clean up OpenGL state at the end of rendering a frame
clean up OpenGL state at the beginning of rendering a frame
clean up OpenGL state set for a particular drawing batch right after the batch
or
set/unset all OpenGL state to what's required for the next drawing batch right before drawing that particular batch.
In your case I suggest you unbind the texture after drawing the skybox.
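For example, in the loop above that could be as simple as unbinding the cube map right after the skybox draw call (a sketch of that one change):
glBindTexture(GL_TEXTURE_CUBE_MAP, skyboxTex);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindTexture(GL_TEXTURE_CUBE_MAP, 0); // unbind so the next iteration's model draw doesn't inherit it
glBindVertexArray(0);
glDepthFunc(GL_LESS);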