I have a cube that I am loading from an OBJ file. When I set its position to (0, 0, 0), everything works fine: the cube renders, and my function that gives it a velocity moves the cube across the screen. However, if I change the position of the cube to something other than (0, 0, 0) before entering the while loop where I render and calculate velocity changes, the cube never renders. This is the first time I have tried to reload my vertices every time I render a frame, and I assume I messed something up there, but I've looked over other code and can't figure out what.
Here is my main function:
int main()
{
#ifdef TESTING
    testing();
    exit(0);
#endif

    setupAndInitializeWindow(768, 480, "Final Project");

    TriangleTriangleCollision collisionDetector;

    Asset cube1("cube.obj", "vertexShader.txt", "fragmentShader.txt");
    cube1.position = glm::vec3(0.0, 2.0, 0.0);
    cube1.velocity = glm::vec3(0.0, -0.004, 0.0);

    MVP = projection * view * model;

    do{
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        moveAsset(cube1);
        renderAsset(cube1);

        glfwSwapBuffers(window);
        glfwPollEvents();
    } while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS &&
             glfwWindowShouldClose(window) == 0);

    glfwTerminate();
    return 0;
}
my moveAsset function:
void moveAsset(Asset &asset)
{
    double currentTime = glfwGetTime();

    asset.position.x += (asset.velocity.x * (currentTime - asset.lastTime));
    asset.position.y += (asset.velocity.y * (currentTime - asset.lastTime));
    asset.position.z += (asset.velocity.z * (currentTime - asset.lastTime));

    for (glm::vec3 &vertex : asset.vertices)
    {
        glm::vec4 transformedVector = glm::translate(glm::mat4(1.0f), asset.position) * glm::vec4(vertex.x, vertex.y, vertex.z, 1);
        vertex = glm::vec3(transformedVector.x, transformedVector.y, transformedVector.z);
    }

    asset.lastTime = glfwGetTime();
}
void renderAsset(Asset asset)
{
    glUseProgram(asset.programID);

    GLuint MatrixID = glGetUniformLocation(asset.programID, "MVP");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, asset.vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, asset.vertices.size() * sizeof(glm::vec3), &asset.vertices[0], GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

    glDrawArrays(GL_TRIANGLES, 0, asset.vertices.size());

    glDisableVertexAttribArray(0);
}
my model, view and projection matrices are defined as:
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = glm::lookAt(glm::vec3(5, 5, 10),
glm::vec3(0, 0, 0),
glm::vec3(0, 1, 0));
glm::mat4 projection = glm::perspective(45.0f, (float) _windowWidth / _windowHeight, 0.1f, 100.0f);
and finally, my Asset struct:
struct Asset
{
    Asset() { }

    Asset(std::string assetOBJFile, std::string vertexShader, std::string fragmentShader)
    {
        glGenVertexArrays(1, &vertexArrayID);
        glBindVertexArray(vertexArrayID);

        programID = LoadShaders(vertexShader.c_str(), fragmentShader.c_str());

        // Read our .obj file
        std::vector<glm::vec2> uvs;
        std::vector<glm::vec3> normals;
        loadOBJ(assetOBJFile.c_str(), vertices, uvs, normals);

        // Load it into a VBO
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), &vertices[0], GL_STATIC_DRAW);

        //velocity = glm::vec3(0.0, 1.0, 1.0);
        velocity = glm::vec3(0.0, 0.0, 0.0);
        position = glm::vec3(0.0, 0.0, 0.0);
        lastTime = glfwGetTime();
    }

    GLuint vertexArrayID;
    GLuint programID;
    GLuint vertexbuffer;
    std::vector<glm::vec3> faces;
    std::vector<glm::vec3> vertices;
    glm::vec3 velocity;
    double lastTime;
    glm::vec3 position;
};
It looks like you're adding the current asset.position to your vertex positions on every iteration, replacing the previous positions. From the moveAsset() function:
for (glm::vec3 &vertex : asset.vertices)
{
glm::vec4 transformedVector = glm::translate(glm::mat4(1.0f), asset.position) *
glm::vec4(vertex.x, vertex.y, vertex.z, 1);
vertex = glm::vec3(transformedVector.x, transformedVector.y, transformedVector.z);
}
Neglecting the velocity for a moment, and assuming that you have an original vertex at (0, 0, 0), you would move it to asset.position on the first iteration. Then add asset.position again on the second iteration, which places it at 2 * asset.position. Then on the third iteration, add asset.position to this current position again, resulting in 3 * asset.position. So after n steps, the vertices will be around n * asset.position. Even if your object might be visible initially, it would move out of the visible range before you can blink.
To get your original strategy working, the most straightforward approach is to have two lists of vertices. One list contains your original object coordinates, which you never change. Then before you draw, you build a second list of vertices, calculated as the sum of the original vertices plus the current asset.position, and use that second list for rendering.
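For illustration, a minimal sketch of that two-list idea (this assumes adding a second std::vector<glm::vec3> member to Asset, here called transformedVertices, which is not in the original struct):

void moveAsset(Asset &asset)
{
    double currentTime = glfwGetTime();
    asset.position += asset.velocity * float(currentTime - asset.lastTime);
    asset.lastTime = currentTime;

    // Rebuild the render list from the untouched originals every frame.
    asset.transformedVertices.clear();
    for (const glm::vec3 &vertex : asset.vertices)
        asset.transformedVertices.push_back(vertex + asset.position);
}

renderAsset() would then upload and draw asset.transformedVertices instead of asset.vertices.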
The whole thing is... not very OpenGL. There's really no need to modify the vertex coordinates on the CPU. You can make the translation part of the transformation applied in your vertex shader. You already have a model matrix in place. You can simply put the translation by asset.position into the model matrix, and recalculate the MVP matrix. You already have the glUniformMatrix4fv() call to pass the new matrix to the shader program in your renderAsset() function.
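For comparison, a sketch of the matrix-based version (not the original code; it assumes the vertex buffer was filled once in the Asset constructor, as it already is, so the per-frame glBufferData call can be dropped, and it reuses the global projection and view matrices from the question):

void renderAsset(const Asset &asset)
{
    glUseProgram(asset.programID);

    // Fold the object's position into the model matrix and rebuild MVP every frame.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), asset.position);
    glm::mat4 mvp   = projection * view * model;

    GLuint MatrixID = glGetUniformLocation(asset.programID, "MVP");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &mvp[0][0]);

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, asset.vertexbuffer);   // data uploaded once at load time
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, asset.vertices.size());
    glDisableVertexAttribArray(0);
}

moveAsset() then only has to update asset.position and asset.lastTime; the per-vertex loop goes away entirely.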
Related
I'm brand new to OpenGL and am having some difficulty rendering multiple objects.
I have a vector of objects, each of which has its own VertexBuffer. Then, in the while loop, I draw each shape on its own.
It's all well and good when I have many of the same object (multiple cubes, etc.); however, when I add a triangle mesh, everything gets out of whack.
I can have many cubes
I can have a single triangle mesh:
But, when I try to have a cube and then a triangle mesh I get:
I'm totally at a loss for what's going on. The code for my loop is provided below.
while (!glfwWindowShouldClose(window))
{
// Get the size of the window
int width, height;
glfwGetWindowSize(window, &width, &height);
float aspect_ratio = 1 * float(height)/float(width); // corresponds to the necessary width scaling
double xpos, ypos;
glfwGetCursorPos(window, &xpos, &ypos);
// Clear the framebuffer
glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Enable depth test
glEnable(GL_DEPTH_TEST);
glUniform3f(program.uniform("triangleColor"), 1.0f, 1.0f, 1.0f);
glUniformMatrix4fv(program.uniform("proj"), 1, GL_FALSE, projection.data());
glUniformMatrix4fv(program.uniform("view"), 1, GL_FALSE, view.data());
int tally = 0;
for (int i = 0; i < surfaces.size(); i++) {
Surface *s = surfaces[i];
Vector3f color = s->getColor();
int tempIndex = triangleIndex;
Matrix4f model = s->getModel();
// Convert screen position to world coordinates
double xworld = ((xpos/double(width))*2)-1;
double yworld = (((height-1-ypos)/double(height))*2)-1; // NOTE: y axis is flipped in glfw
if (isPressed && mode == "translate") {
if(tempIndex == i) {
Vector4f center = s->getCenter() + model.col(3);
Vector4f displacement = Vector4f(xworld, yworld, 0, 1) - center;
Matrix4f translation = translateMatrix(displacement(0), displacement(1), displacement(2));
model = translation * s->getModel();
s->setModel(model);
}
}
glUniform3f(program.uniform("triangleColor"), color(0), color(1), color(2));
glUniformMatrix4fv(program.uniform("model"), 1, GL_FALSE, model.data());
glDrawArrays(GL_TRIANGLES, 0, s->getVertices().size());
}
And I initialize each VBO when making the object as
VertexBufferObject VBO;
VBO.init();
VBO.update(Vertices);
program.bindVertexAttribArray("position", VBO);
Surface* s = new Surface(VBO, Vertices, percentScale, 0, transformedCenter, SmoothNormals, FlatNormals, color);
s->setModel(model);
surfaces.push_back(s);
And where Program::bindVertexAttribArray is defined as
GLint Program::bindVertexAttribArray(
const std::string &name, VertexBufferObject& VBO) const
{
GLint id = attrib(name);
if (id < 0)
return id;
if (VBO.id == 0)
{
glDisableVertexAttribArray(id);
return id;
}
VBO.bind();
glEnableVertexAttribArray(id);
glVertexAttribPointer(id, VBO.rows, GL_FLOAT, GL_FALSE, 0, 0);
check_gl_error();
return id;
}
You're not binding any buffers before the draw call. You're probably simply drawing whatever buffer you last bound when you initialised them. You'll need something like this at the end of your loop before glDrawArrays:
...
program.bindVertexAttribArray("position", VBO); // where VBO is the buffer of surface s
glUniform3f(program.uniform("triangleColor"), color(0), color(1), color(2));
glUniformMatrix4fv(program.uniform("model"), 1, GL_FALSE, model.data());
glDrawArrays(GL_TRIANGLES, 0, s->getVertices().size());
My OpenGL version is 4.0. I would like to draw a sphere using latitude and longitude. I use this parametrization:
x = ρ sin(ϕ) cos(θ)
y = ρ sin(ϕ) sin(θ)
z = ρ cos(ϕ)
This is a part of my code:
glm::vec3 buffer[1000];
glm::vec3 outer;
buffercount = 1000;
float section = 10.0f;
GLfloat alpha, beta;
int index = 0;
for (alpha = 0.0 ; alpha <= PI; alpha += PI/section)
{
for (beta = 0.0 ; beta <= 2* PI; beta += PI/section)
{
outer.x = radius*cos(beta)*sin(alpha);
outer.y = radius*sin(beta)*sin(alpha);
outer.z = radius*cos(alpha);
buffer[index] = outer;
index = index +1;
}
}
GLuint sphereVBO, sphereVAO;
glGenVertexArrays(1, &sphereVAO);
glGenBuffers(1,&sphereVBO);
glBindVertexArray(sphereVAO);
glBindBuffer(GL_ARRAY_BUFFER,sphereVBO);
glBufferData(GL_ARRAY_BUFFER,sizeof(glm::vec3) *buffercount ,&buffer[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
...
while (!glfwWindowShouldClose(window))
{
...
...
for (GLuint i = 0; i < buffercount; i++)
{
...
...
glm::mat4 model;
model = glm::translate(model, buffer[i]);
GLfloat angle = 10.0f * i;
model = glm::rotate(model, angle, glm::vec3(1.0f, 0.3f, 0.5f));
glUniformMatrix4fv(modelMat, 1, GL_FALSE, glm::value_ptr(model));
}
glDrawArrays(GL_TRIANGLE_FAN, 0, 900);
glfwSwapBuffers(window);
}
If section = 5, the result looks like this:
If section = 20, the result looks like this:
I think I might have a logic problem in my code. I am struggling with this problem...
-----update-----
I edited my code. It doesn't produce any errors, but I get a blank screen. I guess something is wrong in my vertex shader; I might be passing the wrong variables to the vertex shader. Please help me.
gluPerspective is deprecated in OpenGL 4.1, so I switched to:
float aspect=float(4.0f)/float(3.0f);
glm::mat4 projection_matrix = glm::perspective(60.0f/aspect,aspect,0.1f,100.0f);
It shows this error: constant expression evaluates to -1 which cannot be narrowed to type 'GLuint' (aka 'unsigned int')
GLuint sphere_vbo[4]={-1,-1,-1,-1};
GLuint sphere_vao[4]={-1,-1,-1,-1};
I'm not sure how to fix it, so I switched to:
GLuint sphere_vbo[4]={1,1,1,1};
GLuint sphere_vao[4]={1,1,1,1};
I put Spektre's code in a spherer.h file.
This is a part of my main.cpp file:
...
...
Shader shader("basic.vert", "basic.frag");
sphere_init();
while (!glfwWindowShouldClose(window))
{
glfwPollEvents();
glClearColor(0.2f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
shader.Use();
GLuint MatrixID = glGetUniformLocation(shader.Program, "MVP");
GLfloat radius = 10.0f;
GLfloat camX = sin(glfwGetTime()) * radius;
GLfloat camZ = cos(glfwGetTime()) * radius;
// view matrix
glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, 0.0, camZ), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
glm::mat4 view_matrix = view;
// projection matrix
float aspect=float(4.0f)/float(3.0f);
glm::mat4 projection_matrix = glm::perspective(60.0f/aspect,aspect,0.1f,100.0f);
// model matrix
glm::mat4 model_matrix = glm::mat4(1.0f);// identity
//ModelViewProjection
glm::mat4 model_view_projection = projection_matrix * view_matrix * model_matrix;
glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &model_view_projection[0][0]);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-10.0);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
sphere_draw();
glFlush();
glfwSwapBuffers(window);
}
sphere_exit();
glfwTerminate();
return 0;
}
This is my vertex shader file:
#version 410 core
uniform mat4 MVP;
layout(location = 0) in vec3 vertexPosition_modelspace;
out vec4 vertexColor;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
vertexColor = vec4(0, 1, 0, 1.0);
}
I added an error-check function, get_log, to my shader.h file.
...
...
vertex = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertex, 1, &vShaderCode, NULL);
glCompileShader(vertex);
checkCompileErrors(vertex, "VERTEX");
get_log(vertex);
...
...
void get_log(GLuint shader){
GLint isCompiled = 0;
GLchar infoLog[1024];
glGetShaderiv(shader, GL_COMPILE_STATUS, &isCompiled);
if(isCompiled == GL_FALSE)
{
printf("----error--- \n");
GLint maxLength = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &maxLength);
glGetShaderInfoLog(shader, 1024, NULL, infoLog);
std::cout << "| ERROR::::" << &infoLog << "\n| -- ------------------ --------------------------------- -- |" << std::endl;
glDeleteShader(shader); // Don't leak the shader.
}else{
printf("---no error --- \n");
}
}
I tested both the fragment shader and the vertex shader; both showed ---no error---.
As I mentioned in the comments, you need to add indices to your mesh VAO/VBO. I'm not sure why GL_QUADS is not implemented on your machine (that makes no sense, as it is a basic primitive), so to make this easy to handle I use only GL_TRIANGLES, which is far from ideal, but what the heck ... Try this:
//---------------------------------------------------------------------------
const int na=36; // vertex grid size
const int nb=18;
const int na3=na*3; // line in grid size
const int nn=nb*na3; // whole grid size
GLfloat sphere_pos[nn]; // vertex
GLfloat sphere_nor[nn]; // normal
//GLfloat sphere_col[nn]; // color
GLuint sphere_ix [na*(nb-1)*6]; // indices
GLuint sphere_vbo[4]={-1,-1,-1,-1};
GLuint sphere_vao[4]={-1,-1,-1,-1};
void sphere_init()
{
// generate the sphere data
GLfloat x,y,z,a,b,da,db,r=3.5;
int ia,ib,ix,iy;
da=2.0*M_PI/GLfloat(na);
db= M_PI/GLfloat(nb-1);
// [Generate sphere point data]
// spherical angles a,b covering whole sphere surface
for (ix=0,b=-0.5*M_PI,ib=0;ib<nb;ib++,b+=db)
for (a=0.0,ia=0;ia<na;ia++,a+=da,ix+=3)
{
// unit sphere
x=cos(b)*cos(a);
y=cos(b)*sin(a);
z=sin(b);
sphere_pos[ix+0]=x*r;
sphere_pos[ix+1]=y*r;
sphere_pos[ix+2]=z*r;
sphere_nor[ix+0]=x;
sphere_nor[ix+1]=y;
sphere_nor[ix+2]=z;
}
// [Generate GL_TRIANGLE indices]
for (ix=0,iy=0,ib=1;ib<nb;ib++)
{
for (ia=1;ia<na;ia++,iy++)
{
// first half of QUAD
sphere_ix[ix]=iy; ix++;
sphere_ix[ix]=iy+1; ix++;
sphere_ix[ix]=iy+na; ix++;
// second half of QUAD
sphere_ix[ix]=iy+na; ix++;
sphere_ix[ix]=iy+1; ix++;
sphere_ix[ix]=iy+na+1; ix++;
}
// first half of QUAD
sphere_ix[ix]=iy; ix++;
sphere_ix[ix]=iy+1-na; ix++;
sphere_ix[ix]=iy+na; ix++;
// second half of QUAD
sphere_ix[ix]=iy+na; ix++;
sphere_ix[ix]=iy-na+1; ix++;
sphere_ix[ix]=iy+1; ix++;
iy++;
}
// [VAO/VBO stuff]
GLuint i;
glGenVertexArrays(4,sphere_vao);
glGenBuffers(4,sphere_vbo);
glBindVertexArray(sphere_vao[0]);
i=0; // vertex
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_pos),sphere_pos,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
i=1; // indices
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(sphere_ix),sphere_ix,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,4,GL_UNSIGNED_INT,GL_FALSE,0,0);
i=2; // normal
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_nor),sphere_nor,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
/*
i=3; // color
glBindBuffer(GL_ARRAY_BUFFER,sphere_vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(sphere_col),sphere_col,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
*/
glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER,0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,0);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glDisableVertexAttribArray(3);
}
void sphere_exit()
{
glDeleteVertexArrays(4,sphere_vao);
glDeleteBuffers(4,sphere_vbo);
}
void sphere_draw()
{
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glBindVertexArray(sphere_vao[0]);
// glDrawArrays(GL_POINTS,0,sizeof(sphere_pos)/sizeof(GLfloat)); // POINTS ... no indices for debug
glDrawElements(GL_TRIANGLES,sizeof(sphere_ix)/sizeof(GLuint),GL_UNSIGNED_INT,0); // indices (choose just one line not both !!!)
glBindVertexArray(0);
}
void gl_draw()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
float aspect=float(xs)/float(ys);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0/aspect,aspect,0.1,100.0);
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-10.0);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
sphere_draw();
glFlush();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
Usage is simple: after the OpenGL context is created and extensions are loaded, call sphere_init(); before closing the app (while the OpenGL context is still alive), call sphere_exit(); and whenever you want to render, call sphere_draw(). I made a gl_draw() example with some settings, and here is a preview of it:
The point is to create a 2D grid of points covering the whole surface of the sphere (via the spherical longitude/latitude angles a,b) and then just create triangles covering the whole grid...
I'm trying to create a solar system in OpenGL. I have the basic code for the Earth spinning on its axis, and I'm trying to set the camera to move with the arrow keys.
using namespace std;
using namespace glm;
const int windowWidth = 1024;
const int windowHeight = 768;
GLuint VBO;
int NUMVERTS = 0;
bool* keyStates = new bool[256]; //Create an array of boolean values of length 256 (0-255)
float fraction = 0.1f; //Fraction for navigation speed using keys
// Transform uniforms location
GLuint gModelToWorldTransformLoc;
GLuint gWorldToViewToProjectionTransformLoc;
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
// TextureSampler uniform location
GLuint gTextureSamplerLoc;
// Texture ID
GLuint gTextureObject[11];
//Navigation variables
float posX;
float posY;
float posZ;
float viewX = 0.0f;
float viewY = 0.0f;
float viewZ = 0.0f;
float dirX;
float dirY;
float dirZ;
vec3 cameraPos = vec3(0.0f,0.0f,5.0f);
vec3 cameraView = vec3(viewX,viewY,viewZ);
vec3 cameraDir = vec3(0.0f,1.0f,0.0f);
These are all the variables I'm using to control the camera.
static void renderSceneCallBack()
{
// Clear the back buffer and the z-buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Create our world space to view space transformation matrix
mat4 worldToViewTransform = lookAt(
cameraPos, // The position of your camera, in world space
cameraView, // where you want to look at, in world space
cameraDir // Camera up direction (set to 0,-1,0 to look upside-down)
);
// Create out projection transform
mat4 projectionTransform = perspective(45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f);
// Combine the world space to view space transformation matrix and the projection transformation matrix
mat4 worldToViewToProjectionTransform = projectionTransform * worldToViewTransform;
// Update the transforms in the shader program on the GPU
glUniformMatrix4fv(gWorldToViewToProjectionTransformLoc, 1, GL_FALSE, &worldToViewToProjectionTransform[0][0]);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), 0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)12);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)24);
// Set the material properties
glUniform1f(gKaLoc, 0.8f);
glUniform1f(gKdLoc, 0.8f);
// Bind the texture to the texture unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gTextureObject[0]);
// Set our sampler to user Texture Unit 0
glUniform1i(gTextureSamplerLoc, 0);
// Draw triangle
mat4 modelToWorldTransform = mat4(1.0f);
static float angle = 0.0f;
angle+=1.0f;
modelToWorldTransform = rotate(modelToWorldTransform, angle, vec3(0.0f, 1.0f, 0.0f));
glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &modelToWorldTransform[0][0]);
glDrawArrays(GL_TRIANGLES, 0, NUMVERTS);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(1);
glDisableVertexAttribArray(2);
glutSwapBuffers();
}
This is the function that draws the Earth onto the screen and determines where the camera is.
void keyPressed (unsigned char key, int x, int y)
{
keyStates[key] = true; //Set the state of the current key to pressed
cout<<"keyPressed ";
}
void keyUp(unsigned char key, int x, int y)
{
keyStates[key] = false; //Set the state of the current key to released
cout<<"keyUp ";
}
void keyOperations (void)
{
if(keyStates['a'])
{
viewX += 0.5f;
}
cout<<"keyOperations ";
}
These are the functions I'm trying to use to update the camera variables dynamically.
// Create a vertex buffer
createVertexBuffer();
glutKeyboardFunc(keyPressed); //Tell Glut to use the method "keyPressed" for key events
glutKeyboardUpFunc(keyUp); //Tell Glut to use the method "keyUp" for key events
keyOperations();
glutMainLoop();
Finally, here are the few lines in my main method where I'm trying to call the key press functions. In the console I see that it detects I'm pressing them, but the planet doesn't move at all. I think I may be calling keyOperations in the wrong place, but I'm not sure.
You are correct: keyOperations is being called in the wrong place. Where it is now, it is called once and then never again. It needs to go in your update code, where you update the rotation of the planet, so that it is called at least once per frame.
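A minimal sketch of that change, assuming renderSceneCallBack() is the display callback registered with glutDisplayFunc(). Note also that in the posted code cameraView is built from viewX/viewY/viewZ only once at startup, so it needs to be rebuilt after the keys are processed:

static void renderSceneCallBack()
{
    keyOperations();                          // apply held-down keys every frame
    cameraView = vec3(viewX, viewY, viewZ);   // pick up the updated look-at target

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... existing rendering code from the question ...
    glutSwapBuffers();
    glutPostRedisplay();                      // request the next frame
}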
I'm having a really weird issue with depth testing here.
I'm rendering a simple mesh in an OpenGL 3.3 core profile context on Windows, with depth testing enabled and glDepthFunc set to GL_LESS. On my machine (a laptop with an nVidia GeForce GTX 660M), everything works as expected and the depth test is working; this is what it looks like:
Now, if I run the program on a different PC, a tower with a Radeon R9 280, it looks more like this:
Strangely enough, when I call glEnable(GL_DEPTH_TEST) every frame before drawing, the result is correct on both machines.
Since it works when I do that, I figure the depth buffer is correctly created on both machines; it just seems that the depth test is somehow being disabled before rendering when I enable it only once at initialization.
Here's the minimum code that could somehow be part of the problem:
Code called at initialization, after a context is created and made current:
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
Code called every frame before the buffer swap:
glClearColor(0.4f, 0.6f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// mShaderProgram->getID() simply returns the handle of a simple shader program
glUseProgram(mShaderProgram->getID());
glm::vec3 myColor = glm::vec3(0.7f, 0.5f, 0.4f);
GLuint colorLocation = glGetUniformLocation(mShaderProgram->getID(), "uColor");
glUniform3fv(colorLocation, 1, glm::value_ptr(myColor));
glm::mat4 modelMatrix = glm::mat4(1.0f);
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 3.0f, 5.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 projectionMatrix = glm::perspectiveFov(60.0f, (float)mWindow->getProperties().width, (float)mWindow->getProperties().height, 1.0f, 100.0f);
glm::mat4 inverseTransposeMVMatrix = glm::inverseTranspose(viewMatrix*modelMatrix);
GLuint mMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uModelMatrix");
GLuint vMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uViewMatrix");
GLuint pMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uProjectionMatrix");
GLuint itmvMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uInverseTransposeMVMatrix");
glUniformMatrix4fv(mMatrixLocation, 1, GL_FALSE, glm::value_ptr(modelMatrix));
glUniformMatrix4fv(vMatrixLocation, 1, GL_FALSE, glm::value_ptr(viewMatrix));
glUniformMatrix4fv(pMatrixLocation, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(itmvMatrixLocation, 1, GL_FALSE, glm::value_ptr(inverseTransposeMVMatrix));
// Similiar to the shader program, mMesh.gl_vaoID is simply the handle of a vertex array object
glBindVertexArray(mMesh.gl_vaoID);
glDrawArrays(GL_TRIANGLES, 0, mMesh.faces.size()*3);
With the above code, I'll get the wrong output on the Radeon.
Note: I'm using GLFW3 for context creation and GLEW for the function pointers (and obviously GLM for the math).
The vertex array object contains three attribute array buffers, for positions, uv coordinates and normals. Each of these should be correctly configured and sent to the shaders, as everything works fine when enabling the depth test every frame.
I should also mention that the Radeon machine runs Windows 8 while the nVidia machine runs Windows 7.
Edit: By request, here's the code used to load the mesh and create the attribute data. I do not create any element buffer objects as I am not using element draw calls.
std::vector<glm::vec3> positionData;
std::vector<glm::vec2> uvData;
std::vector<glm::vec3> normalData;
std::vector<meshFaceIndex> faces;
std::ifstream fileStream(path);
if (!fileStream.is_open()){
std::cerr << "ERROR: Could not open file '" << path << "!\n";
return;
}
std::string lineBuffer;
while (std::getline(fileStream, lineBuffer)){
std::stringstream lineStream(lineBuffer);
std::string typeString;
lineStream >> typeString; // Get line token
if (typeString == TOKEN_VPOS){ // Position
glm::vec3 pos;
lineStream >> pos.x >> pos.y >> pos.z;
positionData.push_back(pos);
}
else{
if (typeString == TOKEN_VUV){ // UV coord
glm::vec2 UV;
lineStream >> UV.x >> UV.y;
uvData.push_back(UV);
}
else{
if (typeString == TOKEN_VNORMAL){ // Normal
glm::vec3 normal;
lineStream >> normal.x >> normal.y >> normal.z;
normalData.push_back(normal);
}
else{
if (typeString == TOKEN_FACE){ // Face
meshFaceIndex faceIndex;
char interrupt;
for (int i = 0; i < 3; ++i){
lineStream >> faceIndex.positionIndex[i] >> interrupt
>> faceIndex.uvIndex[i] >> interrupt
>> faceIndex.normalIndex[i];
}
faces.push_back(faceIndex);
}
}
}
}
}
fileStream.close();
std::vector<glm::vec3> packedPositions;
std::vector<glm::vec2> packedUVs;
std::vector<glm::vec3> packedNormals;
for (auto f : faces){
Face face; // Derp derp;
for (auto i = 0; i < 3; ++i){
if (!positionData.empty()){
face.vertices[i].position = positionData[f.positionIndex[i] - 1];
packedPositions.push_back(face.vertices[i].position);
}
else
face.vertices[i].position = glm::vec3(0.0f);
if (!uvData.empty()){
face.vertices[i].uv = uvData[f.uvIndex[i] - 1];
packedUVs.push_back(face.vertices[i].uv);
}
else
face.vertices[i].uv = glm::vec2(0.0f);
if (!normalData.empty()){
face.vertices[i].normal = normalData[f.normalIndex[i] - 1];
packedNormals.push_back(face.vertices[i].normal);
}
else
face.vertices[i].normal = glm::vec3(0.0f);
}
myMesh.faces.push_back(face);
}
glGenVertexArrays(1, &(myMesh.gl_vaoID));
glBindVertexArray(myMesh.gl_vaoID);
GLuint positionBuffer; // positions
glGenBuffers(1, &positionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedPositions.size(), &packedPositions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
GLuint uvBuffer; // uvs
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2)*packedUVs.size(), &packedUVs[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
GLuint normalBuffer; // normals
glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedNormals.size(), &packedNormals[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
The .obj loading routine is mostly adapted from this one:
http://www.limegarden.net/2010/03/02/wavefront-obj-mesh-loader/
This doesn't look like a depth testing issue to me, but more like misalignment in the vertex / index array data. Please show us the code in which you load the vertex buffer objects and the element buffer objects.
It is because of the function ChoosePixelFormat.
In my case, ChoosePixelFormat returned a pixel format ID with the value 8, which provides a depth buffer with 16 bits instead of the required 24 bits.
One simple fix was to set the ID manually to 11 instead of 8 to get a suitable pixel format for the application with a 24-bit depth buffer.
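For reference, a sketch of a less hard-coded alternative (this assumes a raw Win32/WGL setup with <windows.h> and a valid OpenGL-capable device context hdc, which is not how the question's GLFW code creates its context): verify what ChoosePixelFormat actually returned and, if the depth buffer is too small, scan for a format with at least 24 depth bits.

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;                                  // request 24 bits explicitly

int format = ChoosePixelFormat(hdc, &pfd);

PIXELFORMATDESCRIPTOR chosen = {};
int count = DescribePixelFormat(hdc, format, sizeof(chosen), &chosen);
if (chosen.cDepthBits < 24)                           // driver handed back e.g. 16 bits
{
    for (int i = 1; i <= count; ++i)                  // scan all formats for a better match
    {
        DescribePixelFormat(hdc, i, sizeof(chosen), &chosen);
        if ((chosen.dwFlags & PFD_SUPPORT_OPENGL) &&
            (chosen.dwFlags & PFD_DOUBLEBUFFER) &&
            chosen.iPixelType == PFD_TYPE_RGBA &&
            chosen.cDepthBits >= 24)
        {
            format = i;
            break;
        }
    }
}
SetPixelFormat(hdc, format, &pfd);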
I have written a basic program that loads a model and renders it to the screen. I'm using GLSL to transform the model appropriately, but the normals always seem to be incorrect after rotating them with every combination of model matrix, view matrix, inverse, transpose, etc that I could think of. The model matrix is just a rotation around the y-axis using glm:
angle += deltaTime;
modelMat = glm::rotate(glm::mat4(), angle, glm::vec3(0.f, 1.f, 0.f));
My current vertex shader code (I've modified the normal line many many times):
#version 150 core
uniform mat4 projMat;
uniform mat4 viewMat;
uniform mat4 modelMat;
in vec3 inPosition;
in vec3 inNormal;
out vec3 passColor;
void main()
{
gl_Position = projMat * viewMat * modelMat * vec4(inPosition, 1.0);
vec3 normal = normalize(mat3(inverse(modelMat)) * inNormal);
passColor = normal;
}
And my fragment shader:
#version 150 core
in vec3 passColor;
out vec4 outColor;
void main()
{
outColor = vec4(passColor, 1.0);
}
I know for sure that the uniform variables are being passed to the shader properly, as the model itself gets transformed properly, and the initial normals are correct if I do calculations such as directional lighting.
I've created a GIF of the rotating model, sorry about the low quality:
http://i.imgur.com/LgLKHCb.gif?1
What confuses me the most is how the normals appear to rotate around multiple axes, which I don't think should happen when they are multiplied by a simple single-axis rotation matrix.
Edit:
I've added some more of the client code below.
This is where the buffers get bound for the model, in the Mesh class (vao is GLuint, defined in the class):
GLuint vbo[3];
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(normals? (uvcoords? 3 : 2) : (uvcoords? 2 : 1), vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, vcount * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
if(normals)
{
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, vcount * 3 * sizeof(GLfloat), normals, GL_STATIC_DRAW);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 0, 0);
glEnableVertexAttribArray(1);
}
if(uvcoords)
{
glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
glBufferData(GL_ARRAY_BUFFER, vcount * 2 * sizeof(GLfloat), uvcoords, GL_STATIC_DRAW);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(2);
}
glBindVertexArray(0);
glGenBuffers(1, &ib);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ib);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, icount * sizeof(GLushort), indices, GL_STATIC_DRAW);
This is where the shaders are compiled after being loaded into memory with a simple readf(), in the Material class:
u32 vertexShader = glCreateShader(GL_VERTEX_SHADER);
u32 fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vertexShader, 1, (const GLchar**)&vsContent, 0);
glCompileShader(vertexShader);
if(!validateShader(vertexShader)) return false;
glShaderSource(fragmentShader, 1, (const GLchar**)&fsContent, 0);
glCompileShader(fragmentShader);
if(!validateShader(fragmentShader)) return false;
programHandle = glCreateProgram();
glAttachShader(programHandle, vertexShader);
glAttachShader(programHandle, fragmentShader);
glBindAttribLocation(programHandle, 0, "inPosition");
glBindAttribLocation(programHandle, 1, "inNormal");
//glBindAttribLocation(programHandle, 2, "inUVCoords");
glLinkProgram(programHandle);
if(!validateProgram()) return false;
And the validateShader(GLuint) and validateProgram() functions:
bool Material::validateShader(GLuint shaderHandle)
{
char buffer[2048];
memset(buffer, 0, 2048);
GLsizei len = 0;
glGetShaderInfoLog(shaderHandle, 2048, &len, buffer);
if(len > 0)
{
Logger::log("ve::Material::validateShader: Failed to compile shader - %s", buffer);
return false;
}
return true;
}
bool Material::validateProgram()
{
char buffer[2048];
memset(buffer, 0, 2048);
GLsizei len = 0;
glGetProgramInfoLog(programHandle, 2048, &len, buffer);
if(len > 0)
{
Logger::log("ve::Material::validateProgram: Failed to link program - %s", buffer);
return false;
}
glValidateProgram(programHandle);
GLint status;
glGetProgramiv(programHandle, GL_VALIDATE_STATUS, &status);
if(status == GL_FALSE)
{
Logger::log("ve::Material::validateProgram: Failed to validate program");
return false;
}
return true;
}
Each Material instance has a std::map of Meshs, and get rendered as so:
void Material::render()
{
if(loaded)
{
glUseProgram(programHandle);
for(auto it = mmd->uniforms.begin(); it != mmd->uniforms.end(); ++it)
{
GLint loc = glGetUniformLocation(programHandle, (const GLchar*)it->first);
switch(it->second.type)
{
case E_UT_FLOAT3: glUniform3fv(loc, 1, it->second.f32ptr); break;
case E_UT_MAT4: glUniformMatrix4fv(loc, 1, GL_FALSE, it->second.f32ptr); break;
default: break;
}
}
for(Mesh* m : mmd->objects)
{
GLint loc = glGetUniformLocation(programHandle, "modelMat");
glUniformMatrix4fv(loc, 1, GL_FALSE, &m->getTransform()->getTransformMatrix()[0][0]);
m->render();
}
}
}
it->second.f32ptr would be a float pointer to &some_vec3[0] or &some_mat4[0][0].
I manually upload the model's transformation matrix before rendering, however (it is only a rotation matrix; the Transform class returned by Mesh::getTransform() only applies a glm rotation, since I was trying to isolate the problem).
Lastly, the Mesh render code:
if(loaded)
{
glBindVertexArray(vao);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ib);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
}
I think this is all the necessary code, but I can post more if needed.
Your normal matrix calculation is just wrong. The correct normal matrix would be the transpose of the inverse of the upper-left 3x3 submatrix of the model or modelview matrix (depending on which space you want to do your lighting calculations in).
What you do is just inverting the full 4x4 matrix and taking the upper-left 3x3 submatrix of that, which is just totally wrong.
You should calculate transpose(inverse(mat3(modelMat))), but you really shouldn't do this in the shader; calculate it together with the model matrix on the CPU, to avoid making the GPU perform a quite expensive matrix inversion per vertex.
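A minimal sketch of the CPU-side version with GLM (it needs <glm/gtc/matrix_inverse.hpp> and <glm/gtc/type_ptr.hpp>; "normalMat" is an assumed uniform name, not one from the posted shaders):

glm::mat3 normalMat = glm::inverseTranspose(glm::mat3(modelMat));   // normal matrix, computed once per draw
GLint normalMatLoc  = glGetUniformLocation(programHandle, "normalMat");
glUniformMatrix3fv(normalMatLoc, 1, GL_FALSE, glm::value_ptr(normalMat));

In the vertex shader the normal then becomes normalize(normalMat * inNormal), with normalMat declared as a mat3 uniform.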
As long as your transformations consist of only rotations, translations, and uniform scaling, you can simply apply the rotation part of your transformations to the normals.
In general, it's the transposed inverse matrix that needs to be applied to the normals, using only the regular 3x3 linear transformation matrix, without the translation part that extends the matrix to 4x4.
For rotations, the inverse-transpose is identical to the original matrix, and for uniform scaling it differs only by a constant factor that normalization removes. So the matrix operations to invert and transpose matrices are only needed if you apply other types of transformations, like non-uniform scaling or shear transforms.
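As a quick illustration of that point (a sketch, not part of the original answers; the angle and axis are arbitrary values): for a pure rotation the inverse is the transpose, so the inverse-transpose gives back the rotation itself.

// R is a single-axis rotation; N is its "normal matrix".
glm::mat3 R = glm::mat3(glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 1.0f, 0.0f)));
glm::mat3 N = glm::transpose(glm::inverse(R));
// N equals R up to floating-point error, so applying R directly to the normals is enough in this case.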
Apparently, if the vertex normals of a mesh are incorrect, then strange rotation artifacts will occur. In my case, I had transformed the mesh in my 3D modelling program (Blender) by 90 degrees on the X axis, as Blender uses the z-axis as its vertical axis, whereas my program uses the y-axis as the vertical axis. However, the method I used to transform/rotate the mesh in Blender in my export script did not properly transform the normals, but only the positions of the vertices. Without any prior transformations, the program works as expected. I initially found out that the normals were incorrect by comparing the normalized positions and normals in a symmetrical object (I used a cube with smoothed normals), and saw that the normals were rotated. Thank you to #derhass and #Solkar for guiding me to the answer.
However, if anyone still wants to contribute, I would like to know why the normals don't rotate around just one axis when multiplied by a single-axis rotation matrix, even when they are incorrect.