OpenGL VBO drawing problems - c++

I'm currently having a problem with a particle engine I'm making. The engine lets you add more than one emitter, the idea being that each particle system emits its own particles.
The problem, however, is that when I add a second particle system, the drawing of the first is affected: it isn't drawn at all. The draw call of each particle system is definitely being invoked.
What I think is happening is that although multiple VBOs are created, only one is actually used.
I'll show the important parts of my functions that affect the VBOs. My shader uses a uniform location to store WVP matrices. I should also mention that each particle system should be using its own shader program.
Below is my InitializeBuffers function, called when the particle system is created:
void ParticleSystem::InitializeBuffers()
{
    glGenVertexArrays(1, &VaoId);
    glBindVertexArray(VaoId);
    //glGenBuffers(1, &VboId);
    glGenBuffers(1, &PositionBufferId);
    glGenBuffers(1, &IndexBufferId);
    glGenBuffers(1, &WVPId);
    std::list<Particle>::iterator iterator = particles.begin();
    //positions.reserve(5);
    for (std::list<Particle>::iterator iterator = particles.begin(), end = particles.end(); iterator != end; ++iterator)
    {
        positions.push_back(iterator->GetPosition());
        //verticesToDraw.insert(verticesToDraw.end(), iterator->GetVertices()->begin(), iterator->GetVertices()->end());
        indicesToDraw.insert(indicesToDraw.end(), iterator->GetIndices()->begin(), iterator->GetIndices()->end());
    }
    //glBindBuffer(GL_ARRAY_BUFFER, VboId);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBufferId);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indicesToDraw[0]) * indicesToDraw.size(), &indicesToDraw[0], GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, WVPId);
    for (unsigned int i = 0; i < 4; i++) {
        glEnableVertexAttribArray(WVP_LOCATION + i);
        glVertexAttribPointer(WVP_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f), (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(WVP_LOCATION + i, 1);
    }
    for (std::list<BaseBuildingBlock*>::iterator iterator = buildingBlocks.begin(), end = buildingBlocks.end(); iterator != end; ++iterator)
    {
        (*iterator)->InitializeBuffer(programId);
    }
    /*
    glBindBuffer(GL_ARRAY_BUFFER, WorldId);
    for (unsigned int i = 0; i < 4; i++) {
        glEnableVertexAttribArray(WORLD_LOCATION + i);
        glVertexAttribPointer(WORLD_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f), (const GLvoid*)(sizeof(GLfloat) * i * 4));
        glVertexAttribDivisor(WORLD_LOCATION + i, 1);
    }
    */
    //return GLCheckError();
}
This is the draw function, along with the code that actually draws the instanced elements; the WVP matrices are built by the particle system earlier in the function.
void ParticleSystem::Draw(Matrix4f perspectiveCameraMatrix)
{
    // scale TEST
    //GLint gScaleLocation = glGetUniformLocation(program, "gScale");
    //assert(gScaleLocation != 0xFFFFFFFF);
    //glUniform1f(gScaleLocation, scale);
    //Pipeline p;
    //Matrix4f* WVPMatrices = new Matrix4f[particles.size()];
    //Matrix4f* WorldMatrices = new Matrix4f[particles.size()];
    WVPMatrices.clear();
    WorldMatrices.clear();
    glUseProgram(0);
    glUseProgram(programId);
    //Matrix4f perspectiveMatrix;
    //perspectiveMatrix.BuildPerspProjMat(90,1, 0.01, 200, 100 - 0 /*getWidth() / 32*/, 100 - 0 /*getHeight() / 32*/);
    //********************************************************************************************************
    // Method 1
    // Think I need to next define a camera position.
    if (particles.size() == 0)
    {
        return;
    }
    verticesToDraw.clear();
    Matrix4f scaleMatrix;
    Matrix4f worldMatrix;
    Matrix4f rotateMatrix;
    Matrix4f finalMatrix;
    //ColourId = glGetUniformLocation(programId, "UniformColour");
    int i = 0;
    for (std::list<Particle>::iterator iterator = particles.begin(), end = particles.end(); iterator != end; ++iterator)
    {
        verticesToDraw = *iterator->GetVertices();
        indicesToDraw = *iterator->GetIndices();
        //positions.push_back(iterator->GetPosition());
        worldMatrix.InitTranslationTransform(iterator->GetPosition().x, iterator->GetPosition().y, iterator->GetPosition().z);
        rotateMatrix.InitRotateTransform(iterator->GetRotation().x, iterator->GetRotation().y, iterator->GetRotation().z);
        scaleMatrix.InitScaleTransform(iterator->GetScale().x, iterator->GetScale().y, iterator->GetScale().z);
        finalMatrix = perspectiveCameraMatrix * worldMatrix * rotateMatrix * scaleMatrix;
        //p.WorldPos(iterator->GetPosition());
        //p.Rotate(iterator->GetRotation());
        WVPMatrices.push_back(finalMatrix.Transpose());
        /*glUniform4f(ColourId, iterator->GetColour().r, iterator->GetColour().g, iterator->GetColour().b,
            iterator->GetColour().a);*/
        //WorldMatrices[i] = p.GetWorldTrans();
        i++;
        //iterator->Draw();
    }
    //glEnableVertexAttribArray(0);
    if (colourOverLifeBuildingBlock != NULL)
    {
        colourOverLifeBuildingBlock->Test();
    }
    glBindBuffer(GL_ARRAY_BUFFER, VboId);
    glBufferData(GL_ARRAY_BUFFER, verticesToDraw.size() * sizeof(verticesToDraw[0]), &verticesToDraw.front(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(POSITION_LOCATION);
    glVertexAttribPointer(POSITION_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);
    int size = particles.size();
    glBindBuffer(GL_ARRAY_BUFFER, WVPId);
    glBufferData(GL_ARRAY_BUFFER, sizeof(Matrix4f) * size, &WVPMatrices.front(), GL_DYNAMIC_DRAW);
    glDrawElementsInstanced(GL_TRIANGLES, indicesToDraw.size(), GL_UNSIGNED_BYTE, 0, particles.size());
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    //glDisableVertexAttribArray(0);
    //glFlush();
}
The particle system's entire header is below:
#include <gl\glew.h>
#include <array>
#include <vector>

class ParticleSystem
{
public:
    ParticleSystem(Vector3 pos, Quaternion rot, float spawnRate, int particlesToSpawn); // Constructs a particle system.
    ~ParticleSystem(); // Destructor.
    void Update(float elapsedTime); // Updates the particle system.
    void Draw(Matrix4f perspectiveMatrix); // Draws the particle system.
    void CreateShaders();
    void InitializeBuffers();
    // Long list of getters and setters.
    /*float* GetMinLifeTime();
    void SetMinLifeTime(float lt);
    float* GetMaxLifeTime();
    void SetMaxLifeTime(float lt);*/
    int* GetParticlesToSpawnAtATime();
    void SetParticlesToSpawnAtATime(int particlesToSpawn);
    float* GetSpawnRate();
    void SetSpawnRate(float spawnRate);
    Vector3* GetPosition();
    void SetPosition(Vector3 newPosition);
    Quaternion* GetRotation();
    void SetRotation(Quaternion rotation);
    std::list<BaseBuildingBlock*> GetBuildingBlocks();
    VelocityBuildingBlock* GetVelocityBuilding();
    ColourOverLifeBuildingBlock* GetColourOverLifeBuildingBlock();
    LifeTimeBuildingBlock* GetLifeTimeBuildingBlock();
    UniformColourBuildingBlock* GetUniformColourBuildingBlock();
    ScaleBuildingBlock* GetScaleBuildingBlock();
    /*Vector3* GetMinVelocity();
    void SetMinVelocity(Vector3 min);
    Vector3* GetMaxVelocity();
    void SetMaxVelocity(Vector3 maxVelocity);*/
    Vector3 GetMinParticleStartPoint();
    void SetMinParticleStartPoint(Vector3 minParticleStartPoint);
    Vector3 GetMaxParticleStartPoint();
    void SetMaxParticleStartPoint(Vector3 maxParticleStartPoint);
    bool CreateColourOverLifeBuildingBlock();
    bool DeleteColourOverLifeBuildingBlock();
    bool CreateUniformColourBuildingBlock();
    bool DeleteUniformColourBuildingBlock();
    bool CreateScaleBuildingBlock();
    bool DeleteScaleBuildingBlock();
    /*Colour GetStartColour();
    void SetStartColour(Colour startColour);
    Colour GetEndColour();
    void SetEndColour(Colour endColour);*/
    Vector3* GetMinParticleRotationAmountPerFrame();
    void SetMinParticleRotationAmountPerFrame(Vector3 minParticleRotationAmount);
    Vector3* GetMaxParticleRotationAmountPerFrame();
    void SetMaxParticleRotationAmountPerFrame(Vector3 maxParticleRotationAmount);
    void Save(TiXmlElement* element);
private:
    // Spawns a particle.
    void SpawnParticle();
    GLuint VaoId;
    GLuint VboId;
    GLuint IndexBufferId;
    GLuint PositionBufferId;
    GLuint WVPId;
    GLenum programId;
    std::vector<GLfloat> verticesToDraw;
    std::vector<GLubyte> indicesToDraw;
    std::vector<Vector3> positions;
    std::vector<Matrix4f> WVPMatrices;
    std::vector<Matrix4f> WorldMatrices;
    std::list<Particle> particles; // List of particles.
    Vector3 position; // Position of the emitter.
    Quaternion rotation; // Rotation of the emitter.
    float spawnRate; // Spawn rate of the emitter.
    int particlesToSpawnAtATime; // The number of particles to spawn at a time.
    float minLifeTime; // The minimum time a particle can live for.
    float maxLifeTime; // The maximum time a particle can live for.
    float timer; // Timer.
    ShaderCreator* shaderCreator;
    //Vector3 minVelocity; // The minimum velocity a particle can have.
    //Vector3 maxVelocity; // The maximum velocity a particle can have.
    //std::list<BaseBuildingBlock> buildingBlocks;
    // I'm thinking of eventually making a list of baseBuildingBlocks.
    std::list<BaseBuildingBlock*> buildingBlocks;
    VelocityBuildingBlock* velocityBuildingBlock;
    ColourOverLifeBuildingBlock* colourOverLifeBuildingBlock;
    LifeTimeBuildingBlock* lifeTimeBuildingBlock;
    UniformColourBuildingBlock* uniformColourBuildingBlock;
    ScaleBuildingBlock* scaleBuildingBlock;
    Vector3 minParticleStartPoint; // The minimum position a particle can start at.
    Vector3 maxParticleStartPoint; // The maximum position a particle can start at.
    Vector3 minParticleRotationAmountPerFrame; // The minimum amount a particle can rotate every frame.
    Vector3 maxParticleRotationAmountPerFrame; // The maximum amount a particle can rotate every frame.
    Colour startColour; // The colour a particle starts with.
    Colour endColour; // The colour a particle ends with.
    //TEST
    float scale;
};
#endif
Now I'm wondering: is there some way I have to switch the active VBO, or am I totally on the wrong track? I used a shader debugger and both VBOs definitely exist.

You'll need to correctly set up your vertex attributes before each draw call - i.e., you have to call glBindBuffer followed by glEnableVertexAttribArray and glVertexAttribPointer for each of your attributes before each draw call. In the code you posted, this happens only for the particle position, but not for the WVP_LOCATION attribute which apparently contains your transformation matrices (you do upload the data to the GPU via glBufferData, but you don't set up the attribute). This means that once you have more than one particle system, only the transformation matrices of your second particle system are ever going to be accessed for rendering.
On a side note, what you're trying to do here seems quite inefficient - you're essentially pushing one transformation matrix to the GPU for each of your particles, every frame. Depending on how many particles you want, this is going to kill your performance - you should consider updating the particles' positions etc. with a transform feedback.
Edit: I just realized that the OpenGL wiki link doesn't really explain a lot. A transform feedback is a way to record vertex shader outputs (or, if a geometry / tessellation shader were present, that output would be recorded instead). The output variables are written into a VBO; afterwards, they can be used for rendering like any other vertex attribute. The whole concept is extremely similar to using a framebuffer object for recording fragment shader outputs. It allows for particle systems that exist entirely on the GPU, with a vertex shader computing the updated position, lifetime and other attributes each frame. A very nice tutorial, which shows the basic setup of such a transform feedback, can be found here

Related

How to use GL_TRIANGLE_FAN to draw a circle in OpenGL?

Seasons Greetings everyone! I adapted the code from this tutorial to support using a VAO/VBO, but now I get this:
Instead of a nice round circle. Here's the code:
#define GLEW_STATIC
#include <GLEW/glew.h>
#include <GLFW/glfw3.h>
#include <corecrt_math_defines.h>
#include <cmath>
#include <vector>
#define SCREEN_WIDTH 3000
#define SCREEN_HEIGHT 1900
void drawPolygon(GLuint& vao, GLuint& vbo, GLfloat x,
GLfloat y, GLdouble radius, GLint numberOfSides);
int main(void) {
    if (!glfwInit()) {
        return -1;
    }
    GLFWwindow* window = glfwCreateWindow(SCREEN_WIDTH,
        SCREEN_HEIGHT, "Hello World", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);
    glewExperimental = GL_TRUE;
    glewInit();
    glGetError();
    glViewport(0.0f, 0.0f, SCREEN_WIDTH, SCREEN_HEIGHT);
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, SCREEN_WIDTH, 0, SCREEN_HEIGHT, -1, 1);
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);
        drawPolygon(vao, vbo, SCREEN_WIDTH / 2,
            SCREEN_HEIGHT / 2, 250.0f, 50);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
void drawPolygon(GLuint& vao, GLuint& vbo, GLfloat x,
    GLfloat y, GLdouble radius, GLint numberOfSides) {
    int numVertices = numberOfSides + 2;
    GLdouble twicePi = 2.0f * M_PI;
    vector<GLdouble> circleVerticesX;
    vector<GLdouble> circleVerticesY;
    circleVerticesX.push_back(x);
    circleVerticesY.push_back(y);
    for (int i = 1; i < numVertices; i++) {
        circleVerticesX.push_back(x + (radius *
            cos(i * twicePi / numberOfSides)));
        circleVerticesY.push_back(y + (radius *
            sin(i * twicePi / numberOfSides)));
    }
    vector<GLdouble> vertices;
    for (int i = 0; i < numVertices; i++) {
        vertices.push_back(circleVerticesX[i]);
        vertices.push_back(circleVerticesY[i]);
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(
        GLdouble), vertices.data(), GL_STATIC_DRAW);
    glBindVertexArray(vao);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_DOUBLE, GL_FALSE,
        2 * sizeof(GLdouble), (void*)0);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 27);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
What the heck have I done wrong?! Using the original code works, so I am absolutely baffled by this ridiculous result! MTIA to anyone who can help :-)
The size of your VBO is numVertices * sizeof(GLdouble), which is half the actual size of your vertex data (there is an x and a y component for each vertex). Thus, you end up drawing twice as many vertices as your VBO actually has vertex data for. Reading out of bounds of your VBO seems to result in just zeroes in your OpenGL implementation, which is why all vertices of the lower half of your circle are just the bottom left corner (this is not guaranteed unless you explicitly enable robust buffer access, just what your driver and GPU seem to be doing anyways)…
couple of notes:
You generally don't want to use double unless you need it. double takes twice the memory bandwidth, and arithmetic on double is generally at least a bit slower than float (yes, even on an x86 CPU since floating point arithmetic is not really done using the x87 FPU anymore nowadays). GPUs in particular are built for float arithmetic. Especially on consumer GPUs, double arithmetic is significantly (an order of magnitude) slower than float.
Why not simply push the vertex data directly into vertices rather than first into circleVerticesX and circleVerticesY and then copying it over into vertices from there?
You know exactly how many vertices are going to be generated, so there's no need to dynamically grow your vertex container in the loop that generates the coordinates (among other things, the .push_back() will almost certainly prevent vectorization of the loop). I would suggest at least calling .reserve() for the corresponding number of elements (assuming this is an std::vector) before entering the loop. Personally, I would just allocate an array of the appropriate, fixed size via
auto vertex_data = std::unique_ptr<GLfloat[]> { new GLfloat[numVertices * 2] };
in this case.
You don't actually need a center point. Since a circle is a convex shape, you can simply use one of the points on the circle as the central vertex of your fan.
This is not necessarily the most efficient way to draw a circle (a lot of long, thin triangles; more on that here)
You probably don't want to generate and upload your vertex data again and again every frame unless something about it changes.
Apart from that, you will probably want to make your glDrawArrays call draw the actual number of vertices rather than just always 27…

OpenGL instanced rendering slower than glBegin/glEnd

I'm porting an older program using glBegin()/glEnd() (top picture) to glDrawArraysInstanced() (bottom picture). I expected some performance improvements, but I got the opposite. Now this is the first time I've tried using glDrawArraysInstanced() so I think I must have screwed up somewhere.
The two are basically identical and the only difference is how they draw the circles.
What have I done wrong? And if not, what makes it slower?
// This runs once at startup
std::vector<glm::mat4> transforms;
glGenBuffers(NUM_BUFFERS, VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO[TRANSFORM]);
for (int i = 0; i < 4; ++i) {
    glEnableVertexAttribArray(1 + i);
    glVertexAttribPointer(1 + i, 4, GL_FLOAT, GL_FALSE, sizeof(glm::mat4),
        (GLvoid *)(i * sizeof(glm::vec4)));
    glVertexAttribDivisor(1 + i, 1);
} // ---------

// This runs every frame
if (num_circles > transforms.size()) transforms.resize(num_circles);
int i = 0;
for (const auto &circle : circle_vec) {
    transforms[i++] = circle.transform.getModel();
}
glBindBuffer(GL_ARRAY_BUFFER, VBO[TRANSFORM]);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::mat4) * num_circles, &transforms[0], GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(VAO);
glDrawArraysInstanced(GL_LINE_LOOP, 0, CIRCLE_NUM_VERTICES, num_circles);
glBindVertexArray(0);
// ---------

// And this is the vertex shader
#version 410
in vec3 position;
in mat4 transform;
void main()
{
    gl_Position = transform * vec4(position, 1.0);
}
What I saw at first glance is that you are creating a new vector every frame. Consider caching it.
// This runs every frame
std::vector<glm::mat4> transforms;

OpenGL Vertices being clipped from the side

I'm having my vertices clipped on the edged as shown on this album:
http://imgur.com/a/VkCrJ
When my terrain size is 400 x 400 I get clipping, yet at 40 x 40 or anything less, I don't get any clipping.
This is my code to fill the position and indices:
void Terrain::fillPosition()
{
    //start from the top right and work your way down to 1,1
    double x = -1, y = 1, z = 1;
    float rowValue = static_cast<float>((1.0f / _rows) * 2.0); // .05 if 40
    float colValue = static_cast<float>((1.0f / _columns) * 2.0); // .05 if 40
    for (y; y > -1; y -= colValue)
    {
        for (x; x < 1; x += rowValue)
        {
            _vertexPosition.emplace_back(glm::vec3(x, y, z));
        }
        x = -1;
    }
}
This properly sets my position, I've tested it with GL_POINTS. It works fine at 400x400 and 40x40 and other values in between.
Index code:
void Terrain::fillIndices()
{
    glm::ivec3 triangle1, triangle2;
    for (int y = 0; y < _columns - 1; y++)
    {
        for (int x = 0; x < _rows - 1; x++)
        {
            // Triangle 1
            triangle1.x = x + y * _rows;
            triangle1.y = x + (y + 1) * _rows;
            triangle1.z = (x + 1) + y * _rows;
            // Triangle 2
            triangle2.x = triangle1.y;
            triangle2.y = (x + 1) + (y + 1) * _rows;
            triangle2.z = triangle1.z;
            // add our data to the vector
            _indices.emplace_back(triangle1.x);
            _indices.emplace_back(triangle1.y);
            _indices.emplace_back(triangle1.z);
            _indices.emplace_back(triangle2.x);
            _indices.emplace_back(triangle2.y);
            _indices.emplace_back(triangle2.z);
        }
    }
}
_indices is an std::vector. I'm not sure what's causing this, but I'm pretty sure it's the way I'm filling the indices for the mesh. I've re-written my algorithm and it ends up with the same result: small values work perfectly fine, and large values over ~144 get clipped. I fill my buffers like this:
void Terrain::loadBuffers()
{
    // generate the buffers and vertex arrays
    glGenVertexArrays(1, &_vao);
    glGenBuffers(1, &_vbo);
    glGenBuffers(1, &_ebo);
    // bind the vertex array
    glBindVertexArray(_vao);
    // bind the buffer to the vao
    glBindBuffer(GL_ARRAY_BUFFER, _vbo);
    glBufferData(GL_ARRAY_BUFFER, _vertexPosition.size() * sizeof(_vertexPosition[0]), _vertexPosition.data(), GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, _indices.size() * sizeof(_indices[0]), _indices.data(), GL_STATIC_DRAW);
    // enable the shader locations
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    // unbind our data
    glBindVertexArray(0);
}
and my draw call:
void Terrain::renderTerrain(ResourceManager& manager, ResourceIdTextures id)
{
    // set the active texture
    glActiveTexture(GL_TEXTURE0);
    // bind our texture
    glBindTexture(GL_TEXTURE_2D, manager.getTexture(id).getTexture());
    _shaders.use();
    // send data to our uniforms
    glUniformMatrix4fv(_modelLoc, 1, GL_FALSE, glm::value_ptr(_model));
    glUniformMatrix4fv(_viewLoc, 1, GL_FALSE, glm::value_ptr(_view));
    glUniformMatrix4fv(_projectionLoc, 1, GL_FALSE, glm::value_ptr(_projection));
    glUniform1i(_textureLoc, 0);
    glBindVertexArray(_vao);
    // Draw our terrain
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glDrawElements(GL_TRIANGLES, _indices.size(), GL_UNSIGNED_INT, 0);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glBindVertexArray(0);
    _shaders.unuse();
}
I thought it was because of my transformations to the model, so I removed all transformations and it's the same result. I tried debugging by printing the glm::vec3 values via to_string, but the data looks fine. My projection matrix is:
glm::perspective(glm::radians(_fov), _aspRatio, 0.1f, 1000.0f);
So I doubt it's my perspective doing the clipping. _aspRatio is 16/9.
It's really strange that it works fine with small rows x columns values and not large ones; I'm really not sure what the problem is.
I would check the length of _vertexPosition; I suspect the problem is that you are (depending on the number of _rows) generating an extra point at the end of your inner loop (and your outer loop too, depending on _columns).
The reason is that the termination condition of your vertex loops depends on the exact behavior of your floating point math. Specifically, you divide up the range [-1,1] into _rows segments, then add them together and use them as a termination test. It is unclear whether you expect a final point (yielding _rows+1 points per inner loop) or not (yielding a rectangle which doesn't cover the entire [-1,1] range). Unfortunately, floating point is not exact, so this is a recipe for unreliable behavior: depending on the direction of your floating point error, you might get one or the other.
For a larger number of _rows, you are adding more (and significantly smaller) numbers to the same initial value; this will aggravate your floating point error.
At any rate, in order to get reliable behavior, you should use integer loop variables to determine loop termination. Accumulate your floating point coordinates separately, so that exact accuracy is not required.

How to get keyboard navigation in OpenGL

I'm trying to create a solar system in OpenGL. I have the basic code for the Earth spinning on its axis, and I'm trying to set the camera to move with the arrow keys.
using namespace std;
using namespace glm;
const int windowWidth = 1024;
const int windowHeight = 768;
GLuint VBO;
int NUMVERTS = 0;
bool* keyStates = new bool[256]; //Create an array of boolean values of length 256 (0-255)
float fraction = 0.1f; //Fraction for navigation speed using keys
// Transform uniforms location
GLuint gModelToWorldTransformLoc;
GLuint gWorldToViewToProjectionTransformLoc;
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
// TextureSampler uniform location
GLuint gTextureSamplerLoc;
// Texture ID
GLuint gTextureObject[11];
//Navigation variables
float posX;
float posY;
float posZ;
float viewX = 0.0f;
float viewY = 0.0f;
float viewZ = 0.0f;
float dirX;
float dirY;
float dirZ;
vec3 cameraPos = vec3(0.0f,0.0f,5.0f);
vec3 cameraView = vec3(viewX,viewY,viewZ);
vec3 cameraDir = vec3(0.0f,1.0f,0.0f);
These are all my variables that im using to edit the camera.
static void renderSceneCallBack()
{
    // Clear the back buffer and the z-buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Create our world space to view space transformation matrix
    mat4 worldToViewTransform = lookAt(
        cameraPos,  // The position of your camera, in world space
        cameraView, // where you want to look at, in world space
        cameraDir   // Camera up direction (set to 0,-1,0 to look upside-down)
    );
    // Create our projection transform
    mat4 projectionTransform = perspective(45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f);
    // Combine the world space to view space transformation matrix and the projection transformation matrix
    mat4 worldToViewToProjectionTransform = projectionTransform * worldToViewTransform;
    // Update the transforms in the shader program on the GPU
    glUniformMatrix4fv(gWorldToViewToProjectionTransformLoc, 1, GL_FALSE, &worldToViewToProjectionTransform[0][0]);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), 0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)12);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)24);
    // Set the material properties
    glUniform1f(gKaLoc, 0.8f);
    glUniform1f(gKdLoc, 0.8f);
    // Bind the texture to texture unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, gTextureObject[0]);
    // Set our sampler to use texture unit 0
    glUniform1i(gTextureSamplerLoc, 0);
    // Draw triangle
    mat4 modelToWorldTransform = mat4(1.0f);
    static float angle = 0.0f;
    angle += 1.0f;
    modelToWorldTransform = rotate(modelToWorldTransform, angle, vec3(0.0f, 1.0f, 0.0f));
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &modelToWorldTransform[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, NUMVERTS);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glutSwapBuffers();
}
This is the function that draws the earth onto the screen and determines where the camera is at.
void keyPressed(unsigned char key, int x, int y)
{
    keyStates[key] = true; //Set the state of the current key to pressed
    cout << "keyPressed ";
}

void keyUp(unsigned char key, int x, int y)
{
    keyStates[key] = false; //Set the state of the current key to released
    cout << "keyUp ";
}

void keyOperations(void)
{
    if (keyStates['a'])
    {
        viewX += 0.5f;
    }
    cout << "keyOperations ";
}
These are the functions I'm trying to use to edit the camera variables dynamically
// Create a vertex buffer
createVertexBuffer();
glutKeyboardFunc(keyPressed); //Tell Glut to use the method "keyPressed" for key events
glutKeyboardUpFunc(keyUp); //Tell Glut to use the method "keyUp" for key events
keyOperations();
glutMainLoop();
Finally, here are the few lines in my main method where I'm trying to call the key press functions. In the console I can see it detects that I'm pressing them, but the planet doesn't move at all. I think I may be calling keyOperations in the wrong place, but I'm not sure.
You are correct: keyOperations is being called in the wrong place. Where it is now, it is called once and then never again. It needs to go in your update code, where you update the rotation of the planet, so that it is called at least once per frame.

Normal Rotation in GLSL

I have written a basic program that loads a model and renders it to the screen. I'm using GLSL to transform the model appropriately, but the normals always seem to be incorrect after rotating them with every combination of model matrix, view matrix, inverse, transpose, etc that I could think of. The model matrix is just a rotation around the y-axis using glm:
angle += deltaTime;
modelMat = glm::rotate(glm::mat4(), angle, glm::vec3(0.f, 1.f, 0.f));
My current vertex shader code (I've modified the normal line many many times):
#version 150 core
uniform mat4 projMat;
uniform mat4 viewMat;
uniform mat4 modelMat;
in vec3 inPosition;
in vec3 inNormal;
out vec3 passColor;
void main()
{
    gl_Position = projMat * viewMat * modelMat * vec4(inPosition, 1.0);
    vec3 normal = normalize(mat3(inverse(modelMat)) * inNormal);
    passColor = normal;
}
And my fragment shader:
#version 150 core
in vec3 passColor;
out vec4 outColor;
void main()
{
    outColor = vec4(passColor, 1.0);
}
I know for sure that the uniform variables are being passed to the shader properly, as the model itself gets transformed properly, and the initial normals are correct if I do calculations such as directional lighting.
I've created a GIF of the rotating model, sorry about the low quality:
http://i.imgur.com/LgLKHCb.gif?1
What confuses me the most is how the normals appear to rotate on multiple axis, which I don't think should happen when multiplied by a simple rotation matrix on one axis.
Edit:
I've added some more of the client code below.
This is where the buffers get bound for the model, in the Mesh class (vao is GLuint, defined in the class):
GLuint vbo[3];
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(normals ? (uvcoords ? 3 : 2) : (uvcoords ? 2 : 1), vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, vcount * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
if (normals)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, vcount * 3 * sizeof(GLfloat), normals, GL_STATIC_DRAW);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 0, 0);
    glEnableVertexAttribArray(1);
}
if (uvcoords)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
    glBufferData(GL_ARRAY_BUFFER, vcount * 2 * sizeof(GLfloat), uvcoords, GL_STATIC_DRAW);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(2);
}
glBindVertexArray(0);
glGenBuffers(1, &ib);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ib);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, icount * sizeof(GLushort), indices, GL_STATIC_DRAW);
This is where the shaders are compiled after being loaded into memory with a simple readf(), in the Material class:
u32 vertexShader = glCreateShader(GL_VERTEX_SHADER);
u32 fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(vertexShader, 1, (const GLchar**)&vsContent, 0);
glCompileShader(vertexShader);
if(!validateShader(vertexShader)) return false;
glShaderSource(fragmentShader, 1, (const GLchar**)&fsContent, 0);
glCompileShader(fragmentShader);
if(!validateShader(fragmentShader)) return false;
programHandle = glCreateProgram();
glAttachShader(programHandle, vertexShader);
glAttachShader(programHandle, fragmentShader);
glBindAttribLocation(programHandle, 0, "inPosition");
glBindAttribLocation(programHandle, 1, "inNormal");
//glBindAttribLocation(programHandle, 2, "inUVCoords");
glLinkProgram(programHandle);
if(!validateProgram()) return false;
And the validateShader(GLuint) and validateProgram() functions:
bool Material::validateShader(GLuint shaderHandle)
{
    char buffer[2048];
    memset(buffer, 0, 2048);
    GLsizei len = 0;
    glGetShaderInfoLog(shaderHandle, 2048, &len, buffer);
    if (len > 0)
    {
        Logger::log("ve::Material::validateShader: Failed to compile shader - %s", buffer);
        return false;
    }
    return true;
}

bool Material::validateProgram()
{
    char buffer[2048];
    memset(buffer, 0, 2048);
    GLsizei len = 0;
    glGetProgramInfoLog(programHandle, 2048, &len, buffer);
    if (len > 0)
    {
        Logger::log("ve::Material::validateProgram: Failed to link program - %s", buffer);
        return false;
    }
    glValidateProgram(programHandle);
    GLint status;
    glGetProgramiv(programHandle, GL_VALIDATE_STATUS, &status);
    if (status == GL_FALSE)
    {
        Logger::log("ve::Material::validateProgram: Failed to validate program");
        return false;
    }
    return true;
}
Each Material instance has a std::map of Meshs, and get rendered as so:
void Material::render()
{
    if (loaded)
    {
        glUseProgram(programHandle);
        for (auto it = mmd->uniforms.begin(); it != mmd->uniforms.end(); ++it)
        {
            GLint loc = glGetUniformLocation(programHandle, (const GLchar*)it->first);
            switch (it->second.type)
            {
            case E_UT_FLOAT3: glUniform3fv(loc, 1, it->second.f32ptr); break;
            case E_UT_MAT4: glUniformMatrix4fv(loc, 1, GL_FALSE, it->second.f32ptr); break;
            default: break;
            }
        }
        for (Mesh* m : mmd->objects)
        {
            GLint loc = glGetUniformLocation(programHandle, "modelMat");
            glUniformMatrix4fv(loc, 1, GL_FALSE, &m->getTransform()->getTransformMatrix()[0][0]);
            m->render();
        }
    }
}
it->second.f32ptr would be a float pointer to &some_vec3[0] or &some_mat4[0][0].
However, I manually upload the model's transformation matrix before rendering; it is only a rotation matrix (the Transform class returned by Mesh::getTransform() only does a glm::rotation(), since I was trying to narrow down the problem).
Lastly, the Mesh render code:
if (loaded)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ib);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
}
I think this is all the necessary code, but I can post more if needed.
Your normal matrix calculation is just wrong. The correct normal matrix is the transpose of the inverse of the upper-left 3x3 submatrix of the model or modelview matrix (depending on which space you want to do your lighting calculations in).
What you do is just inverting the full 4x4 matrix and taking the upper-left 3x3 submatrix of that, which is just totally wrong.
You should calculate transpose(inverse(mat3(modelMat))), but you really shouldn't do this in the shader; calculate it together with the model matrix on the CPU, to avoid making the GPU perform a quite expensive matrix inversion per vertex.
As long as your transformations consist of only rotations, translations, and uniform scaling, you can simply apply the rotation part of your transformations to the normals.
In general, it's the transposed inverse matrix that needs to be applied to the normals, using only the regular 3x3 linear transformation matrix, without the translation part that extends the matrix to 4x4.
For rotations and uniform scaling, the inverse-transpose is identical to the original matrix. So the matrix operations to invert and transpose matrices are only needed if you apply other types of transformations, like non-uniform scaling, or shear transforms.
Apparently, if the vertex normals of a mesh are incorrect, strange rotation artifacts will occur. In my case, I had transformed the mesh in my 3D modelling program (Blender) by 90 degrees on the X axis, as Blender uses the z-axis as its vertical axis whereas my program uses the y-axis. However, the method I used to transform/rotate the mesh in my Blender export script only transformed the positions of the vertices, not the normals. Without any prior transformations, the program works as expected. I initially found out that the normals were incorrect by comparing the normalized positions and normals of a symmetrical object (I used a cube with smoothed normals), and saw that the normals were rotated. Thank you to @derhass and @Solkar for guiding me to the answer.
However, if anyone still wants to contribute, I would like to know why the normals don't rotate in one axis when multiplied by a single axis rotation matrix, even if they are incorrect.