I have the output of a calculation (basically a certain number of cuboids in certain rotations) stored in the std::vector Box, based on which I am creating the model matrices for OpenGL visualization:
std::vector<glm::mat4> modelMatrices;
for (int32_t i = 0; i < Box.number_of_cuboids(); i++)
{
    float rx, ry, rz, theta;
    Box.cuboid(i).get_rotation(rx, ry, rz, theta);
    float x, y, z;
    Box.cuboid(i).position(x, y, z);
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(x, y, z))
        * glm::rotate(glm::mat4(1.0f), theta, glm::vec3(rx, ry, rz))
        * glm::scale(glm::mat4(1.0f), glm::vec3(
              Box.cuboid(i).width(), Box.cuboid(i).length(), Box.cuboid(i).height()));
    modelMatrices.push_back(model);
}
and I can successfully visualise them like this:
while (!glfwWindowShouldClose(window))
{
    processInput(window, Box.size_x(), Box.size_y(), Box.size_z());
    glClearColor(0.95f, 0.95f, 0.95f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* shaders part <...> */
    for (int32_t i = 0; i < modelMatrices.size(); i++)
    {
        ourShader.setMat4("model", modelMatrices[i]);
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }
    glfwSwapBuffers(window);
    glfwPollEvents();
}
My problem is that Box is already the final output of the calculations. I would like to see all the iteration steps, so basically what I would like to do is:
function ManyCalculations(std::vector<Box>& steps)
{
    // at every iteration, save the current state of Box into the vector
}
So basically after, let's say, 10000 iterations I end up with the same number of Box elements, and now I would like to play that vector back as frames/an animation (video?) in my OpenGL loop, so that I can watch the calculation of the contents evolve.
To animate your objects with each frame you would need to keep updating your Vertex Buffer Objects (VBOs).
You can either keep adding each new box every frame or change the translation matrices.
Then set the new data in your VBO before drawing.
If you know the maximum size of your data, you can allocate one large VBO up front (glBufferData with GL_DYNAMIC_DRAW) and keep updating its contents with glBufferSubData.
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
glBufferData(GL_ARRAY_BUFFER, Box.number_of_cuboids() * sizeof(Box[0]), &Box[0], GL_DYNAMIC_DRAW);
I want to understand how to create loads of similar 2-D objects and then animate each one separately, using OpenGL.
I have a feeling that it will be done using this and glfwGetTime().
Can anyone here help point me in the right direction?
Ok, so here is the general approach I have tried so far:
We have this array of translations, created by the following code, which I have modified slightly to make a shift in location based on time.
glm::vec2 translations[100];
int index = 0;
float offset = 0.1f;
float time = glfwGetTime(); // new code
for (int y = -10; y < 10; y += 2)
{
    for (int x = -10; x < 10; x += 2)
    {
        glm::vec2 translation;
        translation.x = (float)x / 10.0f + offset + time;        // new adjustment
        translation.y = (float)y / 10.0f + offset + time * time; // new adjustment
        translations[index++] = translation;
    }
}
Later, in the render loop,
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    shader.use();
    glBindVertexArray(quadVAO);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100); // 100 instances of a 6-vertex quad
    glBindVertexArray(0);
    time = glfwGetTime(); // new adjustment
    glfwSwapBuffers(window);
    glfwPollEvents();
}
is what I have tried. I suppose I am misunderstanding the way the graphics pipeline works. As I mentioned earlier, my guess is that I need to use some glm matrices to make this work as I imagined it, but I am not sure ...
The general direction would be, during initialization:
Allocate a buffer to hold the positions of your instances (glNamedBufferStorage).
Set up an instanced vertex attribute for your VAO that sources the data from that buffer (glVertexArrayBindingDivisor and others).
Update your vertex shader to apply the position of your instance (coming from the instanced attribute) to the total transformation calculated within the shader.
Then, once per frame (or when the position changes):
Calculate the positions of all your instances (the code you posted).
Submit those to the previously allocated buffer with glNamedBufferSubData.
So far you showed the code calculating the position. From here try to implement the rest, and ask a specific question if you have difficulties with any particular part of it.
I posted an example of using instancing with multidraw that you can use for reference. Note that in your case you don't need the multidraw, however, just the instancing part.
I'm having my vertices clipped at the edges, as shown in this album:
http://imgur.com/a/VkCrJ
When my terrain size is 400x400 I get clipping, yet at 40x40 or anything less I don't get any clipping.
This is my code to fill the position and indices:
void Terrain::fillPosition()
{
    // start from the top right and work your way down to 1,1
    double x = -1, y = 1, z = 1;
    float rowValue = static_cast<float>((1.0f / _rows) * 2.0);    // .05 if 40
    float colValue = static_cast<float>((1.0f / _columns) * 2.0); // .05 if 40
    for (y; y > -1; y -= colValue)
    {
        for (x; x < 1; x += rowValue)
        {
            _vertexPosition.emplace_back(glm::vec3(x, y, z));
        }
        x = -1;
    }
}
This properly sets my position, I've tested it with GL_POINTS. It works fine at 400x400 and 40x40 and other values in between.
Index code:
void Terrain::fillIndices()
{
    glm::ivec3 triangle1, triangle2;
    for (int y = 0; y < _columns - 1; y++)
    {
        for (int x = 0; x < _rows - 1; x++)
        {
            // Triangle 1
            triangle1.x = x + y * _rows;
            triangle1.y = x + (y + 1) * _rows;
            triangle1.z = (x + 1) + y * _rows;
            // Triangle 2
            triangle2.x = triangle1.y;
            triangle2.y = (x + 1) + (y + 1) * _rows;
            triangle2.z = triangle1.z;
            // add our data to the vector
            _indices.emplace_back(triangle1.x);
            _indices.emplace_back(triangle1.y);
            _indices.emplace_back(triangle1.z);
            _indices.emplace_back(triangle2.x);
            _indices.emplace_back(triangle2.y);
            _indices.emplace_back(triangle2.z);
        }
    }
}
_indices is a std::vector. I'm not sure what's causing this, but I'm pretty sure it's the way I'm filling the indices for the mesh. I've re-written my algorithm and it ends up with the same result: small values work perfectly fine, and large values over ~144 get clipped. I fill my buffers like this:
void Terrain::loadBuffers()
{
// generate the buffers and vertex arrays
glGenVertexArrays(1, &_vao);
glGenBuffers(1, &_vbo);
glGenBuffers(1, &_ebo);
// bind the vertex array
glBindVertexArray(_vao);
// bind the buffer to the vao
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _vertexPosition.size() * sizeof(_vertexPosition[0]), _vertexPosition.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _indices.size() * sizeof(_indices[0]), _indices.data(), GL_STATIC_DRAW);
// enable the shader locations
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
// unbind our data
glBindVertexArray(0);
}
and my draw call:
void Terrain::renderTerrain(ResourceManager& manager, ResourceIdTextures id)
{
// set the active texture
glActiveTexture(GL_TEXTURE0);
// bind our texture
glBindTexture(GL_TEXTURE_2D, manager.getTexture(id).getTexture());
_shaders.use();
// send data the our uniforms
glUniformMatrix4fv(_modelLoc, 1, GL_FALSE, glm::value_ptr(_model));
glUniformMatrix4fv(_viewLoc, 1, GL_FALSE, glm::value_ptr(_view));
glUniformMatrix4fv(_projectionLoc, 1, GL_FALSE, glm::value_ptr(_projection));
glUniform1i(_textureLoc, 0);
glBindVertexArray(_vao);
// Draw our terrain;
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glDrawElements(GL_TRIANGLES, _indices.size(), GL_UNSIGNED_INT, 0);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glBindVertexArray(0);
_shaders.unuse();
}
I thought it was because of my transformations to the model, so I removed all transformations, and it's the same result. I tried debugging by converting the glm::vec3 values to strings, but the data looks fine. My projection matrix is:
glm::perspective(glm::radians(_fov), _aspRatio, 0.1f, 1000.0f);
So I doubt it's my perspective doing the clipping. _aspRatio is 16/9.
It's really strange that it works fine with small rows x columns and not with large ones; I'm really not sure what the problem is.
I would check the length of _vertexPosition; I suspect the problem is that you are (depending on the number of _rows) generating an extra point at the end of your inner loop (and your outer loop too, depending on _columns).
The reason is that the termination condition of your vertex loops depends on the exact behavior of your floating point math. Specifically, you divide up the range [-1,1] into _rows segments, then add them together and use them as a termination test. It is unclear whether you expect a final point (yielding _rows+1 points per inner loop) or not (yielding a rectangle which doesn't cover the entire [-1,1] range). Unfortunately, floating point is not exact, so this is a recipe for unreliable behavior: depending on the direction of your floating point error, you might get one or the other.
For a larger number of _rows, you are adding more (and significantly smaller) numbers to the same initial value; this will aggravate your floating point error.
At any rate, in order to get reliable behavior, you should use integer loop variables to determine loop termination. Accumulate your floating point coordinates separately, so that exact accuracy is not required.
I know there are a lot of resources about this on the internet but they didn't quite seem to help me.
What I want to achieve:
I am baking a mesh from data which stores the vertices inside a vector<Vector3>.
(Vector3 is a struct containing float x, y, z)
It stores triangles in a map<int, vector<int>>
(the key of the map is the submesh and the vector<int> the triangles)
the UVs inside a vector<Vector2>
(Vector2 is a struct containing float x, y)
and color values in a vector<Color>
(the color values apply per vertex, like the UVs do)
Now I want to write code that can read that data and draw it to the screen with maximum performance.
What I got:
static void renderMesh(Mesh mesh, float x, float y, float z) {
    if (mesh.triangles.empty()) return;
    if (mesh.vertices.empty()) return;
    if (mesh.uvs.empty()) return;
    glColor3f(1, 1, 1);
    typedef std::map<int, std::vector<int>>::iterator it_type;
    for (it_type iterator = mesh.triangles.begin(); iterator != mesh.triangles.end(); iterator++) {
        int submesh = iterator->first;
        if (submesh < mesh.textures.size()) glBindTexture(GL_TEXTURE_2D, mesh.textures[submesh].id);
        else glBindTexture(GL_TEXTURE_2D, 0);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        for (int i = 0; i < iterator->second.size(); i += 3) {
            int t0 = iterator->second[i + 0];
            int t1 = iterator->second[i + 1];
            int t2 = iterator->second[i + 2];
            Vector3 v0 = mesh.vertices[t0];
            Vector3 v1 = mesh.vertices[t1];
            Vector3 v2 = mesh.vertices[t2];
            Color c0 = mesh.vertexColors[t0];
            Color c1 = mesh.vertexColors[t1];
            Color c2 = mesh.vertexColors[t2];
            Vector2 u0 = mesh.uvs[t0];
            Vector2 u1 = mesh.uvs[t1];
            Vector2 u2 = mesh.uvs[t2];
            glBegin(GL_TRIANGLES);
            glColor4f(c0.r / 255.0f, c0.g / 255.0f, c0.b / 255.0f, c0.a / 255.0f); glTexCoord2d(u0.x, u0.y); glVertex3f(v0.x + x, v0.y + y, v0.z + z);
            glColor4f(c1.r / 255.0f, c1.g / 255.0f, c1.b / 255.0f, c1.a / 255.0f); glTexCoord2d(u1.x, u1.y); glVertex3f(v1.x + x, v1.y + y, v1.z + z);
            glColor4f(c2.r / 255.0f, c2.g / 255.0f, c2.b / 255.0f, c2.a / 255.0f); glTexCoord2d(u2.x, u2.y); glVertex3f(v2.x + x, v2.y + y, v2.z + z);
            glEnd();
            glColor3f(1, 1, 1);
        }
    }
}
The problem:
I found out that the way I render is not the best way and that you can achieve higher performance with glDrawArrays (I think it was called).
Could you help me rewrite my code to use glDrawArrays? What I found so far on the internet did not help me much.
Thanks, and if there is any more information needed just ask.
The use of functions like glBegin and glEnd is deprecated. Functions like glDrawArrays have better performance, but are slightly more complicated to use.
The problem with glBegin-style rendering is that you have to send each vertex one by one every time you want to draw something. Today, graphics cards can render thousands of vertices very quickly, but if you feed them one by one, rendering becomes laggy regardless of your graphics card's performance.
The main advantage of glDrawArrays is that you initialize your arrays once and then draw them with a single call. So first, at the start of your program, you need to fill an array for each attribute. In your case: positions, colors and texture coords. They must be float arrays, something like this:
std::vector<float> vertices;
std::vector<float> colors;
std::vector<float> textureCoords;
for (int i = 0; i < iterator->second.size(); i += 3) {
    int t0 = iterator->second[i + 0];
    int t1 = iterator->second[i + 1];
    int t2 = iterator->second[i + 2];
    vertices.push_back(mesh.vertices[t0].x);
    vertices.push_back(mesh.vertices[t0].y);
    vertices.push_back(mesh.vertices[t0].z);
    vertices.push_back(mesh.vertices[t1].x);
    vertices.push_back(mesh.vertices[t1].y);
    vertices.push_back(mesh.vertices[t1].z);
    vertices.push_back(mesh.vertices[t2].x);
    vertices.push_back(mesh.vertices[t2].y);
    vertices.push_back(mesh.vertices[t2].z);
    // [...] Same for colors and texture coords.
}
Then, in another function used only for display, you can use these arrays to draw:
// Enable everything you need
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// Set your used arrays
glVertexPointer(3, GL_FLOAT, 0, vertices.data());
glColorPointer(4, GL_FLOAT, 0, colors.data());
glTexCoordPointer(2, GL_FLOAT, 0, textureCoords.data());
// Draw your mesh
glDrawArrays(GL_TRIANGLES, 0, size); // 'size' is the number of your vertices.
// Reset initial state
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
Of course, you'll also have to enable any other state you want to use, like texturing or blending.
NOTE:
If you wish to learn about performance, there are also other functions using indices in order to reduce the size of data used, like glDrawElements.
There are also other more advanced OpenGL techniques that allows you to increase performance by saving your data directly on the graphic card memory, like Vertex Buffer Objects.
I'm having a problem currently with a particle engine I'm making. With the engine you can add more than one emitter in to the engine, the idea being that each particle system can emit its own particles.
The problem I'm getting however is that when I add a second particle system, the drawing of the first seems to be affected, by which I mean it's not drawn at all. The draw call of each particle system is being called correctly.
What I am thinking the issue is however is that although multiple VBOs are created, only one is actually used.
I'll show the important parts of my functions that affect the VBOs. My shader uses a uniform location to store WVP matrices. I should also mention each particle system should be using its own shader program.
This below is my initializeBuffers function called when the particle system is created:
void ParticleSystem::InitializeBuffers()
{
glGenVertexArrays(1, &VaoId);
glBindVertexArray(VaoId);
//glGenBuffers(1, &VboId);
glGenBuffers(1, &PositionBufferId);
glGenBuffers(1, &IndexBufferId);
glGenBuffers(1, &WVPId);
std::list<Particle>::iterator iterator = particles.begin();
//positions.reserve(5);
for (std::list<Particle>::iterator iterator = particles.begin(), end = particles.end(); iterator != end; ++iterator)
{
positions.push_back(iterator->GetPosition());
//verticesToDraw.insert(verticesToDraw.end(), iterator->GetVertices()->begin(), iterator->GetVertices()->end());
indicesToDraw.insert(indicesToDraw.end(), iterator->GetIndices()->begin(), iterator->GetIndices()->end());
}
//glBindBuffer(GL_ARRAY_BUFFER, VboId);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IndexBufferId);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indicesToDraw[0]) * indicesToDraw.size(), &indicesToDraw[0], GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, WVPId);
for (unsigned int i = 0; i < 4 ; i++) {
glEnableVertexAttribArray(WVP_LOCATION + i);
glVertexAttribPointer(WVP_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f), (const GLvoid*)(sizeof(GLfloat) * i * 4));
glVertexAttribDivisor(WVP_LOCATION + i, 1);
}
for(std::list<BaseBuildingBlock*>::iterator iterator = buildingBlocks.begin(), end = buildingBlocks.end(); iterator != end; ++iterator)
{
(*iterator)->InitializeBuffer(programId);
}
/*
glBindBuffer(GL_ARRAY_BUFFER, WorldId);
for (unsigned int i = 0; i < 4 ; i++) {
glEnableVertexAttribArray(WORLD_LOCATION + i);
glVertexAttribPointer(WORLD_LOCATION + i, 4, GL_FLOAT, GL_FALSE, sizeof(Matrix4f), (const GLvoid*)(sizeof(GLfloat) * i * 4));
glVertexAttribDivisor(WORLD_LOCATION + i, 1);
}
*/
//return GLCheckError();
}
This is the draw function and the code that actually draws the instanced elements; the WVP matrices are built by the particle system earlier in the function.
void ParticleSystem::Draw(Matrix4f perspectiveCameraMatrix)
{
// scale TEST
//GLint gScaleLocation = glGetUniformLocation(program, "gScale");
//assert(gScaleLocation != 0xFFFFFFFF);
//glUniform1f(gScaleLocation, scale);
//Pipeline p;
//Matrix4f* WVPMatrices = new Matrix4f[particles.size()];
//Matrix4f* WorldMatrices = new Matrix4f[particles.size()];
WVPMatrices.clear();
WorldMatrices.clear();
glUseProgram(0);
glUseProgram(programId);
//Matrix4f perspectiveMatrix;
//perspectiveMatrix.BuildPerspProjMat(90,1, 0.01, 200, 100 - 0 /*getWidth() / 32*/, 100 - 0 /*getHeight() / 32*/);
//********************************************************************************************************
// Method 1
// Think I need to next define a camera position.
if(particles.size() == 0)
{
return;
}
verticesToDraw.clear();
Matrix4f scaleMatrix;
Matrix4f worldMatrix;
Matrix4f rotateMatrix;
Matrix4f finalMatrix;
//ColourId = glGetUniformLocation(programId, "UniformColour");
int i = 0;
for (std::list<Particle>::iterator iterator = particles.begin(), end = particles.end(); iterator != end; ++iterator)
{
verticesToDraw = *iterator->GetVertices();
indicesToDraw = *iterator->GetIndices();
//positions.push_back(iterator->GetPosition());
worldMatrix.InitTranslationTransform(iterator->GetPosition().x, iterator->GetPosition().y, iterator->GetPosition().z);
rotateMatrix.InitRotateTransform(iterator->GetRotation().x, iterator->GetRotation().y, iterator->GetRotation().z);
scaleMatrix.InitScaleTransform(iterator->GetScale().x, iterator->GetScale().y, iterator->GetScale().z);
finalMatrix = perspectiveCameraMatrix * worldMatrix * rotateMatrix * scaleMatrix;
//p.WorldPos(iterator->GetPosition());
//p.Rotate(iterator->GetRotation());
WVPMatrices.push_back(finalMatrix.Transpose());
/*glUniform4f(ColourId, iterator->GetColour().r, iterator->GetColour().g, iterator->GetColour().b,
iterator->GetColour().a);*/
//WorldMatrices[i] = p.GetWorldTrans();
i++;
//iterator->Draw();
}
//glEnableVertexAttribArray(0);
if(colourOverLifeBuildingBlock != NULL)
{
colourOverLifeBuildingBlock->Test();
}
glBindBuffer(GL_ARRAY_BUFFER, VboId);
glBufferData(GL_ARRAY_BUFFER, verticesToDraw.size() * sizeof(verticesToDraw[0]), &verticesToDraw.front(), GL_STATIC_DRAW);
glEnableVertexAttribArray(POSITION_LOCATION);
glVertexAttribPointer(POSITION_LOCATION, 3, GL_FLOAT, GL_FALSE, 0, 0);
int size = particles.size();
glBindBuffer(GL_ARRAY_BUFFER, WVPId);
glBufferData(GL_ARRAY_BUFFER, sizeof(Matrix4f) * size, &WVPMatrices.front(), GL_DYNAMIC_DRAW);
glDrawElementsInstanced(GL_TRIANGLES, indicesToDraw.size(), GL_UNSIGNED_BYTE, 0, particles.size());
glBindBuffer(GL_ARRAY_BUFFER, 0);
//glDisableVertexAttribArray(0);
//glFlush();
}
The particle system's entire header is below:
#include <gl\glew.h>
#include <array>
#include <vector>
class ParticleSystem
{
public:
ParticleSystem(Vector3 pos, Quaternion rot, float spawnRate, int particlesToSpawn); // Constructs a particle system.
~ParticleSystem(); // Destructor.
void Update(float elapsedTime); // Updates the particle system.
void Draw(Matrix4f perspectiveMatrix); // Draw the particle system
void CreateShaders();
void InitializeBuffers();
// Long amount of get sets.
/*float* GetMinLifeTime();
void SetMinLifeTime(float lt);
float* GetMaxLifeTime();
void SetMaxLifeTime(float lt);*/
int* GetParticlesToSpawnAtATime();
void SetParticlesToSpawnAtATime(int particlesToSpawn);
float* GetSpawnRate();
void SetSpawnRate(float spawnRate);
Vector3* GetPosition();
void SetPosition(Vector3 newPosition);
Quaternion* GetRotation();
void SetRotation(Quaternion rotation);
std::list<BaseBuildingBlock*> GetBuildingBlocks();
VelocityBuildingBlock* GetVelocityBuilding();
ColourOverLifeBuildingBlock* GetColourOverLifeBuildingBlock();
LifeTimeBuildingBlock* GetLifeTimeBuildingBlock();
UniformColourBuildingBlock* GetUniformColourBuildingBlock();
ScaleBuildingBlock* GetScaleBuildingBlock();
/*Vector3* GetMinVelocity();
void SetMinVelocity(Vector3 min);
Vector3* GetMaxVelocity();
void SetMaxVelocity(Vector3 maxVelocity);*/
Vector3 GetMinParticleStartPoint();
void SetMinParticleStartPoint(Vector3 minParticleStartPoint);
Vector3 GetMaxParticleStartPoint();
void SetMaxParticleStartPoint(Vector3 maxParticleStartPoint);
bool CreateColourOverLifeBuildingBlock();
bool DeleteColourOverLifeBuildingBlock();
bool CreateUniformColourBuildingBlock();
bool DeleteUniformColourBuildingBlock();
bool CreateScaleBuildingBlock();
bool DeleteScaleBuildingBlock();
/*Colour GetStartColour();
void SetStartColour(Colour startColour);
Colour GetEndColour();
void SetEndColour(Colour endColour);*/
Vector3* GetMinParticleRotationAmountPerFrame();
void SetMinParticleRotationAmountPerFrame(Vector3 minParticleRotationAmount);
Vector3* GetMaxParticleRotationAmountPerFrame();
void SetMaxParticleRotationAmountPerFrame(Vector3 maxParticleRotationAmount);
void Save(TiXmlElement* element);
private:
// Spawns a particle.
void SpawnParticle();
GLuint VaoId;
GLuint VboId;
GLuint IndexBufferId;
GLuint PositionBufferId;
GLuint WVPId;
GLenum programId;
std::vector<GLfloat> verticesToDraw;
std::vector<GLubyte> indicesToDraw;
std::vector<Vector3> positions;
std::vector<Matrix4f> WVPMatrices;
std::vector<Matrix4f> WorldMatrices;
std::list<Particle> particles; // List of particles
Vector3 position; // position of the emitter
Quaternion rotation; // rotation of the emitter.
float spawnRate; // spawnrate of the emitter.
int particlesToSpawnAtATime; // The amount of particles to spawn at a time.
float minLifeTime; // The minimum time a particle can live for.
float maxLifeTime; // The maximum time a particle can live for.
float timer; // Timer
ShaderCreator* shaderCreator;
//Vector3 minVelocity; // The minimum velocity a particle can have.
//Vector3 maxVelocity; // The maximum velocity a particle can have/
//std::list<BaseBuildingBlock> buildingBlocks;
// I'm thinking of eventually making a list of baseBuildingBlocks.
std::list<BaseBuildingBlock*> buildingBlocks;
VelocityBuildingBlock* velocityBuildingBlock;
ColourOverLifeBuildingBlock* colourOverLifeBuildingBlock;
LifeTimeBuildingBlock* lifeTimeBuildingBlock;
UniformColourBuildingBlock* uniformColourBuildingBlock;
ScaleBuildingBlock* scaleBuildingBlock;
Vector3 minParticleStartPoint; // The minimum position a particle can start at.
Vector3 maxParticleStartPoint; // The maximum position a particle can start at.
Vector3 minParticleRotationAmountPerFrame; // The minimum amount of rotation that a particle can rotate every frame.
Vector3 maxParticleRotationAmountPerFrame; // The maximum amount of rotation that a particle can rotate every frame.
Colour startColour; // StartColour is the colour that a particle will start with.
Colour endColour; // EndColour is the colour that a particle will end with.
//TEST
float scale;
};
#endif
Now I'm wondering: is there some way I have to switch the active VBO, or am I totally on the wrong track? I used a shader debugger and both VBOs definitely exist.
You'll need to correctly set up your vertex attribs before each draw call - i.e., you have to call glBindBuffer followed by glEnableVertexAttribArray and glVertexAttribPointer for each of your attributes before each draw call. In the code you posted, this happens only for the particle position, but not for the 'WVP_LOCATION' attribute which apparently contains your transformation matrices (you do upload the data to the GPU via glBufferData, but don't set up the attribute) - meaning that once you have more than one particle system, only the transformation matrices of your second particle system are ever going to be accessed for rendering.
On a side note, what you're trying to do here seems to be quite inefficient - you're essentially pushing one transformation matrix to the GPU for each of your particles, per frame. Depending on how many particles you want, this is going to kill your performance - you should consider updating the particles' positions etc. with a transform feedback.
edit: just realized that the OpenGL wiki link doesn't really explain a lot. A transform feedback is a way to record vertex shader outputs (or, if a geometry / tessellation shader were present, that output would be recorded instead). The output variables are written into a VBO - afterwards, they can be used for rendering like any other vertex attribute. The whole concept is extremely similar to using a framebuffer object for recording fragment shader outputs; it allows for particle systems that exist entirely on the GPU, with a vertex shader computing the updated position, lifetime and other attributes each frame. A very nice tutorial, which shows the basic setup of such a transform feedback, can be found here