Why does my program translate all of my vertices? (OpenGL)

I have two classes, each with its own model coordinates, colors, etc. I also have two shader programs that are logically the same. First I execute one shader program, set the usual view and projection matrix uniforms, then call the first class to set its own model matrix uniform and draw its primitives. Immediately afterwards I do the exact same thing with the second shader program: set the uniforms again and call the second class to draw its primitives with its own unique model matrix.
In the second class, I translate the model matrix each iteration, but not in the first class. For some reason the model matrix in the first class gets translated as well, and I don't know why.
Source code:
// First shader program: update the view and projection matrices, then have the first class draw its vertices
executable.Execute();
GLuint viewMatrix = glGetUniformLocation(executable.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
GLuint projMatrix = glGetUniformLocation(executable.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp.useClass(executable);
// Second shader program: update the view and projection matrices, then have the second class draw its vertices
executable2.Execute();
viewMatrix = glGetUniformLocation(executable2.getComp(), "viewMatrix");
glUniformMatrix4fv(viewMatrix, 1, GL_FALSE, glm::value_ptr(freeView.getFreeView()));
projMatrix = glGetUniformLocation(executable2.getComp(), "projectionMatrix");
glUniformMatrix4fv(projMatrix, 1, GL_FALSE, glm::value_ptr(projectionMatrix.getProjectionMatrix()));
temp2.useClass(executable2);
Vertex shader:
#version 330 core
layout(location = 0) in vec3 positions;
layout(location = 1) in vec3 colors;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
out vec3 color;
void main()
{
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(positions, 1.0f);
color = colors;
}
The second vertex shader is logically the same, with just different variable names, and the fragment shader just outputs color.
useClass function (from class one):
glBindVertexArray(tempVAO);
glm::mat4 modelMat = glm::mat4(); // intended to be the identity (note: glm::mat4() is uninitialized in recent GLM versions; glm::mat4(1.0f) is explicit)
GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(modelMat));
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
useClass function (from class two):
glBindVertexArray(tempVAO);
for(GLuint i = 0; i < 9; i++)
{
model[i] = glm::translate(model[i], gravity);
GLuint modelMatrix = glGetUniformLocation(exe.getComp(), "modelMatrix");
glUniformMatrix4fv(modelMatrix, 1, GL_FALSE, glm::value_ptr(model[i]));
glDrawArrays(GL_POINTS, 0, 1);
}
glBindVertexArray(0);
Both classes have data protection, and I just don't understand how translating the model matrix in one class makes the model matrix in the other class get translated as well when using two shader programs. When I use one shader program for both classes, the translation works out fine, but not when I use two shader programs (one for each class)...
EDIT: After working on my project a little more, I figured out that the same problem happens when I compile and link two different shader programs from the same exact vertex and fragment shaders, and just use each shader program before I draw from each class. So now my question is more along the lines of: why does using two identical shader programs between draws cause all of the vertices/model matrices to get translated?

I figured out what the problem was. Basically, since there is not really a way to directly exit the execution of a shader, my program was getting confused when I passed shaders that were being executed through functions into other parts of the program. It behaved as if two shader programs were active at the same time, which is why the model matrix was not being reset consistently. To fix the issue, I limited the scope of each individual shader: instead of executing the shaders in one function and then passing them through to other classes, I put each shader in the class it is actually used in.
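A minimal sketch of that restructuring, under stated assumptions: the Drawable class name and its program, vao, and modelMat members are invented here for illustration. Each class activates its own program immediately before setting its uniforms and drawing, so no uniform state from one program can appear to leak into the other:
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// Hypothetical restructure: each drawable owns its shader program and
// activates it right before setting uniforms and issuing its draw call.
class Drawable
{
public:
    void draw(const glm::mat4& view, const glm::mat4& proj)
    {
        glUseProgram(program); // make *this* class's program current
        glUniformMatrix4fv(glGetUniformLocation(program, "viewMatrix"),
                           1, GL_FALSE, glm::value_ptr(view));
        glUniformMatrix4fv(glGetUniformLocation(program, "projectionMatrix"),
                           1, GL_FALSE, glm::value_ptr(proj));
        glUniformMatrix4fv(glGetUniformLocation(program, "modelMatrix"),
                           1, GL_FALSE, glm::value_ptr(modelMat));
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glBindVertexArray(0);
    }
private:
    GLuint program = 0;        // compiled and linked elsewhere
    GLuint vao = 0;            // set up elsewhere
    glm::mat4 modelMat{1.0f};  // identity; translated per frame if needed
};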

Related

Translate two objects in the same 3D space in different ways?

I am trying to move objects within my 3D world in different ways, but I can't move one object without affecting the entire scene. I tried using a second shader with different uniform names and I got some very strange results, like objects disappearing and other annoying behavior.
I tried linking and unlinking programs, but everything still translates together when I apply different matrices to the different shaders in the hope of seeing them move differently.
The TRANSLATE matrix is just a rotation * scale * translation matrix.
Edit - here is how I set my uniforms:
//All of my mat4's
// Sorry for not initialising any of the vec3s or mat4s - I don't want the code to be too lengthy
perspectiveproj = glm::perspective(glm::radians(95.0f), static_cast<float>(width)/height , 0.01f, 150.0f);
views = glm::lookAt(position, position + viewdirection, UP);
trans1 = glm::rotate(trans1, 0.0f, glm::vec3(0.0f, 1.0f, 0.0f));
trans1 = glm::scale(trans1, glm::vec3(0.0f, 0.0f, 0.0f));
trans1 = glm::translate(trans1, glm::vec3(1.0f, 0.0f, 1.0f));
//These are the uniforms for my perspective matrix per shader
int persp = glGetUniformLocation(shader_one, "perspective");
glUniformMatrix4fv(persp, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
int persp2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(persp2, 1, GL_FALSE, glm::value_ptr(perspectiveproj));
//These are the uniforms for my lookAt matrix per shader
int Look = glGetUniformLocation(shader_one, "lookAt");
glUniformMatrix4fv(Look, 1, GL_FALSE, glm::value_ptr(views));
int Look2 = glGetUniformLocation(shader_two, "perspective");
glUniformMatrix4fv(Look2, 1, GL_FALSE, glm::value_ptr(views));
//This is the one uniform for my translation matrix, sent only to shader_two,
//moving shader two's objects differently than shader one's
int Moveoneshader = glGetUniformLocation(shader_two, "TRANSLATE");
glUniformMatrix4fv(Moveoneshader, 1, GL_FALSE, glm::value_ptr(trans1));
shader one:
gl_Position = perspective * lookAt * vec4(position.x, position.y, position.z, 1.0);
shader two:
gl_Position = perspective * lookAt * TRANSLATE * vec4(position.x, position.y, position.z, 1.0);
linking and drawing:
glUseProgram(shader_one);
glBindVertexArray(vao_one);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_one);
glUseProgram(shader_two);
glBindVertexArray(vao_two);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
glDeleteProgram(shader_two);
It seems that you are having trouble understanding the mechanics behind using a shader.
A shader is supposed to be a set of instructions that can run on multiple inputs, e.g. objects.
Let's first call the TRANSLATE matrix the model matrix, since it holds all transformations that affect our model directly. The model matrix can have different values for different objects, so instead of using different shaders, you can use one generalized shader that calculates:
gl_Position = perspective * view * model * vec4(position, 1.0);
where view equals lookAt. I have exchanged the names of your matrices to follow naming conventions. I advise you to use these names so that you can find more information during research.
When creating a model matrix, you have to be careful about the order of matrix multiplication as well. In most cases, you want your model matrix to be composed like this
model = translate * rotate * scale
to avoid distortions of your object.
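As a quick sketch of that composition in GLM (each call post-multiplies, so applying translate, then rotate, then scale yields exactly model = translate * rotate * scale; the concrete values here are just placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::mat4 model(1.0f);                                                        // identity
model = glm::translate(model, glm::vec3(1.0f, 0.0f, 1.0f));                   // T
model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // R
model = glm::scale(model, glm::vec3(0.5f));                                   // S
// model is now T * R * S: vertices are scaled first and translated last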
To be able to render multiple objects with their own respective model matrix, you have to loop over all objects and update the matrix value in the shader before drawing the object. A simplified example would be:
std::string name = "model";
for (const Object& obj : objects)
{
    // Upload this object's own model matrix before issuing its draw call
    glUniformMatrix4fv(glGetUniformLocation(shaderID, name.c_str()), 1,
                       GL_FALSE, glm::value_ptr(obj.model));
    // draw object
}
You can read more about this here https://learnopengl.com/Getting-started/Coordinate-Systems.
Related to your problem: objects can disappear if you draw them with multiple shaders. This is related to how shaders write their data to your screen. By default, the active shader writes to all pixels of your screen. This means that when you switch to the second shader after drawing with the first one, the result of the first shader can be overwritten.
To combine multiple images, you can use Framebuffers. Instead of writing directly on your screen, you can use them to write into images first. Later, these images can be combined in a third shader.
However, this will cost way too much memory and will be too computationally inefficient to consider for your scenario. These techniques are usually applied when rendering post-processing effects.

Simple GL fragment shader behaves strangely on newer GPU

I am tearing my hair out over this problem! I have a simple vertex and fragment shader that worked perfectly (and still does) on an old Vaio laptop. It's for a particle system, and uses point sprites and a single texture to render particles.
The problem starts when I run the program on my desktop, with a much newer graphics card (an Nvidia GTX 660). I'm pretty sure I've narrowed it down to the fragment shader: if I ignore the texture and simply pass inColor out again, everything works as expected.
When I include the texture in the shader calculations, as you can see below, all points drawn while that shader is in use appear in the center of the screen, regardless of camera position.
You can see a whole mess of particles dead center using the suspect shader, and untextured particles rendering correctly to the right.
Vertex shader, to be safe:
#version 150 core
in vec3 position;
in vec4 color;
out vec4 Color;
uniform mat4 view;
uniform mat4 proj;
uniform float pointSize;
void main() {
Color = color;
gl_Position = proj * view * vec4(position, 1.0);
gl_PointSize = pointSize;
}
And the fragment shader I suspect to be the issue, but really can't see why:
#version 150 core
in vec4 Color;
out vec4 outColor;
uniform sampler2D tex;
void main() {
vec4 t = texture(tex, gl_PointCoord);
outColor = vec4(Color.r * t.r, Color.g * t.g, Color.b * t.b, Color.a * t.a);
}
Untextured particles use the same vertex shader, but the following fragment shader:
#version 150 core
in vec4 Color;
out vec4 outColor;
void main() {
outColor = Color;
}
The main program has a loop that processes SFML window events and calls two functions, draw and update. Update doesn't touch GL at any point; draw looks like this:
void draw(sf::Window* window)
{
glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
sf::Texture::bind(&particleTexture);
for (ParticleEmitter* emitter : emitters)
{
emitter->useShader();
camera.applyMatrix(shaderProgram, window);
emitter->draw();
}
}
emitter->useShader() is just a call to glUseProgram() with a GLuint identifying a shader program that is stored in the emitter object on creation.
camera.applyMatrix() :
GLuint projUniform = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(projUniform, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
...
GLint viewUniform = glGetUniformLocation(program, "view");
glUniformMatrix4fv(viewUniform, 1, GL_FALSE, glm::value_ptr(viewMatrix));
emitter->draw() in its entirety:
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
// Build a new vertex buffer object
int vboSize = particles.size() * vboEntriesPerParticle;
std::vector<float> vertices;
vertices.reserve(vboSize);
for (unsigned int particleIndex = 0; particleIndex < particles.size(); particleIndex++)
{
Particle* particle = particles[particleIndex];
particle->enterVertexInfo(&vertices);
}
// Bind this emitter's Vertex Buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Send vertex data to GPU
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * vertices.size(), &vertices[0], GL_STREAM_DRAW);
GLint positionAttribute = glGetAttribLocation(shaderProgram, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute,
3,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
0);
GLint colorAttribute = glGetAttribLocation(shaderProgram, "color");
glEnableVertexAttribArray(colorAttribute);
glVertexAttribPointer(colorAttribute,
4,
GL_FLOAT,
GL_FALSE,
7 * sizeof(float),
(void*)(3 * sizeof(float)));
GLuint sizePointer = glGetUniformLocation(shaderProgram, "pointSize");
glUniform1fv(sizePointer, 1, &pointSize);
// Draw
glDrawArrays(GL_POINTS, 0, particles.size());
And finally, particle->enterVertexInfo()
vertices->push_back(x);
vertices->push_back(y);
vertices->push_back(z);
vertices->push_back(r);
vertices->push_back(g);
vertices->push_back(b);
vertices->push_back(a);
I'm pretty sure this isn't an efficient way to do all this, but this was a piece of coursework I wrote a semester ago. I'm only revisiting it to record a video of it in action.
All shaders compile and link without error. By playing with the fragment shader, I've confirmed that I can use gl_PointCoord to vary a solid color across particles, so that is working as expected. When particles draw in the center of the screen, the texture is drawn correctly, albeit in the wrong place, so that is loaded and bound correctly as well. I'm by no means a GL expert, so that's about as much debugging as I could think to do myself.
This wouldn't be annoying me so much if it didn't work perfectly on an old laptop!
Edit: Included a ton of code
As it turned out in the comments, the shaderProgram variable used for setting the camera-related uniforms did not correspond to the program actually in use. As a result, the uniform locations were queried from a different program when drawing the textured particles.
Uniform location assignment is totally implementation-specific; NVIDIA, for example, tends to assign locations in alphabetical order of the uniform names, so view's location would change depending on whether tex is actually present (and actively used) or not. If another implementation assigns them in the order they appear in the code, or by some other scheme, things might work by accident.
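Under that diagnosis, a sketch of the fix is to query the locations from the program that is actually in use for each emitter (the getShader() accessor is an assumption here; the question's code stores the program handle in the emitter):
for (ParticleEmitter* emitter : emitters)
{
    GLuint program = emitter->getShader();   // assumed accessor for this emitter's program
    glUseProgram(program);
    // Query locations from *this* program; they can differ between programs.
    glUniformMatrix4fv(glGetUniformLocation(program, "proj"),
                       1, GL_FALSE, glm::value_ptr(projectionMatrix));
    glUniformMatrix4fv(glGetUniformLocation(program, "view"),
                       1, GL_FALSE, glm::value_ptr(viewMatrix));
    emitter->draw();
}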

Changing shaders dynamically in OpenGL application

I want to change the color of objects in my program with shaders - the fragment shader, to be precise.
I have two shader programs: box and triangle (the names are random - just for easier reference). For both programs I use this same vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
vec3 pos;
void main()
{
gl_Position = projection * view * model * vec4(position, 1.0f);
}
and then I am using my box_shader program:
box_shader.Use();
// Create camera transformation
view = camera.GetViewMatrix();
glm::mat4 projection;
projection = glm::perspective(camera.Zoom, (float)WIDTH/(float)HEIGHT, 0.1f, 100.0f);
// Get the uniform locations
GLint modelLoc = glGetUniformLocation(box_shader.Program, "model");
GLint viewLoc = glGetUniformLocation(box_shader.Program, "view");
GLint projLoc = glGetUniformLocation(box_shader.Program, "projection");
// Pass the matrices to the shader
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
Later in the program I'd like to use the triangle_shader program. What I was trying is:
triangl_shader.Use();
DrawTraingles();
So I don't call glGetUniformLocation again; instead I use the locations created earlier. Unfortunately, with this I don't see the triangles drawn by DrawTraingles(), although they appear when I don't switch shader programs.
For loading and using my shaders I use this class: learnopengl, so everything regarding the Use() function is there.
Can someone tell me what I should do to make use of different shaders?
EDIT:
What I've figured out was to add the glGetUniformLocation calls after packet_shader.Use(), so it now looks like this:
packet_shader.Use();
// Get the uniform locations
modelLoc = glGetUniformLocation(packet_shader.Program, "model");
viewLoc = glGetUniformLocation(packet_shader.Program, "view");
projLoc = glGetUniformLocation(packet_shader.Program, "projection");
// Pass the matrices to the shader
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
Although, I am not sure if this is the best idea in terms of performance. Can someone tell me if it is OK?
Uniforms are per-program state in the GL. So there are two issues here:
The uniform locations might be completely different in each program. For each program, you'll have to query the uniform locations and store them separately for later use.
The uniform values have to be set for each program separately. Even if you consider two programs "sharing" the same uniform, this is not the case: all the glUniform*() setters affect only the program currently in use. Since each of your programs has its own model, view, and projection uniform, you have to set these for each program, every time they change. Currently, it looks like you never set those for the second program, so they are left at their initial defaults of all zeros.
If you want to share uniforms between different programs, you might consider looking into Uniform Buffer Objects (UBOs).
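A minimal UBO sketch, assuming both vertex shaders declare the same std140 block (the block name Matrices and binding point 0 are choices made for this example, and triangle_shader stands in for the question's triangle program - none of this is in the question's code):
// In both vertex shaders:
//   layout(std140) uniform Matrices { mat4 view; mat4 projection; };
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), nullptr, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   // attach the buffer to binding point 0
// Once, after linking: tie each program's "Matrices" block to binding point 0.
glUniformBlockBinding(box_shader.Program,
                      glGetUniformBlockIndex(box_shader.Program, "Matrices"), 0);
glUniformBlockBinding(triangle_shader.Program,
                      glGetUniformBlockIndex(triangle_shader.Program, "Matrices"), 0);
// Per frame: update the buffer once; every program bound to point 0 sees the new values.
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(view));
glBufferSubData(GL_UNIFORM_BUFFER, sizeof(glm::mat4), sizeof(glm::mat4),
                glm::value_ptr(projection));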

How to use GLM with OpenGL?

I am trying to render an object using GLM for matrix transformations, but I'm getting this:
EDIT: Forgot to mention that the object I'm trying to render is a simple Torus.
I did a lot of digging around, and one thing I noticed is that glGetUniformLocation(program, "mvp") returns -1. The docs say it will return -1 if the uniform variable isn't used in the shader, even if it is declared. As you can see below, it is declared and is being used in the vertex shader. I've checked program to make sure it is valid, and so on.
So my questions are:
Question 1:
Why is glGetUniformLocation(program, "mvp") returning -1 even though it is declared and is being used in the vertex shader?
Question 2: (Which I think may be related to Q1)
Another thing I'm not particularly clear on: my GameObject class has a struct called Mesh with the members GLuint vao (vertex array object) and GLuint vbo[4] (vertex buffer objects). I am using Assimp, and my GameObject class is based on this tutorial. The meshes are rendered in the same way as in the tutorial, using:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, elementCount, GL_UNSIGNED_INT, NULL);
I'm not sure how VAOs and VBOs work. What I've found is that VAOs are used if you want access to the vertex arrays throughout your program, and VBOs are used if you just want to send data to the graphics card and not touch it again (correct me if I'm wrong here). So why does the tutorial mix them? In the constructor for a mesh, it creates and binds a VAO and then doesn't touch it for the rest of the constructor (unless creating and binding VBOs has an effect on the currently bound VAO). It then goes on to create and bind VBOs for the vertex buffer, normal buffer, texture coordinate buffer, and index buffer. To render the object it binds the VAO and calls glDrawElements. What I'm confused about is how/where OpenGL accesses the VBOs, and if it can't with the setup in the tutorial (which I'm pretty sure it can), what needs to change?
Source
void GameObject::render() {
GLuint program = material->shader->program;
glUseProgram(program);
glm::mat4 mvp = Game::camera->mvpMatrix(this->position);
GLuint mvpLoc = glGetUniformLocation(program, "mvp");
printf("MVP Location: %d\n", mvpLoc); // prints -1
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, &mvp[0][0]);
for (unsigned int i = 0; i < meshes.size(); i++) {
meshes.at(i)->render(); // renders element array for each mesh in the GameObject
}
}
Vertex shader (simple unlit red color):
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
out vec3 vertColor;
void main(void) {
gl_Position = mvp * vec4(position, 1);
vertColor = vec3(1, 0, 0);
}
Fragment shader:
#version 330 core
in vec3 vertColor;
out vec3 color;
void main(void) {
color = vertColor;
}
Question 1
You've pretty much answered this one yourself. glGetUniformLocation(program, name) gets the location of the uniform "mvp" in the shader program program and returns -1 if the uniform is not declared (or not used: if you don't use it, it doesn't get compiled in). Your shader does declare and use mvp, which strongly suggests there is an issue with compiling the program. Are you sure you are using this shader in the program?
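One quick way to rule that out, as a sketch: check the link status and info log of the exact program handle you pass to glGetUniformLocation:
GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[1024];
    glGetProgramInfoLog(program, sizeof(log), nullptr, log);
    printf("Program link failed: %s\n", log);  // shows why "mvp" never made it in
}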
Question 2
A VBO stores the data values that the GPU will use. These could be colour values, normals, texture coordinates, whatever you like.
A VAO is used to express the layout of your VBOs - think of it like a map, indicating to your program where to find the data in the VBOs.
The example program does touch the VAO whenever it calls glVertexAttribPointer, e.g.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
This is not related to your missing uniform.
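To make that relationship concrete, here is a compact sketch of the usual setup order (the vertex and index data are placeholders): the glVertexAttribPointer call captures the buffer currently bound to GL_ARRAY_BUFFER, and that association is stored in the bound VAO, which is how glDrawElements later finds the data:
float vertices[] = { 0.0f, 0.0f, 0.0f,  1.0f, 0.0f, 0.0f,  0.0f, 1.0f, 0.0f };
unsigned int indices[] = { 0, 1, 2 };
GLuint vao, vbo, ebo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ebo);
glBindVertexArray(vao);                       // start recording state into the VAO
glBindBuffer(GL_ARRAY_BUFFER, vbo);           // raw data lives in the VBO
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);   // the index buffer binding is saved in the VAO
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// This is the "map" step: attribute 0 now points into the currently bound VBO.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
// At draw time, binding the VAO restores all of the above in one call.
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, NULL);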

How do I display 2 or more objects in OpenGL (model - view - projection matrices and shaders)

It's all OK when I want to draw one object, for example a cube: I create the vertices for the cube, create the buffer, create the MVP matrix, send it to the shader, and it works nicely.
But what do I do when I want to draw two or more objects, for example both a cube and a triangle? I believe that the view and projection matrices should be the same for both the triangle and the cube; I only need a different model matrix, right?
So that means I will have two MVPs?
//Example (using GLM):
glm::mat4 MVPC = Projection * View * ModelCube;
glm::mat4 MVPT = Projection * View * ModelTriangle;
So what do I do with those two now? This is the vertex shader that works well for the cube:
//vertex shader
#version 330 core
layout(location = 0) in vec3 verticesCube;
uniform mat4 MVPC;
void main(){
gl_Position = MVPC * vec4(verticesCube,1);
}
And what should I do with MVPT (the triangle) in the shader? I tried messing around with different things, but I can't get it to work - I can't display both the cube and the triangle at the same time.
The confusion comes from thinking that the shader controls multiple vertex arrays at once, when it should be thought of as a universal entity: a vertex array is passed to the shader, the object is drawn, and the process is repeated.
For example, let's say we store the location of the uniform MVP in the variable matrixID:
// get handle for our "MVP" uniform
GLuint matrixID = glGetUniformLocation(programID, "MVP");
When we're ready to draw an object, we set the uniform at matrixID to the object's MVP:
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &cubeMVP[0][0]);
Then bind the vertex buffer, set the attribute pointer, and draw it:
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, cubeVerteciesBuffer);
glVertexAttribPointer(
0, // shader layout location
3,
GL_FLOAT,
GL_FALSE,
0,
(void *)0
);
glDrawArrays(GL_TRIANGLES, 0, 12*3); // draw cube
Now we move on to the triangle and repeat the process - set matrixID to the object's MVP, bind the vertex buffer, set the attribute pointer, and draw it:
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &triMVP[0][0]);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, triangleVerteciesBuffer);
glVertexAttribPointer(
0, // shader layout location
3,
GL_FLOAT,
GL_FALSE,
0,
(void *)0
);
glDrawArrays(GL_TRIANGLES, 0, 3); // draw triangle
The corresponding vertex shader code:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertecies_modelspace;
uniform mat4 MVP;
void main(){
gl_Position = MVP * vec4(vertecies_modelspace, 1);
}
OpenGL is not a scene graph. It draws things according to the current state and then forgets about it.
So if you want to draw different geometries, with different transformations just set the corresponding transformation matrix (uniform), draw the object and repeat this for each object you want to draw. After geometry has been drawn, the following operations will have no further effect on it, other than it might be overdrawn.
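In code, that "set, draw, repeat" pattern is nothing more than the following (drawCube() and drawTriangle() are placeholders for the bind-and-draw calls shown in the answer above):
// The GL keeps no per-object memory: whatever MVP is set when the draw
// call executes is what that object is rendered with.
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &cubeMVP[0][0]);
drawCube();        // placeholder: bind the cube's buffers and call glDrawArrays
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &triMVP[0][0]);
drawTriangle();    // placeholder: bind the triangle's buffers and call glDrawArrays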
An alternative that may also work would be to do the model-view-projection matrix calculation in the vertex shader. You can do this by declaring uniform model, view, and projection matrix variables in the vertex shader. You can then calculate the view and projection matrices globally and send them to the shader, and calculate the model matrix for your cube and triangle (or whatever objects you need to render) individually and send those model matrices to the shader as well.
View and projection matrix calculations; this could be in a separate 'camera' class:
glm::mat4 viewMatrix = glm::lookAt(
glm::vec3(0, -5, 0), // camera location in world
glm::vec3(0, 0, 0), // point camera is looking at
glm::vec3(0, 1, 0) // orientation of camera, change 1 to -1 to flip camera upside down
);
glm::mat4 projectionMatrix = glm::perspective(glm::radians(35.0f), static_cast<float>(displayWidth) / displayHeight, 0.1f, 100.0f); // radians and a float aspect ratio avoid common pitfalls
// send view and projection matrices to the shader
glUseProgram(shaderProgram);
GLint viewMatrixId = glGetUniformLocation(shaderProgram, "view");
GLint projectionMatrixId = glGetUniformLocation(shaderProgram, "projection");
glUniformMatrix4fv(viewMatrixId, 1, GL_FALSE, &viewMatrix[0][0]);
glUniformMatrix4fv(projectionMatrixId, 1, GL_FALSE, &projectionMatrix[0][0]);
glUseProgram(0);
Model matrix calculation; this code can go in a separate class, and you can instantiate it for each object that you want to render:
// this can go after where you initialize your cube or triangle vertex information
glUseProgram(shaderProgram);
modelMatrixId = glGetUniformLocation(shaderProgram, "model"); //modelMatrixId can be a global GLint
glUniformMatrix4fv(modelMatrixId, 1, GL_FALSE, &modelMatrix[0][0]); //modelMatrix can be a global glm::mat4
glUseProgram(0);
//use this for every render frame
glUseProgram(shaderProgram);
glUniformMatrix4fv(modelMatrixId, 1, GL_FALSE, &modelMatrix[0][0]);
// code to bind vertices and draw you objects goes here
glUseProgram(0);
New vertex shader:
//vertex shader
#version 330 core
layout(location = 0) in vec3 vertices;
uniform mat4 model, view, projection;
void main(){
gl_Position = projection * view * model * vec4(vertices, 1.0);
}
Have two arrays of vertices. Let's say array1 for the cube and array2 for the circle.
Create 2 VAOs and 2 VBOs: vao1 and vbo1 for the cube, vao2 and vbo2 for the circle.
Bind vao1, bind vbo1, and fill vbo1's buffer with array1. Call glUseProgram(program) - the program for your shaders - and set up the vertex attribute pointer.
Call glDrawArrays().
Do the same thing for the other VAO and VBO; a condensed sketch follows.
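A condensed sketch of those steps for one of the two objects (array1's contents, program, mvpLocation, and cubeMVP are assumptions for illustration; repeat the same setup and draw with vao2/vbo2/array2 for the circle):
float array1[] = { -0.5f, -0.5f, 0.0f,  0.5f, -0.5f, 0.0f,  0.0f, 0.5f, 0.0f }; // placeholder cube data
// One-time setup for the first object
GLuint vao1, vbo1;
glGenVertexArrays(1, &vao1);
glGenBuffers(1, &vbo1);
glBindVertexArray(vao1);
glBindBuffer(GL_ARRAY_BUFFER, vbo1);
glBufferData(GL_ARRAY_BUFFER, sizeof(array1), array1, GL_STATIC_DRAW);
glUseProgram(program);                        // the shared shader program
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
glEnableVertexAttribArray(0);
// Per frame: bind the object's VAO, set its MVP, draw
glBindVertexArray(vao1);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &cubeMVP[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 3);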