I'm currently refactoring my OpenGL program (it used to be one single enormous file) to use C++ classes. The basic framework looks like this:
I have an interface Drawable with the function virtual void Render(GLenum type) const = 0; and a bunch of classes implementing this interface (Sphere, Cube, Grid, Plane, PLYMesh and OBJMesh).
In my main.cpp I'm setting up a scene containing multiple of these objects, each with its own shader program. After setting uniform buffer objects and each program's individual uniforms, I'm calling glutMainLoop().
In my Display function, called each frame, the first thing I do is set up all the transformation matrices, and finally I call the above-mentioned Render function for every object in the scene:
void Display()
{
    // Clear framebuffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    modelViewMatrix = glm::mat4(1.0);
    projectionMatrix = glm::mat4(1.0);
    normalMatrix = glm::mat4(1.0);

    modelViewMatrix = glm::lookAt(glm::vec3(0.0, 0.0, mouse_translate_z), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
    modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_x, glm::vec3(1.0f, 0.0f, 0.0f));
    modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_y, glm::vec3(0.0f, 1.0f, 0.0f));
    projectionMatrix = glm::perspective(45.0f, (GLfloat)WINDOW_WIDTH / (GLfloat)WINDOW_HEIGHT, 1.0f, 10000.f);

    // No non-uniform scaling (only use mat3(normalMatrix) in the shader)
    normalMatrix = modelViewMatrix;

    glBindBuffer(GL_UNIFORM_BUFFER, ubo_global_matrices);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
    glBufferSubData(GL_UNIFORM_BUFFER, 1 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(projectionMatrix));
    glBufferSubData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(normalMatrix));
    glBindBuffer(GL_UNIFORM_BUFFER, 0);

    // ************************************************** //
    // **************** DRAWING COMMANDS **************** //
    // ************************************************** //

    // Grid
    if (grid->GetIsRendered())
    {
        program_GRID_NxN->Use();
        grid->Render(GL_LINES);
        program_GRID_NxN->UnUse();
    }

    // Plane
    ...

    // Sphere
    ...

    // Swap front and back buffer and redraw scene
    glutSwapBuffers();
    glutPostRedisplay();
}
My question now is the following: with the current code, I'm using the same ModelView matrix for every object. What if I want to translate only the sphere, or rotate only the plane, without changing the vertex positions? Where is the best place to store the model matrix in a large OpenGL program? What about putting a protected member variable glm::mat4 modelMatrix into the Drawable interface? Also, should the model and the view matrix be split (for example using a Camera class containing only the view matrix)?
My answer is mainly based on Tom Dalling's excellent tutorial, with some minor changes.
Firstly, all your view and projection matrix operations should go in a Camera class. The camera then provides a convenient way of getting the combined view and projection matrix by calling its matrix() method.
Camera.cpp:
glm::mat4 Camera::matrix() const {
    return projection() * view();
}
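For concreteness, here is a minimal Camera sketch along those lines (the member names and default values are illustrative assumptions, not Tom Dalling's exact API):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Minimal camera: owns the view and projection state, nothing else.
class Camera {
public:
    glm::mat4 matrix() const { return projection() * view(); }

    glm::mat4 view() const {
        return glm::lookAt(position, position + forward, up);
    }

    glm::mat4 projection() const {
        return glm::perspective(glm::radians(fieldOfView), aspect, nearPlane, farPlane);
    }

    glm::vec3 position{0.0f, 0.0f, 5.0f};
    glm::vec3 forward{0.0f, 0.0f, -1.0f}; // looking down -z by default
    glm::vec3 up{0.0f, 1.0f, 0.0f};
    float fieldOfView = 45.0f; // degrees
    float aspect = 4.0f / 3.0f;
    float nearPlane = 0.1f;
    float farPlane = 100.0f;
};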
Then for this example you'd have a Model Asset, which contains everything you need to render the geometry. Each asset should be unique and stored in a ResourceManager or something similar.
struct ModelAsset {
    Shader* shader;
    Texture* texture;
    GLuint vbo;
    GLuint vao;
    GLenum drawType;
    GLint drawStart;
    GLint drawCount;
};
Then you have a Model Instance, which has a pointer to the asset plus its own transform matrix. This way you can create as many instances of a particular asset as you like, each with its own unique transformation.
struct ModelInstance {
    ModelAsset* asset;
    glm::mat4 transform;
};

ModelInstance cube;
cube.asset = &asset;              // An asset that you created somewhere else (e.g. ResourceManager)
cube.transform = glm::mat4(1.0f); // Your unique transformation for this instance (identity to start)
To render an instance you pass the camera matrix and the model matrix as uniforms to the shader, and the shader does the rest of the work.
shaders->setUniform("camera", camera.matrix());
shaders->setUniform("model", cube.transform);
Finally, it's best to keep all your instances grouped in some resizable container.
std::vector<ModelInstance> instances;
instances.push_back(cube);
instances.push_back(sphere);
instances.push_back(pyramid);

// Note: take each instance by reference, otherwise you rotate a copy
// and the stored transform never changes.
for (ModelInstance& i : instances) {
    i.transform = glm::rotate(i.transform, getTime(), glm::vec3(0.0f, 1.0f, 0.0f));
}
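Putting it together, the per-frame render loop over all instances could look like this sketch (use(), setUniform() and stopUsing() are the shader-wrapper methods assumed by the snippets above; adapt the names to your own wrapper):

void renderInstances(const std::vector<ModelInstance>& instances, const Camera& camera) {
    for (const ModelInstance& instance : instances) {
        ModelAsset* asset = instance.asset;

        // Bind this asset's program and set the per-frame / per-instance uniforms
        asset->shader->use();
        asset->shader->setUniform("camera", camera.matrix());
        asset->shader->setUniform("model", instance.transform);

        // Bind the geometry and draw
        glBindVertexArray(asset->vao);
        glDrawArrays(asset->drawType, asset->drawStart, asset->drawCount);
        glBindVertexArray(0);

        asset->shader->stopUsing();
    }
}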
I have what I believed to be a basic need: from the 2D position of the mouse on the screen, I need to get the closest 3D point in the 3D world. This looks like the common ray-casting problem (even if that's not exactly mine).
I googled and read a lot: the topic is messy and things unfortunately get intricate very quickly. My real problem involves lots of 3D points that I do not know (meshes or point clouds from the internet), so it's impossible to know what result to expect! Thus, I decided to create simple shapes (triangle, quadrangle, cube) with points that I know (each coordinate of each point is 0.f or 0.5f in the local frame), and to try to see if I can "recover" the 3D point positions from the mouse cursor as I move it over the screen.
Note: all coordinates of all points of all shapes are known values like 0.f or 0.5f. For example, with the triangle:
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};
What I do
I have a 3D OpenGL renderer to which I added a GUI with controls on the rendered scene.
Transformations: tx, ty, tz, rx, ry, rz are controls that change the model matrix. In code:
// create transformations: model represents local to world transformation
model = glm::mat4(1.0f); // initialize matrix to identity matrix first
model = glm::translate(model, glm::vec3(tx, ty, tz));
model = glm::rotate(model, glm::radians(rx), glm::vec3(1.0f, 0.0f, 0.0f));
model = glm::rotate(model, glm::radians(ry), glm::vec3(0.0f, 1.0f, 0.0f));
model = glm::rotate(model, glm::radians(rz), glm::vec3(0.0f, 0.0f, 1.0f));
ourShader.setMat4("model", model);
model changes only the position of the shape in the world and has no connection with the position of the camera (that's what I understand from tutorials).
Camera: from here, I ended up with a camera class that holds the view and proj matrices. In code:
// get view and projection from camera
view = cam.getViewMatrix();
ourShader.setMat4("view", view);
proj = cam.getProjMatrix((float)SCR_WIDTH, (float)SCR_HEIGHT, near, 100.f);
ourShader.setMat4("proj", proj);
The camera is a fly-like camera that can be moved when moving the mouse or using keyboard arrows and that does not act on model, but only on view and proj (that's what I understand from tutorials).
The shader then uses model, view and proj this way:
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
void main()
{
    // note that we read the multiplication from right to left
    gl_Position = proj * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
}
Screen to world: as glm::unProject didn't always return the results I expected, I added a control to bypass it (back-projecting by hand). In code, first I get the mouse cursor position frame3DPos following this:
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouseCursorCallback(GLFWwindow* window, double xposIn, double yposIn)
{
    // screen to world transformation
    xposScreen = xposIn;
    yposScreen = yposIn;

    int windowWidth = 0, windowHeight = 0; // size in screen coordinates.
    glfwGetWindowSize(window, &windowWidth, &windowHeight);
    int frameWidth = 0, frameHeight = 0; // size in pixels.
    glfwGetFramebufferSize(window, &frameWidth, &frameHeight);
    glm::vec2 frameWinRatio = glm::vec2(frameWidth, frameHeight) /
                              glm::vec2(windowWidth, windowHeight);

    glm::vec2 screen2DPos = glm::vec2(xposScreen, yposScreen);
    glm::vec2 frame2DPos = screen2DPos * frameWinRatio; // window / frame sizes may be different.
    frame2DPos = frame2DPos + glm::vec2(0.5f, 0.5f); // shift to GL's pixel-center convention.

    glm::vec3 frame3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    frame3DPos.x = frame2DPos.x;
    frame3DPos.y = frameHeight - 1.0f - frame2DPos.y; // GL's window origin is at the bottom left.
    frame3DPos.z = 0.f;
    glReadPixels((GLint) frame3DPos.x, (GLint) frame3DPos.y, // CAUTION: cast to GLint.
                 1, 1, GL_DEPTH_COMPONENT,
                 GL_FLOAT, &zbufScreen); // CAUTION: GL_DOUBLE is NOT supported.
    frame3DPos.z = zbufScreen; // z-buffer.
And then I can call glm::unProject, or not (back-projecting by hand), according to the control in the GUI:
    glm::vec3 world3DPos = glm::vec3(0.0f, 0.0f, 0.0f);
    if (screen2WorldUsingGLM) {
        glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);
        world3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
    } else {
        glm::mat4 trans = proj * view * model;
        glm::vec4 frame4DPos(frame3DPos, 1.f);
        frame4DPos = glm::inverse(trans) * frame4DPos;
        world3DPos.x = frame4DPos.x / frame4DPos.w;
        world3DPos.y = frame4DPos.y / frame4DPos.w;
        world3DPos.z = frame4DPos.z / frame4DPos.w;
    }
}
Question: the glm::unProject doc says "Map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view or clip space described here?
Z-buffering is always enabled, whether the shape is 2D (triangle, quadrangle) or 3D (cube). In code:
glEnable(GL_DEPTH_TEST); // Enable z-buffer.
while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // also clear the z-buffer
Setup (as pictured): the camera is positioned at (0., 0., 0.) and looks "ahead" (front = -z, as the z-axis points from the screen towards me). The shape is positioned (using tx, ty, tz, rx, ry, rz) "in front of the camera" with tz = -5 (5 units along the front vector of the camera).
What I get
Triangle in initial setting
I get the correct xpos and ypos in the world frame but an incorrect zpos = 0. (z-buffering is enabled). I expected zpos = -5 (as tz = -5).
Question: why is zpos incorrect?
If I do not use glm::unProject, I get outer-space results.
Question: why doesn't back-projecting by hand return results consistent with glm::unProject? Is this logical? Are they different operations? (I believed they should be equivalent, but they obviously are not.)
Triangle moved with translation
After a translation of about tx = 0.5 I still get the same coordinates (local frame), where I expected the previous coordinates translated along the x-axis. Not using glm::unProject returns outer-space results here too...
Question: why is the translation (applied by model, not by view or proj) ignored?
Cube in initial setting
I get correct xpos, ypos and zpos?!... So why does this not work the same way with the "2D" triangle (which is a "3D" one to me, so they should behave the same)?
Cube moved with translation
Translating along ty this time seems to have no effect (I still get the same coordinates, in the local frame).
Question: as with the triangle, why is the translation ignored?
What I'd like to get
The main question is: why is the model transformation ignored? If this is to be expected, I'd like to understand why.
If there's a way to recover the "true" position of the shape in the world (including the model transformation) from the position of the mouse cursor, I'd like to understand how.
Question: the glm::unProject doc says "Map the specified window coordinates (win.x, win.y, win.z) into object coordinates", but I am not sure I understand what object coordinates are. Do object coordinates refer to the local, world, view or clip space described here?
Being new to OpenGL, I didn't get that "object coordinates" in the glm::unProject doc is another way to refer to local space. Solution: pass view * model to glm::unProject and apply model again to the result, or pass only view to glm::unProject to get world coordinates directly, as explained here: Screen Coordinates to World Coordinates.
This fixes all the weird behaviors I observed.
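In code, the two equivalent fixes look like this (view, proj, model, frame3DPos and the frame sizes are the same as in the snippets above):

glm::vec4 viewport(0.0f, 0.0f, (float) frameWidth, (float) frameHeight);

// Option 1: pass view * model. glm::unProject then returns local (object)
// space coordinates, so apply model to get back to world space.
glm::vec3 local3DPos = glm::unProject(frame3DPos, view * model, proj, viewport);
glm::vec3 world3DPos = glm::vec3(model * glm::vec4(local3DPos, 1.0f));

// Option 2: pass only view. glm::unProject then returns world space directly.
glm::vec3 world3DPosDirect = glm::unProject(frame3DPos, view, proj, viewport);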
I'm trying to make a system that allows you to type in a position and scale, and it will create a vector that automatically generates all the vertices. The problem is that when I try to draw my object, it just won't show up. I have used OpenGL's built-in debugging system, but it didn't say anything was wrong. So then I tried to debug manually, but everything seemed to draw just fine.
Renderer::createQuad() method:
Shape Renderer::createQuad(glm::vec2 position, glm::vec2 scale, Shader shader, Texture texture)
{
    float x = position.x;
    float y = position.y;
    float width = scale.x;
    float height = scale.y;

    std::vector<float> vertices =
    {
        x+width, y+height, 1.0f, 1.0f, // TR
        x+width, y-height, 1.0f, 0.0f, // BR
        x-width, y-height, 0.0f, 0.0f, // BL
        x-width, y+height, 0.0f, 1.0f  // TL
    };

    std::vector<uint32_t> indices =
    {
        0, 1, 3,
        1, 2, 3
    };

    m_lenVertices = vertices.size();
    m_lenIndices = indices.size();

    // these Create methods should be fine as OpenGL does not give me any error
    // also I have another function that requires you to pass in the vertex data and indices that works just fine
    // I bind the thing I am creating
    createVAO();
    createVBO(vertices);
    createEBO(indices);
    createTexture(texture);
    createShader(shader.getVertexShader(), shader.getFragmentShader());

    Shape shape;
    glm::mat4 model(1.0f);
    glUniformMatrix4fv(glGetUniformLocation(m_shader, "model"), 1, GL_FALSE, glm::value_ptr(model));
    shape.setShader(m_shader);
    shape.setVAO(m_VAO);
    shape.setTexture(m_texture);
    shape.setPosition(position);

    return shape;
}
Renderer::draw() method:
void Renderer::draw(Shape shape)
{
    if (!m_usingIndices)
    {
        // Unbinds any other shapes
        glBindVertexArray(0);
        glUseProgram(0);

        shape.bindShader();
        shape.bindVAO();
        shape.bindTexture();
        glDrawArrays(GL_TRIANGLES, 0, m_lenVertices);
    }
    else
    {
        // Unbinds any other shapes
        glBindVertexArray(0);
        glUseProgram(0);

        shape.bindShader();
        shape.bindVAO();
        shape.bindTexture();
        glDrawElements(GL_TRIANGLES, m_lenIndices, GL_UNSIGNED_INT, 0);
    }
}
Projection matrix:
glm::mat4 m_projectionMat = glm::ortho(-Window::getWidth(), Window::getWidth(), -Window::getHeight(), Window::getHeight(), 0.1f, 100.0f);
Creating then rendering the Quad:
// Creates the VBO, VAO, EBO, etc.
quad = renderer.createQuad(glm::vec2(500.0f, 500.0f), glm::vec2(200.0F, 200.0f), LoadFile::loadShader("Res/Shader/VertShader.glsl", "Res/Shader/FragShader.glsl"), LoadFile::loadTexture("Res/Textures/Lake.jpg"));
// In the main game loop we render the quad
quad.setCamera(camera); // Sets the View and Projection matrix for the quad
renderer.draw(quad);
Output: [screenshot of the running program; the quad does not show up]
I have followed the majority of this Vulkan tutorial:
https://vulkan-tutorial.com/
I currently have a Vulkan program that can load multiple 3D models from OBJ files; however, I only have one model matrix, which controls all of the 3D models. For instance, if I load in 2 cubes and then apply a rotation to the model matrix, both cubes rotate.
I want to have a model matrix for each 3D model so that I can rotate, translate and scale them individually.
While following the tutorial, I created the following function, called "updateUniformBuffer", which applies a time-based rotation (90 degrees per second) to the model matrix.
It also uses a struct I created called "UniformBufferObject".
UpdateUniformBuffer function
void updateUniformBuffer(uint32_t currentImage) {
    static auto startTime = std::chrono::high_resolution_clock::now();
    auto currentTime = std::chrono::high_resolution_clock::now();
    float time = std::chrono::duration<float, std::chrono::seconds::period>(currentTime - startTime).count();

    UniformBufferObject ubo{};
    ubo.model = glm::rotate(glm::mat4(1.0f), time * glm::radians(90.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    ubo.view = glm::lookAt(glm::vec3(2.0f, 2.0f, 2.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    ubo.proj = glm::perspective(glm::radians(45.0f), swapChainExtent.width / (float) swapChainExtent.height, 0.1f, 10.0f);
    ubo.proj[1][1] *= -1;

    void* data;
    vkMapMemory(device, uniformBuffersMemory[currentImage], 0, sizeof(ubo), 0, &data);
    memcpy(data, &ubo, sizeof(ubo));
    vkUnmapMemory(device, uniformBuffersMemory[currentImage]);
}
UniformBufferObject Struct
struct UniformBufferObject {
    glm::mat4 model;
    glm::mat4 view;
    glm::mat4 proj;
};
There are different approaches, but the most straightforward one, which is just an extension of what you already do, is having a separate uniform buffer for each object, plus separate descriptors for those uniform buffers. At draw time you then bind the descriptor set for the object to draw, which points to its uniform buffer.
This could look like this:
vkCmdBindVertexBuffers(commandBuffer, 0, 1, &vertexBuffer, offsets);
vkCmdBindIndexBuffer(commandBuffer, indexBuffer, 0, VK_INDEX_TYPE_UINT32);
for (const Object& object : objects)
{
    vkCmdBindDescriptorSets(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout, 0, 1, &object.descriptorSet[currentImage], 0, nullptr);
    vkCmdDrawIndexed(commandBuffer, object.indexCount, 1, object.firstIndex, 0, 0);
}
And your object definition may look like this:
struct Object {
    uint32_t indexCount;
    uint32_t firstIndex;
    VkBuffer buffer;
    std::vector<VkDescriptorSet> descriptorSet;     // one per frame in flight
    // Per-object uniform data and persistently mapped pointers (see below)
    UniformBufferObject ubo;
    std::vector<void*> bufferPtr;                   // one mapped pointer per frame in flight
    std::vector<VkDeviceMemory> uniformBuffersMemory;
};

std::vector<Object> objects;
So instead of having just one uniform buffer and one descriptor set that points to it, you create one set for each object, and you also update each object's buffer separately from the others:
for (Object& object : objects) {
    // Each object writes its own ubo into its own (persistently mapped) uniform buffer.
    memcpy(object.bufferPtr[currentImage], &object.ubo, sizeof(object.ubo));
}
A small note: You don't need to map and unmap your buffers on every frame. You can safely map buffers once after creation ("persistent mapping").
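A sketch of what that could look like with the per-object members assumed above (map each buffer once at creation time, keep the pointers, and per frame only memcpy):

for (Object& object : objects) {
    object.bufferPtr.resize(swapChainImages.size());
    for (size_t i = 0; i < swapChainImages.size(); i++) {
        // Map each per-frame uniform buffer once and keep the pointer around.
        vkMapMemory(device, object.uniformBuffersMemory[i], 0,
                    sizeof(UniformBufferObject), 0, &object.bufferPtr[i]);
    }
}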
While this may not be the perfect way of having uniform buffers per object, it's a good start that builds on what you learned in the Vulkan tutorial.
There are other ways of passing per-object data, e.g. via the mentioned push constants (which require command buffers to be rebuilt upon change, though), but as usual the way to go depends on your use case.
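For reference, the push-constant route could look like this sketch (it assumes a VkPushConstantRange covering one mat4 for the vertex stage was added to the pipeline layout):

// Record the per-object model matrix directly into the command buffer.
vkCmdPushConstants(commandBuffer, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT,
                   0, sizeof(glm::mat4), &object.ubo.model);
vkCmdDrawIndexed(commandBuffer, object.indexCount, 1, object.firstIndex, 0, 0);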
Also note that you want to keep memory/buffer allocations to a minimum due to implementation-specific limitations. So if you plan to draw a lot of objects, you should look at dynamic uniform buffers or sub-allocating from a larger buffer instead of allocating a separate buffer for each object.
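With dynamic uniform buffers, all per-object matrices live in one large buffer and you select an object's slice with a dynamic offset at bind time. A sketch (this assumes the descriptor was created with type VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC and that dynamicAlignment respects the device's minUniformBufferOffsetAlignment):

// One descriptor set and one large buffer; pick each object's slice by offset.
uint32_t dynamicOffset = objectIndex * static_cast<uint32_t>(dynamicAlignment);
vkCmdBindDescriptorSets(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS,
                        pipelineLayout, 0, 1, &descriptorSet, 1, &dynamicOffset);
vkCmdDrawIndexed(commandBuffer, object.indexCount, 1, object.firstIndex, 0, 0);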
I'm trying to create a solar system in OpenGL. I have the basic code for the earth spinning on its axis, and I'm trying to set the camera to move with the arrow keys.
using namespace std;
using namespace glm;
const int windowWidth = 1024;
const int windowHeight = 768;
GLuint VBO;
int NUMVERTS = 0;
bool* keyStates = new bool[256]; //Create an array of boolean values of length 256 (0-255)
float fraction = 0.1f; //Fraction for navigation speed using keys
// Transform uniforms location
GLuint gModelToWorldTransformLoc;
GLuint gWorldToViewToProjectionTransformLoc;
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
// TextureSampler uniform location
GLuint gTextureSamplerLoc;
// Texture ID
GLuint gTextureObject[11];
//Navigation variables
float posX;
float posY;
float posZ;
float viewX = 0.0f;
float viewY = 0.0f;
float viewZ = 0.0f;
float dirX;
float dirY;
float dirZ;
vec3 cameraPos = vec3(0.0f,0.0f,5.0f);
vec3 cameraView = vec3(viewX,viewY,viewZ);
vec3 cameraDir = vec3(0.0f,1.0f,0.0f);
These are all the variables I'm using to edit the camera.
static void renderSceneCallBack()
{
    // Clear the back buffer and the z-buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Create our world space to view space transformation matrix
    mat4 worldToViewTransform = lookAt(
        cameraPos,  // The position of your camera, in world space
        cameraView, // where you want to look at, in world space
        cameraDir   // Camera up direction (set to 0,-1,0 to look upside-down)
    );

    // Create our projection transform
    mat4 projectionTransform = perspective(45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f);

    // Combine the world space to view space transformation matrix and the projection transformation matrix
    mat4 worldToViewToProjectionTransform = projectionTransform * worldToViewTransform;

    // Update the transforms in the shader program on the GPU
    glUniformMatrix4fv(gWorldToViewToProjectionTransformLoc, 1, GL_FALSE, &worldToViewToProjectionTransform[0][0]);

    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), 0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)12);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)24);

    // Set the material properties
    glUniform1f(gKaLoc, 0.8f);
    glUniform1f(gKdLoc, 0.8f);

    // Bind the texture to texture unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, gTextureObject[0]);

    // Set our sampler to use texture unit 0
    glUniform1i(gTextureSamplerLoc, 0);

    // Draw the triangles
    mat4 modelToWorldTransform = mat4(1.0f);
    static float angle = 0.0f;
    angle += 1.0f;
    modelToWorldTransform = rotate(modelToWorldTransform, angle, vec3(0.0f, 1.0f, 0.0f));
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &modelToWorldTransform[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, NUMVERTS);

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);

    glutSwapBuffers();
}
This is the function that draws the earth onto the screen and determines where the camera is.
void keyPressed(unsigned char key, int x, int y)
{
    keyStates[key] = true; // Set the state of the current key to pressed
    cout << "keyPressed ";
}

void keyUp(unsigned char key, int x, int y)
{
    keyStates[key] = false; // Set the state of the current key to released
    cout << "keyUp ";
}

void keyOperations(void)
{
    if (keyStates['a'])
    {
        viewX += 0.5f;
    }
    cout << "keyOperations ";
}
These are the functions I'm trying to use to edit the camera variables dynamically:
// Create a vertex buffer
createVertexBuffer();
glutKeyboardFunc(keyPressed); // Tell GLUT to use the method "keyPressed" for key-down events
glutKeyboardUpFunc(keyUp);    // Tell GLUT to use the method "keyUp" for key-up events
keyOperations();
glutMainLoop();
Finally, here are the few lines in my main method where I'm trying to call the key press functions. In the console I can see that it detects I'm pressing them, but the planet doesn't move at all. I think I may be calling keyOperations in the wrong place, but I'm not sure.
You are correct: keyOperations is being called in the wrong place. Where it is now, it runs once before glutMainLoop() and then never again. It needs to go in your per-frame update code, where you update the rotation of the planet; that way it is called at least once per frame. Note also that cameraView is built from viewX, viewY and viewZ only once, at startup, so you must rebuild it each frame for the key presses to have any visible effect.
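A sketch of that fix, building directly on the code above (the only new assumption is that cameraView should follow viewX/viewY/viewZ):

static void renderSceneCallBack()
{
    keyOperations();                        // poll key states every frame
    cameraView = vec3(viewX, viewY, viewZ); // rebuild the look-at target from the updated values

    // Clear the back buffer and the z-buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... rest of the rendering code as before ...
}

// And in main(), ask GLUT to redraw continuously so the callback keeps running:
glutIdleFunc(glutPostRedisplay);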
I'm learning from these tutorials:
http://en.wikibooks.org/wiki/Category:OpenGL_Programming
http://www.opengl-tutorial.org/
I have modified the 7th lesson from http://www.opengl-tutorial.org/ so that the cube rotates. Now what I want to do is to have two or three cubes, each at a different place, and make them (the cubes) rotate, but I really don't know how to do that. So I'm asking and hoping for some help.
The rotation is made by this code:
glm::vec3 axis_y(0, 1, 0);
glm::mat4 anim = glm::rotate(glm::mat4(1.0f), angle, axis_y);
...
glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * anim;
I didn't go through the details of the tutorial, but in principle, you need to create a model matrix for each of the cubes, and then render each cube with its own value of MVP constructed from the cube's model matrix (and the global view & projection matrices).
The above can give you three identical cubes in different positions, rotations and scales. If you want three different objects, you'll need to load each of them separately, preferably into its own buffer object.
EDIT
I don't know the libraries the tutorial uses, but the principle of coding this could be along these lines:
for (int idxCube = 0; idxCube < 3; ++idxCube) {
    glm::mat4 offset = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f * idxCube, 0.0f, 0.0f));
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * ModelMatrix * offset * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(...);
}
This would give 3 cubes at positions (0, 0, 0), (10, 0, 0) and (20, 0, 0).
More generally, you'd just have one ModelMatrix for each cube.
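Concretely, that could be a small container of model matrices, one per cube (the positions and vertex count here are just illustrative):

// One model matrix per cube; anim is the rotation matrix from above.
std::vector<glm::mat4> cubeModelMatrices = {
    glm::translate(glm::mat4(1.0f), glm::vec3( 0.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f)),
    glm::translate(glm::mat4(1.0f), glm::vec3(20.0f, 0.0f, 0.0f)),
};

for (const glm::mat4& modelMatrix : cubeModelMatrices) {
    glm::mat4 MVP = ProjectionMatrix * ViewMatrix * modelMatrix * anim;
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount); // vertexCount: however many vertices your cube buffer holds
}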