How to rotate a model around a global axis in OpenGL? - C++

In OpenGL I want to rotate a model around a global axis.
The object I am trying to rotate looks like this:
class Object {
public:
    inline Object()
        : vao(0),
          positionBuffer(0),
          colorBuffer(0),
          indexBuffer(0),
          elements(0)
    {}

    inline ~Object() { // GL context must exist on destruction
        glDeleteVertexArrays(1, &vao);
        glDeleteBuffers(1, &indexBuffer);
        glDeleteBuffers(1, &colorBuffer);
        glDeleteBuffers(1, &positionBuffer);
    }

    GLuint vao;            // vertex-array-object ID
    GLuint positionBuffer; // ID of vertex-buffer: position
    GLuint colorBuffer;    // ID of vertex-buffer: color
    GLuint indexBuffer;    // ID of index-buffer
    GLuint elements;       // number of elements
    glm::mat4x4 model;     // model matrix
};
The function to initialize an object looks like this:
void initObject(Object &obj, vector<glm::vec3> &vertices, vector<glm::vec3> &colors, vector<GLushort> &indices, glm::vec3 offset)
{
    GLuint programId = program.getHandle();
    GLuint pos;

    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

    // Step 0: Create vertex array object.
    glGenVertexArrays(1, &obj.vao);
    glBindVertexArray(obj.vao);

    // Step 1: Create vertex buffer object for position attribute and bind it to the associated "shader attribute".
    glGenBuffers(1, &obj.positionBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, obj.positionBuffer);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3), vertices.data(), GL_STATIC_DRAW);

    // Bind it to position.
    pos = glGetAttribLocation(programId, "position");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, 0);

    // Step 2: Create vertex buffer object for color attribute and bind it to the associated "shader attribute".
    glGenBuffers(1, &obj.colorBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, obj.colorBuffer);
    glBufferData(GL_ARRAY_BUFFER, colors.size() * sizeof(glm::vec3), colors.data(), GL_STATIC_DRAW);

    // Bind it to color.
    pos = glGetAttribLocation(programId, "color");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, 0);

    // Step 3: Create vertex buffer object for indices. No attribute binding needed here.
    glGenBuffers(1, &obj.indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, obj.indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLushort), indices.data(), GL_STATIC_DRAW);

    // Unbind vertex array object (back to default).
    glBindVertexArray(0);

    // Modify model matrix.
    obj.model = glm::translate(glm::mat4(1.0f), offset);
}
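For illustration, a hypothetical call matching the sphere described below (the data vectors are assumed to be filled by the tessellation code, not shown in the post):
Object sphere;
// Place the sphere's center at (3, 1, 0), as in the question:
initObject(sphere, vertices, colors, indices, glm::vec3(3.0f, 1.0f, 0.0f));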
Now I have an instance of that class, a tessellated octahedron acting as a sphere, which I want to rotate around a global axis, specifically the X axis. The center of that object is at (3, 1, 0), so a rotation by 90 degrees should move that center to (3, 0, 1).
I tried to do this with the glm::rotate function:
glm::vec3 axis = { 1.0f, 0.0f, 0.0f };
sphere.model = glm::rotate(sphere.model, glm::radians(90.0f), axis);
But that only rotates the object around its local axis.
Another solution I tried was this one:
glm::vec3 axis = glm::vec3(glm::inverse(sphere.model) * glm::vec4(1.0f, 0.0f, 0.0f, 0.0f));
sphere.model = glm::rotate(sphere.model, (2.0f * 3.1415f) / 48.0f, axis);
This one, on the other hand, acts as if the global axis passed through the center of the model. The rotation would only be correct if the center of the object coincided with the origin of the global coordinate system.

Kudos to @genpfault. The implementation of glm::rotate appears to be this one: https://github.com/g-truc/glm/blob/1498e094b95d1d89164c6442c632d775a2a1bab5/glm/ext/matrix_transform.inl
Hence it doesn't touch the translation; it changes only the rotation part of the matrix, which is fine for setting things up. To perform animations or to combine different transformations, you need to combine the matrices yourself.
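Concretely (a property of GLM itself, not code from the question), glm::rotate post-multiplies, so the rotation is applied in the matrix's local space:
// glm::rotate(m, angle, axis) builds a rotation matrix R and returns
// m * R, keeping m's translation column untouched:
glm::mat4 R = glm::rotate(glm::mat4(1.0f), angle, axis);
glm::mat4 a = glm::rotate(m, angle, axis);
glm::mat4 b = m * R; // a and b are the same matrix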

retMat = glm::rotate(curMat, ...) calculates the rotation matrix and multiplies it with the given curMat matrix.
The returned matrix retMat can be used with any point that was defined in the same coordinate system (aka "space") as curMat to calculate the new coordinates, again in the same space: newXYZ = retMat * oldXYZ.
The axis of rotation given to glm::rotate always goes through the origin of that space.
If you need another line of rotation (one not containing the origin), then you must do the sequence "translate to some point on the line ==> rotate ==> translate back".
For your case, I guess your sphere is defined in such a way that its center is at the origin (0, 0, 0). This means the "model space" is the same as the "global space", so you don't need to translate the sphere before the rotation.
Once you have rotated the object, translate it to the point you wish.
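A minimal sketch of that order of operations for the question's sphere, assuming sphere.model currently holds only the translation to (3, 1, 0):
// Pre-multiplying applies the rotation in world space, i.e. around the
// global X axis through the world origin; the center (3, 1, 0) ends up
// at (3, 0, 1):
glm::mat4 rot = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f),
                            glm::vec3(1.0f, 0.0f, 0.0f));
sphere.model = rot * sphere.model;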

In your model+view+projection (MVP) matrix (or quaternion) operations, you are mixing your model and view matrices. You need to rotate the model from the identity matrix to the desired roll-pitch-yaw (RPY) matrix. Then you move and/or rotate the object to the location you want in XYZ space. Finally, you apply your orthographic projection or your perspective projection, whichever you want.
The point is that you should keep track of where the origin of the OBJECT is relative to the GLOBAL origin separately. In other words, you track where you want the centroid of the object to be, but also the centers of rotation (which aren't necessarily the global origin).
In order to rotate an object in its centroid's local frame, you first need to get rid of the translation component; this is the model+view part. You can do this by taking the inverse of just the XYZ elements in the last column of the 4x4 matrix. You then apply your 4x4 rotation matrix about the centroid, and finally move the object back to the desired centroid location.
Does this make sense?
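A minimal GLM sketch of that recipe, assuming model holds the object's current transform and angle/axis are given:
// Extract the translation (last column of the 4x4 matrix), rotate about
// the axis through the centroid, then restore the translation:
glm::vec3 centroid = glm::vec3(model[3]);
glm::mat4 rot = glm::rotate(glm::mat4(1.0f), angle, axis);
model = glm::translate(glm::mat4(1.0f), centroid)
      * rot
      * glm::translate(glm::mat4(1.0f), -centroid)
      * model;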
I suggest you study the MVP model in more depth. The OpenGL 4 Shading Language Cookbook (by Packt) is a great resource.
I should note, however, that you should be familiar with the backend of the GLM matrix library. I have used a custom matrix library that showed me the implementation of the rotate() function.

Related

OpenGL 4, translate and scale going wrong

I’m trying to do an ortho projection onto a plane, which represents a map – think “floor plan”. I’m running into trouble because OpenGL 4 is new to me (I last used 1.1, and the world has changed) and because what I’m trying to do isn’t much like common examples online. My problem is scaling and translating.
The data that describes the map is a series of lines whose endpoints are in what I’ll call “dungeon coordinate units”. When I render the image I want to have a fixed rule of “1 unit is 1 pixel”.
My coordinates are all in the first quadrant, with (0,0) representing the lower left of the map. I’d like (0,0) to show up in the lower left of the screen.
Now for the tricky bits. When I render the “floor” in the fragment shader, I’m being handed gl_FragCoord, which is ideal. It’s effectively a pixel location, which means for my purposes it is equivalent to a dungeon coordinate. I can look up all the information I passed to the shader (also in dungeon coordinates) and figure out how to paint (or discard) that pixel. It works, except… it draws (0,0) in the center of the screen, not the lower left.
Worse, there are some things, like lines (“walls”), that I render with skinny triangles in dungeon coordinates in a second pass. They don’t show up where I want them. (In fact I’m pretty sure that the triangles I’m using to tile the floor are also wrong and are only covering the screen by coincidence.)
I really, really need openGL to use a coordinate system that puts 0,0 at the lower left of the image and lets me specify triangle vertices in my units, which happen to map straight to pixels.
This seems like a simple case of scaling and translating. But I’m obviously applying the scale and translate incorrectly.
The vertex code is simple:
#version 430
layout (location = 0) in vec3 Position;
uniform mat4 gWorld;
out vec4 Color; // unused; the fragment shader calculates all colors
void main()
{
    gl_Position = gWorld * vec4(Position, 1.0);
}
Building the 2 triangles for the map floor (a simple rectangle for now) seems simple:
Vector3f Vertices[4];
Vertices[0] = Vector3f(0.f, 0.f, 0.0f);
Vertices[1] = Vector3f(0.f, mapEdges.maxs.y, 0.0f);
Vertices[2] = Vector3f(mapEdges.maxs.x, 0.f, 0.0f);
Vertices[3] = Vector3f(mapEdges.maxs.x, mapEdges.maxs.y, 0.0f);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
unsigned int Indices[] = { 0, 1, 2,
                           1, 2, 3 };
glGenBuffers(1, &IBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
and I use an indexed draw for them.
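A minimal sketch of what that indexed draw presumably looks like (assuming the IBO above is still bound and no other state has changed):
// Six indices, two triangles, GL_UNSIGNED_INT to match Indices[]:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);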
The C++ code (using glm) sets up the world matrix:
glUseProgram(ShaderProgram); //this selects the shader
gWorldLocation = glGetUniformLocation(ShaderProgram, "gWorld");
assert(gWorldLocation != 0xFFFFFFFF);
...and when rendering…
//try to fix openGL’s desire to think my buffer is -1 to 1 across
float scale = 1/1024.f; //test map is about 1024 units across
glm::mat4 sm = glm::scale(
glm::mat4( 1.0f ),
glm::vec3( scale, scale, 1.0f )
);
glm::mat4 ts = glm::translate(
sm,
glm::vec3( -512.0f, -512.0f, 0.0f ) //shove left and down
);
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);
Since my test map is about 1024 units across, I’d have thought this would have shoved things into position. But no. The floor (which, remember, is using gl_FragCoord to decide where and what to draw) is painted from screen center and up and right, though it otherwise looks as I’d expect. The walls, which are painted by skinny triangles in dungeon coordinates, are nowhere to be seen, probably scaled off into the aether somewhere.
Basically I’m not convincing OpenGL that I want x=0 to be the left edge of the image, and my scaling is obviously completely wrong. Sadly I had one version that (incorrectly) drew some walls on the screen at one point, but I don’t have that code anymore. Still, it tells me that I’m not completely off in generating the walls, just in laying them down.
How do I get OpenGL to use my units?
You transpose the matrix when you set the matrix uniform. Since the vector is multiplied by the matrix from the right in your shader program, this is wrong. See GLSL Programming/Vector and Matrix Operations:
glUniformMatrix4fv(gWorldLocation, 1, GL_TRUE, &ts[0][0]);  // wrong: GL_TRUE transposes the matrix
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, &ts[0][0]); // correct
Instead of scaling and translating the vertices, you can set up an orthographic projection matrix with glm::ortho:
glm::mat4 projection = glm::ortho(0.0f, 1024.0f, 0.0f, 1024.0f, -1.0f, 1.0f);
glUniformMatrix4fv(gWorldLocation, 1, GL_FALSE, glm::value_ptr(projection));
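If the window is not exactly 1024×1024, a variant using the actual framebuffer size (fbWidth and fbHeight are assumed names here) keeps the “1 unit is 1 pixel” rule intact:
// Map x in [0, fbWidth] and y in [0, fbHeight] to clip space, so one
// dungeon unit lands on one pixel and (0,0) is the lower left corner:
glm::mat4 projection = glm::ortho(0.0f, (float)fbWidth,
                                  0.0f, (float)fbHeight,
                                  -1.0f, 1.0f);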

Display two objects using OpenGL. Textures not behaving as expected

Hi, I am trying to display two objects using OpenGL: 1) a rotating cube with a mix of two textures (a wooden crate pattern and a smiley) in the foreground, and 2) a rectangular plate with just one texture (dark grey wood) as a background. When I comment out the part of the code governing the display of the rectangular plate, the rotating cube displays both textures (wooden crate and smiley). Otherwise, the cube displays only the wooden crate texture while the dark grey wood texture is displayed on the rectangular plate, i.e. the smiley texture disappears from the rotating cube. Please find the images 1) http://oi68.tinypic.com/2la4r3c.jpg (with the rectangular plate portion of the code commented out) and 2) http://i67.tinypic.com/9u9rpf.jpg (without it commented out). The relevant portion of the code is pasted below.
// Rotating Cube ===================================================
// Texture of wooden crate
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture1"), 0);
// Texture of a smiley
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, texture2);
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture2"), 1);
// lets use the box shader for the cube
ourShader_box.Use();
// transformations for the rotating cube ---------------------------------
glm::mat4 model_box, model1, model2;
glm::mat4 view_box;
glm::mat4 perspective;
perspective = glm::perspective(45.0f, (GLfloat)width_screen/(GLfloat)height_screen, 0.1f, 200.0f);
model1 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.5f, 1.0f, 0.0f));
model2 = glm::rotate(model_box, (GLfloat)glfwGetTime()*1.0f, glm::vec3(0.0f, 1.0f, 0.5f));
model_box = model1 * model2;
view_box= glm::translate(view_box, glm::vec3(1.0f, 0.0f, -3.0f));
GLint modelLoc_box = glGetUniformLocation(ourShader_box.Program, "model");
GLint viewLoc_box = glGetUniformLocation(ourShader_box.Program, "view");
GLint projLoc_box = glGetUniformLocation(ourShader_box.Program, "perspective");
glUniformMatrix4fv(modelLoc_box, 1, GL_FALSE, glm::value_ptr(model_box));
glUniformMatrix4fv(viewLoc_box, 1, GL_FALSE, glm::value_ptr(view_box));
glUniformMatrix4fv(projLoc_box, 1, GL_FALSE, glm::value_ptr(perspective));
// --------------------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_box);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
// Rectangular Plate =====================================================
// Background Shader
ourShader_bg.Use();
// Texture of dark grey wood
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, texture_wood);
glUniform1i(glGetUniformLocation(ourShader_bg.Program, "ourTexture3"), 2);
// Transformations -------------------------------------------
glm::mat4 model_bg;
glm::mat4 view_bg;
GLint modelLoc_bg = glGetUniformLocation(ourShader_bg.Program, "model");
GLint viewLoc_bg= glGetUniformLocation(ourShader_bg.Program, "view");
GLint projLoc_bg = glGetUniformLocation(ourShader_bg.Program, "perspective");
glUniformMatrix4fv(modelLoc_bg, 1, GL_FALSE, glm::value_ptr(model_bg));
glUniformMatrix4fv(viewLoc_bg, 1, GL_FALSE, glm::value_ptr(view_bg));
glUniformMatrix4fv(projLoc_bg, 1, GL_FALSE, glm::value_ptr(perspective));
// -----------------------------------------------------------
// Draw calls
glBindVertexArray(VAO_bg);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
// =================================================================
I have two questions regarding this code.
Why is the smiley disappearing?
Is this how multiple objects are supposed to be rendered? I know OpenGL does not care about objects, it only cares about vertices, but in this case these are separate, disjoint objects. So, should I organize them as two VBOs bound to a single VAO, or as separate VBOs, each bound to its own VAO? Or is either way fine, depending on the coder's choice and elegance of code?
You are using the same shader, the same matrices, and the same geometry type for the two objects (triangles), so why set the shader twice?
Did you try the following sequence (sketched in code after the list)?
Set shader
Bind buffer #1
Bind texture #1
Draw object #1
Bind buffer #2
Bind texture #2
Draw object #2
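A minimal sketch of that ordering, reusing names from the question and assuming a single shader whose sampler is ourTexture1 (the matrix uniforms are assumed to be set beforehand, as in the question):
ourShader_box.Use();                        // set shader once
// Object #1: the cube
glBindVertexArray(VAO_box);                 // bind buffer #1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture1);     // bind texture #1
glUniform1i(glGetUniformLocation(ourShader_box.Program, "ourTexture1"), 0);
glDrawArrays(GL_TRIANGLES, 0, 36);          // draw object #1
// Object #2: the plate
glBindVertexArray(VAO_bg);                  // bind buffer #2
glBindTexture(GL_TEXTURE_2D, texture_wood); // bind texture #2 (still unit 0)
glDrawArrays(GL_TRIANGLES, 0, 6);           // draw object #2
glBindVertexArray(0);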

Multiple images of same mesh without duplicate triangle transfers

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change between shots; only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* set GL_MODELVIEW matrix depending on i */
    glBegin(GL_TRIANGLES);
    for (Triangle &t : mesh) {
        for (Point &p : t) {
            glVertex3f(p.x, p.y, p.z);
        }
    }
    glEnd(); // finish the GL_TRIANGLES batch
    glReadPixels(/*...*/); // get picture and store it somewhere
    glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques, you upload your data to the GPU once, and then you can draw the same mesh with various transformations.
Here is an outline of what your code may look like:
// 1. Initialization.

// Object handles:
GLuint vao;
GLuint verticesVbo;

// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// Generate a buffer object.
glGenBuffers(1, &verticesVbo);

// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);

// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);

// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
    0.0f, 0.0f, -9.0f,
    0.0f, 0.1f, -9.0f,
    1.0f, 1.0f, -9.0f
};

// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);

// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 3 * 3, vertices, GL_STATIC_DRAW);

// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

// 2. Draw calls.
while (/* draw calls are needed */) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);
    // Set transformation matrix and/or other
    // transformation parameters here using glUniform* calls.
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0); // Unbinding just as an example, in case some other code binds something else later.
}
And a vertex shader may look like this:
#version 330 core // assumed version; layout(location = ...) needs GLSL 3.30+ or the explicit-attrib-location extension
layout(location = 0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
    gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0);
}
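A hedged sketch of the matching CPU-side loop; program, vao and computeViewProjection are assumed names, not from the answer above:
GLint vpLoc = glGetUniformLocation(program, "viewProjectionMatrix");
glUseProgram(program);
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glm::mat4 vp = computeViewProjection(i); // hypothetical per-shot matrix
    glUniformMatrix4fv(vpLoc, 1, GL_FALSE, glm::value_ptr(vp));
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3); // same mesh, new transform
    glBindVertexArray(0);
    // glReadPixels(...) to grab the image, as before
}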
@BDL already commented that you should abandon the immediate mode drawing calls (glBegin … glEnd) and switch to vertex array drawing (glDrawElements, glDrawArrays), which fetches its data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question suggests you're not yet aware of it – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points, lines and triangles, one at a time, with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.

OpenGL - orthographic projection matrix

I'm very new to OpenGL and I am doing a mini project where I experiment with the depth buffer. I got to the stage of displaying it on the screen. However, I want to draw it in screen coordinates instead of converting to floats. I read somewhere that I need to use a projection matrix. I have looked for ages and tested loads of different options, but I can't seem to get it right.
Can anyone point me to a useful resource or explain how I would go about doing this?
EDIT
At the moment my matrix looks like this:
projectionMat = glm::ortho(0.0f, (float)_cols, 0.0f, (float)_rows, 0.0f, (float)_maxDepthVal);
projection = glGetUniformLocation(_program, "Projection");
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 2
With some fiddling I found that cols had to be negative, for some strange reason, before it would display. It will now display correctly on the screen, but for some reason it has a gap around the sides opposite the origin. Why is this? Even a small move in the camera position and target causes all of it to vanish, so I don't think that would be the problem.
Pixel Art Representation!!
OOOO!!
OOOO!!
OOOO!!
!!!!!!!!!!!!!!
New code
glm::mat4 Projection = glm::ortho(0.0f, -static_cast<float>(_cols), 0.0f, static_cast<float>(_rows), 0.0f, static_cast<float>(_maxDepthVal));
projection = glGetUniformLocation(_program, "Projection");
glm::mat4 View = glm::lookAt(
glm::vec3(0.0f, 0.0f, -0.1f),
glm::vec3(0.0f , 0.0f, 0.0f), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
projectionMat = Projection * View * Model;
glUniformMatrix4fv(projection, 1, GL_FALSE, glm::value_ptr(projectionMat));
EDIT 3
I can translate it using the Model matrix but it has a gap of 5 pixels around it that I can't get rid of, any help on that would be appreciated but thanks for taken an interest.
UPDATE
As requested, my draw code:
glUseProgram(_program);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
SDL_GL_SwapWindow(_window);
glPointSize(1);
glEnableVertexAttribArray(0);
//Insert matrix here
glVertexAttribPointer(0, 3, GL_UNSIGNED_INT, GL_FALSE, 0, 0);
glDrawArrays(GL_POINTS, 0, _dataCount);
glDisableVertexAttribArray(0);
My VBO:
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _dataCount * 4 * sizeof(unsigned int), NULL, GL_STATIC_DRAW);
if (_vbo == 0 || glGetError() != GL_NO_ERROR)
{
    _errorMessage = "VBO COULD NOT BE CREATED";
    error();
}
checkCudaErrors(cudaGraphicsGLRegisterBuffer(&vbo, _vbo, cudaGraphicsMapFlagsNone));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);
I'm also having issues with writing the data back: when it is converted to floats (for drawing) it loses precision, so if I read the value out again it rounds to the nearest factor (0, 256, 512, etc.). Is there another way to do it that stores the data as unsigned int? (I realize this is getting slightly off topic, but any help would be appreciated.)
The issue appeared to be with the cols variable: it needed to be inverted to work, otherwise the drawing was off the screen.

While drawing in orthographic view, is there any performance advantage to using glDrawElements?

I am drawing a large number of orthographic representations, around one million, in my model drawing.
(I will draw these things with some flag.)
A camera is also implemented; rotation etc. are possible.
All these orthographic representations change their positions when I rotate the model, so that they appear to stay in the same place on the model.
Now I would like to draw these orthographic things through the graphics card, because when they are huge in number, model rotation is very, very slow.
I feel like there would not be any advantage, because every time I would have to recompute the positions based on the projection matrix.
1) Am I correct?
2) And also, please let me know how to improve performance when I am drawing bulk orthographic representations using OpenGL.
3) I also feel instancing will not work here, because each orthographic rep is drawn between 2/3 positions. Am I correct?
Usually, OpenGL does the projection calculation for you while drawing: the positions handed over to GL are world or model coordinates, and GL uses the model-view-projection matrix (while rendering) to calculate the screen coordinates for the current projection, etc. If the camera moves, the only thing that changes is the MVP matrix handed to GL.
This shouldn't really depend on the kind of projection you are using, so I don't think you need to (or should) update the positions in your array.
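A minimal sketch of that idea (the uniform location, matrices and vertex counts are assumed names; the vertex data was uploaded once at startup):
// Per frame only the MVP uniform changes; the vertex buffer holding the
// million representations is not touched on the CPU side:
glm::mat4 mvp = projection * view * model; // recomputed from the camera
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);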
Here is my approach:
You create a vertex buffer that contains each vertex position six times, together with six texture coordinates (which you need anyway if you want to draw your representation with textures), from which you make a quad in the vertex shader. In the shader you emulate the OpenGL projection and then offset the vertex by its texture coordinate to create a quad of constant size.
When constructing the model:
vector<vec3>* positionList = new vector<vec3>();
vector<vec2>* texCoordList = new vector<vec2>();
for (vector<vec3>::iterator it = originalPositions->begin(); it != originalPositions->end(); ++it) {
    for (int i = 0; i < 6; i++) // each quad consists of 2 triangles, therefore 6 vertices
        positionList->push_back(vec3(*it));
    texCoordList->push_back(vec2(0, 0)); // corresponding texture coordinates
    texCoordList->push_back(vec2(1, 0));
    texCoordList->push_back(vec2(0, 1));
    texCoordList->push_back(vec2(1, 0));
    texCoordList->push_back(vec2(1, 1));
    texCoordList->push_back(vec2(0, 1));
}
vertexCount = positionList->size();

glGenBuffers(1, &VBO_Positions); // Generate the buffer for the vertex positions
glBindBuffer(GL_ARRAY_BUFFER, VBO_Positions);
glBufferData(GL_ARRAY_BUFFER, positionList->size() * sizeof(vec3), positionList->data(), GL_STATIC_DRAW);

glGenBuffers(1, &VBO_texCoord); // Generate the buffer for texture coordinates, which we are also going to use as offset values
glBindBuffer(GL_ARRAY_BUFFER, VBO_texCoord);
glBufferData(GL_ARRAY_BUFFER, texCoordList->size() * sizeof(vec2), texCoordList->data(), GL_STATIC_DRAW);
Vertex Shader:
#version 330 core
// Declarations inferred from the names used in the body below:
layout(location = 0) in vec3 vs_position;
layout(location = 1) in vec2 vs_texCoord;
out vec2 fs_texCoord;
uniform mat4 transform;       // combined model-view-projection matrix
uniform float offsetScale;    // half size of the quad in NDC
uniform float invAspectRatio; // viewport height / width

void main() {
    fs_texCoord = vs_texCoord;
    vec4 transformed = transform * vec4(vs_position, 1);
    transformed.xyz /= transformed.w; // This is how the OpenGL pipeline does projection
    // Map the texture coordinates from [0, 1] to [-offsetScale, offsetScale]
    vec2 offset = (vs_texCoord * 2 - 1) * offsetScale;
    offset.x *= invAspectRatio;
    gl_Position = vec4(transformed.xy + offset, 0, 1);
    // We pass the new position to the pipeline with w = 1 so OpenGL keeps the position we calculated
}
Note that you need to adapt to the aspect ratio yourself, since there is no actual orthographic matrix in this that would do it for you; that is what this line is for:
offset.x *= invAspectRatio;
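A minimal sketch of feeding that uniform from the CPU side (the viewport size variables and program are assumed names):
// The quad is offset in NDC, so x must be compressed by height/width to
// stay square on a non-square viewport:
float invAspectRatio = (float)viewportHeight / (float)viewportWidth;
glUniform1f(glGetUniformLocation(program, "invAspectRatio"), invAspectRatio);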