Rotating, scaling and translating 2d points in GLM - c++

I am currently trying to create a 2D game and I am a bit shocked that I cannot find any transformations like rotation, scale, or translate for vec2.
For example as far as I know rotate is only available for a mat4x4 and mat4x4 can only be multiplied by a vec4.
Is there any reason for that?
I want to calculate my vertices on the CPU
std::vector<glm::mat2> m;
Gather all matrices, generate the vertices, fill the GPU buffer and then draw everything in one draw call.
How would I do this in glm? Just use a mat4 and then ignore the z and w component?

One thing that you should understand is that you can't just ignore the z component, let alone the w component. It depends on how you want to view the problem. The problem is that you want to use a mat2 and a vec2 for 2D, but alas it's not as simple as you may think. You are using OpenGL, and presumably it's with shaders too. You need to use at least a glm::mat3, or better: mat4. The reason for this is that although you want everything in 2D, it has to be done in terms of 3D. 2D is really just 3D with the z coordinate held constant and the clip planes fixed relative to the window size.
So, what I suggest is this:
For your model matrix, you have it as a glm::mat4 which contains all data, even if you don't intend to use it. Consistency is important here, especially for transformations.
Don't ignore the z and w components in glm::mat4; they are important because their values dictate where the vertex ends up on screen. OpenGL needs to know where along the z axis a value is. And for matrix multiplication, homogeneous coordinates are required, so the w component is also important. In other words, you are virtually stuck with glm::mat4.
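If it helps to see it concretely, here is a minimal sketch (my own illustration, not part of the answer) of what keeping the z and w components looks like when pushing a 2D point through a glm::mat4:

#include <glm/glm.hpp>

glm::mat4 model(1.0f);                 // some model transform
glm::vec2 position(3.0f, 4.0f);        // the 2D point you care about

// z = 0 keeps the point on the z = 0 plane; w = 1 marks it as a position
// (w = 0 would make it a direction, which ignores translation).
glm::vec4 transformed = model * glm::vec4(position, 0.0f, 1.0f);
glm::vec2 result(transformed);         // drop z and w again afterwards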
To get transformations for glm::mat4 you should
#include <glm/gtc/matrix_transform.hpp>
Treat your sprites separately; as in, have a Sprite class, and group them outside that class. Don't do the grouping recursively, as this leads to messy code. Generating vertices should be done on a per-Sprite basis; don't worry about optimisation, because OpenGL takes care of a lot of that for you, as does the compiler on the C++ side of things. You'll soon realise that by breaking the problem down you can have something like
std::vector<Sprite*> sprites;
for (const auto& i : sprites)
    i->init();
// etc...
for (const auto& i : sprites)
    i->render();
With regards to the shaders, you shouldn't really have them inside the Sprite class. Have a resource loader, and simply have each Sprite class retrieve the shader from that loader.
And most importantly: remember the order of transformations!
With regards to transformations of sprites, what you can do is have a glm::vec3 for sprite positions, setting the z-component to 0. You can then move your sprite by simply having a function which takes x and y values. Feed these values into the model matrix using glm::translate(..). With regards to rotation, you use glm::rotate() and simply have functions which take rotation angles. ALWAYS ROTATE ABOUT THE Z-AXIS, so it should look similar to this:
modelMatrix = glm::rotate(modelMatrix, glm::radians(angle), glm::vec3(0.f, 0.f, 1.f));
As for scaling, again have a suitable setScale() function that accepts two values for x and y. Set the z component to 1 to prevent the Sprite from being scaled in z. Feed the values into glm::scale like so:
modelMatrix = glm::scale(modelMatrix, glm::vec3(scaleX, scaleY, 1.f));
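Putting those three calls together, a Sprite's model matrix might be built roughly like this (a sketch only; the parameter names are mine, not from the answer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeModelMatrix(glm::vec2 position, float angleDegrees, glm::vec2 scale)
{
    glm::mat4 model(1.0f);
    // Order matters: translate, then rotate, then scale, so the sprite
    // rotates and scales about its own origin rather than the world origin.
    model = glm::translate(model, glm::vec3(position, 0.0f));
    model = glm::rotate(model, glm::radians(angleDegrees), glm::vec3(0.f, 0.f, 1.f));
    model = glm::scale(model, glm::vec3(scale, 1.0f));
    return model;
}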
Remember that storing raw matrices on their own provides little benefit: they're just numbers and don't indicate what you intend to do with them. It's better for a Sprite class to encapsulate its matrix so that it's clear what it stands for.
You're welcome!

It seems that I haven't looked hard enough:
GLM_GTX_matrix_transform_2d from glm/gtx/matrix_transform_2d.hpp.
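For completeness, a small sketch of what the 2D variants look like (my own example, not from the original answer; note that recent GLM versions may require GLM_ENABLE_EXPERIMENTAL to be defined before including GTX headers):

#define GLM_ENABLE_EXPERIMENTAL
#include <glm/glm.hpp>
#include <glm/gtx/matrix_transform_2d.hpp>

glm::vec2 transformPoint(glm::vec2 point)
{
    glm::mat3 m(1.0f);
    m = glm::translate(m, glm::vec2(10.0f, 5.0f));  // move
    m = glm::rotate(m, glm::radians(45.0f));        // rotate about z
    m = glm::scale(m, glm::vec2(2.0f, 2.0f));       // scale x and y
    // Points still need one homogeneous coordinate (w = 1), but only one.
    return glm::vec2(m * glm::vec3(point, 1.0f));
}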

Related

How to scale a matrix obtained with glm::lookat()?

I am trying to create an orthographic view (separate from the final perspective view). I know the view's scale, position, its up vector, and the point it's looking at. So it seems easiest to simply use glm::lookAt() to create the view matrix, except there's no way to specify scale.
Is there a way to introduce scaling with glm::lookAt()? I don't think it's possible to scale the matrix made by lookAt, since scaling needs to happen first to get the 'expected' result (objects appearing larger or smaller, like zooming a camera in or out). I may be wrong there though.
If it's not possible while using lookAt, is there a series of glm functions that would be equivalent?
In terms of matrix transformations, you can start with a scale and apply the glm::lookAt() matrix afterwards. I'm not familiar with glm, but I have used GLUT before. You should be able to do a matrix multiplication like this (I'm assuming that you've already transformed everything into scene space, defined all the variables etc.)
TransformedVector = glm::lookAt(cameraPosition,
                                cameraTarget,
                                upVector) *
                    glm::scale(glm::mat4(1.0f),
                               glm::vec3(scalingFactor)) *
                    OriginalVector;
If this behaves peculiarly, try swapping the glm::lookAt() and glm::scale() order.
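As a rough sketch of the same idea in context (the variable names, including zoom, are assumptions), the scale can be baked into the view matrix that you then apply to the whole scene:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeScaledView(glm::vec3 cameraPosition, glm::vec3 cameraTarget,
                         glm::vec3 upVector, float zoom)
{
    glm::mat4 view = glm::lookAt(cameraPosition, cameraTarget, upVector);
    // Applying the scale on the right means it happens first, before the
    // lookAt transform, which gives the expected zoom-like behaviour.
    return view * glm::scale(glm::mat4(1.0f), glm::vec3(zoom));
}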

How do I access a transformed OpenGL modelview matrix [tried glGetFloatv()]?

I am trying to rotate over the 'x' axis and save the transformed matrix so that I can use it to rotate further later; or over another axis from the already rotated perspective.
//rotate
glRotatef(yROT,model[0],model[4],model[8]);//front over right axis
//save model
glGetFloatv(GL_MODELVIEW_MATRIX, model);
Unfortunately I noticed that OpenGL must buffer the transformations, because the identity matrix is loaded into model. Is there a work-around?
Why, oh God, would you do this?
I have been toying around with attempting to understand quaternion, Euler, and axis-angle rotation. The concepts are not difficult, but I have been having trouble with the math even after looking at examples. *edit: and most of the open classes I have found are either not well documented for simpleton users or have restrictions on movement.
I decided to find a way to cheat.
*edit: By 'further later' I mean in the next loop of the code. In other words, yRot is the number of degrees I want my view to rotate from the saved perspective.
My suggestion: don't bother with glRotate at all; those functions were never very pleasant to work with in the first place, and no serious program ever used them.
If you want to use the fixed function pipeline (= no shaders), use glLoadMatrix to load whatever transformation you currently need. With shaders you have to do the conceptually same thing with glUniform anyway.
Use an existing matrix math library, like GLM, Eigen or linmath.h, to construct the transformation matrices. The nice benefit is that you can make copies of a matrix at any point, so instead of fiddling with glLoadIdentity, glPushMatrix and glPopMatrix you just make copies where you need them and work from them.
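A minimal sketch of that workflow, assuming the fixed-function pipeline (the rotation axis and variable names here are just illustrative):

#include <GL/gl.h>   // or your platform's OpenGL header
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawRotated(float yRot)
{
    glm::mat4 modelview(1.0f);
    modelview = glm::rotate(modelview, glm::radians(yRot), glm::vec3(1.f, 0.f, 0.f));

    glm::mat4 saved = modelview;              // "push": just copy the matrix

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(glm::value_ptr(modelview)); // GLM matrices are column-major,
                                              // exactly what OpenGL expects
    // ... issue draw calls here ...

    modelview = saved;                        // "pop": restore the copy
}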
BTW: There is no such thing as "models" in OpenGL. That's not how OpenGL works. OpenGL draws points, lines or triangles, one at a time, where each such primitive is transformed individually to a position in the (screen) framebuffer and turned into pixels. Once a primitive has been processed, OpenGL has already forgotten about it.

Why are model and view matrices specifically often combined together?

I'm having trouble understanding why the model and view matrices are traditionally combined together. I know that the less matrix multiplying you do in the vertex shader the better, but it makes much more sense to me to combine the projection and view matrices.
This is because they are both intrinsically camera properties. It makes sense to me to first transform vertices into world space with the model matrix, perform lighting etc., then use your combined camera matrix to translate to normalised clip space.
I know that I can do it that way if I want in a programmable pipeline, but I want to know why historically people combined the model and view matrices.
In graphics programming, the camera doesn't exist. It's always fixed at (0,0,0) and looking towards (0,0,-1). The camera as everyone knows it is completely artificial and mimics the way we are used to observing objects as humans: moving around, pivoting our heads and so on. To mimic that, CG introduces the concept of a camera. It is interesting and well known that it is the same thing whether you move the camera to the right or move all other objects in the scene to the left. That invariance is then transferred onto the model matrix by combining all transformations on an object into one matrix, the modelview matrix.
View and projection matrices are separated because those matrices do very different transformations. One is very similar to the model matrix and represents 3D, in-space transformations, and the other describes how the scene is projected onto the screen.
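To make the invariance concrete, here is a small illustrative sketch (the names are made up, not from the answer): the view matrix is simply the inverse of the camera's own transform, and model and view collapse into a single modelview matrix before they ever reach the shader:

#include <glm/glm.hpp>

glm::mat4 computeModelView(const glm::mat4& model, const glm::mat4& cameraTransform)
{
    glm::mat4 view = glm::inverse(cameraTransform); // moving the camera right
                                                    // == moving the world left
    return view * model;                            // one matrix, one multiply
                                                    // per vertex in the shader
}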

OpenGL define vertex position in pixels

I've been writing a 2D basic game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly or can someone point to an article or something that'll explain this. Thanks!
This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway the standard OpenGL pipeline is as follows:
The vertex position is transformed from object-space (local to some object) into world-space (in respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world
Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.
Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther away objects being smaller). But in your case of a 2D view it will probably be an orthographic projection, that does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.
After these 3 (or 2) transformations (and then following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.
In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which are the vertex's final positions in the framebuffer and therefore its pixel coordinates.
When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.
The main thing to keep in mind is, that after the modelview and projection transforms every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.
So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.
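If you are building the matrices yourself rather than using the fixed-function glOrtho, the GLM equivalent would look roughly like this (a sketch under the assumption that w and h hold your viewport size):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 pixelProjection(float w, float h)
{
    // Maps x in [0,w] and y in [0,h] to the [-1,1] clip range, so vertex
    // coordinates can be given directly in pixels.
    return glm::ortho(0.0f, w, 0.0f, h, -1.0f, 1.0f);
}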
These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. This documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v'' = v' / v'(3)) after the v' = P x M x v step.
EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practical and detailed.
It is called a transformation.
Vertices are set in 3D coordinates which are transformed into viewport coordinates (into your window view). This transformation can be set in various ways. An orthographic transformation is the easiest to understand as a starter.
http://www.songho.ca/opengl/gl_transform.html
http://www.opengl.org/wiki/Vertex_Transformation
http://www.falloutsoftware.com/tutorials/gl/gl5.htm
Firstly, be aware that OpenGL does not use standard pixel coordinates. By that I mean that for a particular resolution, e.g. 800x600, you don't have horizontal coordinates in the range 0-799 or 1-800 stepped by one. Instead, coordinates range from -1 to 1, are sent to the graphics card's rasterizing unit, and are only then mapped to the particular resolution.
I omitted one step here - before all that you have a ModelViewProjection matrix (or a ViewProjection matrix in some simple cases) which casts the coordinates you use onto a projection plane. The default use of that is to implement a camera which converts the 3D space of the world (the View part places the camera in the right position, and the Projection part casts 3D coordinates onto the screen plane; in ModelViewProjection there is also the step of placing a model in the right place in the world).
Another case (and you can use the Projection matrix this way to achieve what you want) is to use these matrices to convert one range of coordinates to another.
And there's a trick you will need. You should read about the ModelViewProjection matrix and the camera in OpenGL if you want to get serious. But for now I will tell you that with a proper matrix you can map your own coordinate system (e.g. using the range 0-799 horizontally and 0-599 vertically) to the standardized -1:1 range. That way you will not see that the underlying OpenGL API uses its own -1 to 1 system.
The easiest way to achieve this is the glOrtho function. Here's the link to the documentation:
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
This is an example of proper usage:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
Now you can use your own modelview matrix, e.g. for translating (moving) objects, but don't touch your projection matrix. This code should be executed before any drawing commands. (In fact it can go right after initializing OpenGL if you won't use 3D graphics.)
And here's working example: http://nehe.gamedev.net/tutorial/2d_texture_font/18002/
Just draw your figures instead of drawing text. And there is another thing - glPushMatrix and glPopMatrix for the chosen matrix (in this example the projection matrix) - you won't need those until you combine 3D with 2D rendering.
And you can still use the model matrix (e.g. for placing tiles somewhere in the world) and the view matrix (for example for zooming the view, or scrolling through the world - in this case your world can be larger than the resolution and you can crop the view with simple translations).
After looking at my answer I see it's a little chaotic, but if you're confused, just read about the Model, View, and Projection matrices and try the glOrtho example. If you're still confused feel free to ask.
MSDN has a great explanation. It may be in terms of DirectX but OpenGL is more-or-less the same.
Google for "opengl rendering pipeline". The first five articles all provide good expositions.
The key transition from vertices to pixels (actually, fragments, but you won't be too far off if you think "pixels") is in the rasterization stage, which occurs after all vertices have been transformed from world-coordinates to screen coordinates and clipped.

How do I translate single objects in OpenGL 3.x?

I have a bit of experience writing OpenGL 2 applications and want to learn using OpenGL 3. For this I've bought the Addison Wesley "Red-book" and "Orange-book" (GLSL) which describe the deprecation of the fixed functionality and the new programmable pipeline (shaders). But what I can't get a grasp of is how to construct a scene with multiple objects without using the deprecated translate*, rotate* and scale* functions.
What I used to do in OGL2 was to "move about" in 3D space using the translate and rotate functions, and create the objects in local coordinates where I wanted them using glBegin ... glEnd. In OGL3 these functions are all deprecated, and, as I understand, replaced by shaders. But I can't call a shader program for each and every object I make, can I? Wouldn't this affect all the other objects too?
I'm not sure if I've explained my problem satisfactorily, but the core of it is how to program a scene with multiple objects defined in local coordinates in OpenGL 3.1. All the beginner tutorials I've found only use a single object and don't address this problem.
Edit: Imagine you want two spinning cubes. It would be a pain manually modifying each vertex coordinate, and you can't simply modify the modelview-matrix, because that would rather spin the camera around two static cubes...
Let's start with the basics.
Usually, you want to transform your local triangle vertices through the following steps:
local-space coords -> world-space coords -> view-space coords -> clip-space coords
In standard GL, the first 2 transforms are done through GL_MODELVIEW_MATRIX, and the 3rd is done through GL_PROJECTION_MATRIX.
These model-view transformations, for the many interesting transforms that we usually want to apply (say, translate, scale and rotate, for example), happen to be expressible as vector-matrix multiplication when we represent vertices in homogeneous coordinates. Typically, the vertex V = (x, y, z) is represented in this system as (x, y, z, 1).
Ok. Say we want to transform a vertex V_local through a translation, then a rotation, then a translation. Each transform can be represented as a matrix*, let's call them T1, R1, T2.
We want to apply the transform to each vertex: V_view = V_local * T1 * R1 * T2. Matrix multiplication being associative, we can compute once and for all M = T1 * R1 * T2.
That way, we only need to pass down M to the vertex program, and compute V_view = V_local * M. In the end, a typical vertex shader multiplies the vertex position by a single matrix. All the work to compute that one matrix is how you move your object from local space to the clip space.
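A small GLM sketch of the same idea (my own example, not from the answer): note that GLM, like GLSL, uses column vectors, so the combined matrix is written with the first transform on the right, i.e. in the reverse order of the row-vector notation used above:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 T1 = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::mat4 R1 = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f), glm::vec3(0.f, 0.f, 1.f));
glm::mat4 T2 = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 2.0f, 0.0f));

glm::mat4 M = T2 * R1 * T1;                                 // computed once per object
glm::vec4 V_view = M * glm::vec4(1.0f, 1.0f, 0.0f, 1.0f);  // applied per vertex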
Ok... I glanced over a number of important details.
First, what I described so far only really covers the transformation we usually want to do up to the view space, not the clip space. However, the hardware expects the output position of the vertex shader to be represented in that special clip-space. It's hard to explain clip-space coordinates without significant math, so I will leave that out, but the important bit is that the transformation that brings the vertices to that clip-space can usually be expressed as the same type of matrix multiplication. This is what the old gluPerspective, glFrustum and glOrtho compute.
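For reference, the GLM counterparts of those old helpers look roughly like this (illustrative values only):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Roughly what gluPerspective computed: vertical FOV, aspect ratio, near/far planes.
glm::mat4 persp = glm::perspective(glm::radians(60.0f), 800.0f / 600.0f, 0.1f, 100.0f);

// Roughly what glOrtho computed: an axis-aligned box mapped onto clip space.
glm::mat4 ortho = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f);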
Second, this is what you apply to vertex positions. The math to transform normals is somewhat different. That's because you want the normal to stay perpendicular to the surface after transformation (for reference, it requires a multiplication by the inverse-transpose of the model-view in the general case, but that can be simplified in many cases)
Third, you never send 4-D coordinates to the vertex shader. In general you pass 3-D ones. OpenGL will transform those 3-D coordinates (or 2-D, btw) to 4-D ones so that the vertex shader does not have to add the extra coordinate. It expands each vertex to add the 1 as the w coordinate.
So... to put all that back together, for each object, you need to compute those magic M matrices based on all the transforms that you want to apply to the object. Inside the shader, you then have to multiply each vertex position by that matrix and pass that to the vertex shader Position output. Typical code is more or less (this is using old nomenclature):
uniform mat4 MVP;
gl_Position = MVP * gl_Vertex;
* the actual matrices can be found on the web, notably on the man pages for each of those functions: rotate, translate, scale, perspective, ortho
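A rough sketch of the per-object flow described above (names such as the "MVP" uniform and the function layout are assumptions, not from the answer): compute one matrix per object on the CPU, upload it, then draw that object. Each object gets its own model matrix, so two cubes can spin independently while the view matrix stays put.

#include <GL/glew.h>   // or glad, or whichever OpenGL loader you already use
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void drawObject(GLuint program, GLuint vao, GLsizei vertexCount,
                const glm::mat4& projection, const glm::mat4& view,
                const glm::mat4& model)
{
    glm::mat4 mvp = projection * view * model;   // the single matrix M from above

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "MVP"), 1, GL_FALSE,
                       glm::value_ptr(mvp));

    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // only this object uses this MVP
}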
Those functions are apparently deprecated, but are technically still perfectly functional and will indeed compile. So you can certainly still use glTranslatef(...), glRotatef(...) and the other fixed-function transform calls.
HOWEVER, this tutorial has a good explanation of how the new shaders and so on work, AND for multiple objects in space.
You can create x arrays of vertices, bind them into x VAO objects, and render the scene from there with shaders etc. ...meh, it's easier for you to just read it - it is a really good read to grasp the new concepts.
Also, the OpenGL 'Red Book' as it is called has a new release - The Official Guide to Learning OpenGL, Versions 3.0 and 3.1. It includes 'Discussion of OpenGL’s deprecation mechanism and how to verify your programs for future versions of OpenGL'.
I hope that's of some assistance!