glMultMatrix/glLoadMatrix more efficient than glRotatef, or glTranslatef? - c++

Suppose I have the following code:
glRotatef(angle, 1.0f, 1.0f, 0.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
glTranslatef(0.0f, 0.0f, -5.0f);
Is this less efficient than utilizing one's own custom matrix via the glLoadMatrix function that accomplishes the same functionality?
Also, I know that when multiplying matrices to form a custom linear transformation, the last matrix multiplied is the first transformation to take place. Is that also the case with the code above? Will it translate first, then rotate about the z axis, followed by rotations about the y and x axes?

In general, if you assemble your matrix yourself and load it via glLoadMatrix or glMultMatrix, your program will run faster. Unless, of course, you make mistakes in your own matrix routines that ruin the performance.
This is because glRotate, glTranslate, etc. do quite a bit more than the pure math. They have to check the matrix mode, and glRotate has to deal with cases where the axis is not passed as a unit vector, and so on.
But unless you do this tens of thousands of times per frame, I wouldn't worry about the lost performance. It adds up, but not by much.
My personal way of dealing with OpenGL transformations is to build the matrices in my own code and only upload them to OpenGL via glLoadMatrix. This allows lots of shortcuts, such as reversing the order of multiplication (which can be cheaper to calculate than the way OpenGL does it). It also gives me direct access to the matrix, which is required if you want to do bounding-box checks before rendering.
Needless to say, code written with such an approach is also easier to port to a different graphics API (think OpenGL ES 2, DirectX, game consoles...).
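As a minimal sketch of that approach (plain C++; the function name is my own, and the glLoadMatrixf call is only shown in a comment because it needs a current GL context):

```cpp
#include <cassert>
#include <cmath>

// Build a column-major 4x4 rotation about the z axis, in the exact memory
// layout glLoadMatrixf expects. Keeping the matrix in your own code also
// makes it available for bounding-box checks before rendering.
void rotationZ(float angleRad, float m[16]) {
    const float c = std::cos(angleRad), s = std::sin(angleRad);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0] = c;  m[1] = s;   // first column:  ( c,  s, 0, 0)
    m[4] = -s; m[5] = c;   // second column: (-s,  c, 0, 0)
    m[10] = 1.0f;          // third column:  ( 0,  0, 1, 0)
    m[15] = 1.0f;          // fourth column: ( 0,  0, 0, 1)
}

// Then, instead of glRotatef(angleDeg, 0, 0, 1):
//   float m[16];
//   rotationZ(angleDeg * 3.14159265f / 180.0f, m);
//   glLoadMatrixf(m);   // needs a current GL context
```

Note that glRotatef takes degrees, while the C standard library trigonometric functions take radians, hence the conversion in the usage comment.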

According to the OpenGL specification, glRotate and glTranslate use their parameters to produce a 4x4 matrix; the current matrix is then multiplied by the produced matrix, with the product replacing the current matrix.
This roughly means that in the code you listed you have four matrix multiplications! On top of that you have four API calls, plus a few extra calculations that convert the glRotate angle and axis into a 4x4 matrix.
With glLoadMatrix you have to produce the transformation matrix yourself. Given the angles and the translation, there are far more efficient ways to build that matrix directly, which speeds up the whole thing.
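To make that concrete, here is a hedged sketch (plain C++; mul4 and translateRotateZ are illustrative names, and for brevity it composes one rotation with a translation rather than all three rotations): the composite T·Rz can be written into a single matrix directly, skipping the general 4x4 multiply that each glTranslate/glRotate call performs.

```cpp
#include <cassert>
#include <cmath>

// General column-major 4x4 multiply, out = a * b: roughly what the GL
// performs for every glRotate/glTranslate call (64 multiplies, 48 adds).
void mul4(const float a[16], const float b[16], float out[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
        }
}

// Direct fill of T * Rz: no general matrix multiply needed at all.
void translateRotateZ(float tx, float ty, float tz, float angleRad, float m[16]) {
    const float c = std::cos(angleRad), s = std::sin(angleRad);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0] = c;  m[1] = s;                 // rotation block of Rz
    m[4] = -s; m[5] = c;
    m[10] = 1.0f;
    m[12] = tx; m[13] = ty; m[14] = tz;  // translation column of T
    m[15] = 1.0f;
}
```

The same direct-fill idea extends to the full Rx·Ry·Rz·T composite, at the cost of messier closed-form entries.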

Is this less efficient than utilizing one's own custom matrix via the glLoadMatrix function that accomplishes the same functionality?
Very likely. However, if setting the transformation matrices has become a bottleneck, you're doing something fundamentally wrong. In a sanely written realtime graphics program, calculating the transformation matrices should account for only a very small fraction of the total work.
An example of very bad practice would be something like this (pseudocode):
glMatrixMode(GL_MODELVIEW)
for q in quads:
    glPushMatrix()
    glTranslatef(q.x, q.y, q.z)
    glBindTexture(GL_TEXTURE_2D, q.texture)
    glBegin(GL_QUADS)
    for v in [(0,0), (1,0), (1,1), (0,1)]:
        glVertex2f(v[0], v[1])
    glEnd()
    glPopMatrix()
Code like this performs very poorly. First you spend an awful lot of time calculating a new transformation matrix for each quad; then you restart a primitive batch for each quad; the texture switches kill the caches; and, last but not least, it uses immediate mode. Indeed, the above code packs just about every OpenGL anti-pattern into one single example.
Your best bet for increasing rendering performance is to avoid every one of the patterns you can see in the example above.
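As a contrast, a hedged sketch of the batched alternative (plain C++; Quad and buildBatch are made-up names): bake each quad's translation into its vertices on the CPU, upload the whole array once, and submit it in a single draw call instead of one immediate-mode batch per quad.

```cpp
#include <cassert>
#include <vector>

struct Quad { float x, y, z; };

// Bake each quad's translation into its vertices so all quads can be
// drawn from one vertex array with a single glDrawArrays(GL_QUADS, ...)
// call (a texture atlas would similarly remove the per-quad bind).
std::vector<float> buildBatch(const std::vector<Quad>& quads) {
    static const float corners[4][2] = {{0,0},{1,0},{1,1},{0,1}};
    std::vector<float> verts;
    verts.reserve(quads.size() * 4 * 3);
    for (const Quad& q : quads)
        for (const auto& c : corners) {
            verts.push_back(q.x + c[0]);
            verts.push_back(q.y + c[1]);
            verts.push_back(q.z);
        }
    return verts;
}
```

The transform work per quad shrinks to three additions per vertex, and the driver sees one batch instead of one state change plus one batch per quad.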

Those matrix functions are implemented in the driver, so they may well be optimized. You would have to spend some time writing your own code and measure whether its performance is actually better than the original OpenGL calls.
On the other hand, in "new" versions of OpenGL all of those functions are gone, having been marked as deprecated. So in the new standard you are forced to use your own math functions (assuming you are using the core profile).

Related

Rotating, scaling and translating 2d points in GLM

I am currently trying to create a 2D game, and I am a bit shocked that I cannot find any transformations like rotate, scale, or translate for vec2.
For example, as far as I know, rotate is only available for a mat4x4, and a mat4x4 can only be multiplied by a vec4.
Is there any reason for that?
I want to calculate my vertices on the CPU
std::vector<glm::mat2> m;
Gather all matrices, generate the vertices, fill the GPU buffer and then draw everything in one draw call.
How would I do this in glm? Just use a mat4 and then ignore the z and w component?
One thing that you should understand is that you can't just ignore the z component, let alone the w component. It depends on how you want to view the problem. The problem is that you want to use a mat2 and a vec2 for 2D, but alas it's not as simple as you may think. You are using OpenGL, and presumably with shaders too. You need to use at least a glm::mat3, or better, a mat4. The reason is that although you want everything in 2D, it has to be done in terms of 3D. 2D is really just 3D with a fixed z and a static clip plane, relative to the window size.
So, what I suggest is this:
For your model matrix, you have it as a glm::mat4 which contains all data, even if you don't intend to use it. Consistency is important here, especially for transformations.
Don't ignore the z and w components in glm::mat4; they are important because their values determine where vertices end up on screen. OpenGL needs to know where in the z-plane a value is. And for matrix multiplication, homogeneous coordinates are required, so the w component is also important. In other words, you are virtually stuck with glm::mat4.
To get transformations for glm::mat4 you should
#include <glm/gtc/matrix_transform.hpp>
Treat your sprites separately, as in: have a Sprite class, and group the sprites outside that class. Don't do the grouping recursively, as this leads to messy code. Generating vertices should be done on a per-Sprite basis; don't worry too much about optimisation, because the driver (and the compiler on the C++ side of things) takes care of a lot of that for you. You'll soon realise that by breaking down this problem you can have something like
std::vector<Sprite*> sprites;
for (const auto& s : sprites)
    s->init();
// etc...
for (const auto& s : sprites)
    s->render();
With regards to the shaders, you shouldn't really have them inside the Sprite class. Have a resource loader, and simply have each Sprite class retrieve the shader from that loader.
And most importantly: remember the order of transformations!
With regards to transformations of sprites, what you can do is have a glm::vec3 for sprite positions, setting the z component to 0. You can then move your sprite by simply having a function which takes x and y values. Feed these values into the model matrix using glm::translate(..). With regards to rotation, you use glm::rotate() and simply have functions which take rotation angles. ALWAYS ROTATE ABOUT THE Z-AXIS, so it should be similar to this:
modelMatrix = glm::rotate(modelMatrix, glm::radians(angle), glm::vec3(0.f, 0.f, 1.f));
As for scaling, again a suitable setScale() function that accepts two values for x and y. Set the z component to 1 to prevent the Sprite from being scaled in z. Feed the values into glm::scale like so:
modelMatrix = glm::scale(modelMatrix, glm::vec3(scale));
Remember that storing bare matrices provides little benefit, because they're just numbers; they don't indicate what you really want to do with them. It's better for a Sprite class to encapsulate its matrix, so that it's clear what it stands for.
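If you do end up rolling the 2D case yourself, a 3x3 homogeneous matrix is all you need. A minimal sketch without GLM (the type and function names here are my own):

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D homogeneous transform: a column-major 3x3 matrix
// applied to points written as (x, y, 1).
struct Mat3 {
    float m[9]; // column-major, like GLM/OpenGL
};

Mat3 translate2D(float tx, float ty) {
    Mat3 r = {{1, 0, 0,  0, 1, 0,  tx, ty, 1}};
    return r;
}

Mat3 rotate2D(float angleRad) {
    const float c = std::cos(angleRad), s = std::sin(angleRad);
    Mat3 r = {{c, s, 0,  -s, c, 0,  0, 0, 1}};
    return r;
}

// Transform a 2D point; the implicit third component is 1,
// which is what lets the translation column take effect.
void apply(const Mat3& a, float x, float y, float& ox, float& oy) {
    ox = a.m[0] * x + a.m[3] * y + a.m[6];
    oy = a.m[1] * x + a.m[4] * y + a.m[7];
}
```

This is essentially what GLM's 2D extension provides for glm::mat3, with the w machinery of mat4 dropped.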
You're welcome!
It seems that I haven't looked hard enough:
GLM_GTX_matrix_transform_2d from glm/gtx/matrix_transform_2d.hpp.

How do I access a transformed OpenGL modelview matrix [tried glGetFloatv()]?

I am trying to rotate over the 'x' axis and save the transformed matrix so that I can use it to rotate further later; or over another axis from the already rotated perspective.
//rotate
glRotatef(yROT,model[0],model[4],model[8]);//front over right axis
//save model
glGetFloatv(GL_MODELVIEW_MATRIX, model);
Unfortunately I noticed that OpenGL seems to buffer the transformations, because the identity matrix is what gets loaded into model. Is there a work-around?
Why, oh God, would you do this?
I have been toying around with attempting to understand quaternion, Euler, and axis rotation. The concepts are not difficult, but I have been having trouble with the math even after looking at examples. *edit* [Also, most of the open classes I have found are either not well documented for simpleton users or have restrictions on movement.]
I decided to find a way to cheat.
edit*
By 'further later' I mean in the next loop of code. In other words, yRot is the number of degrees I want my view to rotate from the saved perspective.
My suggestion: Don't bother with glRotate at all; those functions were never very pleasant to work with in the first place, and no serious program ever used them.
If you want to use the fixed-function pipeline (= no shaders), use glLoadMatrix to load whatever transformation you currently need. With shaders you have to do the conceptually same thing with glUniform anyway.
Use an existing matrix math library, like GLM, Eigen or linmath.h, to construct the transformation matrices. The nice benefit is that you can make copies of a matrix at any point, so instead of fiddling with glLoadIdentity, glPushMatrix and glPopMatrix you just make copies where you need them and work from those.
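A sketch of what "just make copies" buys you (plain C++; Mat4 here is only a bare value type, not GLM):

```cpp
#include <array>
#include <cassert>

// A matrix that is a plain value can be copied freely: the copy replaces
// glPushMatrix, and simply keeping the original around replaces glPopMatrix.
using Mat4 = std::array<float, 16>; // column-major, as glLoadMatrixf expects

Mat4 identity() {
    Mat4 m{};
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    return m;
}

// Usage sketch:
//   Mat4 view  = identity();         // camera transform, built once
//   Mat4 model = view;               // "push": start from a copy
//   model[12] += 5.0f;               // translate this object only
//   // glLoadMatrixf(model.data());  // upload; view is untouched ("pop")
```

Because the matrix is just data you own, it is also right there for bounding-box checks, picking, or uploading via glUniform in a shader-based renderer.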
BTW: There is no such thing as "models" in OpenGL. That's not how OpenGL works. OpenGL draws points, lines or triangles, one at a time, where each such called primitive is transformed individually to a position on the (screen) framebuffer and turned into pixels. Once a primitive has been processed OpenGL already forgot about it.

Efficient way to render multiple mesh objects in different positions using DirectX / C++

When using only one translation matrix, multiple meshes appear overlapping onscreen.
The solution I tried was to create multiple translation matrices to set different initial xyz coordinates for each mesh. It works, but this method seems pretty inefficient in terms of the number of lines of code used. (The final project will probably incorporate 20+ meshes so I was hoping I would not need to create 20 different translation matrices using basically the same section of code).
I'd greatly appreciate any suggestions as to the best way of rendering multiple meshes with the most efficient use of code (i.e. fewest instructions with the least amount of repetition).
This is only a small graphical demo, so a high framerate is not the priority; achieving the result with the most efficient use of code is.
The code below is a sample of how I'm currently rendering multiple meshes in different positions...
// initial locations for each mesh
D3DXVECTOR3 Translate1(-30.0f, 0.0f, 0.0f);
D3DXVECTOR3 Translate2(-30.0f, 10.0f, 0.0f);
D3DXVECTOR3 Translate3(0.0f, 0.0f, 0.0f);
D3DXVECTOR3 Translate4(0.0f, 10.0f, 0.0f);
//set scaling on all x y and z axis
D3DXVECTOR3 Scaling(g_ScaleX, g_ScaleY, g_ScaleZ);
/////////////////////////////////////////////////////////////////////////
// create first transformation matrix
D3DXMATRIX world1;
// create first translation matrix
D3DXMATRIX matrixTranslate;
D3DXMatrixTransformation(&world1, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate1);
//D3DXMatrixIdentity(&world1); // set world1 as current transformation matrix
// set world1 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world1);
// recompute normals
g_pd3dDevice -> SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);
// render first mesh
mesh1.Render(g_pd3dDevice);
/////////////////////////////////////////////////////////////////////////
D3DXMATRIX world2;
D3DXMATRIX matrixTranslate2;
D3DXMatrixTransformation(&world2, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate2);
// set world2 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world2);
// render second mesh
mesh2.Render(g_pd3dDevice);
////////////////////////////////////////////////////////////////////////
D3DXMATRIX world3;
D3DXMATRIX matrixTranslate3;
D3DXMatrixTransformation(&world3, &ScalingCentre, &ScalingRotation, &Scaling, &RotationCentre, &Rotation, &Translate3);
// set world3 as current transformation matrix for future meshes
g_pd3dDevice -> SetTransform(D3DTS_WORLD, &world3);
// render third mesh
mesh3.Render(g_pd3dDevice);
//////////////////////////////////////////////////////////////////////
Edit: I see by "efficient" you meant compact code (it usually means less CPU usage :)
In that case, yes, I noticed that you copy and paste essentially the same code. Why don't you use a function that takes parameters, including a transform and a mesh? That way, you can write one function that draws a mesh, and call it for each mesh. Better yet, also store the meshes in an array, then iterate over the array, calling the draw function for each element. I think you should read up on basic tutorials for C/C++. Have fun!
http://www.cplusplus.com/doc/tutorial/
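A hedged sketch of that refactor: the position-generating helper below is plain, testable C++ (Vec3 and gridPositions are made-up names, and the grid layout is arbitrary), while the D3D part is shown as comments because it needs a device, reusing the names from the question.

```cpp
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// Generate the per-mesh positions in a loop instead of hand-writing
// Translate1..Translate20 by copy-paste.
std::vector<Vec3> gridPositions(int count, int perRow, float dx, float dy) {
    std::vector<Vec3> p;
    for (int i = 0; i < count; ++i)
        p.push_back({ (i % perRow) * dx, (i / perRow) * dy, 0.0f });
    return p;
}

// Render loop sketch (needs g_pd3dDevice and a meshes container):
//   std::vector<Vec3> pos = gridPositions(meshes.size(), 2, 30.0f, 10.0f);
//   for (size_t i = 0; i < meshes.size(); ++i) {
//       D3DXVECTOR3 t(pos[i].x, pos[i].y, pos[i].z);
//       D3DXMATRIX world;
//       D3DXMatrixTransformation(&world, &ScalingCentre, &ScalingRotation,
//                                &Scaling, &RotationCentre, &Rotation, &t);
//       g_pd3dDevice->SetTransform(D3DTS_WORLD, &world);
//       meshes[i].Render(g_pd3dDevice);
//   }
```

Adding a 20th mesh then costs one array entry, not another copy of the transform block.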
Original comment:
The cost of calculating and setting a transform is much smaller than the cost of rendering a mesh, so I think there is no problem here we can help you solve. For example, is your framerate low?
When thinking about performance (in computer graphics or otherwise), try to express your problem as a measurable statement, rather than guesses based on feel. Start by describing your purpose (e.g. drawing multiple meshes at a good frame rate), then describe what isn't working, then develop theories as to why, which you can then test.

Coordinates in OpenGL

I am having trouble finding the right coordinates in OpenGL.
For example: if h and w are the height and width of the window, and I want to draw a line of length w/2 at a distance h/4 from the bottom, how would I do this in OpenGL?
I don't find any references telling the maximum and minimum values of coordinates.
My computer screen is 1024*768 so technically the limit should be:-
x coordinate: -512 to 512
y coordinate: -384 to 384
z coordinate: -inf to 0
But this doesn't work. Why? I need to know how the coordinate system works in OpenGL.
In OpenGL you can redefine the coordinate system to whatever you need. The default coordinate system is an identity transform from model space to clip space, which maps directly to NDC space. What this means is that the xy coordinate range [-1,1]² maps to the viewport you set with glViewport. However, by applying the right transformations you can alter that mapping to whatever you want or need.
So I strongly suggest you read some tutorial on the OpenGL transformation pipeline and how to use it.
Fixed Function pipeline approach http://www.opengl.org/wiki/Vertex_Transformation
And the modern approach http://arcsynthesis.org/gltut/Positioning/Tut03%20A%20Better%20Way.html
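For the concrete task in the question (a line of length w/2 at h/4 from the bottom), the usual fixed-function answer is to set up a pixel-space projection, e.g. glOrtho(0, w, 0, h, -1, 1), and then draw in pixel units. What that call does to x and y is just this linear mapping (a sketch of the mapping, not the GL implementation):

```cpp
#include <cassert>

// glOrtho(0, w, 0, h, -1, 1) maps pixel coordinates to NDC in [-1, 1].
void pixelToNDC(float x, float y, float w, float h, float& nx, float& ny) {
    nx = 2.0f * x / w - 1.0f;
    ny = 2.0f * y / h - 1.0f;
}

// With that projection active, the asker's line is simply:
//   glBegin(GL_LINES);
//   glVertex2f(w * 0.25f, h * 0.25f);  // start: centered horizontally
//   glVertex2f(w * 0.75f, h * 0.25f);  // length w/2, at h/4 from the bottom
//   glEnd();
```

This is why the screen resolution by itself imposes no coordinate limits: the projection you choose defines the usable range.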
I was having a difficult time with the generic coordinate system at first too.
What I wound up doing, which makes more sense to me, is working with the OpenGL coordinate system as if I were working with real objects in a real-world coordinate system.
What I did was: I took a blueprint of a TARDIS (From the TV Show Doctor Who), which had dimensions for building the blue box in inches and feet.
From there, I took the GL coordinate system, and for every "1" GL unit, I made that equivalent to "1" foot, or 12 inches.
And, based on 0,0,0 being the center point of my TARDIS, I just hand-drew the TARDIS through code as I saw it on the blueprint, translating the precise dimensions from what I saw there.
Here's a SMALL example of what I did:
glBegin( GL_QUADS );
glNormal3f( 1.0f, 0.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f); glVertex3f(15.0f/12.0f, myTop, 2.0f-(ONEINCH*0.25f));
glTexCoord2f(0.0f, 1.0f); glVertex3f(3.25f/12.0f, myTop, 2.0f-(ONEINCH*0.25f));
glTexCoord2f(1.0f, 1.0f); glVertex3f(3.25f/12.0f, myTop-(14.5f/12.0f), 2.0f - (ONEINCH*0.25f));
glTexCoord2f(1.0f, 0.0f); glVertex3f(15.0f/12.0f, myTop-(14.5f/12.0f), 2.0f - (ONEINCH*0.25f));
glEnd();
The first thing I learned in this exercise is - GL units are generic units. So applying a system I was more familiar with - feet and inches - made it SO much easier to focus on GL rather than what the heck does this unit mean?
Once I started the drawing, I was able to work with gluPerspective much more effectively. It helped me understand that the viewport resolution (not to be confused with screen resolution) should only have to be dealt with once, when issuing the gluPerspective call (as follows):
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,250.0f);
So to answer your question: there are no maximums for the floating-point values, only the levels of accuracy of the floats you're dealing with.
For instance, in my perspective example above, I put the near plane extremely close (0.1 GL units, which is 1.2 inches) and set the far distance to 250 GL units, or 250 feet.
I think that's in part what was intended with OpenGL to begin with. The units are generic because three-dimensional design makes so much more sense if you have a real-world unit system to measure it against.
My advice is: do not think of the GL units as having bounds or limitations, other than the degrees of error associated with floating-point inaccuracy at very small and very large scales.
In fact, I advise comparing them to real-world units. I work in feet and inches. The metric system and meters are quite common. If those don't work for you, make up one that does.
Here's the TARDIS I built:
https://universalbri.wordpress.com/2015/05/24/creators-journal-holodeck-management-system-progress-10/
I am working on a game with all of this, a Star Trek Online without combat, and am now doing database work (SQL Server). The OpenGL was part of the question: can I roll my own engine rather than leverage someone else's?
The answer is yes, I can, and in fact it's preferable.
Good luck

OpenGL Rotate, Translate, Push/Pop question

I'm pretty new to OpenGL, but I have been struggling with why this won't work (not trying to solve a problem necessarily, just looking to understand the fundamentals).
So basically this draws a grid of triangles that are rotating. What I'm trying to understand is whether, and how, the push and pop operations can be removed. When I take them out, I get a random scattering. I've tried reversing the rotation after the drawing is complete, and switching the translate and rotate commands. The way I see it, I should be able to avoid the push/pop and instead use the rotate each time to increment a few degrees further than the last time, but I'm obviously missing something.
The push/pop matrices are saving/restoring the previous matrix. You can't really avoid doing this, because you would be accumulating translations and rotations.
Remember: glTranslate does not position things. It builds a translation matrix and applies it to the current matrix, whatever that is. So if you do a translate, then a rotate, then another translate, the last translate happens in the rotated space, not in the same space as the first one.
OpenGL keeps an internal stack of transformation matrices. When you call glPushMatrix() it pushes all the matrices down by one, duplicating the one on top. This lets you apply transformations, rotations, scaling etc. to your heart's content, and, provided you call glPopMatrix() to remove your newly adjusted transform matrix when you're done, the rest of the 3D world won't change at all.
Note that when I say the rest of the 3D world, that includes your objects the next time your main loop loops. If you don't call glPushMatrix() and then glPopMatrix() you are permanently transforming the world and it will have an (unpredictable) effect on your objects.
So to wrap up, always call glPushMatrix() and glPopMatrix() at the beginning and end of your transformations, unless you know what you're doing and want to have an effect on the rest of the 3D world.
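To see why the push/pop pair can't simply be dropped, here is a tiny model of the stack in plain C++ (the names are mine; the real stack lives in the driver):

```cpp
#include <array>
#include <cassert>
#include <vector>

using Mat4 = std::array<float, 16>;

// A miniature glPushMatrix/glPopMatrix: push duplicates the top entry,
// every transform call mutates the top, and pop throws the mutations away.
struct MatrixStack {
    std::vector<Mat4> s;
    MatrixStack() { s.push_back(Mat4{}); }
    Mat4& top() { return s.back(); }
    void push() { s.push_back(s.back()); } // glPushMatrix: duplicate the top
    void pop()  { s.pop_back(); }          // glPopMatrix: restore previous
};
```

Without the push/pop, each triangle's translate and rotate would keep mutating the same top matrix, so the transforms accumulate across triangles and across frames, which is exactly the "random scattering" the question describes.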
You need to reverse both the translate and rotate commands (in reverse order):
glTranslatef(xF, yF, 0);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
// draw
glRotatef(-angle, 0.0f, 0.0f, 1.0f);
glTranslatef(-xF, -yF, 0);
As #Nicol points out below, it will probably be faster to use glPushMatrix/glPopMatrix, since glRotatef requires building up a rotation matrix with sines and cosines.
What's more, a rotation followed by its inverse rotation may not be exactly the identity due to floating point rounding errors. This means that if you rely on this technique in an inner loop (as you do above), you may accumulate some substantial floating point errors over the course of your entire draw call.