How to rotate vertices exactly like with glRotatef() in OpenGL? - c++

I need to optimize my rendering code. Currently I'm using glPushMatrix() with glTranslatef() and glRotatef(), but this costs more than rendering all the objects in a single vertex array call.
Is there some fast built-in function in OpenGL to rotate my vertex data exactly like glRotatef() would? If not, what library or method would you recommend? The only method I know is to use sin/cos functions to rotate the vertices, but I'm not sure whether that is the fastest way to do it, or whether it would even produce the same result.
I only need to rotate around one axis once per object (2D rendering), so it doesn't need to be super complicated or support the full glPushMatrix() stack.
Edit: I don't want to use shaders for this.
Edit2: I am only rendering individual quads (in 2D mode) which are rotated around the origin, so each vertex coordinate ranges from, say, -10 to 10. My current code: (quad.vert[i].x*cosval)-(quad.vert[i].y*sinval) (and the same again for y).
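That sin/cos approach produces exactly the same result as glRotatef() around the Z axis, as long as the full 2x2 rotation is applied to both coordinates and sin/cos are evaluated once per object rather than per vertex. A minimal sketch, assuming a quad/vertex layout like the snippet above (the struct and function names here are made up for illustration):

#include <cmath>

// Hypothetical layout matching the snippet in the question.
struct Vertex { float x, y; };
struct Quad   { Vertex vert[4]; };

// Rotate a quad around the origin by angleDegrees; for these vertices this is
// equivalent to what glRotatef(angleDegrees, 0, 0, 1) would do.
void rotateQuad(Quad &quad, float angleDegrees)
{
    const float rad    = angleDegrees * 3.14159265358979f / 180.0f;
    const float cosval = std::cos(rad);   // compute sin/cos once per object,
    const float sinval = std::sin(rad);   // not once per vertex
    for (int i = 0; i < 4; ++i) {
        const float x = quad.vert[i].x;   // keep the old x, it is needed for y
        const float y = quad.vert[i].y;
        quad.vert[i].x = x * cosval - y * sinval;
        quad.vert[i].y = x * sinval + y * cosval;
    }
}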

I'm assuming you're using an old version of OpenGL, since you're using glRotate, and truly ancient/strange hardware since you don't want to use shaders.
You can put the glRotate*() calls in a display list, or compute the rotation matrices yourself. Chapter 3 of the OpenGL Red Book together with appendix F has the information you need to construct the matrices yourself. Look at chapter 7 for more information about display lists.
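If the transform for a given object changes only rarely, a display list lets you record the fixed-function calls once and replay them cheaply. A rough sketch (objX, objY, angleDegrees and drawObject() are placeholders for your own data and draw code):

GLuint list = glGenLists(1);

glNewList(list, GL_COMPILE);
glPushMatrix();
glTranslatef(objX, objY, 0.0f);
glRotatef(angleDegrees, 0.0f, 0.0f, 1.0f);
drawObject();                  // your existing vertex-array draw call
glPopMatrix();
glEndList();

// Every frame thereafter:
glCallList(list);

Keep in mind the list has to be rebuilt with glNewList whenever the rotation changes, so this only pays off for transforms that stay constant across many frames.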

Related

How do I access a transformed OpenGL modelview matrix [tried glGetFloatv()]?

I am trying to rotate around the 'x' axis and save the transformed matrix so that I can use it to rotate further later, or around another axis from the already-rotated perspective.
//rotate
glRotatef(yROT, model[0], model[4], model[8]); // front over right axis
//save model
glGetFloatv(GL_MODELVIEW_MATRIX, model);
Unfortunately I noticed that OpenGL must buffer the transformations, because the identity matrix is what ends up loaded into model. Is there a work-around?
Why, oh God, would you do this?
I have been toying around with attempting to understand quaternions, Euler angles, and axis-angle rotation. The concepts are not difficult, but I have been having trouble with the math even after looking at examples. *edit [and most of the open classes I have found are either not well documented for simpleton users or have restrictions on movement]
I decided to find a way to cheat.
edit*
By 'further later' I mean in the next loop of code. In other words, yRot is the number of degrees I want my view to rotate from the saved perspective.
My suggestion: Don't bother with glRotate at all; it was never very pleasant to work with in the first place, and no serious program ever really used it.
If you want to use the fixed-function pipeline (= no shaders), use glLoadMatrix to load whatever transformation you currently need. With shaders you have to do conceptually the same thing with glUniform anyway.
Use an existing matrix math library, like GLM, Eigen or linmath.h, to construct the transformation matrices. The nice benefit is that you can make copies of a matrix at any point, so instead of fiddling with glLoadIdentity, glPushMatrix and glPopMatrix you just make copies where you need them and work from those.
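For example, with GLM the modelview for one object can be built on the CPU and handed straight to the fixed-function pipeline. A rough sketch, where 'view', 'position', 'angleDegrees' and drawObject() are placeholders for your own data, not drop-in code:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 model(1.0f);
model = glm::translate(model, position);                    // position: glm::vec3
model = glm::rotate(model, glm::radians(angleDegrees),
                    glm::vec3(0.0f, 0.0f, 1.0f));           // rotate around Z

glm::mat4 modelview = view * model;                         // 'view' built the same way, copied freely

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(glm::value_ptr(modelview));                   // replaces the glRotate/glTranslate calls
drawObject();                                               // placeholder draw call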
BTW: There is no such thing as "models" in OpenGL. That's not how OpenGL works. OpenGL draws points, lines or triangles, one at a time, where each such primitive is transformed individually to a position on the (screen) framebuffer and turned into pixels. Once a primitive has been processed, OpenGL has already forgotten about it.

Is my situation a good case to use GL_STATIC_DRAW?

I have a textured polygon mesh that I plan to be movable based on the user's various inputs.
For example: the user can move the vertices in various directions. But the number of vertices and the texture coordinates will always be constant.
Is this a good situation to use GL_STATIC_DRAW, or should I use something else, like GL_STREAM_DRAW?
Instead of updating a VBO every time the vertices are moved, I would suggest using transformations. With transformations, you can create a matrix that can translate, rotate, or scale the vertices by simply multiplying the transformation matrix by the position vector. This multiplication can be done on the graphics card with a GLSL shader. Using this method, your vertex buffer would never have to change.
I would suggest reading this article for more information on how to use transformations in OpenGL: https://open.gl/transformations
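A minimal sketch of that idea, assuming a linked shader program whose vertex shader declares 'uniform mat4 uModel;' and a VAO whose mesh was uploaded once with GL_STATIC_DRAW (program, vao, offsetX/offsetY and vertexCount are placeholders):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Move the whole mesh by changing one matrix each frame; the VBO never changes.
glm::mat4 model(1.0f);
model = glm::translate(model, glm::vec3(offsetX, offsetY, 0.0f));   // user-driven offset

glUseProgram(program);
GLint loc = glGetUniformLocation(program, "uModel");
glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(model));

glBindVertexArray(vao);                     // the static, never-rewritten mesh
glDrawArrays(GL_TRIANGLES, 0, vertexCount);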
No, your situation is not a good case to use GL_STATIC_DRAW. As h4lcOn's link suggests, you should use dynamic or stream. Though if I understand correctly what you are trying to do, I wouldn't even use a VBO at all. There will not be much overhead (if any at all) if you push the coordinates every draw call for a simple polygon. Use a VBO in cases where you have a large quantity of polygons or where you make a large number of draw calls with the same vertex data in a single frame.

In a big OpenGL game with lots of 3D moving objects, how are the points typically updated?

1. Points calculated by your own physics engine and then sent to OpenGL every time it has to display, e.g. with glBufferSubDataARB, with the updated coordinates of a flying barrel
2. There are lots of barrels sharing the same vertex coordinates, but somehow for each one you tell OpenGL to use a different matrix transformation. When a barrel moves you update its transformation matrix to reflect which way it rotated/translated in the world.
3. Some other way
Also, if the answer is #2, is there any easy way to do it, e.g. with abstracted code rather than manipulating the matrices yourself?
OpenGL is not a scene graph, it's a drawing API. The most recent versions of OpenGL (OpenGL-3 core and above) reflect this by not managing matrix state at all. Indeed the answer is #2, more or less. And actually you are expected to deal with the matrix math yourself; OpenGL-3 no longer provides any primitives for that.
Usually a physics engine sees an object as a rigid body with a convex hull. The natural way to represent such a body is with a 4×3 matrix (a 3×3 rotation matrix plus a translation vector), so if you're using a physics engine you're presented with such matrices anyway.
Also you must understand that OpenGL doesn't maintain a scene, so there is nothing you "update". You just draw your data using OpenGL. Matrices are loaded as they are needed.
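In practice option #2 looks something like the following: one shared barrel mesh uploaded once, and each frame the matrix produced by the physics engine for each barrel is loaded before drawing it. This is only a rough sketch (the attribute setup and names like barrelVbo and barrelVertexCount are placeholders), written with fixed-function glMultMatrixf for brevity; in core-profile GL you would upload the same matrix as a shader uniform instead:

#include <vector>

struct Barrel {
    float transform[16];   // 4x4 column-major matrix: rotation + translation from the physics engine
};

void drawBarrels(const std::vector<Barrel> &barrels)
{
    glBindBuffer(GL_ARRAY_BUFFER, barrelVbo);      // shared mesh, uploaded once at startup
    // ... vertex attribute / pointer setup for the barrel mesh ...

    for (const Barrel &b : barrels) {
        glPushMatrix();
        glMultMatrixf(b.transform);                // place this barrel in the world
        glDrawArrays(GL_TRIANGLES, 0, barrelVertexCount);
        glPopMatrix();
    }
}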

Rendered 3D scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be e.g. an array of three-dimensional vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
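A rough sketch of the readback step, assuming an orthographic projection set with glOrtho and an identity modelview, so the window coordinates map linearly back to eye space; everything here is illustrative rather than drop-in code:

#include <vector>

struct PointXYZRGBA { float x, y, z; unsigned char r, g, b, a; };

std::vector<PointXYZRGBA> readPointCloud(int width, int height,
                                         float left, float right,
                                         float bottom, float top,
                                         float zNear, float zFar)
{
    std::vector<float>         depth(width * height);
    std::vector<unsigned char> color(width * height * 4);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, color.data());

    std::vector<PointXYZRGBA> cloud;
    cloud.reserve(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int i = y * width + x;
            if (depth[i] >= 1.0f) continue;                     // background, no geometry here
            PointXYZRGBA p;
            p.x = left   + (right - left)   * (x + 0.5f) / width;
            p.y = bottom + (top   - bottom) * (y + 0.5f) / height;
            p.z = -(zNear + (zFar - zNear) * depth[i]);         // ortho depth is linear in eye space
            p.r = color[i * 4 + 0];  p.g = color[i * 4 + 1];
            p.b = color[i * 4 + 2];  p.a = color[i * 4 + 3];
            cloud.push_back(p);
        }
    }
    return cloud;
}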
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or equivalently for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

Fast rotate and translate without using glRotate / glTranslate

I have an octagon which I need to rotate and translate to 10,000 different locations/angles. The angles and coordinates change dynamically.
If I use glRotate and glTranslate in immediate mode, it would be too slow due to all the going back and forth between client/server.
If I use glRotate and glTranslate on a Display List, it will be fast, but I am avoiding Display List because it is deprecated.
If I use a VBO, I have to pre-rotate and pre-translate the octagon on the CPU prior to uploading it to server memory. This works, but takes lots of CPU time.
So I am wondering... is there any way to translate/rotate vertices stored in a VBO without resorting to CPU-based computation? Is there a VBO equivalent for applying rotate/translate values stored in server memory? I would really love the GPU to do all the calculations and free my CPU from all the trig functions.
I would use a VBO and regular glRotate and glTranslate (or provide a matrix to a vertex shader using glUniformMatrix). I don't think it will slow down the rendering!
You can use GLSL to write a shader that handles the transformation for you. However, you will need to make the transformation matrix available to the shader somehow.
If you are doing this in 2D, there's a similar question (for quads, but the theory is the same) on the game development stack exchange: Basics of drawing in 2D with OpenGL 3 shaders.
Note that the second answer for that question, which gives more details, has a link to OpenGL.org which has a broken anchor. I believe it should link to Instanced arrays.
An example of instancing I found after a quick Google search: Shader instancing. In this tutorial you probably want to look at the vertex shader for an example of a transformation matrix applied using a texture buffer for storing the matrix. The example code is Delphi, but it should be readable. The site is in German, but you can always use Google Translate.
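For the 2D octagon case, the same per-instance idea can also be done with instanced arrays instead of a texture buffer. A rough sketch of the client-side setup (requires GL 3.3 or ARB_instanced_arrays; octagonVertexCount is a placeholder, attribute 0 is assumed to hold the octagon vertices, and the vertex shader is assumed to read the per-instance (x, y, angle) attribute and build the rotation from it):

#include <vector>

struct Instance { float x, y, angle; };            // one entry per octagon placement
std::vector<Instance> instances(10000);            // filled/updated by your game logic

GLuint instanceVbo;
glGenBuffers(1, &instanceVbo);
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(Instance),
             instances.data(), GL_STREAM_DRAW);    // re-uploaded whenever placements change

glEnableVertexAttribArray(1);                      // attribute 1 = per-instance data
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Instance), (void *)0);
glVertexAttribDivisor(1, 1);                       // advance once per instance, not per vertex

// One draw call renders all 10,000 octagons; the GPU does the trig in the vertex shader.
glDrawArraysInstanced(GL_TRIANGLE_FAN, 0, octagonVertexCount, (GLsizei)instances.size());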