Since OpenGL is a state machine, I am constantly glEnable()-ing and glDisable()-ing things in my program. There are a select few calls that I make only at the beginning (such as glClearColor), but most others I flip on and off (like lighting, depending on whether I'm rendering a model, 3D text, or the GUI).
How do you keep track of what state things are in? Do you constantly set/reset these things at the top of each function? Isn't that a lot of unnecessary overhead?
For example, when I write a new function, sometimes I know what state things will be in when the function is called, and I leave out glEnable/glDisable and other state-switching calls at the top of the function. Other times I'm writing the function in advance, and I add these sorts of calls in defensively. So my functions end up being very messy: some of them modify OpenGL state, and others just make assumptions (that are later broken, and then I have to go back and figure out why something turned yellow or why another thing is upside down, etc.).
How do you keep track of OpenGL across functions in an object oriented environment?
Also related to this question is how to know when to use push and pop, and when to just set the value.
For example, let's say you have a program that draws some 3D stuff, then draws some 2D stuff. Obviously the projection matrix is different in each case. So do you:
set up a 3D projection matrix, draw 3D, set up a 2D projection matrix, draw 2D, loop
set up a 3D projection matrix once at the start of the program; then each frame draw 3D, push the matrix and switch to a 2D projection, draw 2D, pop the matrix, loop
And why?
Great question. Think about how textures work. There is an insane number of textures for OpenGL to switch between, and you need to bind the right texture before every object is drawn. You generally try to optimize this by drawing all objects with the same texture at once, but there's still a remarkable amount of state-switching going on. Because of this, I do my best to draw everything with the same state at once, but I'm not sure it's terribly important to optimize it the way you're thinking.
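As a hypothetical sketch of that batching idea (Draw, drawList, and drawObject are made-up names), sorting draws by texture means each texture gets bound once per frame instead of once per object:

#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct Draw { GLuint texture; /* plus geometry, transform, etc. */ };

void renderSortedByTexture(std::vector<Draw>& drawList)
{
    // Group draws that share a texture so glBindTexture runs rarely.
    std::sort(drawList.begin(), drawList.end(),
              [](const Draw& a, const Draw& b) { return a.texture < b.texture; });

    GLuint bound = 0;  // 0 = the default texture, assumed unused here
    for (const Draw& d : drawList) {
        if (d.texture != bound) {          // switch state only when it changes
            glBindTexture(GL_TEXTURE_2D, d.texture);
            bound = d.texture;
        }
        drawObject(d);                     // hypothetical per-object draw routine
    }
}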
As for pushing and popping between 3D and 2D projection modes, pushing and popping is intended to be used for hierarchical modeling. If you need your 2D projection to be at a location relative to a 3D object, by all means, push the matrix and switch to 2D projection mode, then pop when you're done. However, if you're writing a 2D GUI overlay that has a constant location on the screen, I see no reason why pushing and popping is important.
You might be interested in reading more about scene graph libraries. They are meant to manage your graphics from a higher level. They don't handle everything well, but they excel at organizing your geometry for optimized rendering. They can additionally be used to sort your scene based on OpenGL state, although many do not do this. By sorting on state, you can render your geometry in an order that results in the fewest state transitions. Here's a nice overview of scene graph libraries:
http://www.realityprime.com/articles/scenegraphs-past-present-and-future
I think the simplest approach would be to write your own render class that wraps all the OpenGL state-manipulation functions you use and does bookkeeping of the states you set. Keep in mind that changing screen resolution or toggling fullscreen mode will invalidate your current OpenGL render context's states and data, which means that after such an event you will have to set all states again, re-upload all textures and shader programs, etc.
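A minimal sketch of what that bookkeeping might look like, assuming a hypothetical RenderState class wrapping glEnable/glDisable:

#include <GL/gl.h>
#include <unordered_map>

// Skips redundant glEnable/glDisable calls by remembering what was last set.
class RenderState {
public:
    void setEnabled(GLenum cap, bool enabled) {
        auto it = cache_.find(cap);
        if (it != cache_.end() && it->second == enabled)
            return;                                  // already in that state
        if (enabled) glEnable(cap); else glDisable(cap);
        cache_[cap] = enabled;
    }
    // After a context loss (resolution change, fullscreen toggle), forget
    // everything so the next setEnabled() re-applies each state for real.
    void invalidate() { cache_.clear(); }
private:
    std::unordered_map<GLenum, bool> cache_;
};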
I've been hacking on a very old fixed-function-pipeline OpenGL 1.x game. I've been intercepting its OpenGL calls and injecting all sorts of modern OpenGL elements, such that I've achieved new lighting and added post-processing shaders (programmable pipeline).
I'd like to generate a shadow map (for real-time shadows) for this game, which does not currently support this. As I understand it, I need to perform multipass rendering, which the game currently does not do, and I need one of those passes to use the light source as the camera position to produce the shadow map.
I was thinking I could double up the draw calls (glDrawElements and glDrawArrays): for one of each pair, use gluLookAt to transform the scene to the perspective of the light position and render that to a framebuffer, then revert the camera and proceed with normal rendering.
I'm not sure I understand the OpenGL state well enough to do this, however, as every use of gluLookAt prior to the individual rendering calls gives me very funky results, even if I supply the current camera position pulled from the modelview matrix. By "funky" I mean some random vertices end up in the correct place while many do not, and they flash and move all over the place.
I assume I should not be trying to transform the camera at the moment of the individual render call for a vertex array, but I'm not sure what to do next, because I need two passes.
Question: Is there a specific time during the rendering calls that it makes sense to perform the gluLookAt, or a way to use it to transform the entire scene from within the context of a single rendering call?
The only other thought I have is actually recording all the rendering and matrix transformations in order and replaying them for a second pass.
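For what it's worth, here is one way the matrix math could work out in such a doubled draw call, and it may also explain the funky results: gluLookAt multiplies onto whatever is already on the modelview stack rather than replacing the camera. At draw time the fixed-function modelview holds V_cam * M (camera view times the object's model transform); to render the same geometry from the light you want V_light * M, which is V_light * V_cam^-1 * (V_cam * M). In the sketch below, lightView, cameraViewInverse, and drawToShadowFramebuffer are hypothetical names for things the interception layer would have to track itself:

GLfloat current[16];
glGetFloatv(GL_MODELVIEW_MATRIX, current);   // V_cam * M, column-major

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadMatrixf(lightView);                    // V_light
glMultMatrixf(cameraViewInverse);            // * V_cam^-1
glMultMatrixf(current);                      // * (V_cam * M) == V_light * M
drawToShadowFramebuffer();                   // replay the intercepted call
glPopMatrix();                               // back to the game's own camera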
So I've written a program that renders a mesh using a Vertex Buffer Object, and lets me move the camera around. I now want to make the object move independently of the camera/view.
However, I'm not sure how to go about moving my meshes through space. Googling turns up sources that either tell me to rotate objects with glRotatef() and company, or say that glRotatef() and its siblings are a bad idea because they are deprecated. Perhaps I'm not using the right search terms, but I'm not finding much that seems like a good starting point. I see vague references to matrix math, but I don't know how to use it or what approach to take. Other sources say I should apply a vertex shader to transform the objects.
I suppose I could manually reconstruct my mesh each frame, but that seems like a horrible idea (the meshes frequently have upwards of 50k triangles, and I'd like to have dozens of them at least), and I don't really need to keep the vertices around in main memory if they are already stored in a VBO... right?
So how do I go about manipulating meshes that are stored in VBOs independently of the global space? What resources should I use in learning to do so?
You should be using your ModelView matrix to apply transformations to your vertices. To apply a transformation to a particular object/mesh and not to the entire screen, push a copy of your ModelView matrix onto the stack, apply your transformation, draw your object, then pop that matrix off to go back to your old ModelView matrix.
No need to recompute your vertex positions! That's exactly what these matrices are designed to help you avoid. And the fact that they're stored in a VBO won't matter to you - vertices passed to OpenGL manually are treated exactly the same.
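A minimal sketch of that pattern, with pos, angle, meshVBO, and vertexCount standing in for your own data:

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                           // copy the current ModelView
glTranslatef(pos.x, pos.y, pos.z);        // move just this mesh
glRotatef(angle, 0.0f, 1.0f, 0.0f);       // and spin it about the Y axis

glBindBuffer(GL_ARRAY_BUFFER, meshVBO);   // vertices stay on the GPU
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, nullptr); // offset 0 into the bound VBO
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);

glPopMatrix();                            // other objects are unaffected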
And you might want to check out this question, and the transformation article its accepted answer links to - they'll be useful if you're still getting the hang of transformations and the matrix stack.
Hope that helps!
Edit: A quick example of why the stack is useful. Say you're drawing a simple scene: a guy on a raft (with a sail) in the ocean.
First, you'll want to set up your camera angle, so do whatever transformations you need to set that up. You don't need - and in fact don't want - to push and pop matrices here, because these transformations apply to everything in your scene (In OpenGL, moving the camera = moving the entire world. Weird to think about, but you get used to it.).
Then you draw your ocean. No need to transform it, 'cause it's a static object, and doesn't move.
Then you draw your raft. But your raft has moved! It's drifted along the X axis. Now, since the raft is an independent object and transformations that apply to the raft shouldn't apply to the larger world, you push a matrix onto the stack. This copies the existing ModelView matrix. All those camera transformations are already applied; your "drifting" transformation on the raft is in addition to the transformations you did at lower levels of the stack.
Draw the raft. Then, before you pop that matrix off the stack, draw the things that are on the raft - the guy and the sail. Since they move with the raft, all the transformations that apply to the raft should be applied to them, too.
Say you draw your castaway first. But he's moved too - he's jumping into the air. So you push another matrix onto the stack, apply a "jumping" transformation, and then render your person. If there's anything that should move with the person - if he were holding anything, say - you'd draw it here, too. But he's not. So pop the "jumping" matrix off the stack.
Now you're back in the "raft" context. Since you applied the "jumping" transformation to a copy, the "drifting" transformation was left untouched a stack level down. Draw the sail now, and it'll be on top of the raft, right where it should be.
And then you're done with the raft, so you can pop that matrix off the stack too. You're back down to your plain camera transform. Draw some more static geometry - islands or something.
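In rough code, with every helper name made up, that whole scene is just nested pushes and pops:

applyCameraTransforms();                 // no push/pop: applies to everything
drawOcean();                             // static, drawn in world space

glPushMatrix();                          // enter the "raft" context
glTranslatef(driftX, 0.0f, 0.0f);        // the raft has drifted along X
drawRaft();

glPushMatrix();                          // enter the "castaway" context
glTranslatef(0.0f, jumpHeight, 0.0f);    // he's jumping
drawCastaway();
glPopMatrix();                           // back to the "raft" context

drawSail();                              // drifts with the raft, no extra push
glPopMatrix();                           // back to the plain camera transform

drawIslands();                           // more static geometry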
And that's why the matrix stack is useful. It's also why people build more complicated scenes as "scene graphs" - so they can keep track of the nesting of transformations. It's also useful in skeletal animation, where the position of the wrist depends on the position of the elbow, which depends on the position of the shoulder, and so forth.
And that was way longer than I expected - but hopefully useful. Cheers!
If I was making a 3D engine, the answer to this question would be clear: I'd go for using the depth buffer instead of thinking of sorting all my polygons on my own.
However, this is a different situation with 2D, because here layers can be implemented easily without the help of OpenGL - and then you could even sort and move sprites within layers (which isn't possible in OpenGL, AFAIK).
(Why) should I use the OpenGL depth buffer instead of a C++ layer system running on the CPU?
How much slower would the depth buffer version be?
It is clear to me that making a layer system in C++ would impose next to no performance cost at all, as I have to iterate over the sprites for rendering in any case.
I would suggest you do it in software, since you probably want to use transparency on your sprites, and that implies you render them from back to front. Also, sorting a couple of sprites shouldn't be that CPU-demanding.
Use both, if you can.
Depth information is nice for post-processing and stuff like 3D-glasses, so you shouldn't throw it away. These kinds of effects can be very nice for 2D games.
Also, if you draw your (opaque) layers front to back, you can save fill-rate, because the Z-buffer can reject the hidden pixels for you (depth tests are faster than actual drawing).
Depth testing is usually almost free, especially when the GPU has hierarchical Z information. Because of this and the fill-rate savings, using depth testing will probably even be faster.
On the other hand, the software sorting is nice so you can actually do front to back rendering for opaque sprites and it's mandatory to do alpha-blending right (of course, you draw these sprites back to front).
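A sketch of that two-pass order (Sprite, drawSprite, and the two lists are assumptions; smaller z is taken to mean nearer):

// Pass 1 - opaque sprites, front to back: the Z-buffer then rejects
// hidden pixels early, which is where the fill-rate saving comes from.
std::sort(opaque.begin(), opaque.end(),
          [](const Sprite& a, const Sprite& b) { return a.z < b.z; });
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);                    // the opaque pass writes depth
glDisable(GL_BLEND);
for (const Sprite& s : opaque) drawSprite(s);

// Pass 2 - transparent sprites, back to front: test against the opaque
// depth but don't write it, so alpha blending composites correctly.
std::sort(transparent.begin(), transparent.end(),
          [](const Sprite& a, const Sprite& b) { return a.z > b.z; });
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
for (const Sprite& s : transparent) drawSprite(s);
glDepthMask(GL_TRUE);                    // restore for the next frame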
Direct answers:
using the depth buffer on the GPU lets you dynamically adjust the draw order of things without any on-CPU shuffling, and frees you from having to assign things to different layers in situations where doing so is a bit of a fiction - for example, you could have effects like projectiles that come from the background towards and then in front of the player, without having to figure out which layer to assign them to all the time
on the GPU, the use of a depth buffer has no measurable cost, even on an embedded chip, a plug-in card from more than a decade ago, or an integrated part; depth buffers are so fundamental to modern GPUs that they've been optimised down to costing nothing in practical terms
However, I'd imagine you actually want to do it on the CPU, for the simple reason of treating transparency correctly. A depth buffer stores one depth per pixel, so if you draw a nearby transparent object and then attempt to draw something behind it, the thing behind won't be drawn, even though it should be visible through the transparent parts. In a 2D game it's likely that anti-aliasing will give your sprites partially transparent edges; if you submit drawing to the GPU in draw order, your partial transparencies will always be composited correctly. If you leave it to the z-buffer, you risk weird-looking fringing.
I'm trying to build a (simple) game engine using C++, SDL and OpenGL, but I can't seem to figure out the next step. This is what I have so far...
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure if the problem I am having relates to how to store an actors position or how to create a rendering queue.
I have read that it is efficient to create a rendering queue that draws opaque polygons front to back and then draws transparent polygons back to front. Because of this, my actors make calls to the "queueTriangle" method of the scene renderer object. The scene renderer then stores a pointer to each of the actors' triangles, sorts them based on their position, and then renders them.
The problem I am facing is that for this to happen the triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know these coordinates!
Could someone please, please, please offer me a solution, or perhaps link me to a (simple) guide on how to solve this?
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix you get from its internal quaternion to transform the vertices into camera space, so you can sort triangles from back to front.
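For instance, a depth sort key could be as simple as the eye-space z of each triangle's position. Here "view" is assumed to be the camera's column-major 4x4 view matrix, the same layout OpenGL itself uses:

// Eye-space depth of a world-space point; only the Z row of the
// matrix-vector product is needed for a sort key.
float eyeDepth(const float view[16], float x, float y, float z)
{
    return view[2] * x + view[6] * y + view[10] * z + view[14];
}

// OpenGL eye space looks down -Z, so farther points are more negative;
// sorting triangles ascending by eyeDepth() yields back-to-front order.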
A "queueTriangle" call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you'd normally hardly ever be working with anything at the level of a single triangle. And if you were changing textures a lot to accomplish this ordering, that is even worse.
I'd recommend a simpler approach: draw your opaque polygons in a much less rigorous order by sorting the actor positions in world space rather than the positions of individual triangles, and render the actors from front to back, one actor at a time. Your transparent/translucent polygons still require the back-to-front approach (provided you're not using premultiplied alpha), but everything else should be simpler and faster.
I'm building a simple solid modeling application. Users need to be able to manipulate object in both orthogonal and perspective views. For example, when there's a box in the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center so that the user can move the mouse over such a handle and drag it to enlarge or move the box.
What strategies are there to do this, and which one is the best one? I can think of two obvious ones:
1) Treat the handles as 3D objects. I.e., for a box, add small boxes to the scene at the corners of the 'main' box. Problems: this won't work as-is in perspective view; I'd need to determine the size of the boxes relative to the current zoom level (the handles need to have the same apparent size no matter how far the user is zoomed in/out).
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2D locations of the corners somehow, and use regular 2D drawing techniques to draw the handles. Problems: how will I do hit-testing? I'd need a two-stage hit-testing approach as well; and how do I draw in 2D on top of a 3D-rendered image? Fall back to GDI?
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.
I would treat the handles as 3D objects. This provides many advantages: it's more consistent, they behave well, hit testing is easy, etc.
If you want the handles to be a constant size, you can still treat them as 3D objects, but you will have to scale their size appropriately based on the distance to the camera. This is a bit of a hassle, but since there are typically only a few handles, and they are usually small objects, it should be fine performance-wise.
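A sketch of the scaling math, assuming a perspective camera with vertical field of view fovY (in radians):

#include <cmath>

// World-space scale that keeps a handle roughly 'pixels' tall on screen.
float constantScreenScale(float distToCamera, float fovY,
                          int viewportHeight, float pixels)
{
    // World-space height covered by one pixel at that distance.
    float worldPerPixel =
        2.0f * distToCamera * std::tan(fovY * 0.5f) / viewportHeight;
    return pixels * worldPerPixel;   // feed this to glScalef before drawing
}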
However, I'd actually say let the handles scale with the scene. As long as you pick a rendering style that makes the handles stand out (e.g. bright orange boxes), the perspective effects (smaller handles in the background) actually make working with them easier for the end user in many ways. It is difficult to get a sense of depth from a 3D scene, and the perspective effects on the handles provide extra visual cues as to how "deep" a handle is in the screen.
First off, project the handle/corner coordinates onto the camera's plane (effectively converting them to 2D coordinates on the screen); normalize these against the screen dimensions.
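gluProject will do that projection for you; handleX/Y/Z below stand in for one corner's world-space coordinates:

GLdouble model[16], proj[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, viewport);

GLdouble winX, winY, winZ;
gluProject(handleX, handleY, handleZ,     // world-space handle corner
           model, proj, viewport,
           &winX, &winY, &winZ);          // winX/winY are window pixels
// winY is measured from the bottom of the window; if your mouse
// coordinates start at the top, compare against viewport[3] - winY.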
Here's some simple code to enable orthogonal/2D-overlay drawing:
void enable2D()
{
    // Save the current projection matrix, then replace it with an
    // orthographic projection that maps one unit to one screen pixel.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    GLint wind[4];
    glGetIntegerv(GL_VIEWPORT, wind);   // x, y, width, height
    glOrtho(0, wind[2], 0, wind[3], -1, 1);
    // Save the current modelview matrix too, and reset it to identity.
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void disable2D()
{
    // Restore the projection and modelview matrices saved by enable2D().
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
enable2D() saves the current modelview/projection matrices, replaces the projection matrix with an orthographic one that maps directly to screen pixels (i.e. the width/height of the viewport), and resets the modelview matrix to identity.
After making this call, you can make glVertex2f() calls using screen/pixel coordinates, allowing you to draw in 2D! (This will also allow you to hit-test since you can easily get the mouse's current pixel coords.)
When you're done, call disable2D() to restore your old modelview/projection matrices :)
The hardest part is computing where the hitboxes fall on the 2D plane and dealing with overlap (if two handles project to the same place, which do you select on click?).
Hope this helped :)
I've coded up a manipulator with handles for a 3d editing package, and ran into a lot of these same issues.
First, there's an open-source manipulator out there. I couldn't find it in my most recent search, probably because there's a plethora of names for these things: 3D widgets, gizmos, manipulators, gimbals, etc.
Anyhow, the way I did it was to add a manipulator object to the scene that, when drawn, draws all of the handles. It does the same thing for bounding box computation, and selection.
Reed's idea about keeping them the same size is interesting for handles that exist on objects, and might work there. For a manipulator, I found that it was more of a 3D UI element, and it was much more usable if it did not change size. I had a bug where the size was determined based only on the active viewport, which resulted in horribly huge or tiny manipulators in the other viewports - quite useless. If you're going to add them to the scene, you might want to add them per viewport, or make them actually have a fixed size.
I know the question is really old. But just in case someone needs it:
Interactive Techniques in Three-dimensional Scenes (Part 1): Moving 3D Objects with the Mouse using OpenGL 2.1
The article is good and has an interesting links section at the bottom.