How can I change an object's rate of rotation in OpenGL? - c++

I'm working with a simple rotating polygon that should speed up when the user clicks and drags upward and slow down when the user clicks and drags downward. Unfortunately, I have searched EVERYWHERE and cannot seem to find any specific GL function or variable that will easily let me manipulate the speed (I've searched for "frame rate," too...)
Is there an easy call/series of calls to do something like this, or will I actually need to do things with timers at different segments of the code?

OpenGL draws stuff exactly where you tell it to draw the stuff. It has no notion of what was rendered the previous frame or what will be rendered the next frame. OpenGL has no concept of time, and without time, you cannot have speed.
Speed is something you have to manage yourself. Generally, this is done by taking your intended speed, multiplying it by the time interval since you last rendered, and adding the result to the current rotation (or position). Then render at that new orientation/position. This is all up to you, however.
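For example, here is a minimal sketch of that idea with GLUT, assuming a double-buffered window with this function registered as the display callback (the variable names and the exact drag-to-speed mapping are my own; adapt to your setup):

    #include <GL/glut.h>

    static float angle     = 0.0f;  // current rotation in degrees
    static float degPerSec = 90.0f; // angular speed; adjust this on mouse drag
    static int   lastMs    = 0;     // timestamp of the previous frame

    void display(void)
    {
        // Measure the time elapsed since the last frame.
        int nowMs = glutGet(GLUT_ELAPSED_TIME);
        float dt  = (nowMs - lastMs) / 1000.0f;
        lastMs    = nowMs;

        // Integrate: new angle = old angle + speed * elapsed time.
        angle += degPerSec * dt;

        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 0.0f, 1.0f);

        glBegin(GL_TRIANGLES); // the polygon being rotated
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();

        glutSwapBuffers();
        glutPostRedisplay(); // keep animating
    }

Your mouse-motion callback then only has to raise or lower degPerSec; the rotation rate follows automatically, independent of the frame rate.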

Related

OpenGL 2D agents animation and scenario refresh

I'm trying to write a little 2D game to learn how to do it and improve my programming skills. I'm programming the game in C++/C using OpenGL 3.0 with GLUT.
I'm confused about some important concepts regarding animation and scene refresh:
1. Is it good practice to load all the textures only when the level begins?
2. I chose a frame rate of 40 fps; should I redraw the whole scene and all the agents in every frame, or only the modifications?
3. In an agent animation, should I redraw the entire agent or only the parts that changed since the last frame?
4. If some part of the scene changes (a wall or something similar is destroyed), do I need to redraw the entire scene or only the part that changed?
Now my "game" works with a framerate of 40fps but the game has a flickering effect that looks really weird.
1. Yes, creating and deleting textures/buffers every frame is a huge waste.
2. It's almost always cheaper to just redraw the entire scene. GPUs are built to do this; it's very fast. Reading the framebuffer from VRAM back to regular RAM and calculating the difference would be much slower, especially since OpenGL doesn't keep track of your "objects": it takes one triangle at a time, rasterizes it, then forgets about it.
3. Depends on how you define the animation. If you're talking about sprite-like animation, where each frame is a separate image, then it's cheapest to just refer to the new texture and redraw. If you've got a texture atlas, update the texture coordinates and redraw, and if you're using shaders (you pretty much have to if you want to be OpenGL 3.0), you might be able to get away with a uniform that offsets the texture coordinates.
4. Yeah, as I said before, the hardware is built to clear the screen and redraw everything.
And for the framerate, you should be using the monitor's refresh rate to avoid vertical tearing. Pretty much all monitors now are 60 Hz, so 60 fps is the most common "target" framerate.
Choose either 30 or 60 fps, as most modern monitors refresh at a 60 Hz rate. That way each rendered frame is shown for either two or one monitor refreshes, which should reduce flickering effects. (I'm not 100% sure this is what you mean by the "flickering effect".)
Regarding all other questions (which sound pretty much the same): In OpenGL rendering, redrawing everything is pretty common, as in most games almost the entire screen changes in every frame, for example if you're moving around. You could do a partial screen update, but it's very uncommon and more expensive on the CPU side, as you have to compute which parts to draw instead of just "draw everything".
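If you want to hold GLUT to a fixed target rate rather than redrawing as fast as possible, the usual approach is a timer callback; a minimal sketch targeting 60 fps:

    #include <GL/glut.h>

    void onTimer(int value)
    {
        glutPostRedisplay();                  // schedule a full redraw
        glutTimerFunc(1000 / 60, onTimer, 0); // re-arm for roughly 60 fps
    }

    // in main(), after glutCreateWindow(...):
    //     glutTimerFunc(1000 / 60, onTimer, 0);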
1. Yes.
2-4. Yes. Hopefully this helps you understand why you must redraw everything:
Imagine you have two pieces of paper. On the first paper you draw a stick man standing still, and show that to somebody.
While they are looking at the first paper, you draw the same thing again on the second paper, but this time you move the arm a little bit.
Now you show them the second paper. As they look at it, you clear the first paper and draw the man with his arm moved a little bit more.
This is pretty much how it works, and it is the reason you must always render the whole image, even when very little has changed.
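The two pieces of paper are exactly what double buffering does, and single buffering is a classic cause of flicker in GLUT. A sketch of the double-buffered redraw-everything loop (drawScene and drawAgents are illustrative stand-ins for your own drawing code):

    #include <GL/glut.h>

    void drawScene()  { /* illustrative: draw the whole level here */ }
    void drawAgents() { /* illustrative: draw every agent here */ }

    void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // wipe the back "paper"
        drawScene();       // redraw everything, every frame
        drawAgents();
        glutSwapBuffers(); // show the back paper; the old front becomes the new back
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH); // two "papers"
        glutCreateWindow("game");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }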

OpenGL: which strategy to render a tree menu?

So, what we need to implement is a tree menu with several nodes (up to hundreds). A node can have children and can then be expanded/collapsed. There is also background highlighting on mouse-over and background highlighting on mouse selection.
Each node has a box, an icon and a text, which can be very long, occupying the whole screen width.
This is an example of an already working solution:
Basically I am:
rendering the text a first time, just to get the length of a possible background highlight
rendering the boxes and icon textures (yeah I know, they are upside down at the moment)
rendering the text a second time, first all the bold text and then all the normal text
This solution actually has a relatively acceptable performance impact.
Then we tried another way: drawing the tree menu with Java's Graphics object, returning it as a BufferedImage, and creating one big texture from it at the end to render. All of this is obviously done at every node collapse/expand and at each mouse movement.
This performed much better, but Java seems to have big trouble handling the old BufferedImages. Indeed, RAM consumption increases constantly, and forcing a garbage collection only slows the growth down slightly, but still...
Moreover, performance drops, since the garbage collector is invoked every time and does not seem lightweight at all.
So what I am going to ask you is: which is the best strategy for my needs?
Would it maybe also be feasible to render each node to a separate texture (actually three: one normal, one with a light background for mouse-over, and a last one with a normal background for mouse selection) and then, at each display(), just combine all these textures according to the current tree-menu state?
For the Java-approach: If the BufferedImage hasn't changed in size (the width/height of your tree control), can't you reuse it to avoid garbage collection?
For the GL-approach, make sure you minimize texture switches. How do you render text? You can have a single large texture that contains all the normal and bold letters and just use different texture coordinates for each letter.
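A sketch of the atlas idea in plain C-style GL (the 16x16 ASCII layout and the uniform cell size are assumptions; a real font atlas would store per-glyph metrics):

    #include <GL/gl.h>

    // Draw one character as a textured quad by picking its cell out of a
    // 16x16-glyph atlas texture (assumed already bound). ASCII only.
    void drawGlyph(unsigned char c, float x, float y, float size)
    {
        const float cell = 1.0f / 16.0f; // each glyph occupies 1/16 of the atlas
        float u = (c % 16) * cell;       // column of the glyph in the atlas
        float v = (c / 16) * cell;       // row (flip if your atlas is upside down)

        glBegin(GL_QUADS);
        glTexCoord2f(u,        v + cell); glVertex2f(x,        y);
        glTexCoord2f(u + cell, v + cell); glVertex2f(x + size, y);
        glTexCoord2f(u + cell, v);        glVertex2f(x + size, y + size);
        glTexCoord2f(u,        v);        glVertex2f(x,        y + size);
        glEnd();
    }

Bold glyphs can live in another region of the same texture, so switching between normal and bold text costs nothing but different coordinates.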

OpenGL object & camera resistance?

I'm developing a very basic game in C++ with OpenGL and GLUT where you move the "camera" around as the player.
In short:
My camera slows down when I look at a snowman
Full explanation:
Everything was fine until I decided to finally add in an object (a giant snowman in fact), but now I've added it, I'm experiencing very odd behaviour.
If I look at the snowman object and attempt to move forward, it feels like I'm moving against a force, as if I were walking through mud.
Now, if I face away from the snowman and "walk" backwards with the camera, it moves completely fine, but when I look at it... I slow down. I've tried different scales of the snowman, and the larger the snowman is, the further away I can feel the effect.
Note, though, that it doesn't appear to cause lag, only a slowdown.
Any insights would be greatly appreciated, and I will post code if needed, but currently.. I have no idea what code would be relevant!
When you say it slows down, do you mean your frame rate drops? It sounds like your snowman is very polygon-heavy; when it's being rendered, it causes a drop in frame rate that slows things down.
When you're facing away from the snowman, it's being clipped: it's not in view, so the polygons comprising the model aren't being sent all the way through the 3D pipeline.
If you don't have back face culling turned on, you'll probably want to do that — otherwise you probably need to simplify the model somewhat. What happens if you render a cube there instead?
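Enabling back-face culling is just a couple of calls at initialisation (this assumes the model's front faces are wound counter-clockwise, which is OpenGL's default):

    glEnable(GL_CULL_FACE); // skip faces that point away from the camera
    glCullFace(GL_BACK);    // the default; stated here for clarity
    glFrontFace(GL_CCW);    // default winding; use GL_CW if the model renders inside-out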
Depending on what hardware you're using, even a low poly model could cause problems if you don't have a huge fill rate (the speed of the hardware to fill pixels in the render buffer), but given that it's one model and that the hardware should be more than capable of filling the screen once, I'd say this is an unlikely scenario.

Selection / glRenderMode(GL_SELECT)

In order to do object picking in OpenGL, do I really have to render the scene twice?
I realize rendering the scene is supposed to be cheap, running at 30 fps.
But if every selection requires an additional call to RenderScene(),
then if I click 30 times a second, does the GPU have to render twice as many frames?
One common trick is to have two separate functions to render your scene. When you're in picking mode, you can render a simplified version of the world, without the things you don't want to pick. So terrain, inert objects, etc, don't need to be rendered at all.
The time to render a stripped-down scene should be much less than the time to render a full scene. Even if you click 30 times a second (!), your frame rate should not be impacted much.
First of all, the only way you're going to get 30 mouse clicks per second is if you have some other code simulating mouse clicks. For a person, 10 clicks a second would be pretty fast -- and at that, they wouldn't have any chance to look at what they'd selected -- that's just clicking the button as fast as possible.
Second, when you're using GL_SELECT, you normally want to use gluPickMatrix to give it a small area to render, typically a (say) 10x10 pixel square, centered on the click point. At least in a typical case, the vast majority of objects will fall entirely outside that area, and be culled immediately (won't be rendered at all). This speeds up that rendering pass tremendously in most cases.
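A sketch of that picking pass (RenderScene() is the question's own function; the projection parameters and buffer size here are placeholders):

    #include <GL/gl.h>
    #include <GL/glu.h>

    void RenderScene(); // the scene function from the question; objects must
                        // be tagged with glLoadName() to be selectable

    GLuint pickAt(int x, int y)
    {
        GLuint buffer[512];
        GLint  viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        glSelectBuffer(512, buffer);
        glRenderMode(GL_SELECT);
        glInitNames();
        glPushName(0);

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        // Restrict rendering to a 10x10 pixel square around the click;
        // window y is flipped relative to OpenGL's bottom-left origin.
        gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 10.0, 10.0, viewport);
        gluPerspective(45.0, (GLdouble)viewport[2] / viewport[3], 0.1, 100.0);

        RenderScene();

        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);

        GLint hits = glRenderMode(GL_RENDER); // back to normal; returns the hit count
        // Each hit record is {name count, min z, max z, name(s)};
        // a real picker would walk all records and keep the nearest.
        return (hits > 0) ? buffer[3] : 0;
    }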
There have been some good suggestions already about how to optimize picking in GL. That will probably work for you.
But if you need more performance than you can squeeze out of gl-picking, then you may want to consider doing what most game-engines do. Since most engines already have some form of a 3D collision detection system, it can be much faster to use that. Unproject the screen coordinates of the click and run a ray-vs-world collision test to see what was clicked on. You don't get to leverage the GPU, but the volume of work is much smaller. Even smaller than the CPU-side setup work that gl-picking requires.
Select based on simpler collision hulls, or even just bounding boxes. The performance scales by number of objects/hulls in the scene rather than by the amount of geometry plus the number of objects.
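A sketch of the unproject step (the ray-vs-hull test itself is whatever your collision system provides):

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Build a world-space ray from a window-space mouse click.
    void clickRay(int x, int y, GLdouble origin[3], GLdouble dir[3])
    {
        GLdouble model[16], proj[16];
        GLint    viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble winY = viewport[3] - y; // flip y: GL's window origin is bottom-left
        GLdouble fx, fy, fz;

        // Unproject the cursor onto the near (z=0) and far (z=1) planes.
        gluUnProject(x, winY, 0.0, model, proj, viewport,
                     &origin[0], &origin[1], &origin[2]);
        gluUnProject(x, winY, 1.0, model, proj, viewport, &fx, &fy, &fz);

        dir[0] = fx - origin[0]; // normalize if your collision test
        dir[1] = fy - origin[1]; // expects a unit-length direction
        dir[2] = fz - origin[2];
    }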

OpenGL game development - scenes that span far into view

I am working on a 2D game. Imagine an XY plane where you are a character. As your character walks, the rest of the scene comes into view.
Imagine that the XY plane is quite large and there are other characters outside of your current view.
Here is my question: with OpenGL, do those objects outside of the current view still eat up processing time?
Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How can I have OpenGL not render it?
I guess the easiest approach is to calculate the position and then not draw that cube/object if it is too far away.
OpenGL faq on "Clipping, Culling and Visibility Testing" says this:
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.
Go ahead and read the rest of that link, it's all relevant.
If you've set up your scene graph correctly, objects outside your field of view should be culled early on in the display pipeline. It will require a box check in your code to verify that the object is invisible, so there will be some processing overhead (but not much).
If you organise your objects into a sensible hierarchy, then you can cull large sections of the scene with only one box check.
Typically your application must perform these optimisations - OpenGL is literally just the rendering part, and doesn't perform object management or anything like that. If you pass in data for something invisible, it still has to transform the relevant coordinates into view space before it can determine that it's entirely off-screen or beyond one of your clip planes.
There are several ways of culling invisible objects from the pipeline. Checking whether an object is behind the camera is probably the easiest and cheapest check to perform, since you can reject half your data set on average with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum to reject everything that isn't visible at all.
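That cheapest test is a single dot product per object; a sketch (Vec3 and the camera fields are illustrative):

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // True if the object's bounding sphere lies entirely behind the camera,
    // in which case it can be skipped without being sent to OpenGL.
    bool behindCamera(const Vec3& camPos, const Vec3& camForward,
                      const Vec3& objCenter, float radius)
    {
        Vec3 toObj = { objCenter.x - camPos.x,
                       objCenter.y - camPos.y,
                       objCenter.z - camPos.z };
        return dot(toObj, camForward) < -radius;
    }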
Obviously in a complex game you won't want to have to do this for every tiny object, so it's typical to group them, either hierarchically (e.g. you wouldn't render a gun if you've already determined that you're not rendering the character that holds it), spatially (e.g. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone that you have already determined is currently invisible), or more commonly a combination of both.
"the only winning move is not to play"
Every glVertex etc. is going to be a performance hit regardless of whether it ultimately gets rendered on your screen. The only way to get around that is to not draw (i.e. cull) objects which won't ever be rendered anyway.
The most common method is to have a viewing frustum tied to your camera. Couple that with an octree or quadtree, depending on whether your game is 3D or 2D, so you don't need to check every single game object against the frustum.
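The per-object frustum test itself is cheap; a sketch assuming you have already extracted the six frustum planes with inward-pointing, normalized normals:

    struct Plane { float a, b, c, d; }; // plane equation ax + by + cz + d = 0

    // A bounding sphere is invisible if it lies entirely behind any one of
    // the six frustum planes; otherwise it may be visible and must be drawn.
    bool sphereInFrustum(const Plane planes[6],
                         float cx, float cy, float cz, float radius)
    {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].a * cx + planes[i].b * cy
                       + planes[i].c * cz + planes[i].d;
            if (dist < -radius)
                return false; // fully behind this plane: cull
        }
        return true; // inside or straddling: draw
    }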
The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best for you do to your own culling.