OpenGL game development - scenes that span far into view

I am working on a 2D game. Imagine an XY plane with your character standing on it. As your character walks, the rest of the scene comes into view.
Imagine that the XY plane is quite large and there are other characters outside of your current view.
Here is my question: with OpenGL, if those objects outside the current view don't end up rendered, do they still eat up processing time?
Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How could I have OpenGL not render it?
I guess the easiest approach is to calculate each object's position and then not draw the cube/object if it is too far away.

The OpenGL FAQ on "Clipping, Culling and Visibility Testing" says this:
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.
Go ahead and read the rest of that page; it's all relevant.

If you've set up your scene graph correctly, objects outside your field of view should be culled early in the display pipeline. It will require a box check in your code to verify that the object is invisible, so there will be some processing overhead (but not much).
If you organise your objects into a sensible hierarchy, then you can cull large sections of the scene with only one box check.
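For a 2D game like the one in the question, the box check can be a plain rectangle-overlap test against the camera's view rectangle. A minimal sketch in C++ (the types and names are illustrative, not from any particular library):

    // Axis-aligned rectangle, bottom-left corner plus width/height.
    struct Rect {
        float x, y, w, h;
    };

    // True if the two rectangles overlap at all.
    bool overlaps(const Rect& a, const Rect& b) {
        return a.x < b.x + b.w && b.x < a.x + a.w &&
               a.y < b.y + b.h && b.y < a.y + a.h;
    }

    // Draw an object only if its bounding rect intersects the camera view;
    // culled objects never reach OpenGL at all.
    void drawIfVisible(const Rect& view, const Rect& objectBounds) {
        if (!overlaps(view, objectBounds))
            return;
        // ... issue the OpenGL draw calls for the object here ...
    }

The same overlaps() test applied to a group's combined bounding rect is the "one box check" that rejects a whole section of the scene at once.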

Typically your application must perform these optimisations - OpenGL is literally just the rendering part, and doesn't perform object management or anything like that. If you pass in data for something invisible, it still has to transform the relevant coordinates into view space before it can determine that it's entirely off-screen or beyond one of your clip planes.
There are several ways of culling invisible objects from the pipeline. Checking whether an object is behind the camera is probably the easiest and cheapest check to perform, since you can reject half your data set on average with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum to reject everything that isn't visible at all.
Obviously in a complex game you won't want to do this for every tiny object, so it's typical to group them, either hierarchically (e.g. you wouldn't render a gun if you've already determined that you're not rendering the character that holds it), spatially (e.g. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone that you have already determined is currently invisible), or more commonly a combination of both.
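As a sketch of that cheapest check, assuming you keep a camera position and a normalized forward vector (all names here are hypothetical):

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // True if 'point' lies behind the plane through the camera position,
    // facing along the camera's normalized forward vector. The radius
    // gives objects some tolerance so we don't cull things that straddle
    // the plane.
    bool behindCamera(const Vec3& camPos, const Vec3& camForward,
                      const Vec3& point, float radius) {
        Vec3 toPoint = { point.x - camPos.x,
                         point.y - camPos.y,
                         point.z - camPos.z };
        return dot(toPoint, camForward) < -radius;
    }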

"the only winning move is not to play"
Every glVertex etc. is going to be a performance hit regardless of whether it ultimately gets rendered on your screen. The only way to get around that is to not draw (i.e. cull) objects which won't ever be rendered anyway.
The most common method is to have a viewing frustum tied to your camera. Couple that with an octree or quadtree, depending on whether your game is 3D or 2D, so you don't need to check every single game object against the frustum.
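If you want to build that frustum yourself under legacy OpenGL, one well-known approach is to extract the six planes from the combined projection and modelview matrices (often credited to Gribb/Hartmann). A sketch, assuming a fixed-function context:

    #include <GL/gl.h>
    #include <cmath>

    struct Plane { float a, b, c, d; };  // ax + by + cz + d >= 0 is "inside"

    void extractFrustumPlanes(Plane out[6]) {
        GLfloat proj[16], model[16], clip[16];
        glGetFloatv(GL_PROJECTION_MATRIX, proj);
        glGetFloatv(GL_MODELVIEW_MATRIX, model);

        // clip = projection * modelview (both are column-major).
        for (int c = 0; c < 4; ++c)
            for (int r = 0; r < 4; ++r) {
                clip[c*4 + r] = 0.0f;
                for (int k = 0; k < 4; ++k)
                    clip[c*4 + r] += proj[k*4 + r] * model[c*4 + k];
            }

        // Combining row 3 of the clip matrix with rows 0..2 yields the
        // left/right, bottom/top and near/far plane pairs.
        for (int i = 0; i < 3; ++i) {
            out[i*2]     = { clip[3] + clip[i],    clip[7] + clip[4+i],
                             clip[11] + clip[8+i], clip[15] + clip[12+i] };
            out[i*2 + 1] = { clip[3] - clip[i],    clip[7] - clip[4+i],
                             clip[11] - clip[8+i], clip[15] - clip[12+i] };
        }

        // Normalize so plane distances come out in world units.
        for (int i = 0; i < 6; ++i) {
            float len = std::sqrt(out[i].a*out[i].a + out[i].b*out[i].b +
                                  out[i].c*out[i].c);
            out[i].a /= len; out[i].b /= len;
            out[i].c /= len; out[i].d /= len;
        }
    }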

The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best to do your own culling.

Related

OpenGL - selective world rendering

I'm building a miniature city with the basic minimum looks of a city (roads, buildings, trees, etc.) where you can move around. I know that rendering the whole model set in each frame doesn't work...
So can anyone give me an insight into the standard (but easiest) procedure for selectively rendering only the visible parts of the scene? I mean, displaying only the visible stuff (with respect to the camera position) and not rendering the unseen parts.
I'm using VC++ and the GLUT API.
Maybe this Wikipedia article provides a very basic introduction to the field of culling techniques.
A good starting point and one of the easiest techniques is view frustum culling. With this method you check, for each object in your scene, whether it is inside the viewing volume (the viewing frustum). This basically amounts to checking whether some simplified bounding volume of the geometry (like a box or a sphere that completely contains the geometry) lies inside the viewing frustum, which is defined by six planes.
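A minimal sketch of that test for a bounding sphere, assuming each of the six planes is stored as (a, b, c, d) with a normalized normal pointing into the frustum, so that a*x + b*y + c*z + d is a signed distance in world units:

    struct Plane { float a, b, c, d; };

    bool sphereInFrustum(const Plane planes[6],
                         float x, float y, float z, float radius) {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].a * x + planes[i].b * y +
                         planes[i].c * z + planes[i].d;
            if (dist < -radius)
                return false;   // entirely outside this plane
        }
        return true;            // inside or intersecting the frustum
    }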
This can be optimized further by grouping objects by their position and creating a so-called bounding volume hierarchy. That way you can, for example, first check whether a whole city block is inside the viewing volume (using a bounding volume that contains the whole block), and only if it is do you check the individual houses.
A more complicated technique is occlusion culling, which means checking whether an object is completely hidden behind another object. Because these techniques can get substantially more complicated, they should (if done at all) be applied after view frustum culling. OpenGL has hardware occlusion queries that can aid you in determining whether an object is actually visible, but they require some additional work to work well. Especially for cities there may be special two-dimensional occlusion culling techniques (I heard about these a long time ago, but don't recall the details).
This is just a very broad overview; feel free to google the individual keywords. It is always a good idea to weigh carefully whether the additional CPU overhead is worth it (especially with complicated occlusion culling techniques), considering that nowadays the trend is to batch as much geometry as possible into a single draw call. (By the way, I hope you aren't using immediate mode glBegin/glEnd; if you are, changing to vertex arrays or, better, VBOs is the first point on your agenda.) But view frustum culling might be a nice and easy starting point, especially if the city gets rather large.
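As an illustration of that last aside, here is roughly what moving a single triangle from glBegin/glEnd to a VBO looks like. This is a sketch assuming an OpenGL 1.5+ context; on some platforms (notably Windows) the buffer functions must be fetched through an extension loader:

    #include <GL/gl.h>

    GLuint vbo = 0;

    void createTriangleVBO() {
        static const GLfloat verts[] = {   // one triangle, x/y/z per vertex
             0.0f,  1.0f, 0.0f,
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
        };
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    }

    void drawTriangleVBO() {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);   // reads from the bound VBO
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }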
Google "binary space partition trees".
BSP trees are a good means of determining what should be rendered given the camera's view angle and position. The old-school first-person shooters, e.g. Quake et al., used them (or at least some derivation of the principle).
Here is a good FAQ.

Game engines: What are scene graphs?

I've started reading the material on Wikipedia, but I still feel like I don't really understand how a scene graph works and how it can provide benefits for a game.
What is a scene graph in the game engine development context?
Why would I want to implement one for my 2D game engine?
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
What is a scene graph in the game engine development context?
Well, it's some code that actively sorts your game objects in space, in a way that makes it easy to quickly find which objects are around a given point.
That way, it's easy to:
quickly find which objects are in the camera view (and send only those to the graphics card, making rendering very fast)
quickly find objects near the player (and apply collision checks to only those)
And other things. It's all about allowing quick searches in space. It's called "space partitioning", and it's about divide and conquer.
Why would I want to implement one for my 2D game engine?
That depends on the type of game, more precisely on the structure of your game space.
For example, a game like Zelda might not need such techniques if it's fast enough to test collisions between all the objects on the screen. However, that can easily become really, really slow, so most of the time you at least set up a scene graph (or a space partition of some kind) so that you know what is around each moving object and test collisions only against those objects.
So, it depends. Most of the time it's required for performance reasons. But the implementation of your space partitioning depends entirely on the way your game space is structured.
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
No.
Whatever way you manage your game entities' lifetimes, the space partition/scene graph is there only to allow you to quickly search for objects in space, no more, no less. Most of the time it will be an object that has some slots, corresponding to different parts of the game space, and in those slots are the objects located in those parts.
It can be flat (like a 2D screen divided into 2 or 4 parts), or it can be a tree (like a binary tree or a quadtree, or any other kind of tree), or any other sorting structure that limits the number of operations you have to execute to get some space-related information.
Note one thing:
In some cases, you even need separate space partition systems for different purposes. Often a "scene graph" is about rendering, so it's optimized in a way that depends on the player's point of view, and its purpose is to allow quick gathering of a list of objects to render and send to the graphics card. It's not really suited to searching for objects around another object, which makes it hard to use for precise collision detection, such as when you use a physics engine. So to help with that, you might have a separate space partition system just for physics purposes.
To give an example: I want to make a "bullet hell" game, where there are a lot of bullets that the player's spaceship has to dodge in a very precise way. To achieve sufficient rendering and collision-detection performance, I need to know:
when bullets appear in the screen space
when bullets leave the screen space
when the player collides with bullets
when the player collides with monsters
So I recursively cut the 2D screen into 4 parts, which gives me a quadtree. The quadtree is updated each game tick, because everything moves constantly, so I have to keep track of each object's (spaceship, bullet, monster) position in the quadtree to know which one is in which part of the screen.
Achieving 1. is easy: just enter the bullet into the system.
To achieve 2., I keep a list of the leaves of the quadtree (square sections of the screen) that are on the border of the screen. Those leaves contain the ids/pointers of the bullets that are near the border, so I just have to check whether those bullets are moving out to know when I can stop rendering them and managing their collisions. (It might be a bit more complex, but you get the idea.)
To achieve 3. and 4., I need to retrieve the objects that are near the player's spaceship. First I get the leaf where the player's spaceship is, then I get all of the objects in it. That way I only test collisions between the player's spaceship and the objects around it, not all objects. (It IS a bit more complex, but you get the idea.)
That way I can make sure that my game will run smoothly even with thousands of bullets constantly moving.
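A minimal quadtree sketch in the spirit of this example. It is deliberately simplified (no removal or rebalancing, and objects that straddle a boundary just stay in the parent node), and every name is illustrative:

    #include <vector>

    struct Entity { float x, y; /* bullet, monster, spaceship, ... */ };

    struct QuadNode {
        float x, y, w, h;               // region this node covers
        std::vector<Entity*> entities;  // entities stored at this node
        QuadNode* child[4] = {};        // null until the node is split

        bool contains(const Entity* e) const {
            return e->x >= x && e->x < x + w &&
                   e->y >= y && e->y < y + h;
        }

        void split() {
            float hw = w / 2, hh = h / 2;
            child[0] = new QuadNode{x,      y,      hw, hh};
            child[1] = new QuadNode{x + hw, y,      hw, hh};
            child[2] = new QuadNode{x,      y + hh, hw, hh};
            child[3] = new QuadNode{x + hw, y + hh, hw, hh};
        }

        void insert(Entity* e, int depth) {
            if (depth == 0) { entities.push_back(e); return; }
            if (!child[0]) split();
            for (QuadNode* c : child)
                if (c->contains(e)) { c->insert(e, depth - 1); return; }
            entities.push_back(e);      // straddles a boundary: keep it here
        }

        // Collect every entity stored on the path down to the leaf that
        // contains (px, py) -- the "what is near the player?" query above.
        void queryPoint(float px, float py, std::vector<Entity*>& out) {
            out.insert(out.end(), entities.begin(), entities.end());
            if (child[0])
                for (QuadNode* c : child)
                    if (px >= c->x && px < c->x + c->w &&
                        py >= c->y && py < c->y + c->h)
                        c->queryPoint(px, py, out);
        }
    };

Since everything moves every tick, the simplest update strategy is to rebuild the tree each frame: clear it, then re-insert every entity.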
In other types of game space, other types of space partitioning are required. Typically, kart/racing games will have a "tunnel" scene graph, because visually the player only sees things along the road, so you just have to check where the player is on the road to retrieve all the visible objects around them in the "tunnel".
What is a scene graph? A scene graph contains all of the geometry of a particular scene. It is useful for representing translations, rotations and scales (along with other affine transformations) of objects relative to each other.
For instance, consider a tank (the type with tracks and a gun). Your scene may have multiple tanks, but each one may be oriented and positioned differently, with each having its turret rotated to a different azimuth and its gun at a different elevation. Rather than figuring out exactly how the gun should be positioned for each tank, you can accumulate affine transformations as you traverse your scene graph to properly position it. This makes computing such things much easier.
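Under legacy OpenGL the usual way to accumulate those transformations is the matrix stack. A sketch, where drawHull, drawTurret and drawBarrel are hypothetical model-drawing routines:

    #include <GL/gl.h>

    void drawHull();     // hypothetical: draws the tank body at the origin
    void drawTurret();   // hypothetical: draws the turret at the origin
    void drawBarrel();   // hypothetical: draws the gun barrel at the origin

    void drawTank(float tankX, float tankZ, float heading,
                  float turretAzimuth, float gunElevation) {
        glPushMatrix();
        glTranslatef(tankX, 0.0f, tankZ);   // place the tank in the world
        glRotatef(heading, 0, 1, 0);
        drawHull();

        glPushMatrix();                     // turret inherits the hull transform
        glTranslatef(0.0f, 1.0f, 0.0f);     // turret sits on top of the hull
        glRotatef(turretAzimuth, 0, 1, 0);
        drawTurret();

        glPushMatrix();                     // gun inherits the turret transform
        glRotatef(gunElevation, 1, 0, 0);
        drawBarrel();
        glPopMatrix();

        glPopMatrix();
        glPopMatrix();
    }

Each push/pop pair corresponds to descending into and returning from a child node of the scene graph.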
2D Scene Graphs: Use of a scene graph for 2D may be useful if your content is sufficiently complex and if your objects have a number of sub components not rigidly fixed to the larger body. Otherwise, as others have mentioned, it's probably overkill. The complexity of affine transformations in 2D is quite a bit less than in the 3D case.
Linear Entity Manager: I'm not clear on exactly what you mean by a linear entity manager, but if you are referring to just keeping track of where things are positioned in your scene, then scene graphs can make things easier if there is a high degree of spatial dependence between the various objects or sub-objects in your scene.
A scene graph is a way of organizing all objects in the environment. Usually care is taken to organize the data for efficient rendering. The graph, or tree if you like, can show ownership of sub objects. For example, at the highest level there may be a city object, under it would be many building objects, under those may be walls, furniture...
For the most part though, these are only used for 3D scenes. I would suggest not going with something that complicated for a 2D scene.
There appear to be quite a few different philosophies on the web as to what the responsibilities of a scene graph are. People tend to put in a lot of different things, like geometry, cameras, light sources, game triggers, etc.
In general I would describe a scene graph as a description of a scene, composed of one or more data structures containing the entities present in the scene. These data structures can be of any kind (array, tree, Composite pattern, etc.) and can describe any property of the entities or any relationship between them.
These entities can be anything, ranging from solid drawable objects to collision meshes, cameras and light sources.
The only real restriction I've seen so far is that people recommend keeping game-specific components (like game triggers) out, to prevent dependency problems later on. Such things would have to be abstracted away to, say, "LogicEntity", "InvisibleEntity" or just "Entity".
Here are some common uses of, and data structures in, a scene graph.
Parent/Child relationships
The way you could use a scene graph in a game or engine is to describe parent/child relationships between anything that has a position, be it a solid object, a camera or anything else. Such a relationship means that the position, scale and orientation of any child are relative to those of its parent. This allows you to make the camera follow the player, or to have a light source follow a flashlight object. It also allows you to model things like the solar system, in which you describe the positions of planets relative to the sun and the positions of moons relative to their planets, if that is what you're making.
Also, things specific to some system in your game/engine can be stored in the scene graph. For example, as part of a physics engine you may have defined simple collision meshes for solid objects whose render geometry is too complex to test collisions against. You could put these collision meshes (I'm sure they have another name, but I forgot it :P) in your scene graph and have them follow the objects they model.
Space-partitioning
Another possible data structure in a scene graph is some form of space partitioning, as described in other answers. This allows you to perform fast queries on the scene, like clipping any object that isn't in the viewing frustum, or efficiently filtering out objects that need collision checking. You can also allow client code (in case you're writing an engine) to perform custom queries for whatever purpose, so that client code doesn't have to maintain its own space-partitioning structures.
I hope I've given you, and other readers, some idea of how you can use a scene graph and what you could put in it. I'm sure there are a lot of other ways to use one, but these are the things I came up with.
In practice, scene objects in videogames are rarely organized into a graph that is "walked" as a tree when the scene is rendered. A graphics system typically expects one big array of stuff to render, and this big array is walked linearly.
Games that require geometric parenting relationships, such as those with people holding guns or tanks with turrets, define and enforce those relationships on an as-needed basis outside of the graphics system. These relationships tend to be only one-deep, and so there is almost never a need for an arbitrarily deep tree structure.

How do you determine when an object is drawn on-screen in OpenGL?

I'm extremely new to OpenGL. I'm writing a program that displays flying 3D text on screen. I need to know when certain text strings appear (are drawn) on the screen and are visible to the user. The program needs to identify which text strings are displayed. (Note: although my problem deals with text, it could be generalized to any OpenGL object.)
At first, I started to think that I could use OpenGL's picking mechanism, but so far I've only seen examples where the selection area is focused on some sort of user interaction. I want to know what objects are displayed on the entire window area. This leads me to think I'm on the wrong track... Am I missing something?
Any suggestions are welcome.
You can use query objects (specifically those created with the GL_ARB_occlusion_query extension). These objects let you ask how many fragments were rendered by a sequence of OpenGL operations (begin/end, etc...).
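A sketch of such a query, using the OpenGL 1.5 core names for the GL_ARB_occlusion_query functionality (drawTextString is a hypothetical stand-in for your text rendering, and on some platforms these functions must be loaded as extensions):

    #include <GL/gl.h>

    void drawTextString();   // hypothetical: renders the text as usual

    GLuint query = 0;

    bool textWasVisible() {
        if (!query)
            glGenQueries(1, &query);

        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawTextString();
        glEndQuery(GL_SAMPLES_PASSED);

        // Reading the result immediately stalls until the GPU catches up;
        // real code would poll GL_QUERY_RESULT_AVAILABLE or read the
        // result a frame later.
        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
        return samples > 0;   // any fragment passed: the text was visible
    }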
Another scheme (software only) is to determine a bounding box for your rendered text, then compute mathematically whether the bounding box is inside the view frustum (derived from the current perspective used for rendering).
A note: using OpenGL picking doesn't necessarily imply the use of gluPickMatrix. You can render your scene "as is" and then query the rendered names (although picking is deprecated as of OpenGL 3).
Query objects are easy to use, and they are lightweight. Picking is another good solution for most hardware, but it is more schematic than query objects.
Hmm, is it actually in 3D? Or is it just 2D text on the screen in 2D space? In that case I would just keep track of it manually. How exactly are you drawing your text?
Generally the way you do this is with a "frustum check", where you basically make a volume for the camera and test whether your 3D objects are inside it or not.
You can try OpenGL's feedback mechanism. In this mode, OpenGL transforms and clips primitives and writes the results into a feedback buffer instead of drawing them. If something is clipped away entirely, nothing is written for it, so when the text becomes visible you will find the corresponding data in the feedback buffer.
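A sketch of that approach, with drawTextString again standing in for your text rendering (feedback mode is legacy functionality, removed from core OpenGL 3+):

    #include <GL/gl.h>

    void drawTextString();   // hypothetical: renders the text

    bool textSurvivedClipping() {
        GLfloat buffer[4096];
        glFeedbackBuffer(4096, GL_3D, buffer);

        glRenderMode(GL_FEEDBACK);   // nothing is drawn in this mode
        drawTextString();
        GLint values = glRenderMode(GL_RENDER);  // leave feedback mode

        // Any values in the buffer mean some primitive survived clipping.
        return values > 0;
    }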
This link should get you started.
Here is another link; Question 10.010 seems particularly relevant to what you want.
Run your object coordinates through your modelview and projection matrices to get screen-space coordinates. Compare the X/Y output against your screen extents to figure out whether the text is on-screen.
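A sketch of that test using gluProject, which applies the current modelview and projection matrices for you (a more careful version would also handle points behind the eye explicitly):

    #include <GL/gl.h>
    #include <GL/glu.h>

    bool pointOnScreen(double objX, double objY, double objZ) {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLdouble winX, winY, winZ;
        if (!gluProject(objX, objY, objZ, model, proj, view,
                        &winX, &winY, &winZ))
            return false;

        return winX >= view[0] && winX < view[0] + view[2] &&
               winY >= view[1] && winY < view[1] + view[3] &&
               winZ > 0.0 && winZ < 1.0;   // crude depth/behind-camera check
    }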

Selection / glRenderMode(GL_SELECT)

In order to do object picking in OpenGL, do I really have to render the scene twice?
I realize rendering the scene is supposed to be cheap, running at 30 fps.
But if every selection requires an additional call to RenderScene(),
then if I click 30 times a second, does the GPU have to render twice as many frames?
One common trick is to have two separate functions to render your scene. When you're in picking mode, you can render a simplified version of the world, without the things you don't want to pick. So terrain, inert objects, etc, don't need to be rendered at all.
The time to render a stripped-down scene should be much less than the time to render a full scene. Even if you click 30 times a second (!), your frame rate should not be impacted much.
First of all, the only way you're going to get 30 mouse clicks per second is if you have some other code simulating mouse clicks. For a person, 10 clicks a second would be pretty fast -- and at that, they wouldn't have any chance to look at what they'd selected -- that's just clicking the button as fast as possible.
Second, when you're using GL_SELECT, you normally want to use gluPickMatrix to give it a small area to render, typically a (say) 10x10 pixel square, centered on the click point. At least in a typical case, the vast majority of objects will fall entirely outside that area, and be culled immediately (won't be rendered at all). This speeds up that rendering pass tremendously in most cases.
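For reference, a sketch of that kind of GL_SELECT pass, where drawPickableScene is a hypothetical routine that tags each object with glLoadName, and the gluPerspective parameters are placeholders for whatever your scene actually uses:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void drawPickableScene();   // hypothetical: calls glLoadName per object

    GLint pickAt(int mouseX, int mouseY) {
        GLuint buffer[512];
        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        glSelectBuffer(512, buffer);
        glRenderMode(GL_SELECT);
        glInitNames();
        glPushName(0);

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        // Window y is flipped relative to OpenGL's bottom-left origin.
        gluPickMatrix(mouseX, viewport[3] - mouseY, 10.0, 10.0, viewport);
        gluPerspective(60.0, (double)viewport[2] / viewport[3], 0.1, 100.0);

        glMatrixMode(GL_MODELVIEW);
        drawPickableScene();

        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);

        GLint hits = glRenderMode(GL_RENDER);
        return hits;   // parse 'buffer' for the hit records
    }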
There have been some good suggestions already about how to optimize picking in GL. That will probably work for you.
But if you need more performance than you can squeeze out of gl-picking, then you may want to consider doing what most game-engines do. Since most engines already have some form of a 3D collision detection system, it can be much faster to use that. Unproject the screen coordinates of the click and run a ray-vs-world collision test to see what was clicked on. You don't get to leverage the GPU, but the volume of work is much smaller. Even smaller than the CPU-side setup work that gl-picking requires.
Select based on simpler collision hulls, or even just bounding boxes. Performance then scales with the number of objects/hulls in the scene rather than with the amount of geometry plus the number of objects.
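A sketch of that CPU-side alternative: unproject the click into a world-space ray with gluUnProject, then test the ray against a bounding sphere per object:

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <cmath>

    // Build a normalized ray from the near plane through the click point.
    void clickRay(int mouseX, int mouseY,
                  GLdouble origin[3], GLdouble dir[3]) {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLdouble winY = view[3] - mouseY;   // flip window y
        GLdouble fx, fy, fz;
        gluUnProject(mouseX, winY, 0.0, model, proj, view,
                     &origin[0], &origin[1], &origin[2]);
        gluUnProject(mouseX, winY, 1.0, model, proj, view, &fx, &fy, &fz);

        dir[0] = fx - origin[0];
        dir[1] = fy - origin[1];
        dir[2] = fz - origin[2];
        double len = std::sqrt(dir[0]*dir[0] + dir[1]*dir[1] + dir[2]*dir[2]);
        dir[0] /= len; dir[1] /= len; dir[2] /= len;
    }

    // True if the ray passes within 'radius' of the sphere centre 'c'.
    bool rayHitsSphere(const GLdouble o[3], const GLdouble d[3],
                       const GLdouble c[3], double radius) {
        GLdouble oc[3] = { c[0]-o[0], c[1]-o[1], c[2]-o[2] };
        double t = oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2];  // closest approach
        if (t < 0) return false;                          // behind the ray
        double dx = oc[0]-t*d[0], dy = oc[1]-t*d[1], dz = oc[2]-t*d[2];
        return dx*dx + dy*dy + dz*dz <= radius*radius;
    }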

How can you draw primitives in OpenGL interactively?

I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry and add it to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph, it is often in the form of a tree or graph, where geometry can have child-geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here; take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
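Putting those suggestions together, a minimal GLUT sketch; the positions and camera settings are hard-coded purely for illustration:

    #include <GL/glut.h>
    #include <vector>

    struct Sphere { float x, y, z, radius; };
    std::vector<Sphere> spheres;   // the "scene": just parameters

    void display() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (const Sphere& s : spheres) {
            glPushMatrix();
            glTranslatef(s.x, s.y, s.z);
            glutSolidSphere(s.radius, 16, 16);   // radius, slices, stacks
            glPopMatrix();
        }
        glutSwapBuffers();
    }

    void keyboard(unsigned char key, int, int) {
        if (key == 'a') {
            // Offset each new sphere so they don't all overlap.
            float x = -3.0f + spheres.size();
            spheres.push_back({ x, 0.0f, -10.0f, 0.5f });
            glutPostRedisplay();                 // ask GLUT to redraw
        }
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("spheres");
        glEnable(GL_DEPTH_TEST);
        glMatrixMode(GL_PROJECTION);
        gluPerspective(60.0, 1.0, 0.1, 100.0);
        glMatrixMode(GL_MODELVIEW);
        glutDisplayFunc(display);
        glutKeyboardFunc(keyboard);
        glutMainLoop();
    }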
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your front and back clip planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to set up the normals (or winding) incorrectly on an object and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to call glutSwapBuffers is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual, but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that to start with, rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.