I'm building a miniature city with the bare minimum look of a city (roads, buildings, trees, etc.) where you can move around. I know that rendering the whole model set in each frame doesn't work...
So can anyone give me an insight into the standard (but easiest) procedure used to selectively render only the visible parts of the scene? I mean, displaying only the visible geometry (with respect to the camera position) and not rendering the unseen parts.
I'm using VC++ and the GLUT API.
Maybe this Wikipedia article provides a very basic introduction to the field of culling techniques.
A good starting point, and one of the easiest techniques, is view frustum culling. With this method you check, for each object in your scene, whether it is inside the viewing volume (the view frustum). In practice this amounts to testing whether a simplified bounding volume of the geometry (such as a box or a sphere that completely contains it) lies inside the viewing frustum, which is defined by six planes.
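For a concrete picture, here is a minimal sketch of such a test, assuming the six frustum planes have already been extracted (for example from the product of the projection and modelview matrices) and normalized:

// One frustum plane: a*x + b*y + c*z + d = 0, normal (a, b, c) pointing into the frustum.
struct Plane { float a, b, c, d; };

// Signed distance from a point to a plane.
float planeDistance(const Plane& p, float x, float y, float z)
{
    return p.a * x + p.b * y + p.c * z + p.d;
}

// True if a bounding sphere is at least partially inside the frustum.
// If the sphere lies completely behind any single plane, the geometry
// it encloses cannot be visible and the object can be skipped.
bool sphereInFrustum(const Plane planes[6],
                     float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < 6; ++i)
        if (planeDistance(planes[i], cx, cy, cz) < -radius)
            return false; // completely outside this plane
    return true;          // inside or intersecting the frustum
}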
This can be optimized further by grouping objects by position into a so-called bounding volume hierarchy: you first check whether a whole city block is inside the viewing volume (using a bounding volume that contains the entire block), and only if it is do you go on to check the individual houses.
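Building on the sphere test above, a hierarchy traversal might look like the following sketch, where BVHNode and Drawable are hypothetical types invented for illustration:

#include <vector>

struct Drawable { void draw() const { /* issue the GL calls for this object */ } };

// Each node's sphere encloses everything below it; leaves hold actual objects.
struct BVHNode {
    float cx, cy, cz, radius;        // bounding sphere of this subtree
    std::vector<BVHNode*> children;  // empty for leaf nodes
    Drawable* object;                // non-null only for leaves
};

void cullAndDraw(const Plane planes[6], const BVHNode* node)
{
    if (!sphereInFrustum(planes, node->cx, node->cy, node->cz, node->radius))
        return;                      // a whole city block rejected with one test
    if (node->object)
        node->object->draw();        // visible leaf: render it
    for (const BVHNode* child : node->children)
        cullAndDraw(planes, child);  // refine: test the individual houses
}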
A more complicated technique is occlusion culling, which means checking whether an object is completely hidden behind other objects. Because these techniques can get substantially more complicated, they should (if used at all) be applied after view frustum culling. OpenGL has hardware occlusion queries that can help you determine whether an object is actually visible, but they require some additional work to use well. For cities in particular there may be special two-dimensional occlusion culling techniques (I heard about these a long time ago and don't remember the details).
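As a rough illustration of how the hardware queries are used (OpenGL 1.5, or the older ARB_occlusion_query extension), where drawBoundingBoxOnly and drawFullObject are hypothetical helpers:

// Render a cheap proxy (the bounding box) with color and depth writes
// disabled, and ask how many samples would have passed the depth test.
GLuint query;
glGenQueries(1, &query);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBoxOnly();
glEndQuery(GL_SAMPLES_PASSED);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

// Ideally do other work here instead of stalling on the result.
GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
if (samples > 0)
    drawFullObject();   // at least one fragment would have been visible

glDeleteQueries(1, &query);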
This is just a very broad overview, so feel free to google the individual keywords. It is always a good idea to weigh carefully whether the additional CPU overhead is worth it (especially with complicated occlusion culling techniques), considering that the trend nowadays is to batch as much geometry as possible into a single draw call. (By the way, I hope you aren't using immediate-mode glBegin/glEnd; if you are, switching to vertex arrays, or better VBOs, is the first point on your agenda.) But view frustum culling is a nice and easy starting point, especially if the city gets rather large.
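For comparison, a minimal sketch of a VBO with the fixed-function pipeline (the buffer entry points need OpenGL 1.5, typically loaded through something like GLEW on Windows):

// Upload once, at load time.
GLfloat verts[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

// Each frame: bind, point the vertex array at the buffer, draw the batch.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);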
Google "binary space partition trees".
BSP trees are a good means of determining what should be rendered from the camera's view angle and position. The old-school first-person shooters, i.e. Quake et al, used them (or at least some derivation of the principle).
Here is a good FAQ.
I have the following code:
glBegin( GL_QUADS );
glColor3f(0.0f,0.7f,0.7f);
glVertex2f(x1,y1);
glVertex2f(x2,y2);
glVertex2f(x3,y3);
glVertex2f(x4,y4);
glEnd();
The question is: If I apply a rotation, let's say, of 20 degrees, how can I know where these vertices are then?
Because later I need to be able to click on the square and identify if the place where I am clicking is, indeed, inside the square or not.
While I hope that nobody has used it in this millennium, legacy OpenGL actually had a mechanism for getting transformed vertices back: it's called "feedback mode". Explaining it in detail is beyond the scope of an answer, but if you want to see how it worked, you can read up on it in the freely available online version of the Red Book.
The "click and identify" you talk about in your question is often called "picking" or "selection". There are numerous approaches to implement it, and the one to choose depends somewhat on your application. To give you a quick overview of some common approaches:
Selection mode. This is almost as obsolete as feedback mode. It's as old, but I have a feeling that it was at least much more commonly used, so it might have better support. Still, I wouldn't recommend using it in new code. Again, if you want to learn about it anyway, the explanation can be found in the Red Book.
Modern OpenGL has a feature called Transform Feedback. While its primary purpose is different, it can be used to read back transformed vertices similar to legacy Feedback Mode.
Draw the scene to an off-screen buffer, with each object rendered in a different color. Then read back the color at the selection position and map it to an object. This is a fairly elegant and efficient approach, and can be recommended if it works for your requirements.
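A minimal sketch of the color-picking idea, where Pickable and the objects list are hypothetical stand-ins for your scene:

#include <vector>

struct Pickable { void draw() const { /* draw one object */ } };
std::vector<Pickable*> objects;  // hypothetical scene list

// Render each pickable object in a unique flat color (lighting and
// texturing off), then read back the pixel under the mouse cursor.
int pick(int mouseX, int mouseY, int windowHeight)
{
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (unsigned id = 0; id < objects.size(); ++id) {
        // Encode the object index in the RGB channels (up to 2^24 objects).
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        objects[id]->draw();
    }
    // Read from the back buffer before swapping, so the user never sees this.
    unsigned char pixel[3];
    glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1,  // flip y: GL origin is bottom-left
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
}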
Perform the calculations in your own code on the CPU. Instead of transforming all objects, it is usually much more efficient to apply the inverse transformation to your pick point (which then becomes a ray) and intersect it with the geometry.
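For the simple 2D case in this question, you can just as well transform the four corners on the CPU and test the click point against them directly. A sketch, assuming the quad is rotated about the origin (as a bare glRotatef would do):

#include <cmath>

// Apply the same 20 degree rotation on the CPU that glRotatef applies
// on the GPU, so the transformed corners are known for hit testing.
void rotatePoint(float angleDeg, float& x, float& y)
{
    float rad = angleDeg * 3.14159265f / 180.0f;
    float c = std::cos(rad), s = std::sin(rad);
    float nx = c * x - s * y;
    float ny = s * x + c * y;
    x = nx;
    y = ny;
}

// Point-in-convex-quad test: the point must lie on the same side of all
// four edges (sign of the 2D cross product), for either winding order.
bool pointInQuad(const float qx[4], const float qy[4], float px, float py)
{
    bool positive = false, negative = false;
    for (int i = 0; i < 4; ++i) {
        int j = (i + 1) % 4;
        float cross = (qx[j] - qx[i]) * (py - qy[i])
                    - (qy[j] - qy[i]) * (px - qx[i]);
        if (cross > 0) positive = true;
        if (cross < 0) negative = true;
    }
    return !(positive && negative); // mixed signs mean the point is outside
}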
I am looking for a method by which to generate 3D models for use in video games. The idea is virtual primitives that are simply points with associated data for size, shape, material and rotation.
For instance, an asteroid might start as two simple spheres that intersect. A material of dusty rock would tell the skinning algorithm to produce smooth sandy curves and occasional jagged boulders, probably ending up as a sort of lumpy peanut shape.
After that, add smaller spheres with a material of void or crater, peppered around the object. These would produce crater-like areas in the surface of the peanut, and the skin would adjust to suit. In the end you would have a semi-plausible representation of an asteroid.
Now with that in mind, my question is: are there any decent open-source or public-domain examples of skinning algorithms that can find the surface of a model and generate a smooth, evenly distributed quad-strip mesh that could then be textured?
Some more information: I'm looking at CSG methods for the underlying models (adding and subtracting volume), and then at other methods for remeshing the whole thing.
Skinning is more an art than a scientific process (and so almost impossible to automate), because skinning is a visual approximation of movement. To get something fully automatic, you would either have to assume bone placement or assume there are no bones at all.
Here's an example: an open-source project that skins automatically, based on the assumption that the provided mesh is a humanoid.
http://igl.ethz.ch/projects/fast/
EDIT: Wait, you mean the other way around? Isn't that similar to marching cubes? http://en.wikipedia.org/wiki/Marching_cubes
This is an exciting question and no doubt there are many ways it could be done. Personally I'd probably start by getting basic shapes in .obj format, which is easy both to parse and to create programmatically, and then do exactly that in my code: tweak or randomize the vertices you export from a modelling program to create an infinite variety of similar but slightly different objects, like asteroids. Of course, if you need more than asteroids, you'd go back to a different .obj file. It's hard to say what the best technique is for your case, since some experimentation will be required no matter what you try.
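A minimal sketch of that tweak-the-vertices idea: read an .obj file, displace every vertex by a small random amount, and write out a variant (jitterObj is just an illustrative name):

#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Copies the .obj line by line; only "v x y z" vertex lines are changed,
// so faces, normals and texture coordinates pass through untouched.
void jitterObj(const char* inPath, const char* outPath, float amount)
{
    std::ifstream in(inPath);
    std::ofstream out(outPath);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            float x, y, z;
            ss >> x >> y >> z;
            x += amount * (std::rand() / (float)RAND_MAX - 0.5f);
            y += amount * (std::rand() / (float)RAND_MAX - 0.5f);
            z += amount * (std::rand() / (float)RAND_MAX - 0.5f);
            out << "v " << x << " " << y << " " << z << "\n";
        } else {
            out << line << "\n";
        }
    }
}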
I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However, it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing the scene twice (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I make the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using it anyway?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be defined easily. I don't like color picking or similar techniques very much, because they seem too impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL (or even to programming) focus on a kind of picking that is nearly useless for everything. For example: try to get the exact point you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads you will probably need a good spatial partitioning, or at least (better: also) a scene graph. OK, you don't strictly need this, but it helps a lot. Look at some scene graph tutorials for further information; it's a good thing to know if you're starting with 3D programming, because you get to learn a lot of concepts, not just OpenGL code.
So what do you do now to start with some picking? Unproject the position of your mouse cursor back through the inverse of the modelview and projection transformations (gluUnProject does this for you). With the orientation of your camera you can then cast a ray into your spatial structure (or into the scene graph that holds a spatial structure) and check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find pages that explain this better and in more detail than is practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
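A sketch of building such a pick ray with the fixed-function pipeline, written in C++ for brevity (JOGL exposes the same gluUnProject call through its GLU class):

#include <GL/glu.h>

// Unproject the mouse position at the near and far planes; the two points
// define a ray through the scene in world space.
void pickRay(int mouseX, int mouseY,
             double& ox, double& oy, double& oz,  // ray origin
             double& dx, double& dy, double& dz)  // ray direction (unnormalized)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    double winX = mouseX;
    double winY = viewport[3] - 1 - mouseY;  // flip to OpenGL's bottom-left origin

    double nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, model, proj, viewport, &nx, &ny, &nz); // near plane
    gluUnProject(winX, winY, 1.0, model, proj, viewport, &fx, &fy, &fz); // far plane

    ox = nx; oy = ny; oz = nz;
    dx = fx - nx; dy = fy - ny; dz = fz - nz;
}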
I've never faced this problem myself, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, you can group the quads by space to avoid testing all of them. For example, you can group the quads into two boxes and first test which box you hit; only the quads inside that box then need individual tests.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it hurts performance because it forces synchronization between the GPU and the CPU).
Another possibility seems to be transform feedback with a geometry shader that does the equivalent of a scissor test: the geometry shader discards all faces that do not contain the mouse position, so the transform feedback buffer then contains exactly the hovered meshes.
You probably want to write the depth into the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID into the buffer).
I haven't tried it yet, but I'd guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.
I've started reading the material on Wikipedia, but I still feel like I don't really understand how a scene graph works and how it can benefit a game.
What is a scene graph in the game engine development context?
Why would I want to implement one for my 2D game engine?
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
What is a scene graph in the game engine development context?
Well, it's some code that actively sorts your game objects in game space in a way that makes it easy to quickly find which objects are around a given point.
That way, it's easy to:
quickly find which objects are in the camera view (and send only them to the graphics cards, making rendering very fast)
quickly find objects near to the player (and apply collision checks to only those ones)
And other things. It's about allowing quick searches in space; this is called "space partitioning", and it's a divide-and-conquer approach.
Why would I want to implement one for my 2D game engine?
That depends on the type of game, more precisely on the structure of your game space.
For example, a game like Zelda might not need such techniques if it's fast enough to test collisions between all objects on the screen. However, that can easily become really, really slow, so most of the time you at least set up a scene graph (or a space partition of some kind) to know what is around each moving object and test collisions only against those objects.
So, it depends. Most of the time it's required for performance reasons, but the implementation of your space partitioning depends entirely on the way your game space is structured.
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
No.
Whatever way you manage your game entities' lifetimes, the space partition/scene graph is there only to let you quickly search for objects in space, no more, no less. Most of the time it will be an object with a number of slots corresponding to different parts of the game space, and each slot holds the objects currently in that part.
It can be flat (like a 2D screen divided into 2 or 4 parts), or it can be a tree (a binary tree, a quadtree, or any other kind of tree), or any other sorting structure that limits the number of operations you have to execute to get some space-related information.
Note one thing: in some cases you may even need separate space partitioning systems for different purposes. Often a "scene graph" is about rendering, so it's optimized around the player's point of view, and its purpose is to quickly gather the list of objects to render and send to the graphics card. It's not really suited to searching for objects around another object, which makes it hard to use for precise collision detection, such as when you use a physics engine. So to help, you might have a different space partitioning system just for physics purposes.
To give an example: I want to make a "bullet hell" game, where there are a lot of bullets that the player's spaceship has to dodge in a very precise way. To achieve enough rendering and collision detection performance, I need to know:
when bullets appear in the screen space
when bullets leave the screen space
when the player enters in collision with bullets
when the player enters in collision with monsters
So I recursively cut the 2D screen into 4 parts, which gives me a quadtree. The quadtree is updated each game tick, because everything moves constantly, so I have to keep track of each object's (spaceship, bullet, monster) position in the quadtree to know which one is in which part of the screen.
Achieving 1 is easy: just insert the bullet into the system.
To achieve 2, I keep a list of the leaves of the quadtree (square sections of the screen) that are on the border of the screen. Those leaves contain the IDs/pointers of the bullets that are near the border, so I just have to check when they move out to know that I can stop rendering them and managing their collisions. (It might be a bit more complex, but you get the idea.)
To achieve 3 and 4, I need to retrieve the objects that are near the player's spaceship. First I get the leaf where the player's spaceship is, then I get all of the objects in it. That way I only test collisions with the player's spaceship against objects that are around it, not all objects. (It IS a bit more complex, but you get the idea.)
That way I can make sure that my game will run smoothly even with thousands of bullets constantly moving.
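A minimal sketch of such a quadtree (Object is a stand-in for bullets, monsters and the ship; a real version would also look at neighbouring leaves and reuse nodes instead of allocating each tick):

#include <vector>

struct Object { float x, y; };

struct Quadtree {
    float x, y, w, h;                   // screen region covered by this node
    std::vector<Object*> items;
    Quadtree* child[4];
    static const unsigned MAX_ITEMS = 8;

    Quadtree(float x_, float y_, float w_, float h_)
        : x(x_), y(y_), w(w_), h(h_) { child[0] = child[1] = child[2] = child[3] = 0; }

    bool contains(float px, float py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }

    void split() {
        float hw = w / 2, hh = h / 2;
        child[0] = new Quadtree(x,      y,      hw, hh);
        child[1] = new Quadtree(x + hw, y,      hw, hh);
        child[2] = new Quadtree(x,      y + hh, hw, hh);
        child[3] = new Quadtree(x + hw, y + hh, hw, hh);
    }

    void insert(Object* o) {
        if (!child[0] && items.size() >= MAX_ITEMS)
            split();                    // too crowded: cut this region in 4
        if (child[0])
            for (int i = 0; i < 4; ++i)
                if (child[i]->contains(o->x, o->y)) { child[i]->insert(o); return; }
        items.push_back(o);
    }

    // Collect everything stored on the path down to the leaf containing
    // (px, py): these are the collision candidates around that point.
    void query(float px, float py, std::vector<Object*>& out) const {
        out.insert(out.end(), items.begin(), items.end());
        if (child[0])
            for (int i = 0; i < 4; ++i)
                if (child[i]->contains(px, py))
                    child[i]->query(px, py, out);
    }
};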
Other kinds of game space call for other kinds of space partitioning. Typically, kart/racing games use a "tunnel" scene graph, because visually the player only sees things along the road, so you just have to check where the player is on the road to retrieve all visible objects around them in the "tunnel".
What is a scene graph? A scene graph contains all of the geometry of a particular scene. It is useful for representing the translations, rotations and scales (along with other affine transformations) of objects relative to each other.
For instance, consider a tank (the type with tracks and a gun). Your scene may have multiple tanks, but each one may be oriented and positioned differently, with each turret rotated to a different azimuth and each gun at a different elevation. Rather than figuring out exactly how the gun should be positioned for each tank, you can accumulate affine transformations as you traverse your scene graph to position it properly. This makes such computations much easier.
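A sketch of that accumulation with the legacy matrix stack, where SceneNode is a hypothetical type:

#include <GL/gl.h>
#include <vector>

// Each node's transform is relative to its parent, so the gun ends up
// positioned by hull * turret * gun without any manual bookkeeping.
struct SceneNode {
    float tx, ty, tz;             // translation relative to the parent
    float angle, ax, ay, az;      // rotation relative to the parent
    std::vector<SceneNode*> children;

    virtual void drawGeometry() {}    // leaf types override this

    void draw() {
        glPushMatrix();                   // save the parent's transform
        glTranslatef(tx, ty, tz);         // accumulate this node's offset...
        glRotatef(angle, ax, ay, az);     // ...and its orientation
        drawGeometry();
        for (unsigned i = 0; i < children.size(); ++i)
            children[i]->draw();          // children inherit the accumulated matrix
        glPopMatrix();                    // restore for siblings
    }
};

// Usage: hull -> turret -> gun. Changing the turret's angle swings the
// gun around with it automatically.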
2D Scene Graphs: Using a scene graph for 2D may be useful if your content is sufficiently complex and if your objects have a number of sub-components not rigidly fixed to the larger body. Otherwise, as others have mentioned, it's probably overkill. The complexity of affine transformations in 2D is quite a bit less than in the 3D case.
Linear Entity Manager: I'm not clear on exactly what you mean by a linear entity manager, but if you are referring to just keeping track of where things are positioned in your scene, then scene graphs can make things easier when there is a high degree of spatial dependence between the various objects or sub-objects in your scene.
A scene graph is a way of organizing all objects in the environment. Usually care is taken to organize the data for efficient rendering. The graph, or tree if you like, can show ownership of sub objects. For example, at the highest level there may be a city object, under it would be many building objects, under those may be walls, furniture...
For the most part though, these are only used for 3D scenes. I would suggest not going with something that complicated for a 2D scene.
There appear to be quite a few different philosophies on the web as to what the responsibilities of a scene graph are. People tend to put in a lot of different things: geometry, cameras, light sources, game triggers, etc.
In general, I would describe a scene graph as a description of a scene, composed of one or more data structures containing the entities present in the scene. These data structures can be of any kind (array, tree, Composite pattern, etc.) and can describe any property of the entities or any relationship between them.
These entities can be anything, ranging from solid drawable objects to collision meshes, cameras and light sources.
The only real restriction I have seen so far is that people recommend keeping game-specific components (like game triggers) out, to prevent dependency problems later on. Such things would have to be abstracted away into, say, "LogicEntity", "InvisibleEntity" or just "Entity".
Here are some common uses of, and data structures in, a scene graph.
Parent/Child relationships
One way you could use a scene graph in a game or engine is to describe parent/child relationships between anything that has a position, be it a solid object, a camera or anything else. Such a relationship means that the position, scale and orientation of a child are relative to those of its parent. This allows you to make the camera follow the player, or to have a light source follow a flashlight object. It also allows you to build things like a solar system, where the positions of planets are described relative to the sun and the positions of moons relative to their planets, if that is what you're making.
Things specific to some system in your game/engine can also be stored in the scene graph. For example, as part of a physics engine you may have defined simple collision meshes for solid objects whose render geometry is too complex to test collisions against. You could put these collision meshes (I'm sure they have another name, but I forgot it :P) in your scene graph and have them follow the objects they stand in for.
Space-partitioning
Another possible data structure in a scene graph is some form of space partitioning, as stated in other answers. This allows you to perform fast queries on the scene, like clipping any object that isn't in the viewing frustum, or efficiently filtering out objects that need collision checking. You can also allow client code (in case you're writing an engine) to perform custom queries for whatever purpose, so that client code doesn't have to maintain its own space partitioning structures.
I hope I've given you, and other readers, some ideas of how you can use a scene graph and what you could put in it. I'm sure there are a lot of other ways to use one, but these are the things I came up with.
In practice, scene objects in videogames are rarely organized into a graph that is "walked" as a tree when the scene is rendered. A graphics system typically expects one big array of stuff to render, and this big array is walked linearly.
Games that require geometric parenting relationships, such as those with people holding guns or tanks with turrets, define and enforce those relationships on an as-needed basis outside of the graphics system. These relationships tend to be only one-deep, and so there is almost never a need for an arbitrarily deep tree structure.
I am working on a 2d game. Imagine a XY plane and you are a character. As your character walks, the rest of the scene comes into view.
Imagine that the XY plane is quite large and there are other characters outside of your current view.
Here is my question, with opengl, if those objects aren't rendered outside of the current view, do they eat up processing time?
Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How can I tell OpenGL not to render it?
I guess the easiest approach is to calculate each object's position and then not draw the cube/object if it is too far away.
The OpenGL FAQ on "Clipping, Culling and Visibility Testing" says this:
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.
Go ahead and read the rest of that link, it's all relevant.
If you've set up your scene graph correctly objects outside your field of view should be culled early on in the display pipeline. It will require a box check in your code to verify that the object is invisible, so there will be some processing overhead (but not much).
If you organise your objects into a sensible hierarchy then you could cull large sections of the scene with only one box check.
Typically your application must perform these optimisations - OpenGL is literally just the rendering part, and doesn't perform object management or anything like that. If you pass in data for something invisible it still has to transform the relevant coordinates into view space before it can determine that it's entirely off-screen or beyond one of your clip planes.
There are several ways of culling invisible objects from the pipeline. Checking whether an object is behind the camera is probably the easiest and cheapest test, since on average you can reject half your data set with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum and reject everything that isn't visible at all.
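That behind-the-camera check is a single dot product per object; a sketch:

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// forward is the camera's normalized view direction. Padding by the
// object's bounding radius avoids culling things that straddle the
// plane through the camera position.
bool behindCamera(const Vec3& cameraPos, const Vec3& forward,
                  const Vec3& objectPos, float radius)
{
    Vec3 toObject = { objectPos.x - cameraPos.x,
                      objectPos.y - cameraPos.y,
                      objectPos.z - cameraPos.z };
    return dot(toObject, forward) < -radius;
}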
Obviously, in a complex game you won't want to do this for every tiny object, so it's typical to group them, either hierarchically (e.g. you wouldn't render a gun if you've already determined that you're not rendering the character that holds it), spatially (e.g. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone you have already determined is currently invisible), or more commonly a combination of both.
"the only winning move is not to play"
Every glVertex etc. is going to cost performance regardless of whether the primitive ultimately ends up on your screen. The only way to get around that is to not draw (i.e. to cull) objects that would never be rendered anyway.
The most common method is a viewing frustum tied to your camera. Couple that with an octree or quadtree, depending on whether your game is 3D or 2D, so you don't need to check every single game object against the frustum.
The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best for you do to your own culling.