Scene graph implementation for Papervision? - papervision3d

I'm trying to use Papervision for Flash, for this project of mine, which involves a 3D model of a mechanical frame, consisting of several connected parts. Movement of one of the parts results in a corresponding change in orientation and position of other parts of the frame.
My understanding is that using a scene graph to handle this kind of linked movement would be the ideal way to go, at least, if I were to implement in one of the more established 3D development options, like OpenGL or DirectX.
My question is, is there an existing scene graph implementation for Papervision? Or, an alternative way to generate the required 3D motion?
Thanks!

As far as I know, Papervision is basically a Flash-based 3D rendering engine, and therefore should contain its own scene graph.
See org.papervision3d.scenes.Scene3D in the API.
And see this article for a lengthier explanation of the various objects in Papervision. One thing you can do is google for articles with the key objects in P3D, such as EngineManager, Viewport3D, BasicRenderEngine, Scene3D and Camera3D.
As for "generating the motion", it depends on what exactly you're trying to achieve. Either you code that up and alter the scene yourself, or you use a third-party library, such as a physics library, so you don't have to code it all yourself.

You can honestly build one in the time it would take you to search for one:
Create a class called Node that holds an array of child nodes and has a virtual method Render(matrix:Matrix).
Create a subclass of Node called TransformNode which takes a reference to a matrix.
Create a subclass of Node called ModelNode which takes a reference to a model.
The Render method of TransformNode multiplies the incoming matrix with its own, then calls the render method of its children with the resulting matrix.
The Render method of ModelNode sends its model off to the renderer at the location specified by the incoming matrix.
That's it. You can enhance things further with a BoundsNode that doesn't call its children if its bounding shape is not visible in the viewing frustum.
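The steps above can be sketched in a few lines. Here is an illustrative Python version (Papervision itself is ActionScript, and its own Matrix3D class would replace the hand-rolled 3x3 matrices used here; all names follow the outline above, not any real API):

```python
# Minimal scene-graph sketch of the Node/TransformNode/ModelNode design.
# Matrices are 3x3 nested lists (2D affine) to keep the example
# dependency-free.

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

class Node:
    def __init__(self, children=None):
        self.children = children or []

    def render(self, matrix):
        for child in self.children:
            child.render(matrix)

class TransformNode(Node):
    """Combines its own matrix with the incoming one, then recurses."""
    def __init__(self, matrix, children=None):
        super().__init__(children)
        self.matrix = matrix

    def render(self, matrix):
        combined = mat_mul(matrix, self.matrix)
        for child in self.children:
            child.render(combined)

class ModelNode(Node):
    """Leaf node: hands its model to the renderer at the given transform."""
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.last_world_matrix = None  # stand-in for "send to renderer"

    def render(self, matrix):
        self.last_world_matrix = matrix

# A "forearm" placed relative to an "upper arm": moving the upper arm's
# transform moves the forearm automatically -- exactly the linked-parts
# behaviour the question asks about.
forearm = ModelNode("forearm")
scene = TransformNode(translation(10, 0), [
    TransformNode(translation(5, 0), [forearm]),
])
scene.render(IDENTITY)
# forearm ends up at the combined translation (15, 0)
```

Changing the outer node's matrix and re-rendering repositions every descendant, which is the linked-movement behaviour the asker wants.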

Related

Entity Component System - Components requiring each other

I have written an entity component system for my game (C++). I have then refactored my render system to work with Entities / RenderComponents rather than some virtual drawable interface. There are some classes for which I don't think it makes much sense to force them to be a component. One of those classes is the map.
My map class consists of a tiled terrain class and some other data (not important). The tiled terrain class manages multiple layers in the form of (what is at the moment) the TiledTerrainLayer class. Before refactoring the render system, I simply inherited from Drawable and Transformable to enable this class to be drawn by the render system. Now it is required to be an entity with at least a TransformComponent and some RenderComponent.
Now, the TiledTerrainLayerRenderComponent should really only own the vertices, a reference to the texture, and maybe a flag for whether it has been created yet. The TiledTerrainComponent would then own the list of tile indices as well as the tile and map size.
Now my problem is that when I set a tile (using something like a SetTile(size_t tileIndex, const Position & pos) method), I also have to update the texture coordinates of the vertex array.
I am generally fine with one component requiring another component. For example, the SpriteRenderComponent requires a TransformComponent, and I am also fine with one component accessing the information of another; e.g., the GetBoundingBox() method uses the position of the transform component.
What I want to avoid is two components 'cross-referencing' each other, as would be the case with the TiledTerrainComponent (TTC) and the TiledTerrainRenderComponent (TTRC): the TTRC gets the TTC's tileIndexList to create itself, and the TTC calls the TTRC's UpdateVertices() method when its SetTile() method is called.
Lastly, I am aware that components should mainly be data. I have only added methods that directly get or modify that data, such as SetTile() or GetTexture(). Would a system be viable in the case described above, and if so, what would it look like?
It sounds like all you need here is a Dirty Flag.
When you change a tile index, size, or other property on your Tiled Terrain, you do not immediately phone the Tiled Renderer to update its vertices (after all, you might have many tile updates yet to come this frame; it could be wasteful to recalculate your vertices every time).
Instead, the Tiled Terrain just sets its internal hasBeenModifiedSinceLastUse flag to true. It doesn't need to know about the Renderer at all.
Next, when updating your Tiled Renderer just prior to drawing, you have it ask its Tiled Terrain whether it's been updated since the last draw (you could even query a list of updates if you want to target the changes). If so, you update the vertices in one big batch, for better code & data locality.
In the process, you reset the modified flag so that if there are no updates on subsequent frames you can re-use the last generated set of vertices as-is.
Now your dependency points only one way — the renderer depends on the tile data, but the tile data has no knowledge of the rendering apart from maintaining its flag.
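A minimal sketch of that one-way dependency (class and method names are illustrative, not from any real engine):

```python
# Dirty-flag pattern: the terrain only flips a flag on modification; the
# renderer checks the flag once per frame and rebuilds vertices in batch.

class TiledTerrain:
    def __init__(self, width, height):
        self.tiles = [0] * (width * height)
        self.modified_since_last_use = True  # force the initial build

    def set_tile(self, index, tile_id):
        self.tiles[index] = tile_id
        self.modified_since_last_use = True  # no call into the renderer

class TiledRenderer:
    def __init__(self, terrain):
        self.terrain = terrain       # one-way dependency: renderer -> data
        self.vertices = []
        self.rebuild_count = 0

    def update(self):
        # One batched rebuild per frame, only when something changed.
        if self.terrain.modified_since_last_use:
            # Stand-in for real vertex/texture-coordinate generation.
            self.vertices = list(self.terrain.tiles)
            self.terrain.modified_since_last_use = False
            self.rebuild_count += 1

terrain = TiledTerrain(4, 4)
renderer = TiledRenderer(terrain)

renderer.update()            # initial build
terrain.set_tile(0, 7)
terrain.set_tile(1, 7)       # many edits this frame, still one rebuild
renderer.update()
renderer.update()            # nothing changed: no rebuild
# rebuild_count is 2: the initial build plus one batched rebuild
```

Note that TiledTerrain never references TiledRenderer, so the cross-referencing problem from the question disappears.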

Render a dynamic shape in cocos2d

I'm using cocos2d-x and want to create a dynamic shape as part of my user interface. I need a circle with an adjustable section removed. I attempted this using the draw method, but the shape would then be drawn every frame, which required too much processing power. What would be an efficient way to achieve this without drawing the shape every frame? Is it possible to clip a circle sprite to remove a section?
The mathematics behind the implementation is ok, I'm just looking for a high level explanation about how I should approach this.
You could try CCTransitionProgressRadialCW; this class does something similar to what you want.
Turns out there's a class specifically designed for this: CCProgressTimer.

Advice on setting up a Qt3D scene with redundant objects

I'm new to the Qt3D module and am currently writing a game in Qt5/C++ using Qt3D. This question is about "Am I on the correct path?" or "Can you give me some advice on...".
The scene of the game has a static part (the "world") and some objects (buildings and movable units). Some of the buildings might be animated in the future, but most of them are very static (but of course destructible).
I'll divide the question into two parts: how to handle copies of the same model placed at different positions in the scene, and how to manage the scene as a whole in the viewer class.
Redundant objects in the scene:
Of course the objects share the same library of buildings / movable units, so it would be dumb to upload the models for these objects to the graphics card for every instance of such a unit. I read through the documentation of QGLSceneNode, from which I guess that it is designed to share the same QGeometryData among multiple scene nodes, but apply different transformations in order to place the objects at different positions in my scene. Sharing the same QGLSceneNode for all instances of a building would be the wrong way, I guess.
I currently have a unit "library class" telling me the properties of each type of building / movable unit, among other things the geometry, including textures. Now, I'd provide a QGeometryData for each building in this library class, which is uploaded during the game's loading procedure (if I decide to do this for all buildings at startup...).
When creating a new instance of a unit, I'd now create a new QGLSceneNode, request the QGeometryData (which is explicitly shared) from the library and set it on the node. Then I set the transformation for this new node and put it in my scene. This leads us to the second part of my question:
Manage the scene as a whole:
My "scene" currently is neither a QGLSceneNode nor a QGLAbstractScene, but a struct of some QGLSceneNodes, one for each object (or collection of objects) in the scene. I see three approaches:
My current approach, but I guess it's "the wrong way".
The composition: Putting everything as child nodes in one root QGLSceneNode. This seemed the correct way to me, until I realized that it is very difficult to access specific nodes in such a composition. But when would I even need to access such "specific" nodes? Most operations require taking all nodes into account (rendering them, updating positions for animations), or even operate on a signal-slot basis, so I don't even need to find the nodes manually at all. For example, animations can be done using QPropertyAnimations. Acting on events can also be done by connecting a QObject in the game engine core (all buildings are QObjects in the engine's core part) with the corresponding QGLSceneNode.
But this approach has another downside: during rendering, I might need to change some properties of the QGLPainter. I'm not sure which properties I'd need to change, because I don't know Qt3D well enough to guess what can be done without changing them (for example: using a specific shader to render a specific scene node).
Then I found QGLAbstractScene, but I can't see its advantages compared with the two solutions above, since I can't define the rendering process in the scene. But maybe that's not the correct place to define it?
Which is the best approach to manage such a scene in Qt3D?
With "best" I mean: What am I going to do wrong? What can I do better? What other things should I take into account? Have I overlooked anything important in the Qt3D library?

Game engines: What are scene graphs?

I've started reading into the material on Wikipedia, but I still feel like I don't really understand how a scene graph works and how it can provide benefits for a game.
What is a scene graph in the game engine development context?
Why would I want to implement one for my 2D game engine?
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
What is a scene graph in the game engine development context?
Well, it's some code that actively sorts your game objects in the game space in a way that makes it easy to quickly find which objects are around a given point.
That way, it's easy to:
quickly find which objects are in the camera view (and send only them to the graphics card, making rendering very fast)
quickly find objects near the player (and apply collision checks to only those ones)
And other things. It's about allowing quick search in space. It's called "space partitioning". It's about divide and conquer.
Why would I want to implement one for my 2D game engine?
That depends on the type of game, more precisely on the structure of your game space.
For example, a game like Zelda might not need such techniques if it's fast enough to test collisions between all objects on the screen. However, that can easily become really, really slow, so most of the time you set up a scene graph (or space partition of some kind) to at least know what is around all the moving objects and test collisions only on those objects.
So, that depends. Most of the time it's required for performance reasons. But the implementation of your space partitioning is totally relative to the way your game space is structured.
Does the usage of a scene graph stand as an alternative to a classic entity system with a linear entity manager?
No.
Whatever way you manage your game entities' object life, the space partition/scene graph is there only to allow you to quickly search for objects in space, no more, no less. Most of the time it will be an object with slots corresponding to different parts of the game space, and each slot holds the objects that are in that part.
It can be flat (like a 2D screen divided in 2 or 4), or it can be a tree (like a binary tree or quadtree, or any other kind of tree), or any other sorting structure that limits the number of operations you have to execute to get some space-related information.
Note one thing:
In some cases, you even need separate space partition systems for different purposes. Often a "scene graph" is about rendering, so it's optimized in a way that depends on the player's point of view, and its purpose is to allow quick gathering of a list of objects to render and send to the graphics card. It's not really suited to searching for objects around another object, which makes it hard to use for precise collision detection, like when you use a physics engine. So to help, you might have a separate space partition system just for physics purposes.
To give an example, I want to make a "bullet hell" game, where there are a lot of bullets that the player's spaceship has to dodge in a very precise way. To achieve enough rendering and collision-detection performance, I need to know:
when bullets appear in the screen space
when bullets leave the screen space
when the player enters in collision with bullets
when the player enters in collision with monsters
So I recursively cut the 2D screen into 4 parts, which gives me a quadtree. The quadtree is updated each game tick, because everything moves constantly, so I have to keep track of each object's (spaceship, bullet, monster) position in the quadtree to know which one is in which part of the screen.
Achieving 1. is easy: just insert the bullet into the system.
To achieve 2., I keep a list of leaves in the quadtree (squared sections of the screen) that are on the border of the screen. Those leaves contain the ids/pointers of the bullets that are near the border, so I just have to check whether they are moving out to know whether I can stop rendering them and managing their collisions too. (It might be a bit more complex, but you get the idea.)
To achieve 3 and 4. I need to retrieve the objects that are near the player's spaceship. So first I get the leaf where the player's spaceship is and I get all of the objects in it. That way I will only test the collision with the player spaceship on objects that are around it, not all objects. (It IS a bit more complex but you get the idea.)
That way I can make sure that my game will run smoothly even with thousands of bullets constantly moving.
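A minimal quadtree in the spirit of the example above can be sketched as follows (illustrative code, not from any engine): insert points, then query which objects fall inside a rectangle, such as the cell around the player's ship.

```python
# Minimal point quadtree: a node holds items until it overflows, then
# splits into four children and pushes its items down.

class Quadtree:
    MAX_ITEMS = 4

    def __init__(self, x, y, w, h, depth=0, max_depth=5):
        self.bounds = (x, y, w, h)
        self.depth, self.max_depth = depth, max_depth
        self.items = []       # (x, y, obj) tuples
        self.children = None  # four sub-quadrants once split

    def insert(self, x, y, obj):
        if self.children:
            self._child_for(x, y).insert(x, y, obj)
            return
        self.items.append((x, y, obj))
        if len(self.items) > self.MAX_ITEMS and self.depth < self.max_depth:
            self._split()

    def _split(self):
        bx, by, w, h = self.bounds
        hw, hh = w / 2, h / 2
        self.children = [Quadtree(bx + dx, by + dy, hw, hh,
                                  self.depth + 1, self.max_depth)
                         for dx in (0, hw) for dy in (0, hh)]
        items, self.items = self.items, []
        for x, y, obj in items:
            self._child_for(x, y).insert(x, y, obj)

    def _child_for(self, x, y):
        bx, by, w, h = self.bounds
        index = (2 if x >= bx + w / 2 else 0) + (1 if y >= by + h / 2 else 0)
        return self.children[index]

    def query(self, qx, qy, qw, qh):
        """Return objects whose position lies inside the query rectangle."""
        bx, by, w, h = self.bounds
        if qx > bx + w or qx + qw < bx or qy > by + h or qy + qh < by:
            return []  # query rectangle misses this quadrant entirely
        found = [obj for x, y, obj in self.items
                 if qx <= x <= qx + qw and qy <= y <= qy + qh]
        if self.children:
            for child in self.children:
                found.extend(child.query(qx, qy, qw, qh))
        return found

tree = Quadtree(0, 0, 100, 100)
for i in range(10):
    tree.insert(i * 10, i * 10, f"bullet{i}")
near_player = tree.query(0, 0, 25, 25)  # only the bullets near the corner
```

Collision checks then run only against `near_player` rather than all ten bullets, which is the whole point of the partition.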
In other types of space structure, other types of space partitioning are required. Typically, kart/racing games will have a "tunnel" scene graph, because visually the player will only see things along the road, so you just have to check where the player is on the road to retrieve all the visible objects around them in the "tunnel".
What is a scene graph? A scene graph contains all of the geometry of a particular scene. It is useful for representing translations, rotations, and scales (along with other affine transformations) of objects relative to each other.
For instance, consider a tank (the type with tracks and a gun). Your scene may have multiple tanks, but each one may be oriented and positioned differently, with each having its turret rotated to a different azimuth and with a different gun elevation. Rather than figuring out exactly how the gun should be positioned for each tank, you can accumulate affine transformations as you traverse your scene graph to properly position it. It makes computation of such things much easier.
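The accumulation idea can be shown in 2D with hand-rolled matrices (illustrative names and numbers): the gun tip's world position falls out of multiplying the tank's and turret's transforms together.

```python
# Accumulating affine transforms down a tank -> turret -> gun hierarchy.
# Each part's matrix is relative to its parent; multiplying down the
# chain yields the gun's world position without hand-computing it.

import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    """Transform point (x, y) by the affine matrix m."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

tank = translate(100, 0)          # tank hull placed in world space
turret = rotate(math.pi / 2)      # turret yawed 90 degrees on the hull
gun_tip_local = (5, 0)            # gun tip, relative to the turret

world = mat_mul(tank, turret)     # accumulate as you traverse the graph
x, y = apply(world, *gun_tip_local)
# Gun tip ends up at roughly (100, 5): translated with the hull and
# rotated with the turret, both "for free".
```

Moving the hull or re-aiming the turret only means changing one matrix; every part below it in the graph follows automatically.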
2D Scene Graphs: Use of a scene graph for 2D may be useful if your content is sufficiently complex and if your objects have a number of sub components not rigidly fixed to the larger body. Otherwise, as others have mentioned, it's probably overkill. The complexity of affine transformations in 2D is quite a bit less than in the 3D case.
Linear Entity Manager: I'm not clear on exactly what you mean by a linear entity manager, but if you are referring to just keeping track of where things are positioned in your scene, then scene graphs can make things easier if there is a high degree of spatial dependence between the various objects or sub-objects in your scene.
A scene graph is a way of organizing all objects in the environment. Usually care is taken to organize the data for efficient rendering. The graph, or tree if you like, can show ownership of sub objects. For example, at the highest level there may be a city object, under it would be many building objects, under those may be walls, furniture...
For the most part though, these are only used for 3D scenes. I would suggest not going with something that complicated for a 2D scene.
There appear to be quite a few different philosophies on the web as to what the responsibilities of a scenegraph are. People tend to put in a lot of different things like geometry, cameras, light sources, game triggers, etc.
In general, I would describe a scenegraph as a description of a scene, composed of one or more data structures containing the entities present in the scene. These data structures can be of any kind (array, tree, Composite pattern, etc.) and can describe any property of the entities or any relationship between the entities in the scene.
These entities can be anything, ranging from solid drawable objects to collision meshes, cameras, and light sources.
The only real restriction I have seen so far is that people recommend keeping game-specific components (like game triggers) out, to prevent dependency problems later on. Such things would have to be abstracted away into, say, "LogicEntity", "InvisibleEntity" or just "Entity".
Here are some common uses of, and data structures found in, a scenegraph.
Parent/Child relationships
The way you could use a scenegraph in a game or engine is to describe parent/child relationships between anything that has a position, be it a solid object, a camera or anything else. Such a relationship would mean that the position, scale and orientation of any child would be relative to that of its parent. This would allow you to make the camera follow the player or to have a lightsource follow a flashlight object. It would also allow you to make things like the solar system in which you can describe the position of planets relative to the sun and the position of moons relative to their planet if that is what you're making.
Things specific to some system in your game/engine can also be stored in the scenegraph. For example, as part of a physics engine you may have defined simple collision meshes for solid objects whose render geometry is too complex to test collisions against. You could put these collision meshes (I'm sure they have another name, but I forgot it :P) in your scenegraph and have them follow the objects they model.
Space-partitioning
Another possible datastructure in a scenegraph is some form of space-partitioning as stated in other answers. This would allow you to perform fast queries on the scene like clipping any object that isn't in the viewing frustum or to efficiently filter out objects that need collision checking. You can also allow client code (in case you're writing an engine) to perform custom queries for whatever purpose. That way client code doesn't have to maintain its own space-partitioning structures.
I hope I gave you, and other readers, some ideas of how you can use a scenegraph and what you could put in it. I'm sure there are a lot of other ways to use a scenegraph, but these are the things I came up with.
In practice, scene objects in videogames are rarely organized into a graph that is "walked" as a tree when the scene is rendered. A graphics system typically expects one big array of stuff to render, and this big array is walked linearly.
Games that require geometric parenting relationships, such as those with people holding guns or tanks with turrets, define and enforce those relationships on an as-needed basis outside of the graphics system. These relationships tend to be only one-deep, and so there is almost never a need for an arbitrarily deep tree structure.
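That flat-array-with-occasional-parenting approach can be sketched as follows (illustrative code): renderables live in one list walked linearly, and the few parented objects just store an optional parent index resolved before drawing.

```python
# Flat render list with one-deep parenting resolved outside the graphics
# system, instead of a walked tree.

class Renderable:
    def __init__(self, x, y, parent=None):
        self.x, self.y = x, y   # position, relative to parent if one is set
        self.parent = parent    # index into the flat array, or None

def world_positions(renderables):
    """Resolve one-deep parenting, then 'draw' in a single linear pass."""
    out = []
    for r in renderables:
        if r.parent is not None:
            p = renderables[r.parent]
            out.append((p.x + r.x, p.y + r.y))
        else:
            out.append((r.x, r.y))
    return out

scene = [
    Renderable(100, 50),          # tank hull, index 0
    Renderable(2, 3, parent=0),   # turret, offset from the hull
    Renderable(400, 10),          # unrelated prop, no parent
]
positions = world_positions(scene)  # [(100, 50), (102, 53), (400, 10)]
```

Because the relationships are only one level deep, no recursion or tree traversal is needed; the "graph" is just an index lookup.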

Renderer Efficiency

Ok, I have a renderer class which has all kinds of special functions called by the rest of the program:
DrawBoxFilled
DrawText
DrawLine
About 30 more...
Each of these functions calls glBegin/glEnd separately, which I know can be very inefficient (it's even deprecated). So anyway, I am planning a total rewrite of the renderer, and I need to know the most efficient way to set up the functions so that when something calls one, it draws everything at once, or whatever else it needs to do to run most efficiently. Thanks in advance :)
The efficient way to render is generally to use VBOs (vertex buffer objects) to store your vertex data, but that is only really meaningful if you are rendering (mostly) static data.
Without knowing more about what your application is supposed to render, it's hard to say how you should structure it. But ideally, you should never draw individual primitives, but rather draw the contents (a subset) of a vertexbuffer.
The most efficient way is not to expose such low-level methods at all. Instead, what you want to do is build a scene graph, which is a data structure that contains a representation of the entire scene. You update the scene graph in your "update" method, then render the whole thing in one go in your "render" method.
Another, slightly different approach is to re-build the entire scene graph each frame. This has the advantage that once the scene graph is composed, it doesn't change. So you can call your "render" method on another thread while your "update" method is going through and constructing the scene for the next frame at the same time.
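A sketch of that rebuild-each-frame approach (hypothetical names, the draw calls stubbed out): update() composes an immutable list of draw commands, and render() consumes it in one pass.

```python
# Rebuild-per-frame scene: update() produces a frozen command list,
# render() walks it in one go. Because the list never mutates after
# composition, render() could safely run on another thread while the
# next frame's list is being built.

class DrawCommand:
    def __init__(self, kind, **params):
        self.kind, self.params = kind, params

def update(game_state):
    """Build the whole frame's scene as a flat command list."""
    commands = []
    for box in game_state["boxes"]:
        commands.append(DrawCommand("box_filled", rect=box))
    for text, pos in game_state["labels"]:
        commands.append(DrawCommand("text", text=text, pos=pos))
    return tuple(commands)  # frozen: safe to hand to a render thread

def render(commands):
    """Consume the commands in one pass (stubbed: returns what it would draw)."""
    return [c.kind for c in commands]

state = {"boxes": [(0, 0, 10, 10)], "labels": [("score: 0", (5, 5))]}
frame = update(state)
drawn = render(frame)  # ["box_filled", "text"]
```

In a real renderer, render() would sort and batch the commands (by texture, shader, or depth) before issuing draw calls, which is exactly what the per-primitive DrawBoxFilled/DrawText style prevents.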
Many of the more advanced effects are simply not possible without a complete scene graph. You can't do shadow mapping, for instance (which requires you to render the scene multiple times from different angles), and you can't do deferred rendering; it also makes anything that relies on sorted draw order (e.g. alpha blending) very difficult.
From your method names, it looks like you're working in 2D, so while shadow mapping is probably not high on your feature list, alpha blending and deferred rendering might be.