I am writing a custom osg::Drawable class which needs to calculate its current distance from the camera's eye when its drawImplementation method is called. It needs to do this in order to determine the optimal number of facets for rendering.
The difficulty is that my drawable can have any number of osg::Transform nodes as parents. I need to apply the transformation of the actual parent path that is being applied to the drawable. Using osg::Node::getParents() and/or getParentalNodePaths(), I can determine all possible paths to this drawable, but not the path that was actually taken.
Is there any way to determine this in OpenSceneGraph? I have dug through the examples and documentation and have not found exactly what I require.
You can do this at the cull stage rather than the render/draw stage. You can get the model-view matrix from the cull visitor and determine the distance from it. Since you want this for your custom drawable class, you can do it by attaching a cull callback.
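A minimal sketch of such a callback, assuming the classic osg::Drawable::CullCallback interface and osgUtil::CullVisitor::getDistanceToEyePoint(); the class name and the way the result is stored are illustrative, so adapt them to your OSG version and your drawable class:

#include <osg/Drawable>
#include <osgUtil/CullVisitor>

class EyeDistanceCallback : public osg::Drawable::CullCallback
{
public:
    virtual bool cull(osg::NodeVisitor* nv, osg::Drawable* drawable,
                      osg::RenderInfo* /*renderInfo*/) const
    {
        osgUtil::CullVisitor* cv = dynamic_cast<osgUtil::CullVisitor*>(nv);
        if (cv && drawable)
        {
            // Distance from the eye to the drawable's bound centre, measured
            // in the frame of the parent path currently being traversed.
            _eyeDistance = cv->getDistanceToEyePoint(
                drawable->getBound().center(), true /*withLODScale*/);
        }
        return false; // never cull here, we only record the distance
    }

    float getEyeDistance() const { return _eyeDistance; }

private:
    mutable float _eyeDistance = 0.0f;
};

Attach it with myDrawable->setCullCallback(new EyeDistanceCallback), and your drawImplementation() can fetch the callback and read the stored distance back from it.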
I am new to VTK. I would like to know how the VTK abstract picker behaves for multiple actors with different opacity values. Let's consider two actors, one inside the other, where I set the opacity of the outer surface to 0.3 while keeping the opacity of the inner one at 1.0. Since the outer one is semi-transparent, I can see the inner actor in the overlap region of the two actors. When I perform picking in that region, the resulting coordinates are from the inner surface itself, and when I pick some point outside the overlap region, I get outer surface coordinates. How can I perform the picking operation based on opacity values, so that I pick one actor at a time? Anybody please help.
vtkAbstractPicker is, as the name suggests, just an abstract class that defines the interface for picking, nothing more. When choosing an actual picker, you basically have a choice between picking based on ray casting or "color picking" using the graphics hardware (see the linked documentation for the actual VTK classes that implement those).
Now to the actual problem: if I understood what you wrote correctly, you are facing a rather simple sorting problem. The opacity can be seen as a kind of priority - the actors with higher opacity should be picked even if they are inside others with lower opacity, right? Then all you need to do is get all the actors that are underneath your mouse cursor and choose the one with the highest opacity, or the closest one in cases where they have the same opacity.
I think the easiest way to implement this is using the vtkPropPicker (vtkProp is a parent class of vtkActor, so this is a good picker for picking actors). It is one of the "hardware" pickers, using the color picking algorithm. The basic algorithm is that each pickable object is rendered in a different color into a hidden buffer (texture). This color (which, after all, is a 32-bit number like any other) serves as an ID of that object: when the user clicks on the screen, you read the color of the pixel from the picking texture at the clicked coordinates, then simply look up the object whose ID equals that color, and you have your object. Obviously, it cannot use any transparency - the individual colors are IDs of the objects, and blending them would make them impossible to identify.
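Just to make the "color as an ID" idea concrete, here is a tiny illustration of how an object ID can be packed into and recovered from an RGB color (conceptual only; VTK handles this internally for you):

#include <cstdint>

struct ColorRGB { std::uint8_t r, g, b; };

// Pack the low 24 bits of an object ID into an RGB color.
ColorRGB idToColor(std::uint32_t id)
{
    return { std::uint8_t(id & 0xFF),
             std::uint8_t((id >> 8) & 0xFF),
             std::uint8_t((id >> 16) & 0xFF) };
}

// Recover the ID from the color read back at the clicked pixel.
std::uint32_t colorToId(ColorRGB c)
{
    return std::uint32_t(c.r) | (std::uint32_t(c.g) << 8) | (std::uint32_t(c.b) << 16);
}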
However, the vtkPropPicker provides a method:
// Perform a pick from the user-provided list of vtkProps
// and not from the list of vtkProps that the render maintains.
// If something is picked, a 1 is returned, otherwise 0 is returned.
// Use the GetViewProp() method to get the instance of vtkProp that was picked.
int PickProp (double selectionX, double selectionY,
vtkRenderer *renderer, vtkPropCollection *pickfrom);
What you can do with this is first call PickProp(mouseClickX, mouseClickY, renderer of your render window, pickfrom), providing only the highest-priority actors in the pickfrom collection, i.e. the actors with the highest opacity. Underneath, this will render all of the provided actors using the color-coding algorithm and tell you which actor is underneath the specified coordinates. If it picks something (the return value is 1; call GetViewProp on the picker and it gives you a pointer to the picked actor), you keep it; if it does not (the return value is 0), you call it again, this time providing the actors with the next lower opacity, and so on until you pick something or you have tested all the actors.
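A rough sketch of that loop, assuming your pickable actors are available in a std::vector; only PickProp() and GetViewProp() come from the vtkPropPicker API quoted above, the grouping and sorting code is purely illustrative:

#include <vtkActor.h>
#include <vtkPropCollection.h>
#include <vtkPropPicker.h>
#include <vtkProperty.h>
#include <vtkRenderer.h>
#include <vtkSmartPointer.h>
#include <algorithm>
#include <vector>

vtkProp* PickByOpacity(double x, double y, vtkRenderer* renderer,
                       std::vector<vtkActor*> actors)
{
    // Highest opacity (= highest priority) first.
    std::sort(actors.begin(), actors.end(),
              [](vtkActor* a, vtkActor* b) {
                  return a->GetProperty()->GetOpacity() >
                         b->GetProperty()->GetOpacity();
              });

    vtkSmartPointer<vtkPropPicker> picker =
        vtkSmartPointer<vtkPropPicker>::New();

    std::size_t i = 0;
    while (i < actors.size())
    {
        // Collect the group of actors sharing the current opacity value.
        double opacity = actors[i]->GetProperty()->GetOpacity();
        vtkSmartPointer<vtkPropCollection> group =
            vtkSmartPointer<vtkPropCollection>::New();
        for (; i < actors.size() &&
               actors[i]->GetProperty()->GetOpacity() == opacity; ++i)
        {
            group->AddItem(actors[i]);
        }

        // Try to pick only within this priority group.
        if (picker->PickProp(x, y, renderer, group))
        {
            return picker->GetViewProp();
        }
    }
    return nullptr; // nothing was hit
}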
You can do the same with ray-casting pickers like vtkPicker as well - it casts a ray underneath your mouse and gives you all intersections with everything in the scene. But the vtkPicker API is optimized for finding the closest intersection; it might be a bit of work to get all of them and sort them, and in the end I believe the solution using vtkPropPicker will be faster anyway.
If this kind of solution works for you, you might want to look at vtkHardwareSelector, which uses the same algorithm but, unlike vtkPropPicker, allows you to access the underlying picking texture many times, so that you don't need to re-render for every picking query. Depending on how your rendering pipeline is set up, this might be a more efficient solution (i.e. if you do a lot of picking without updating the scene).
I'm developing a simple rendering engine as a pet project.
So far I'm able to load geometry data from Wavefront .obj files and render them onscreen separately. I know that vertex coordinates stored in these files are defined in Model space and to place them correctly in the scene I need to apply Model-to-world transform matrix to each vertex position (am I even correct here?).
But how do I define those matrices for each object? Do I need to develop a separate tool for scene composition, in which I move objects around and the tool calculates the appropriate Model-to-world matrices based on translations, rotations and so on?
I would look into the "Scene Graph" data structure. It's essentially a tree, where nodes (may) define their transformations relative to their parent. Think of it this way. Each of your fingers moves relative to your hand. Moving your hand, rotating or scaling it also involves doing the same transformation on your fingers.
It is therefore beneficial to define all these transformations relative to one another and combine them to determine the overall transformation of each individual part of your model. As such, you don't just define the direct model-to-view transformation, but rather a transformation from each part to its parent.
This saves you from having to define a whole bunch of transformations yourself, which in the vast majority of cases are related in the way I described anyway. As such, you save yourself a lot of work by representing your models/scene in this manner.
Each of these relative transformations is usually a 4x4 affine transformation matrix. Combining these is just a matter of multiplying them together to obtain the combination of all of them.
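As a small illustration of that composition (using GLM here purely as an example math library, with made-up hand/finger transforms):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    // The hand is placed in the world; the finger is defined relative to the hand.
    glm::mat4 handToWorld  = glm::translate(glm::mat4(1.0f),
                                            glm::vec3(0.0f, 1.5f, 0.0f));
    glm::mat4 fingerToHand = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f),
                                         glm::vec3(0.0f, 0.0f, 1.0f));

    // The finger's model-to-world matrix is just the product of the chain,
    // so moving the hand automatically moves the finger with it.
    glm::mat4 fingerToWorld = handToWorld * fingerToHand;
    (void)fingerToWorld; // use this as the model matrix when drawing the finger
    return 0;
}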
A description of Scene Graphs
In order to animate objects within a scene graph, you need to specify transformations relative to their parent in the tree. For instance, spinning wheels of a car need to rotate relative to the car's chassis. These transformations largely depend on what kind of animations you'd like to show.
So I guess the answer to your question is "mostly yes". You do need to define transformations for every single object in your scene if things are going to look good. However, organising the scene into a tree structure makes this process a lot easier to handle.
Regarding the creation of those matrices, what you have to do is export a scene from an authoring package.
That software can be the same one you used to model the objects in the first place: Maya, Lightwave, and so on.
Right now you have your objects independent of each other.
So, using the package of your choice, either find a file format that lets you export a scene you have made by positioning each of your meshes where you want them, like FBX or glTF, or make your own.
Either way there is a scene structure, containing models, transforms, lights, cameras, everything you want in your engine.
After that you have to parse that structure.
You'll find some explanations here of how you could architect that:
https://nlguillemot.wordpress.com/2016/11/18/opengl-renderer-design/
Good luck,
I want to create a 2D game with monsters built as a custom vertex mesh plus a texture map. I want to use this mesh to provide smooth vector animations. I'm using OpenGL ES 2.0.
For now the best idea I have is to write a simple editor where I can create a mesh and make keyframe-based animations by changing the position of each vertex and specifying the keyframes' interpolation techniques (linear, quadratic and so on).
I also have some understanding of bone animation (and skinning based on bones), but I'm not sure I will be able to create good skeletons for my monsters.
I'm not sure this is a good way to go. Can you suggest better ideas and/or editors and libraries for such mesh animations?
PS: I'm using C++, so C++ libraries are the most welcome.
You said this is a 2D game, so I'm going to assume your characters are flat polygons on to which you apply a texture map. Please add more detail to your question if this is not the case.
As far as the C++ part goes, I think the same principles used for 3D blend shape animation can be applied to this case. For each character you will have a list of possible 'morph targets' or poses, each being a different polygon shape with the same number of vertices. The character's AI will determine when to change from one to another, and how long a transition takes. So at any given point in time your character can either be at a fixed state, matching one of your morph targets, or in a transition state between two poses. The first case poses no difficulty; the second is handled by interpolating the vertices of the two polygons one by one to arrive at a morphed polygon. You can start with linear interpolation and see if that is sufficient; I suspect you will want to at least apply an easing function to the start and end of the transitions, maybe the smoothstep function.
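As a minimal sketch of that per-vertex interpolation (all names here are illustrative, assuming both poses share the same vertex count and ordering):

#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Classic smoothstep easing on [0, 1] to soften the start and end of a transition.
float smoothstep01(float t)
{
    return t * t * (3.0f - 2.0f * t);
}

// Blend two poses of the same mesh; t = 0 gives 'from', t = 1 gives 'to'.
std::vector<Vec2> morph(const std::vector<Vec2>& from,
                        const std::vector<Vec2>& to,
                        float t)
{
    float s = smoothstep01(t);
    std::vector<Vec2> out(from.size());
    for (std::size_t i = 0; i < from.size(); ++i)
    {
        out[i].x = from[i].x + (to[i].x - from[i].x) * s;
        out[i].y = from[i].y + (to[i].y - from[i].y) * s;
    }
    return out;
}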
As far as authoring these characters, have you considered using Blender? You can design and test your characters entirely within this package, then export the meshes as .obj files that you can easily import into your game.
Does anyone know a good algorithm for converting a vector path into a stroked path that is composed of triangle/quad faces? Ideally with round line joins.
Basically I am trying to draw a thick path whose colour is based upon a value that varies with the distance along the path. I'm thinking of converting the path to triangles/quads and texture mapping it, providing the distance along the path as a 1D texture coordinate that can then be used to retrieve the colours at the corners of the triangles and interpolate.
Any other suggestions on how to do this that won't look terrible and can be anti-aliased would be appreciated.
I'm using AGG for rendering, currently, but I could maybe use an alternative provided it doesn't have too many dependencies. I guess the back-end used for rendering doesn't really matter. Whilst AGG can stroke paths, the VertexSource interface does not allow for additional vertex information other than the x/y coordinates. Additionally getting my colour mapping into the rasterizer doesn't look feasible when using the normal conv_stroke.
Here's another great resource for understanding the mechanics of stroking a path.
For anyone looking for a solution to this, I found this useful:
https://keithp.com/~keithp/talks/cairo2003.pdf
So you can effectively convolve a regular polygon with the line to generate the mesh. It requires a slightly more complicated algorithm than the one outlined in the pdf in order to output triangles, but it's not actually too difficult to extend.
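For reference, here is a much-simplified sketch of the per-segment variant (butt-capped quads only, without the round joins that the convolution approach gives you), emitting the accumulated distance along the path as the 1D texture coordinate mentioned in the question; all names are illustrative:

#include <cmath>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };
struct StrokeVertex { float x, y; float dist; }; // dist = 1D texture coordinate

std::vector<StrokeVertex> strokeToTriangles(const std::vector<Vec2>& path,
                                            float halfWidth)
{
    std::vector<StrokeVertex> tris;
    float dist = 0.0f;
    for (std::size_t i = 0; i + 1 < path.size(); ++i)
    {
        Vec2 a = path[i], b = path[i + 1];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len <= 0.0f) continue;

        // Segment normal scaled to half the stroke width.
        float nx = -dy / len * halfWidth, ny = dx / len * halfWidth;

        StrokeVertex a0{a.x + nx, a.y + ny, dist};
        StrokeVertex a1{a.x - nx, a.y - ny, dist};
        StrokeVertex b0{b.x + nx, b.y + ny, dist + len};
        StrokeVertex b1{b.x - nx, b.y - ny, dist + len};

        // Two triangles per segment quad.
        tris.push_back(a0); tris.push_back(a1); tris.push_back(b0);
        tris.push_back(b0); tris.push_back(a1); tris.push_back(b1);

        dist += len;
    }
    return tris;
}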
You can also write a custom span generator for AGG along the lines of agg::span_gouraud_rgba but one that effectively does texture mapping instead.
In my OpenGL research (the OpenGL Red Book, I think) I came across an example of a model of an articulating robot arm consisting of an "upper arm", a "lower arm", a "hand", and five or more "fingers". Each of the sections should be able to move independently, but constrained by the "joints" (the upper and lower "arms" are always connected at the "elbow").
In immediate mode (glBegin/glEnd), they use one mesh of a cube, called "member", and use scaled copies of this single mesh for each of the parts of the arm, hand, etc. "Movements" were accomplished by pushing rotations onto the transformation matrix stack for each of the following joints: shoulder, elbow, wrist, knuckle - you get the picture.
Now, this solves the problem, but since it uses the old, deprecated immediate mode, I don't yet understand the solution to this problem in a modern OpenGL context. My question is: how should I approach this problem using modern OpenGL? In particular, should each individual "member" keep track of its own current transformation matrix, since matrix stacks are no longer kosher?
Pretty much. If you really need it, implementing your own stack-like interface is pretty simple. You would literally just store a stack, implement whatever matrix operations you need using your preferred math library, and have some way to set your desired matrix uniform from the top element of the stack.
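A minimal sketch of such a stack, using GLM and a plain std::vector; glUniformMatrix4fv is standard OpenGL, everything else here is just one possible shape for the class:

#include <vector>
#include <GL/glew.h> // or whatever GL loader you already use
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

class MatrixStack
{
public:
    MatrixStack() : stack_(1, glm::mat4(1.0f)) {}

    void push() { stack_.push_back(stack_.back()); }
    void pop()  { stack_.pop_back(); }
    void mult(const glm::mat4& m) { stack_.back() *= m; }
    const glm::mat4& top() const  { return stack_.back(); }

    // Upload the current top of the stack to a mat4 uniform (e.g. the model matrix).
    void upload(GLint location) const
    {
        glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(top()));
    }

private:
    std::vector<glm::mat4> stack_;
};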
In your robot arm example, suppose that the linkage is represented as a tree (or even a graph if you prefer), with relative transformations specified between each body. To draw the robot arm, you just do a traversal of this data structure and set the transformation of whichever child body to be the parent body's transformation composed with its own. For example:
def draw_linkage(body, view, visited):
    # Draw 'body' here using the accumulated 'view' matrix
    visited.add(body)
    for child, relative_xform in body.edges:
        if child in visited:
            continue
        # The child's transform is the parent's composed with its own relative one
        draw_linkage(child, view * relative_xform, visited)
In the case of rigid parts connected by joints, one usually treats each part as an individual submesh, loading the appropriate matrix before drawing it.
In the case of "connected"/"continous" meshes, like a face, animation usually happens through bones and deformation targets. Each of those defines a deformation and every vertex in the mesh is assigned a weight, how strong it is affected by each deformators. Technically this can be applied to a rigid limb model, too, giving each limb a single deformator nonzero weighting.
Any decent animation system keeps track of transformations (matrices) itself anyway; the OpenGL matrix stack functions have seldom been used in serious applications (ever since OpenGL was invented). But usually the transformations are stored in a hierarchy.
You generally do this at a level above OpenGL, using a scene graph.
The matrix transforms at each node in the scene graph tree map directly onto OpenGL matrices, so it's pretty efficient.