I am experimenting and currently have two objects in an iOS app using the VES/VTK framework, and I can move the vesActors in the scene. What I don't understand is how I can take the position of one object and apply it to a second object. In other words, how can I make two planes parallel (basically planar homography) within the VTK framework using actors, mappers, and/or transforms? Are there any examples of this?
If you can pick 3 points on/relative to your planes, you can use vtkLandmarkTransform (http://www.vtk.org/doc/nightly/html/classvtkLandmarkTransform.html), as demonstrated here: http://www.vtk.org/Wiki/VTK/Examples/Cxx/PolyData/AlignFrames
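For reference, here is a minimal VTK-side sketch of that approach (not VES-specific), assuming you have already picked three corresponding points on each plane; movingActor, sourcePts, and targetPts are placeholder names:

```cpp
#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkLandmarkTransform.h>
#include <vtkActor.h>

// Align the actor of one plane to another using three picked point pairs.
void alignActorToPlane(vtkActor *movingActor,
                       vtkPoints *sourcePts,   // 3 points on the moving plane
                       vtkPoints *targetPts)   // the 3 corresponding points on the fixed plane
{
  auto transform = vtkSmartPointer<vtkLandmarkTransform>::New();
  transform->SetSourceLandmarks(sourcePts);
  transform->SetTargetLandmarks(targetPts);
  transform->SetModeToRigidBody();   // rotation + translation only, no scaling
  transform->Update();

  // Applying the transform to the actor brings the two planes into alignment
  movingActor->SetUserTransform(transform);
}
```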
I am working on a project where I am trying to compare a 3D reconstructed model with a predefined 3D model of the same object to find the orientation shift between them. An example of these types of models can be seen here: example models.
I was thinking about maybe trying to use Kabsch's algorithm to compare them but I'm not completely sure that will work because they don't have the same number of vertices and I'm not sure if there's a good way to make that happen. Also, I don't know the correspondence information - which point in set 1 represents which point in set 2.
Regardless, I have the models' PLY files, which contain the coordinates of each vertex, so I'm looking for some way to compare them that will match up the corresponding features of the two objects. Here is a GitHub repo with both PLY files just in case that would be useful.
Thanks in advance for any help, I've been stuck trying to figure out this problem for a while!
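For reference, Kabsch does need one-to-one correspondences and equal point counts; if you can produce those (for example by sampling matched points from both meshes), the core of the algorithm is a centroid subtraction followed by an SVD. A minimal sketch using Eigen (my choice of library, not something from the original post):

```cpp
#include <Eigen/Dense>

// Kabsch: optimal rotation mapping point set P onto point set Q.
// P and Q are N x 3 matrices where row i of P corresponds to row i of Q.
Eigen::Matrix3d kabschRotation(const Eigen::MatrixXd &P, const Eigen::MatrixXd &Q)
{
  // Center both point sets on their centroids
  Eigen::RowVector3d pMean = P.colwise().mean();
  Eigen::RowVector3d qMean = Q.colwise().mean();
  Eigen::MatrixXd Pc = P.rowwise() - pMean;
  Eigen::MatrixXd Qc = Q.rowwise() - qMean;

  // Cross-covariance matrix and its SVD
  Eigen::Matrix3d H = Pc.transpose() * Qc;
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);

  // Guard against a reflection (determinant -1)
  double d = (svd.matrixV() * svd.matrixU().transpose()).determinant() < 0 ? -1.0 : 1.0;
  Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
  D(2, 2) = d;

  // Rotation such that R * p approximately equals q for corresponding points
  return svd.matrixV() * D * svd.matrixU().transpose();
}
```

When correspondences are unknown, ICP (iterative closest point) wraps this kind of step in a loop that re-estimates correspondences from nearest neighbours on each iteration.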
Is it possible to show a 3D scene with two viewports drawn over each other, displaying different meshes with different cameras, using QtWidgets/C++?
Can someone share a code example of how to do that?
Thanks
Using framegraphs:
Essentially, you need to add two QViewports to your framegraph, where e.g. one covers the left half and the other the right half of the screen. Along the branches, you can use two different QCameraSelectors. I guess by "drawn over each other" you mean next to each other; I don't think you can actually draw them over each other.
There's the Qt3D Multiviewport example. It's in QML but should be easily translatable to C++.
Then you can point one camera at the first object and another camera at the second object and simply place them somewhere differently in 3D space.
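A minimal C++ sketch of such a framegraph, assuming cameraA and cameraB are Qt3DRender::QCamera entities already added to your scene (names and the left/right split are just placeholders):

```cpp
#include <QRectF>
#include <Qt3DExtras/Qt3DWindow>
#include <Qt3DRender/QFrameGraphNode>
#include <Qt3DRender/QRenderSurfaceSelector>
#include <Qt3DRender/QViewport>
#include <Qt3DRender/QClearBuffers>
#include <Qt3DRender/QNoDraw>
#include <Qt3DRender/QCameraSelector>
#include <Qt3DRender/QCamera>

// Builds a framegraph with two viewports: the left half renders with cameraA,
// the right half with cameraB.
Qt3DRender::QFrameGraphNode *buildSplitFrameGraph(Qt3DExtras::Qt3DWindow *view,
                                                  Qt3DRender::QCamera *cameraA,
                                                  Qt3DRender::QCamera *cameraB)
{
    auto *surfaceSelector = new Qt3DRender::QRenderSurfaceSelector;
    surfaceSelector->setSurface(view);

    // Full-window viewport that only clears the buffers once per frame
    auto *mainViewport = new Qt3DRender::QViewport(surfaceSelector);
    mainViewport->setNormalizedRect(QRectF(0.0, 0.0, 1.0, 1.0));
    auto *clearBuffers = new Qt3DRender::QClearBuffers(mainViewport);
    clearBuffers->setBuffers(Qt3DRender::QClearBuffers::ColorDepthBuffer);
    clearBuffers->setClearColor(Qt::black);
    new Qt3DRender::QNoDraw(clearBuffers);   // the clear pass itself draws nothing

    // Left half of the window, rendered with cameraA
    auto *leftViewport = new Qt3DRender::QViewport(mainViewport);
    leftViewport->setNormalizedRect(QRectF(0.0, 0.0, 0.5, 1.0));
    (new Qt3DRender::QCameraSelector(leftViewport))->setCamera(cameraA);

    // Right half of the window, rendered with cameraB
    auto *rightViewport = new Qt3DRender::QViewport(mainViewport);
    rightViewport->setNormalizedRect(QRectF(0.5, 0.0, 0.5, 1.0));
    (new Qt3DRender::QCameraSelector(rightViewport))->setCamera(cameraB);

    return surfaceSelector;
}

// Usage: view->setActiveFrameGraph(buildSplitFrameGraph(view, cameraA, cameraB));
```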
Using two Qt3DWindows:
Simply use two Qt3DWindows next to each other and embed them.
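A minimal sketch of that approach, assuming you create and set a separate root entity/scene for each window (omitted here):

```cpp
#include <QApplication>
#include <QWidget>
#include <QHBoxLayout>
#include <Qt3DExtras/Qt3DWindow>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // Each Qt3DWindow has its own framegraph, camera, and root entity
    auto *viewLeft  = new Qt3DExtras::Qt3DWindow;
    auto *viewRight = new Qt3DExtras::Qt3DWindow;
    // ... build and set a separate scene (setRootEntity) for each view here ...

    // Embed both windows side by side in a plain QWidget
    QWidget container;
    auto *layout = new QHBoxLayout(&container);
    layout->addWidget(QWidget::createWindowContainer(viewLeft,  &container));
    layout->addWidget(QWidget::createWindowContainer(viewRight, &container));
    container.resize(1200, 600);
    container.show();

    return app.exec();
}
```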
I have two STL models of a scanned skull that are similar but not the same. When they are rendered side by side as actors in a vtkRenderer, they are facing different directions and one has been rotated 180 degrees.
Normally, I would just hard-code in the transformation so that they are both oriented facing the screen, but in this case, there will be lots of similar but different skulls uploaded, all of which might face different directions.
So, can anyone suggest a VTK-specific way to programmatically orient the skulls so they both face the same direction? If not a VTK-specific way, is there a generally accepted method to do this elsewhere in computer visualization software?
If you know the rotation angles for each skull, I would suggest using that knowledge (e.g., prepare a file with the rotation angles for each model) and rotating them on load.
If not, then you have a real problem. Assuming these skulls are pretty similar, I would suggest trying to align them to each other, so that as a result they face the same direction.
You can achieve that with dedicated software like Geomagic, CloudCompare, or MeshLab; you can also write your own algorithm (e.g., Least Squares Matching). You can also use a library with alignment algorithms already implemented, like PCL.
Manual approach: you can use a 3-point alignment method to achieve that. It will be way faster than trying to do it through manual rotations and translations. (How it works)
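If no landmarks or rotation angles are available, one automatic option within VTK itself is vtkIterativeClosestPointTransform. A minimal sketch, with skullA.stl and skullB.stl as hypothetical file names; note that ICP can settle into a wrong local minimum when the initial poses differ by something like 180 degrees, so a rough pre-alignment (e.g. by centroids or principal axes) may still be needed:

```cpp
#include <vtkSmartPointer.h>
#include <vtkSTLReader.h>
#include <vtkIterativeClosestPointTransform.h>
#include <vtkLandmarkTransform.h>
#include <vtkTransformPolyDataFilter.h>

int main()
{
  // skullA.stl is treated as the reference orientation; skullB.stl gets aligned to it
  auto readerA = vtkSmartPointer<vtkSTLReader>::New();
  readerA->SetFileName("skullA.stl");
  readerA->Update();

  auto readerB = vtkSmartPointer<vtkSTLReader>::New();
  readerB->SetFileName("skullB.stl");
  readerB->Update();

  auto icp = vtkSmartPointer<vtkIterativeClosestPointTransform>::New();
  icp->SetSource(readerB->GetOutput());
  icp->SetTarget(readerA->GetOutput());
  icp->GetLandmarkTransform()->SetModeToRigidBody();
  icp->SetMaximumNumberOfIterations(100);
  icp->StartByMatchingCentroidsOn();   // helps when the initial poses are far apart
  icp->Modified();
  icp->Update();

  // Bake the recovered transform into the second mesh before rendering it
  auto aligned = vtkSmartPointer<vtkTransformPolyDataFilter>::New();
  aligned->SetInputConnection(readerB->GetOutputPort());
  aligned->SetTransform(icp);
  aligned->Update();

  return 0;
}
```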
I'm developing a simple rendering engine as a pet project.
So far I'm able to load geometry data from Wavefront .obj files and render it onscreen separately. I know that the vertex coordinates stored in these files are defined in Model space, and to place them correctly in the scene I need to apply a Model-to-world transform matrix to each vertex position (am I even correct here?).
But how do I define those matrices for each object? Do I need to develop a separate tool for scene composition, in which I will move objects around and the "tool" will calculate the appropriate Model-to-world matrices based on translations, rotations and so on?
I would look into the "Scene Graph" data structure. It's essentially a tree, where nodes (may) define their transformations relative to their parent. Think of it this way. Each of your fingers moves relative to your hand. Moving your hand, rotating or scaling it also involves doing the same transformation on your fingers.
It is therefore beneficial to express all these transformations relative to one another, and to combine them to determine the overall transformation of each individual part of your model. As such, you don't just define the direct model-to-view transformation, but rather a transformation from each part to its parent.
This saves having to define a whole bunch of transformations yourself, which in the vast majority of cases are related in the way I described anyway. As such, you save yourself a lot of work by representing your models/scene in this manner.
Each of these relative transformations is usually a 4x4 affine transformation matrix. Combining them is just a matter of multiplying the matrices together.
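A minimal sketch of such a node, using GLM for the matrix math (my choice of library, not something the question requires):

```cpp
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A scene-graph node stores its transform relative to its parent; the
// model-to-world matrix is obtained by multiplying down the tree.
struct Node {
    glm::mat4 localTransform{1.0f};   // relative to the parent node
    std::vector<Node*> children;

    void draw(const glm::mat4 &parentWorld) const {
        glm::mat4 world = parentWorld * localTransform;   // combine with all ancestors
        // ... upload `world` as the model matrix and render this node's mesh ...
        for (const Node *child : children)
            child->draw(world);
    }
};

// Example: a wheel defined relative to its car. Moving the car moves the wheel too.
void example() {
    Node car, wheel;
    car.localTransform   = glm::translate(glm::mat4(1.0f), glm::vec3(10.0f, 0.0f, 0.0f));
    wheel.localTransform = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, -0.5f, 2.0f));
    car.children.push_back(&wheel);
    car.draw(glm::mat4(1.0f));        // identity at the root
}
```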
A description of Scene Graphs
In order to animate objects within a scene graph, you need to specify transformations relative to their parent in the tree. For instance, spinning wheels of a car need to rotate relative to the car's chassis. These transformations largely depend on what kind of animations you'd like to show.
So I guess the answer to your question is "mostly yes". You do need to define transformations for every single object in your scene if things are going to look good. However, organising the scene into a tree structure makes this process a lot easier to handle.
Regarding the creation of those matrices, what you have to do is export a scene from an authoring package.
That software can be the same one you used to model the objects in the first place: Maya, Lightwave...
Right now you have your objects independent of each other.
So, using the package of your choice, either find a file format that lets you export the scene you have made by positioning each of your meshes where you want them, like FBX or glTF, or make your own.
Either way, there is a scene structure containing models, transforms, lights, cameras: everything you want in your engine.
After that you have to parse that structure.
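As one way to do that parsing (my suggestion; the answer only names FBX and glTF as formats), a library such as Assimp can load the exported scene and let you walk the node hierarchy, accumulating each node's transform on the way down. A minimal sketch:

```cpp
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

// Recursively visit the scene hierarchy, combining each node's transform with its parent's.
void visitNode(const aiNode *node, const aiMatrix4x4 &parentWorld)
{
    aiMatrix4x4 world = parentWorld * node->mTransformation;   // model-to-world for this node
    // ... create your engine's objects for node->mMeshes here, using `world` ...
    for (unsigned int i = 0; i < node->mNumChildren; ++i)
        visitNode(node->mChildren[i], world);
}

int main()
{
    Assimp::Importer importer;
    const aiScene *scene = importer.ReadFile("scene.fbx",
                                             aiProcess_Triangulate | aiProcess_GenNormals);
    if (!scene)
        return 1;
    visitNode(scene->mRootNode, aiMatrix4x4());   // default-constructed matrix is identity
    return 0;
}
```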
You'll find here some explanations regarding how you could architect that:
https://nlguillemot.wordpress.com/2016/11/18/opengl-renderer-design/
Good luck,
I want to create a 2D game with monsters built as a custom vertex mesh and a texture map. I want to use this mesh to provide smooth vector animations. I'm using OpenGL ES 2.0.
For now the best idea I have is to write a simple editor where I can create a mesh and make keyframe-based animations by changing the position of each vertex and specifying the keyframe interpolation technique (linear, quadratic, and so on).
I also have some understanding of bone animation (and skinning based on bones), but I'm not sure I will be able to create good skeletons for my monsters.
I'm not sure this is a good way to go. Can you suggest some better ideas and/or editors or libraries for such mesh animations?
PS: I'm using C++ now, so C++ libraries are most welcome.
You said this is a 2D game, so I'm going to assume your characters are flat polygons onto which you apply a texture map. Please add more detail to your question if this is not the case.
As far as the C++ part, I think the same principles used for 3D blend shape animation can be applied to this case. For each character you will have a list of possible 'morph targets' or poses, each being a different polygon shape with the same number of vertices. The character's AI will determine when to change from one to another, and how long a transition takes. So at any given point in time your character can either be in a fixed state, matching one of your morph targets, or in a transition state between two poses. The first case is trivial; the second is handled by interpolating the vertices of the two polygons one by one to arrive at a morphed polygon. You can start with linear interpolation and see if that is sufficient; I suspect you may want to at least apply an easing function to the start and end of the transitions, maybe the smoothstep function.
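A minimal sketch of that interpolation step, assuming 2D vertex positions and poses that share a vertex count (names are placeholders):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Smoothstep easing: 0 at t = 0, 1 at t = 1, with zero slope at both ends.
float smoothstep(float t)
{
    t = std::clamp(t, 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Blend two poses with the same vertex count into `out`, ready to upload as a vertex buffer.
void blendPoses(const std::vector<Vec2> &from,
                const std::vector<Vec2> &to,
                float t,                        // 0 = fully `from`, 1 = fully `to`
                std::vector<Vec2> &out)
{
    const float w = smoothstep(t);
    out.resize(from.size());
    for (std::size_t i = 0; i < from.size(); ++i) {
        out[i].x = from[i].x + (to[i].x - from[i].x) * w;
        out[i].y = from[i].y + (to[i].y - from[i].y) * w;
    }
}
```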
As far as authoring these characters, have you considered using Blender? You can design and test your characters entirely within this package, then export the meshes as .obj files that you can easily import into your game.