What are good mesh animation techniques? - C++

I want to create a 2D game with monsters built as a custom vertex mesh plus a texture map. I want to use this mesh to provide smooth vector animations. I'm using OpenGL ES 2.0.
For now the best idea I have is to write a simple editor where I can create a mesh and make key-frame based animation by changing the position of each vertex and specifying the key-frame interpolation technique (linear, quadratic and so on).
I also have some understanding of bone animation (and skinning based on bones), but I'm not sure I will be able to produce good skeletons for my monsters.
I'm not sure this is a good way to go. Can you suggest some better ideas and/or editors and libraries for such mesh animations?
PS: I'm using C++ now, so C++ libraries are the most welcome.

You said this is a 2D game, so I'm going to assume your characters are flat polygons onto which you apply a texture map. Please add more detail to your question if this is not the case.
As far as the C++ part goes, I think the same principles used for 3D blend shape animation can be applied to this case. For each character you will have a list of possible 'morph targets' or poses, each being a different polygon shape with the same number of vertices. The character's AI will determine when to change from one to another, and how long a transition takes. So at any given point in time your character is either in a fixed state, matching one of your morph targets, or in a transition state between two poses.
The first case is trivial; the second is handled by interpolating the vertices of the two polygons one by one to arrive at a morphed polygon. You can start with linear interpolation and see if that is sufficient. I suspect you may want to at least apply an easing function to the start and end of the transitions, maybe the smoothstep function.
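Here is a minimal sketch of that per-vertex blend, assuming a simple Vec2 type (all names here are illustrative, not from any particular library):

#include <cassert>
#include <cstddef>
#include <vector>

struct Vec2 { float x, y; };

// Smoothstep easing: 0 at t = 0, 1 at t = 1, with zero slope at both ends.
float smoothstep(float t)
{
    return t * t * (3.0f - 2.0f * t);
}

// Blend two poses with the same vertex count. t is the normalized
// transition time in [0, 1]: t = 0 yields 'from', t = 1 yields 'to'.
std::vector<Vec2> blendPoses(const std::vector<Vec2>& from,
                             const std::vector<Vec2>& to,
                             float t)
{
    assert(from.size() == to.size());
    const float s = smoothstep(t);
    std::vector<Vec2> result(from.size());
    for (std::size_t i = 0; i < from.size(); ++i) {
        result[i].x = from[i].x + (to[i].x - from[i].x) * s;
        result[i].y = from[i].y + (to[i].y - from[i].y) * s;
    }
    return result; // upload to your vertex buffer each frame of the transition
}

During a transition you would call blendPoses each frame with the normalized elapsed time and upload the result to your VBO.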
As far as authoring these characters, have you considered using Blender? You can design and test your characters entirely within this package, then export the meshes as .obj files that you can easily import into your game.

Related

How to place 3D objects in a scene?

I'm developing a simple rendering engine as a pet project.
So far I'm able to load geometry data from Wavefront .obj files and render it onscreen separately. I know that vertex coordinates stored in these files are defined in Model space, and that to place them correctly in the scene I need to apply a Model-to-world transform matrix to each vertex position (am I even correct here?).
But how do I define those matrices for each object? Do I need to develop a separate tool for scene composition, in which I will move objects around and the "tool" will calculate appropriate Model-to-world matrices based on translations, rotations and so on?
I would look into the "Scene Graph" data structure. It's essentially a tree, where nodes (may) define their transformations relative to their parent. Think of it this way. Each of your fingers moves relative to your hand. Moving your hand, rotating or scaling it also involves doing the same transformation on your fingers.
It is therefore beneficial to express all these transformations relative to one another, and combine them to determine the overall transformation of each individual part of your model. As such you don't just define the direct model-to-view transformation, but rather a transformation from each part to its parent.
This saves you from having to define a whole bunch of transformations yourself, which in the vast majority of cases are related in the way I described anyway. Representing your models/scene in this manner therefore saves you a lot of work.
Each of these relative transformations is usually a 4x4 affine transformation matrix. Combining them is just a matter of multiplying them together to obtain the overall transformation.
A description of Scene Graphs
In order to animate objects within a scene graph, you need to specify transformations relative to their parent in the tree. For instance, spinning wheels of a car need to rotate relative to the car's chassis. These transformations largely depend on what kind of animations you'd like to show.
So I guess the answer to your question is "mostly yes". You do need to define transformations for every single object in your scene if things are going to look good. However, organising the scene into a tree structure makes this process a lot easier to handle.
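A minimal sketch of such a node, assuming GLM for the 4x4 matrix type (any matrix library with an operator* works the same way):

#include <vector>
#include <glm/glm.hpp> // assumed matrix library; any mat4 type works

struct SceneNode {
    glm::mat4 localTransform{1.0f};   // transform relative to the parent
    std::vector<SceneNode*> children;

    // Walk the tree, accumulating ancestor transforms: this node's
    // model-to-world matrix is simply parentWorld * localTransform.
    void update(const glm::mat4& parentWorld = glm::mat4(1.0f))
    {
        const glm::mat4 world = parentWorld * localTransform;
        // ... draw with, or store, 'world' as this node's model-to-world matrix ...
        for (SceneNode* child : children)
            child->update(world);
    }
};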
Regarding the creation of those matrices, what you have to do is export a scene from an authoring package.
That software can be the same one you used to model the objects in the first place: Maya, Lightwave...
Right now you have your objects independent of each other.
So, using the package of your choice, either find a file format that lets you export a scene you would have made by positioning each of your meshes where you want them, like FBX or glTF, or make your own.
Either way there is a scene structure, containing models, transforms, lights, cameras, everything you want in your engine.
After that you have to parse that structure.
You'll find here some explanations regarding how you could architect that:
https://nlguillemot.wordpress.com/2016/11/18/opengl-renderer-design/
Good luck,

Generate volume out of 3D matrix in C++

I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that platform-independently and as fast as possible?
I have already read a bit about using OpenGL, but then I run into the following problem: the matrix can contain up to $4\cdot10^9$ values. If I try to load the complete matrix into RAM, it will exhaust memory, so a direct draw from the matrix is impossible. Furthermore, I only found functions for drawing 2D images in OpenGL. Is there a way to draw 3D pixels in OpenGL? Or should I rather use another approach?
I do not need any movement functionality (at least not at the moment), I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OpenGL, taken from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this either. You can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to one pixel, but this is probably completely unsatisfactory for you because it will only draw a few pixels to the screen (you won't see much at high resolutions).
A general solution would be to render a cube out of normal triangles for each point, sorted back to front if you need alpha blending. If you want a more specific answer you will need to narrow your request. Ray tracing also has merit in volume rendering. Learn more on volume rendering.
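As a starting point, something like this (an untested sketch; all names are illustrative) gathers only the non-empty voxels into a point list, one z-slice at a time, so the full matrix never has to be resident in RAM:

#include <cstdint>
#include <vector>

struct VoxelPoint { float x, y, z; std::uint8_t value; };

// Collect only the voxels worth drawing from one z-slice of the matrix.
// Skipping near-empty voxels also keeps the point count manageable.
void appendSlice(const std::uint8_t* slice, int width, int height, int z,
                 std::uint8_t threshold, std::vector<VoxelPoint>& out)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            const std::uint8_t v = slice[y * width + x];
            if (v >= threshold)
                out.push_back({ float(x), float(y), float(z), v });
        }
}
// Upload 'out' to a VBO and draw with GL_POINTS, or emit a small cube
// per surviving voxel as described above.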

Techniques for generating a 2D game world

I want to make a 2D game in C++ using the Irrlicht engine. In this game, you will control a tiny ship in a cave of some sort. This cave will be created automatically (the game will have random levels) and will look like this:
Suppose I already have the points of the polygon of the inside of the cave (the white part). How should I render this shape on the screen and use it for collision detection? From what I've read around different sites, I should use a triangulation algorithm to make meshes of the walls of the cave (the black part) using the polygon of the inside of the cave (the white part). Then, I can also use these meshes for collision detection. Is this really the best way to do it? Do you know if Irrlicht has some built-in functions that can help me achieve this?
Any advice will be appreciated.
Describing how to get an arbitrary polygonal shape to render using a given 3D engine is quite a lengthy process. Suffice to say that pretty much all 3D rendering is done in terms of triangles, and if you didn't use a tool to generate a model that is already composed of triangles, you'll need to generate triangles from whatever data you have there. Triangulating either the black space or the white space is probably the best way to do it, yes. Then you can build up a mesh or vertex list from that, and render those triangles that way. The triangles in the list then also double up for collision detection purposes.
I doubt Irrlicht has anything for triangulation as it's quite specific to your game design and not a general approach most people would take. (Typically they would have a tool which permits generation of the game geometry and the navigation geometry side by side.) It looks like it might be quite tricky given the shapes you have there.
One option is to use the map (image mask) directly to test for collision.
For example,
if (map_points[sprite.x][sprite.y] == BLACK) {
    // collision detected
}
assuming that your objects are images and they aren't real polygons.
In case you use real polygons you can have a "points sample" for every object shape,
and check the sample for collisions.
To check whether a point is inside or outside your polygon, you can simply count crossings. You know (0,0) is outside your polygon. Now draw a line from there to your test point (X,Y). If this line crosses an odd number of polygon edges (e.g. 1), it's inside the polygon. If the line crosses an even number of edges (e.g. 0 or 2), the point (X,Y) is outside the polygon. It's useful to run this algorithm on paper once to convince yourself.
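The classic implementation casts a horizontal ray instead of a line from the origin, but the odd/even logic is the same. A self-contained sketch:

#include <cstddef>
#include <vector>

struct Point2 { float x, y; };

// Ray-casting point-in-polygon test: cast a horizontal ray from (x, y)
// and count how many polygon edges it crosses. Odd count = inside.
bool pointInPolygon(const std::vector<Point2>& poly, float x, float y)
{
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does edge (j, i) straddle the horizontal line through y,
        // and does it cross to the right of x?
        if ((poly[i].y > y) != (poly[j].y > y)) {
            const float xCross = poly[j].x + (y - poly[j].y) *
                (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (x < xCross)
                inside = !inside;
        }
    }
    return inside;
}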

Implementing the Marching Cubes algorithm?

From my last question: Marching Cube Question
However, I am still unclear on:
how to create an imaginary cube/voxel to check if a vertex is below the isosurface?
how do I know which vertex is below the isosurface?
how does each cube/voxel determine which cubeindex/surface to use?
how do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL. (C is a little bit out of my hands)
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source. That's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
#include <cmath> // for std::sqrt

float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from the point to the origin for every point in your scalar field. If the isovalue is the radius, you have just figured out a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which ones are outside. You want to use the less-than or greater-than operators because the surface divides the space in two. When you know which points in your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles per cube. The positions of the mesh vertices can be computed by interpolating along the intersected edges to find the actual intersection points.
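That edge interpolation can look like this (a sketch with an assumed Vec3 type; call it only for edges whose corner scalars straddle the isovalue):

struct Vec3 { float x, y, z; };

// Place a mesh vertex on a cube edge by linear interpolation. p1 and p2
// are the edge's corner positions, s1 and s2 their scalar values.
Vec3 interpolateEdge(const Vec3& p1, const Vec3& p2,
                     float s1, float s2, float isovalue)
{
    const float t = (isovalue - s1) / (s2 - s1); // in (0, 1) for intersected edges
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z) };
}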
If you want to represent say an apple with scalar fields, you would either need to get the source data set to plug in to your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expand from there.
1) It depends on your implementation. You'll need a data structure where you can look up the values at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations on this, but in general, start with the first corner, and just check the values of all 8 corners of the cube.
3) Most (fast) algorithms create a bitmask to use as an index into a static lookup table of cases; there are only 256 possible corner configurations (see the sketch after this list).
4) Once you've made the triangles from the triTable, you can use OpenGL to render them.
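Here is what the bitmask from 2) and 3) typically looks like (corner ordering must match the triTable you are using, e.g. Paul Bourke's, linked below):

// Build the 8-bit case index from the scalars at the cube's corners.
// Bit i is set when corner i is below the isosurface; the index selects
// a row of edgeTable/triTable. 0 and 255 mean the cube is entirely
// outside or inside, producing no triangles.
int computeCubeIndex(const float corner[8], float isovalue)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < isovalue)
            cubeIndex |= (1 << i);
    return cubeIndex;
}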
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to put the point cloud data into a cubic volume. Gaussian splatting is an option here.
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/

How do I render thick 2D lines as polygons?

I have a path made up of a list of 2D points. I want to turn these into a strip of triangles in order to render a textured line with a specified thickness (and other such things). So essentially the list of 2D points need to become a list of vertices specifying the outline of a polygon that if rendered would render the line. The problem is handling the corner joins, miters, caps etc. The resulting polygon needs to be "perfect" in the sense of no overdraw, clean joins, etc. so that it could feasibly be extruded or otherwise toyed with.
Are there any simple resources around that can provide algorithm insight, code or any more information on doing this efficiently?
I absolutely DO NOT want a full-fledged 2D vector library (cairo, antigrain, OpenVG, etc.) with curves, arcs, dashes and all the bells and whistles. I've been digging through multiple source trees of OpenVG implementations and other things to find some insight, but it's all terribly convoluted.
I'm definitely willing to code it myself, but there are many degenerate cases (small segments + thick widths + sharp corners) that create all kinds of join issues. Even a little help would save me hours of trying to deal with them all.
EDIT: Here's an example of one of those degenerate cases that causes ugliness if you were simply to go from vertex to vertex. Red is the original path. The orange blocks are rectangles drawn at a specified width aligned and centered on each segment.
Oh well - I've tried to solve that problem myself. I wasted two months on a solution that tried to solve the zero-overdraw problem. As you've already found out, you can't deal with all degenerate cases and have zero overdraw at the same time.
You can however use a hybrid approach:
Write yourself a routine that checks if the joins can be constructed from simple geometry without problems. To do so you have to check the join-angle, the width of the line and the length of the joined line-segments (line-segments that are shorter than their width are a PITA). With some heuristics you should be able to sort out all the trivial cases.
I don't know what your average line data looks like, but in my case more than 90% of the wide lines had no degenerate cases.
For all other lines:
You've most probably already found out that if you tolerate overdraw, generating the geometry is a lot easier. Do so, and let a polygon CSG algorithm and a tessellation algorithm do the hard work.
I've evaluated most of the available tessellation packages, and I ended up with the GLU tessellator. It was fast, robust, and never crashed (unlike most other algorithms). It was free and the license allowed me to include it in a commercial program. The quality and speed of the tessellation are okay. You will not get Delaunay-triangulation quality, but since you just need the triangles for rendering, that's not a problem.
Since I disliked the tessellator API, I lifted the tessellation code from the free SGI OpenGL reference implementation, rewrote the entire front-end and added memory pools to get the number of allocations down. It took two days to do this, but it was well worth it (roughly a factor-five performance improvement). The solution ended up in a commercial OpenVG implementation, btw :-)
If you're rendering with OpenGL on a PC, you may want to move the tessellation/CSG job from the CPU to the GPU and use stencil-buffer or z-buffer tricks to remove the overdraw. That's a lot easier and may be even faster than CPU tessellation.
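For reference, a skeletal use of the GLU tessellator looks roughly like this (a sketch; the exact callback function-pointer cast varies by platform, e.g. it needs CALLBACK on Windows):

#include <GL/glu.h>
#include <array>
#include <vector>

static std::vector<GLdouble> g_triangles; // x,y of every emitted triangle vertex

static void onVertex(void* data)
{
    const GLdouble* v = static_cast<const GLdouble*>(data);
    g_triangles.push_back(v[0]);
    g_triangles.push_back(v[1]);
}
// Registering an edge-flag callback (even an empty one) forces the
// tessellator to emit independent triangles instead of fans and strips.
static void onEdgeFlag(GLboolean) {}
static void onBegin(GLenum) {}
static void onEnd() {}

// 'contour' holds (x, y, 0) triples; it must stay alive until EndPolygon.
void tessellate(std::vector<std::array<GLdouble, 3>>& contour)
{
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)())onVertex);
    gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)())onEdgeFlag);
    gluTessCallback(tess, GLU_TESS_BEGIN,     (void (*)())onBegin);
    gluTessCallback(tess, GLU_TESS_END,       (void (*)())onEnd);

    gluTessBeginPolygon(tess, nullptr);
    gluTessBeginContour(tess);
    for (auto& v : contour)
        gluTessVertex(tess, v.data(), v.data());
    gluTessEndContour(tess);
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}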
I just found this amazing work:
http://www.codeproject.com/Articles/226569/Drawing-polylines-by-tessellation
It seems to do exactly what you want, and its licence allows use even in commercial applications. Plus, the author did a truly great job of detailing his method. I'll probably give it a shot at some point, to replace my own not-nearly-as-perfect implementation.
A simple method off the top of my head:
Bisect the angle at each 2D vertex; this will create a nice miter line. Then move along that line, both inward and outward, by half your "thickness": you now have your inner and outer polygon points. Move to the next point and repeat the same process, building your new polygon points along the way. Then apply a triangulation to get your render-ready vertices.
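A sketch of that construction (Vec2 and the helper are illustrative; note it does not yet handle the degenerate near-180° joins discussed above, where the miter length blows up):

#include <cmath>

struct Vec2 { float x, y; };

static Vec2 normalize(Vec2 v)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// For an interior vertex p with neighbours prev and next, offset p along
// the angle bisector to get the outer and inner polygon points. The offset
// is scaled so the stroke keeps a constant width, which is also why the
// miter length explodes as the join angle approaches 180 degrees.
void miterPoints(Vec2 prev, Vec2 p, Vec2 next, float halfWidth,
                 Vec2& outer, Vec2& inner)
{
    const Vec2 d0 = normalize({ p.x - prev.x, p.y - prev.y });
    const Vec2 d1 = normalize({ next.x - p.x, next.y - p.y });
    const Vec2 tangent = normalize({ d0.x + d1.x, d0.y + d1.y });
    const Vec2 miter   = { -tangent.y, tangent.x };  // bisector direction
    const Vec2 normal0 = { -d0.y, d0.x };            // normal of the first segment
    const float len = halfWidth / (miter.x * normal0.x + miter.y * normal0.y);
    outer = { p.x + miter.x * len, p.y + miter.y * len };
    inner = { p.x - miter.x * len, p.y - miter.y * len };
}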
I ended up having to get my hands dirty and write a small ribbonizer to solve a similar problem.
For me the issue was that I wanted fat lines in OpenGL that did not have the kinds of artifacts that I was seeing with OpenGL on the iPhone. After looking at various solutions (Bézier curves and the like) I decided it was probably easiest to just make my own. There are a couple of different approaches.
One approach is to find the angle of intersection between two segments and then move along that intersection line a certain distance away from the surface and treat that as a ribbon vertex. I tried that and it did not look intuitive; the ribbon width would vary.
Another approach is to actually compute a normal to the surface of the line segments and use that to compute the ideal ribbon edge for that segment, and to do actual intersection tests between ribbon segments. This worked well except that for sharp corners the ribbon line-segment intersections were too far away (if the inter-segment angle approached 180°).
I worked around the sharp-angle issue with two approaches. The Paul Bourke line intersection algorithm (which I used in an unoptimized way) suggested detecting whether the intersection was inside the segments. Since both segments are identical I only needed to test one of them for intersection. I could then arbitrate how to resolve this, either by fudging a best point between the two ends or by putting on an end cap. Both approaches look good, though the end-cap approach may throw off the polygon front/back facing ordering for OpenGL.
See http://paulbourke.net/geometry/lineline2d/
See my source code here : https://gist.github.com/1474156
I'm interested in this too, since I want to perfect my mapping application's (Kosmos) drawing of roads. One workaround I used is to draw the polyline twice, once with a thicker line and once with a thinner one in a different color. But this is not really a polygon, it's just a quick way of simulating one. See some samples here: http://wiki.openstreetmap.org/wiki/Kosmos_Rendering_Help#Rendering_Options
I'm not sure if this is what you need.
I think I'd reach for a tessellation algorithm. It's true that in most cases where these are used the aim is to reduce the number of vertices to optimise rendering, but in your case you could parameterise it to retain all the detail - and the possibility of optimising may come in useful.
There are numerous tessellation algorithms and code around on the web - I wrapped up a pure C one in a DLL a few years back for use with a Delphi landscape renderer, and they are not an uncommon subject for advanced graphics coding tutorials and the like.
See if Delaunay triangulation can help.
In my case I could afford to overdraw. I just drew circles with radius = width/2 centered on each of the polyline's vertices.
Artifacts are masked this way, and it is very easy to implement, if you can live with "rounded" corners and some overdrawing.
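One such disc as a triangle fan (the 16-segment count is an arbitrary choice):

#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Build a filled disc (triangle fan) of radius width/2 centered on a
// polyline vertex; drawing one per vertex rounds off the joins and caps.
std::vector<Pt> discFan(Pt center, float radius, int segments = 16)
{
    std::vector<Pt> fan;
    fan.push_back(center); // fan hub
    for (int i = 0; i <= segments; ++i) {
        const float a = 6.2831853f * i / segments;
        fan.push_back({ center.x + radius * std::cos(a),
                        center.y + radius * std::sin(a) });
    }
    return fan; // draw with glDrawArrays(GL_TRIANGLE_FAN, 0, fan.size())
}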
From your image it looks like you are drawing boxes around the line segments with FILL on, in orange. Doing so is going to create bad overdraw for sure. So the first thing to do would be to not render the black border, and the fill color can be opaque.
Why can't you use the GL_LINES primitive to do what you intend? You can specify width, filtering, smoothness, texture, anything. You can render all the vertices using glDrawArrays(). I know this is not something you have in mind, but as you are focusing on 2D drawing, this might be an easier approach. (Search for textured lines, etc.)
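A minimal fixed-function sketch of that suggestion (note that line widths above 1.0 are not guaranteed on core-profile OpenGL; check GL_ALIASED_LINE_WIDTH_RANGE before relying on this):

#include <GL/gl.h>

// Draw a connected path as a wide line strip. For separate segments,
// use GL_LINES with paired endpoints instead.
void drawPath(const float* xyPairs, int vertexCount, float width)
{
    glLineWidth(width);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, xyPairs);
    glDrawArrays(GL_LINE_STRIP, 0, vertexCount);
    glDisableClientState(GL_VERTEX_ARRAY);
}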