Make a character follow uneven terrain (2D) - C++

I'd like to make a game where the terrain is uneven and is based on a PNG. How is this done in theory, given the object's vec2 position and its angle? For instance, if there is a hill, the character should rotate to match the angle of the hill. Thanks
2D, like Mario.

I think you are talking about a heightmap, which is your PNG, which is then converted to a 3D triangle mesh. You need to use the information from the mesh (or the PNG color value) to calculate the current height at which to place your character.
If this is a flying character you're pretty much done here, but in your case you need to calculate the normal vector of the current triangle the character is standing on. This is pretty simple using the cross product of the two triangle edge vectors: (V2 - V1) x (V3 - V1). That should give your character's angle as well. You could also average this vector by including the normals of the surrounding triangles.
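For illustration, a minimal sketch of that cross product, assuming a bare-bones Vec3 type rather than any particular engine's math library:

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Face normal of triangle (V1, V2, V3): (V2 - V1) x (V3 - V1), normalized.
Vec3 faceNormal(Vec3 v1, Vec3 v2, Vec3 v3) {
    Vec3 n = cross(sub(v2, v1), sub(v3, v1));
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}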
Btw, when you have the normals of the triangle you can apply some basic shading to the ground as well.
Added: The OP changed the question to be a 2D problem. The above approach still works, but it's much easier in 2D.
Use the height values not as triangles but as a line (silhouette) and calculate the normal of the current line segment instead. That is, create a vector, v, between the current height value and the next. The normal of that vector is then n = <-v.y, v.x>. Use that to derive the angle of your character.
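As a rough sketch of the 2D case (the Character struct and the heights array sampled from the PNG are assumptions for illustration):

#include <cmath>
#include <vector>

struct Character { float x, y, angle; };

// Snap the character to the ground line and tilt it to match the slope.
// 'heights' holds one ground height per horizontal pixel of the level.
// (Bounds checks omitted for brevity.)
void snapToTerrain(Character& c, const std::vector<float>& heights)
{
    int i = static_cast<int>(c.x);        // current height sample
    float t = c.x - i;                    // fraction toward the next sample
    float h0 = heights[i], h1 = heights[i + 1];

    c.y = h0 + (h1 - h0) * t;             // interpolated ground height

    // The ground tangent is v = (1, h1 - h0); its normal is n = (-v.y, v.x).
    // The slope angle follows directly from the tangent:
    c.angle = std::atan2(h1 - h0, 1.0f);  // radians, 0 on flat ground
}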

You need a mapping function that converts the PNG data to a 3D representation... This mapping function can be simple, as in interpreting greyscale values in the PNG as altitude or via human guidance, or it can be complex, as in the shadow detection used in advanced computer vision algorithms. In either case, you would then move your character based on the data gleaned through the mapping function.
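A minimal sketch of the simple case, treating each greyscale pixel as an altitude (the decoded-PNG byte buffer and the scale factor are assumptions for illustration):

#include <cstdint>
#include <vector>

// 'pixels' is assumed to be decoded 8-bit greyscale PNG data, row-major.
std::vector<float> heightsFromGreyscale(const std::vector<uint8_t>& pixels,
                                        float scale = 0.1f)
{
    std::vector<float> heights(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i)
        heights[i] = pixels[i] * scale;   // brighter pixel == higher ground
    return heights;
}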

In C++, I would suggest looking for a 3D game engine that supports more than just plain terrain.
You could start with this list
That is, of course, if you're not trying to start from scratch, in which case you need to look for algorithms first.
Edit:
Since it's the game and not the terrain that is 2D, if you want to make your environment out of an image, you're in for quite a bit of edge detection.

Related

How do you store voxel data?

I've been looking online and I'm impressed by the capabilities of using voxel data, especially for terrain building and manipulation. The problem is that voxels are never clearly explained on any site that I visited, nor is how to use/implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question, but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxel data so that the contents can be modified at run time as fast as possible? I have looked online and couldn't find anything which explained how to store the data. Lists of objects, arrays, etc.
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using it to only model simple objects but I will eventually be using it for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this function is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. For instance, in medical applications this could be the density of certain tissues. To represent this digitally, one common approach is to simply make slices of the data: imagine images in the (X,Y)-plane, and shifting the z-value to obtain a stack of images. If the slices are close to each other, the images can be displayed in a video sequence, as for instance seen on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value which is represented as a grayscale.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane into a regular grid of pixels, we now divide a 3D region into a regular grid of voxels. Again, a scalar value can be given to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the colors of the box itself, not of its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from a third dimension and completely observe it, but we cannot look at a 3D voxel space and observe it completely as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., creating a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another way is to look at so-called "iso-surfaces": we create surfaces in the 3D space for which each point has the same scalar value. For a medical scan, this allows one to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric, they are 2D-shapes bent, twisted, stretched and deformed to be embedded in the 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data, is by making sure the surface is again an iso-surface of some function. As an example: F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere with radius R, centered around the origin. For all points (x',y',z') of the sphere, F(x',y',z') = 0. Even more, for points inside the sphere, F < 0, and for points outside of the sphere, F > 0.
A way to "construct" such a function is by creating a distance map, i.e., creating volumetric data such that every point F(x,y,z) indicates the distance to the surface. Of course, the surface is the collection of all the points for which the distance is 0 (so, again, the iso-surface with value 0 just as with the sphere above).
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be given in a 3D matrix. However, this is huge! If you want the resolution doubled, you need 8x as much storage, so in general this is not an efficient solution. This will work for smaller examples, but does not scale very well.
An octree structure is, afaik, the most common structure to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a recent approach.
Octrees allow easier navigation to neighbouring cells, parent cells, child cells, and so on. (I am assuming the concepts of octrees, or quadtrees in 2D, are known?) However, if many leaf cells are located at the finest resolution, this data structure comes with a huge overhead! So, is this better than a 3D array? It somewhat depends on what volumetric data you want to work with, and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric, hence they will not require many voxels to hold relevant data (hence: "sparse" octrees). Referring back to the distance maps, the only relevant data are the points having value 0. The other points can have any value, but these do not matter (in some cases, the sign is still considered, to denote "interior" and "exterior", but the value itself is not required if only the surface is needed).
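For a feel of the structure, a bare-bones octree node might look like the sketch below (the float payload is an assumption; any voxel datum works). Empty children stay null, which is exactly where the savings over a dense 3D array come from:

#include <array>
#include <memory>

// Sparse octree node: leaves carry a value, interior nodes subdivide space
// into 8 children. A null child means that whole sub-region is empty.
struct OctreeNode {
    float value = 0.0f;                               // meaningful for leaves
    std::array<std::unique_ptr<OctreeNode>, 8> child; // null == empty region

    bool isLeaf() const {
        for (const auto& c : child)
            if (c) return false;
        return true;
    }
};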
How to use
If by "use", you are wondering how to render them, then you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, to be rendered in any classical way. Instead of translating to triangles, you can also look at volume rendering to render a "3D sampled data set" (i.e., voxels) as such (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki-link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be the best for you, but they are memory intensive. As BWG pointed out, an octree is another popular alternative. Search for Sparse Voxel Octrees for a more recent approach.
In popular usage during the '90s and '00s, 'voxel' could mean somewhat different things, which is probably one reason you have found it hard to find consistent information. In technical imaging literature, it means a 3D volume element. Oftentimes, though, it is used to describe what is somewhat more clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engine in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.
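As an example of a purely procedural heightfield, the function below stores nothing at all; the particular mix of sine waves is arbitrary, chosen only to produce rolling hills:

#include <cmath>

// Procedural heightfield: terrain height defined by a function, so there is
// no stored terrain data structure to manage at all.
float terrainHeight(float x, float z)
{
    return 8.0f * std::sin(x * 0.05f)
         + 4.0f * std::cos(z * 0.08f)
         + 1.5f * std::sin((x + z) * 0.2f);
}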

Draw zones instead of points on a radar image

I don't really know how to explain it in a better way, so please look at the following images:
This is what I create at the moment
This is what I wish to create instead
I am currently using C++ with Qt 4.8.
Do you know a way that would allow me to reach my goal? Using a library or a transformation matrix? Or something else?
I am a total newbie to image manipulation, so every piece of advice is precious to me.
Thanks
EDIT :
I draw each colored pixel from Lat/Long measurements, if that helps.
Use what is called a morphological operator. In this case, you would require the 'open' operator. OpenCV provides a pretty good implementation (and documentation) of these, which can be found here.
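A sketch of how that might look with OpenCV (the file names and the 15x15 elliptical kernel are placeholders to tune per image; depending on whether you need to remove speckle or fill gaps between nearby points, cv::MORPH_CLOSE may be worth trying as well):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat radar = cv::imread("radar.png");   // hypothetical input file
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                                               cv::Size(15, 15));
    cv::Mat zones;
    // Apply the morphological 'open' operator (erode, then dilate)
    cv::morphologyEx(radar, zones, cv::MORPH_OPEN, kernel);
    cv::imwrite("zones.png", zones);
    return 0;
}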
Drawing circles instead of points is all I can think of. Creating a triangle mesh is tricky with the concave elements of the distribution.
EDIT: Just looked at the full-size version of the image and wondered whether the data set is stored radially. You could scan adjacent radial lines and try to match up the changes in value along each line to form a set of quads. There will be a large number of edge conditions to consider, though.
EDIT2: Alternatively, form a uniformly distributed set of quads and interpolate the vertex colours.
You can start by increasing the size of the points.
Alternatively, you could create a triangle mesh using a sweepline algorithm:
sort the points by lat
keep a subset sorted by long
when you add a point, compare it to the 4 adjacent points and add triangles to the "to draw" set (removing points too far from the current lat as needed)
With OpenGL you can use an index buffer to hold which points should be drawn.

Voxels... Honestly, I need to know where to begin

Okay, I understand that voxels are basically a volumetric version of a pixel.
After that, I have no idea what to even look for.
Googling doesn't turn up any tutorials, I can't find a book on it anywhere, and I can't find anything even covering the basic idea of what a voxel really is.
I know much of the C++ library, and have the basics of OpenGL down.
Can someone point me in the right direction?
EDIT: I guess I'm just confused about how to implement them? Sorry for being a pain, it's just that I can't really find anything that I can easily relate to... I think I was imagining a voxel as being like a vector in which you can actually store data.
Can a voxel be represented as ANY 3D shape? For example, say I wanted the shape to be a cylinder. Is this possible, or do they have to fit together like cubes?
Minecraft is a good example of using voxels. In Minecraft each voxel is a cube.
To see a C++ example you can look at the Minecraft clone Minetest-c55. This is open source, so you can read all of the source code to see how it's done.
Being cubes is not a requirement of voxels. They could be pyramids or any other shape that can fit together.
I suspect that you are looking for information on Volume Rendering techniques (since you mention voxels and OpenGL). You can find plenty of simple rendering code in C++, and more advanced OpenGL shaders as well with a little searching on that term.
In the simplest possible implementation, a voxel space is just a 3-dimensional array. For solids you could use a single bit per voxel: 1 == filled and 0 == empty. You use implicit formulas to make shapes, e.g., a sphere is all the voxels within a radius of the center voxel.
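A sketch of that bit-per-voxel idea with the sphere formula (the 64^3 grid size and the flat indexing scheme are assumptions for illustration):

#include <bitset>

constexpr int N = 64;
std::bitset<N * N * N> grid;   // one bit per voxel: 1 == filled, 0 == empty

inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

// Carve a solid sphere with the implicit formula: every voxel within
// distance r of the center voxel gets filled.
void fillSphere(int cx, int cy, int cz, int r)
{
    for (int z = 0; z < N; ++z)
        for (int y = 0; y < N; ++y)
            for (int x = 0; x < N; ++x) {
                int dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx*dx + dy*dy + dz*dz <= r*r)
                    grid.set(idx(x, y, z));
            }
}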
Voxels are not really compatible with polygon-based 3d rendering, but they are widely used in image analysis, medical imaging, computer vision...
Let's Make a Voxel Engine on Google Sites might help one get started creating a voxel-based engine:
https://sites.google.com/site/letsmakeavoxelengine/
In addition to that, there are presentations of the results on YouTube worth checking:
http://www.youtube.com/watch?v=nH_bHqury9Q&list=PL3899B2CEE4CD4687
Typically a voxel is a position in some 3D space that has a volume (analogous to the area that a pixel covers).
Just like a pixel in an image contains some scalar value (grayscale) or vector of values (as in a color image, where the vector is either the red, green, and blue components, or the hue, saturation, and value components), the entries for a voxel can hold a scalar or a vector of values.
A couple of natural examples of volumetric images that contain voxels are 3D medical images such as CT, MRI, 3D ultrasound, etc.
Mathematically speaking a 3D image is a function from some voxel space to some set of numbers.
Look for Voxlap or try this http://www.html5code.com/gallery/voxel-rain/ or write your own code. Yes, a voxel can be reduced to a 3D coordinate (which can be implied by its position in the file structure) and a graphical representation, which can be anything (cube, sphere, picture, color...). Just like a pixel is a 2D coordinate with a color index.
You only need to parse your file and render the corresponding voxels. Sadly, there is no 'right' file format, although Voxlap's file formats seem pretty neat.
Good luck.

Implementing the Marching Cubes Algorithm?

From my last question: Marching Cube Question
However, I am still unclear on the following:
1) How do I create an imaginary cube/voxel to check if a vertex is below the isosurface?
2) How do I know which vertex is below the isosurface?
3) How does each cube/voxel determine which cube index/surface to use?
4) How do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL. (C is a little bit beyond me.)
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source. That's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
}
This computes the distance from each point in your scalar field to the origin. If the isovalue is the radius, you just found a way to represent a sphere.
This is because |v| <= R is true for all points inside or on the sphere. Just figure out which vertices are inside the sphere and which ones are outside. You want to use the less-than or greater-than operators because the surface divides the space in two. When you know which points in your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles. The positions of the mesh vertices can be computed by interpolating across the intersected edges to find the actual intersection points.
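The interpolation step might look like this sketch (Vec3f is an assumed helper type; p1/p2 are two corner positions and s1/s2 their scalar values, chosen so the isovalue lies between them):

#include <cmath>

struct Vec3f { float x, y, z; };

// Find where the isosurface crosses the edge between corners p1 and p2,
// given their scalar values s1 and s2 straddling the isovalue.
Vec3f interpolateEdge(Vec3f p1, Vec3f p2, float s1, float s2, float isovalue)
{
    float t = (isovalue - s1) / (s2 - s1);   // fraction along the edge
    return { p1.x + t * (p2.x - p1.x),
             p1.y + t * (p2.y - p1.y),
             p1.z + t * (p2.z - p1.z) };
}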
If you want to represent, say, an apple with scalar fields, you would either need to get a source dataset to plug into your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori working first, and then expanding from there.
1) It depends on your implementation. You'll need a data structure where you can look up the values at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e., a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations of this, but in general, start with the first corner and just check the values of all 8 corners of the cube.
3) Most (fast) algorithms create a bitmask to use as an index into a static lookup table of cases. There are only so many possible configurations.
4) Once you've built the triangles from triTable, you can use OpenGL to render them.
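A sketch of the bitmask from point 3 (the corner array and the inside test are assumptions; the resulting index would go into the standard edge/triangle tables, e.g. from Paul Bourke's polygonise article linked in another answer below):

#include <cstdint>

// Build an 8-bit case index from the scalar values at the cube's 8 corners.
uint8_t cubeIndex(const float corner[8], float isovalue)
{
    unsigned index = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < isovalue)   // this corner is inside the surface
            index |= 1u << i;
    return static_cast<uint8_t>(index);   // 0 or 255: cube is not intersected
}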
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to resample the point cloud into a cubic volume. Gaussian splatting is an option here.
Normally, if you are working from a point cloud and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes and is fully open source.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/

Finding Rotation Angles between 3D points

I am writing a program that will draw a solid along the curve of a spline. I am using Visual Studio 2005 and writing in C++ for OpenGL. I'm using FLTK (the Fast Light Toolkit) to open my windows.
I currently have an algorithm that will draw a Cardinal cubic spline, given a set of control points, by breaking the intervals between the points into subintervals and drawing line segments between these sub-points. The number of subintervals is variable.
The line drawing code works wonderfully, and basically works as follows: I generate a set of points along the spline curve using the spline equation and store them in an array (as a special data structure called Pnt3f, where the coordinates are 3 floats and there are some handy functions such as distance, length, dot and cross product). Then I have a single loop that iterates through the array of points and draws them like so:
glBegin(GL_LINE_STRIP);
for (int pt = 0; pt <= numsubsegments; ++pt) {
    glVertex3fv(points[pt].v());   // points is the array of Pnt3f sub-points
}
glEnd();
As stated, this code works great. Now, instead of drawing a line, I want to extrude a solid. My current exploration is using a 'cylinder' quadric to create a tube along the line. This is a bit trickier, as I have to orient OpenGL in the direction I want to draw the cylinder. My idea is to do this:
Pseudocode:
Push the current matrix,
translate to the first control point
rotate to face the next point
draw a cylinder (length = distance between the points)
Pop the matrix
repeat
My problem is getting the angles between the points. I only need yaw and pitch; roll isn't important. I know that taking the arc-cosine of the dot product of the two vectors, divided by the product of their magnitudes, returns the angle between them, but this is not something I can feed to OpenGL to rotate with. I've tried doing this in 2D, using the XZ plane to get the x rotation and making the points vectors from the origin, but it does not return the correct angle.
My current approach is much simpler. For each plane of rotation (X and Y), find the angle by:
arc-cosine( (difference in 'x' values)/distance between the points)
the 'x' value depends on how you set your plane up, though for my calculations I always use world x.
Barring a few issues with making it draw in the correct quadrant that I haven't worked out yet, I'd like advice on whether this is a good implementation, or whether someone knows a better way.
You are correct in forming two vectors from the three points in two adjacent line segments and then using the arc-cosine of the dot product (of the normalized vectors) to get the angle between them. To make use of this angle you need to determine the axis around which the rotation should occur. Take the cross product of the same two vectors to get this axis. You can then build a transformation matrix using this axis-angle pair, or pass it as parameters to glRotate.
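A sketch of that axis-angle rotation with the fixed-function pipeline (the Vec3f type and unit-length, non-parallel inputs are assumptions; glRotatef takes the angle in degrees):

#include <cmath>
#include <GL/gl.h>

struct Vec3f { float x, y, z; };

// Rotate the current matrix so direction 'a' maps onto direction 'b'.
// Both inputs are assumed normalized and not parallel.
void rotateOntoDirection(Vec3f a, Vec3f b)
{
    float angleDeg = std::acos(a.x*b.x + a.y*b.y + a.z*b.z)
                   * 180.0f / 3.14159265f;

    // The rotation axis is perpendicular to both directions.
    Vec3f axis = { a.y*b.z - a.z*b.y,
                   a.z*b.x - a.x*b.z,
                   a.x*b.y - a.y*b.x };

    glRotatef(angleDeg, axis.x, axis.y, axis.z);
}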
A few notes:
First of all, this:
for (int pt = 0; pt <= numsubsegments; ++pt) {
    glBegin(GL_LINE_STRIP);
    glVertex3fv(points[pt].v());
}
glEnd();
is not a good way to draw anything. You MUST have one glEnd() for every single glBegin(); you probably want to get the glBegin() out of the loop. The fact that this works is pure luck.
Second thing:
My current exploration is using a 'cylinder' quadric to create a tube along the line
This will not work as you expect. The 'cylinder' quadric has a flat top base and a flat bottom base. Even if you succeed in making the correct rotations according to the spline, the edges of the flat tops are going to poke out of the volume of your intended tube and it will not be smooth. You can try it in 2D with just a pen and paper: try to draw a smooth tube using only shorter tubes with flat bases. It is impossible.
Third, to your actual question: the definitive tool for such rotations is quaternions. It's a bit complex to explain in this scope, but you can find plentiful information anywhere you look.
If you had used Qt instead of FLTK you could also have used libQGLViewer. It has an integrated Quaternion class which would save you the implementation. If you still have a choice, I strongly recommend moving to Qt.
Have you considered gluLookAt? Put your control point as the eye point, the next point as the reference point, and make the up vector perpendicular to the difference between the two.
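A hedged sketch of that suggestion, just to show the parameters (note that gluLookAt builds a viewing transform, i.e. world-to-eye, so to orient geometry with it you would apply its inverse; the constant world-up here is a rough stand-in for the perpendicular up vector the answer describes):

#include <GL/glu.h>

// Eye = current control point, center = next point on the spline.
void lookAlongSegment(float p0x, float p0y, float p0z,
                      float p1x, float p1y, float p1z)
{
    gluLookAt(p0x, p0y, p0z,      // eye: current control point
              p1x, p1y, p1z,      // center: the next point
              0.0f, 1.0f, 0.0f);  // up: must not be parallel to the segment
}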