Implementing QuadTree Terrain on a Planet (Geomipmapping) - c++

I have a QuadTree which can be subdivided by placing objects in the nodes. I also have a planet made in OpenGL in the form of a quad sphere. The problem is I don't know how to put them together. How does a QuadTree store information about the planet? Do I store vertices in the leaf QuadTree nodes? And if so, how do I split the vertex data into four sets without ruining the texturing and normals? If that is the case, should I use indices instead?
So my question, in short, is:
How do I store my vertex data in a quadtree so that I can split up the terrain on the planet, making the planet higher detail at closer range? I assume this is done by using the camera as the object that splits the nodes.
I've read many articles and most of them fail to cover this. The quadtree is one of the most important things for my application, as it will allow me to render many planets at the same time while still getting good detail at ground level. A pretty picture of my planet and its HD sun:
A video of the planet can also be found here.
I've managed to implement a simple quadtree on a flat plane, but I keep getting massive holes, as I think I'm getting the positions wrong. It's the last post here - http://www.gamedev.net/topic/637956-opengl-procedural-planet-generation-quadtrees-and-geomipmapping/ - and you can get the source there too. Any ideas how to fix it?

What you're looking for is an algorithm like ROAM (Real-time Optimally Adapting Mesh), which lets you increase or decrease the accuracy of your model based on the distance to the camera. The algorithm will then make use of your quadtree.
Check out this series on Gamasutra on how to render a Real-time Procedural Universe.
Edit: the reason you would use a quadtree with these methods is to minimize the number of vertices in areas where detail is not needed (flat terrain, for example). The quadtree definition on Wikipedia is pretty good; you should use that as a starting point. The goal is to create child nodes in your quadtree where you have changes in your "height" (you could generate the sides of your cube from a heightmap) until you reach a predefined depth. Maybe, as a first pass, you should avoid the quadtree and use a simple grid. When you get that working, you "optimize" the process by adding the quadtree.
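To make the camera-driven part concrete, here is a rough sketch of the per-node test that drives subdivision: a patch splits while the camera is within a few patch widths of it, up to a maximum depth. The node layout, the names and the use of GLM are my own assumptions, not from the linked articles:

#include <glm/glm.hpp>

// Sketch of a per-node LOD test for a terrain quadtree.
// All names here are illustrative, not from any particular engine.
struct Node {
    glm::vec3 center;     // world-space centre of this terrain patch
    float     halfSize;   // half the edge length of the patch
    int       depth;
    Node*     children[4] = { nullptr, nullptr, nullptr, nullptr };
};

bool shouldSplit(const Node& node, const glm::vec3& cameraPos, int maxDepth)
{
    if (node.depth >= maxDepth)
        return false;
    float distance = glm::length(cameraPos - node.center);
    // Split while the camera is closer than a couple of patch widths;
    // tune the constant to trade detail against triangle count.
    const float kSplitFactor = 2.5f;
    return distance < kSplitFactor * (node.halfSize * 2.0f);
}

Each frame you walk the tree, splitting nodes for which shouldSplit() returns true and merging children back where it returns false; the leaves you are left with are the patches you actually upload and draw.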

To understand how a quadtree and terrain data work together to achieve LOD-based rendering, read this paper. It is easy to understand, with illustrative examples.
I did once implement LOD on a sphere. The idea is to start with a simple dipyramid, the upper pyramid representing the northern hemisphere and the lower one representing the southern hemisphere. The bases of the pyramids align with the equator, and the tips are at the poles.
Then you subdivide each triangle into 4 smaller ones, as much as you want, by connecting the midpoints of its edges.
The "as much as you want" is based on your needs; distance to the camera and object placement could be your triggers for subdivision.
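As an illustration of the midpoint subdivision step, here is a minimal sketch, assuming GLM for the vector math; the struct and function names are made up:

#include <glm/glm.hpp>

// Sketch: split one spherical triangle into four by connecting edge midpoints,
// then push the new vertices back out onto the sphere of the given radius.
struct Triangle { glm::vec3 a, b, c; };

void subdivide(const Triangle& t, Triangle out[4], float radius)
{
    glm::vec3 ab = glm::normalize((t.a + t.b) * 0.5f) * radius;
    glm::vec3 bc = glm::normalize((t.b + t.c) * 0.5f) * radius;
    glm::vec3 ca = glm::normalize((t.c + t.a) * 0.5f) * radius;
    out[0] = { t.a, ab, ca };   // three corner triangles
    out[1] = { ab, t.b, bc };
    out[2] = { ca, bc, t.c };
    out[3] = { ab, bc, ca };    // centre triangle
}

Applying subdivide() recursively to the eight faces of the dipyramid, as deep as your distance test demands, gives the LOD sphere described above.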

Related

Mesh simplification of a grid-like structure

I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45-degree slope. To illustrate, here's a picture of a chamfered cube I pulled off Google Images:
Ignore the image to the right, the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting it, though, I'd like to simplify it. So in the above cube, I'd like the up-down-left-right-back-front faces to be composed of a single quad each (two triangles), and the edges would be reduced from two quads to single quads.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create single polygon, then split polygon to avoid holes, use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitations are that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I initially opted to just leave the quads whole rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse through normal deviation (in my case, if the resulting faces from the collapse had normals not equal to their original values, the collapse was not performed; a small sketch of this test follows after these steps).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
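For what it's worth, the normal-deviation criterion mentioned above boils down to something like the following self-contained sketch. This is not OpenMesh code; the names and the use of GLM are illustrative only:

#include <glm/glm.hpp>

// A face passes the test if moving one of its vertices to the collapse target
// leaves its normal essentially unchanged.
glm::vec3 faceNormal(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    return glm::normalize(glm::cross(b - a, c - a));
}

bool collapseKeepsNormal(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                         const glm::vec3& movedA, float cosTolerance = 0.999f)
{
    glm::vec3 before = faceNormal(a, b, c);
    glm::vec3 after  = faceNormal(movedA, b, c);
    return glm::dot(before, after) >= cosTolerance;   // reject collapses that rotate the face
}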
I am giving a theoretical answer.
For the figure on the left, find all edge-sharing triangles with the same normal (compare unit normals, so the test is unaffected by positive scaling of the vectors). Merge them, then retriangulate the merged region, favouring triangles with a good aspect ratio, and you will have the simplification you want.
Here is another easy, possible approach to mesh simplification.
Take the normals and divide each by its magnitude (the square root of the sum of the squares of its components), which gives a unit normal vector. Then take adjacent triangles and compute the dot product of their unit normals (multiply the x, y, z components pairwise and add). This gives the cosine of the angle between the normals, i.e. between the triangles. Pick a range (for example 0.99-1), consider all adjacent triangles whose cosine with respect to the reference triangle falls in this range, merge them, and retriangulate. Triangles pointing in odd directions with very small areas can usually be ignored.
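A minimal sketch of that test, using GLM for the vector math (the function name is illustrative):

#include <glm/glm.hpp>

// Decide whether two adjacent faces are "coplanar enough" to merge by comparing
// the cosine of the angle between their unit normals.
bool nearlyCoplanar(glm::vec3 n1, glm::vec3 n2, float minCosine = 0.99f)
{
    n1 = glm::normalize(n1);               // divide by magnitude -> unit normal
    n2 = glm::normalize(n2);
    return glm::dot(n1, n2) >= minCosine;  // cosine close to 1 => nearly parallel
}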
There is also another proposal for an even simpler mesh reduction, suited to your left figure or to building-like shapes: define a fixed set of face normals (here 6 + 8 = 14), classify every face according to the predefined normal it is closest to (by dot product), then merge and retriangulate each class.
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd-party library? (e.g. CGAL? GTS doesn't appear to be active any longer, but there are others.)

Clarification about octrees and how they work in a Voxel world

I read about octrees and I didn't fully understand how they would work or be implemented in a voxel world, where the octree's purpose is to lower the number of voxels you would render by merging repeating voxels into one big "voxel".
Here are the questions I want clarification about:
What type of data structure would you use? How could you turn a 3D array of voxels into an array that has different-sized voxels occupying multiple locations?
What are the nodes and what are they used for?
Does the octree merge the voxels so there are ONLY cube shapes, or could it produce a rectangular box, an L shape, or an entire Y column of voxels?
Do octrees really improve the performance of a voxel game? If so, usually by how much?
Quick answers:
A tree: each node has 8 children (top-back-left, top-back-right, etc.), down to a certain level. The code for this can get quite complex, especially if the voxels can change at runtime.
The type of voxel (colour, material, a list of items)
Yep, cubes only. More specifically 1x1x1, 2x2x2, 4x4x4, 8x8x8, etc., and it must be an entire node. If you really want to, you could define some sort of patterns, but then it's no longer an octree.
Yeah, but it depends on your data. Imagine describing 256 identical blocks individually versus describing them once (like air in Minecraft).
I'd start by trying to understand quadtrees first. You can do that on paper, or make a test program. You'll answer most of these questions yourself if you experiment.
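To make the structure in the first answer concrete, here is a minimal sketch of an octree node; all names are illustrative, not from any particular engine:

#include <array>
#include <cstdint>
#include <memory>

// A leaf stores one voxel type for its whole cubic region; an internal node
// stores eight children, one per octant.
struct OctreeNode {
    std::uint8_t voxelType = 0;   // e.g. 0 = air, 1 = stone, ...
    bool         isLeaf    = true;
    std::array<std::unique_ptr<OctreeNode>, 8> children;

    void split()
    {
        for (auto& child : children) {
            child = std::make_unique<OctreeNode>();
            child->voxelType = voxelType;   // children start identical to the parent
        }
        isLeaf = false;
    }

    // Merge the children back into this node if they are all identical leaves.
    void tryCollapse()
    {
        for (const auto& child : children)
            if (!child || !child->isLeaf || child->voxelType != children[0]->voxelType)
                return;
        voxelType = children[0]->voxelType;
        for (auto& child : children)
            child.reset();
        isLeaf = true;
    }
};

tryCollapse() is the step that turns eight identical children back into one bigger "voxel", which is where the savings for large uniform regions (like air) come from.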
An octree done correctly can also help you with neighbour searches, which let you determine whether a face is "visible" (i.e. so you end up with only the hull of voxels being rendered). Once you've established your octree, you use it to store your XYZ coords, which you then extract into a single array. You then feed this array into your vertex buffer (GL solutions require this), which you can render in chunks as needed (as the camera moves forward, etc.).
Octrees also, by their very nature, collapse cubes into bigger ones if there are adjacent ones of the same type... much like Tetris does when you have colors/shapes that "fit" one another. This in turn can reduce your vertex count, and at render time you're really drawing a combination of squares and rectangles.
If done correctly you will end up with a lot of chunks that only have the outward-facing "faces" in their vertex buffers. You then also have to build your own occlusion culling algorithm, which reduces visibility on top of this, resulting in even less rendering required.
I did an example here:
https://vimeo.com/71330826
Notice how only the outside is being rendered, yet the chunks themselves go all the way down to the bottom, even though the chunks' depth faces should cancel each other out (needs more optimisation). Also note how, as the camera turns around, the faces behind it are removed from the rendering buffers.

See what "block" the player is looking at

I'm creating a game where the world is formed out of cubes (like in Minecraft), but there's just one small problem I can't put my finger on. I've created the world, the player, the camera movement and rotation (glRotatef and glTranslatef). Now I'm stuck at finding out what block the player is looking at.
EDIT: In case I didn't make my question clear enough, I don't understand how to cast the ray to check for collision with the blocks. All the blocks that I'm drawing are stored in a 3D array containing the block id (I know I need to use octrees, but I just want the algorithm to work first; optimization comes along the way).
OpenGL is a drawing/rendering API, not some kind of game/graphics engine. You tell it to draw stuff, and that's what it does.
Tests like the one you intend are not covered by OpenGL; you have to implement them either yourself or use some library designed for this. In your case you want to test the world against the viewing frustum. The exact block the player is looking at can be found by doing a ray-geometry intersection test, i.e. you cast a ray from the player position in the direction the player looks and test which objects intersect that ray. Using a spatial subdivision structure helps speed things up. In the case of a world made of cubes the easiest and most efficient structure is an octree, i.e. one large cube that gets subdivided into 8 sub-cubes of half the containing cube's edge length. Those sub-cubes are then subdivided in turn, and so on.
Traversing such a structure is easily implemented with recursive functions - don't worry about stack overflow, since as little as 10 subdivisions already yield (2^10)^3 = 2^30 sub-sub-...-sub-cubes, requiring at least 8 GB of data to build a fully detailed mesh from them, yet only 10 levels of recursion, which is not very deep.
First imagine a vector from your eye point in the direction of the camera, with a length equal to the player's "reach". If I remember correctly, the reach in Minecraft is about 4 blocks (or 4 meters). For every block in your world that could intersect that vector (which can be as simple as a 3D loop over a cube of blocks bounded by the min/max x/y/z values of your reach vector), cast a ray at the cube (if it's not air) to see if you hit it. Raycasting against an AABB (axis-aligned bounding box) is pretty straightforward and you can Google that algorithm. Now sort the results by distance and return the block the ray hit first.
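For reference, the AABB raycast mentioned above is usually done with the slab method. A minimal self-contained sketch, with GLM for vectors and a made-up function name:

#include <algorithm>
#include <limits>
#include <utility>
#include <glm/glm.hpp>

// origin/dir describe the reach ray, boxMin/boxMax the block;
// tHit receives the distance along the ray on a hit.
bool rayIntersectsAABB(const glm::vec3& origin, const glm::vec3& dir,
                       const glm::vec3& boxMin, const glm::vec3& boxMax,
                       float& tHit)
{
    float tMin = 0.0f;                                 // start of the ray
    float tMax = std::numeric_limits<float>::max();    // effectively "infinitely far"
    for (int axis = 0; axis < 3; ++axis) {
        float invD = 1.0f / dir[axis];                 // IEEE inf handles axis-parallel rays
        float t0 = (boxMin[axis] - origin[axis]) * invD;
        float t1 = (boxMax[axis] - origin[axis]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;                 // the slabs do not overlap: miss
    }
    tHit = tMin;   // sort candidate blocks by this to find the first one hit
    return true;
}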

Techniques for generating a 2D game world

I want to make a 2D game in C++ using the Irrlicht engine. In this game, you will control a tiny ship in a cave of some sort. This cave will be created automatically (the game will have random levels) and will look like this:
Suppose I already have the points of the polygon of the inside of the cave (the white part). How should I render this shape on the screen and use it for collision detection? From what I've read around different sites, I should use a triangulation algorithm to make meshes of the walls of the cave (the black part) from the polygon of the inside of the cave (the white part). Then I can also use these meshes for collision detection. Is this really the best way to do it? Do you know if Irrlicht has some built-in functions that can help me achieve this?
Any advice will be appreciated.
Describing how to get an arbitrary polygonal shape to render using a given 3D engine is quite a lengthy process. Suffice to say that pretty much all 3D rendering is done in terms of triangles, and if you didn't use a tool to generate a model that is already composed of triangles, you'll need to generate triangles from whatever data you have there. Triangulating either the black space or the white space is probably the best way to do it, yes. Then you can build up a mesh or vertex list from that, and render those triangles that way. The triangles in the list then also double up for collision detection purposes.
I doubt Irrlicht has anything for triangulation as it's quite specific to your game design and not a general approach most people would take. (Typically they would have a tool which permits generation of the game geometry and the navigation geometry side by side.) It looks like it might be quite tricky given the shapes you have there.
One option is to use the map (image mask) directly to test for collision.
For example,
if map_points[sprite.x][sprite.y] is black then
    collision detected
assuming that your objects are images and they aren't real polygons.
In case you use real polygons, you can have a "points sample" for every object shape and check the sample for collisions.
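A minimal sketch of the direct mask lookup, assuming the cave mask is stored as a 2D boolean array (the names and layout are illustrative):

#include <vector>

// true = wall pixel (black in the map image).
bool collidesWithWall(const std::vector<std::vector<bool>>& wallMask, int x, int y)
{
    if (y < 0 || y >= (int)wallMask.size() || x < 0 || x >= (int)wallMask[y].size())
        return true;            // treat everything outside the map as solid
    return wallMask[y][x];
}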
To check whether a point is inside or outside your polygon, you can simply count crossings. You know (0,0) is outside your polygon. Now draw a line from there to your test point (X,Y). If this line crosses an odd number of polygon edges (e.g. 1), the point is inside the polygon. If the line crosses an even number of edges (e.g. 0 or 2), the point (X,Y) is outside the polygon. It's useful to run this algorithm on paper once to convince yourself.
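A compact sketch of the crossing test; this version casts the ray from the test point towards +x instead of from (0,0), which is equivalent, and the names are illustrative:

#include <cstddef>
#include <vector>

struct Point2 { float x, y; };

// Walk every edge of the polygon and count how many of them a horizontal ray
// from the test point towards +x would cross.
bool insidePolygon(const std::vector<Point2>& poly, const Point2& p)
{
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        bool straddlesY = (poly[i].y > p.y) != (poly[j].y > p.y);
        if (straddlesY) {
            float xAtY = poly[j].x + (p.y - poly[j].y) * (poly[i].x - poly[j].x)
                                       / (poly[i].y - poly[j].y);
            if (p.x < xAtY)          // the edge crossing lies to the right of the point
                inside = !inside;    // odd number of crossings => inside
        }
    }
    return inside;
}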

Implementing Marching Cube Algorithm?

From My last question: Marching Cube Question
However, I am still unclear on:
how to create an imaginary cube/voxel to check if a vertex is below the isosurface?
how do I know which vertex is below the isosurface?
how does each cube/voxel determine which cube index/surface to use?
how do I draw the surface using the data in triTable?
Let's say I have point cloud data of an apple.
How do I proceed?
Can anybody who is familiar with Marching Cubes help me?
I only know C++ and OpenGL (C is a little bit out of my hands).
First of all, the isosurface can be represented in two ways. One way is to have the isovalue and per-point scalars as a dataset from an external source. That's how MRI scans work. The second approach is to make an implicit function F() which takes a point/vertex as its parameter and returns a new scalar. Consider this function:
#include <cmath>   // for std::sqrt

// Vector3 is assumed to be your own 3-component vector type.
float computeScalar(const Vector3<float>& v)
{
    return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);   // distance from v to the origin
}
This computes the distance from the point to the origin, for every point in your scalar field. If the isovalue is the radius, you have just found a way to represent a sphere.
This is because |v| <= R is true for all points inside the sphere or on its surface. Just figure out which vertices are inside the sphere and which ones are outside. You want to use the less-than or greater-than operators because the surface divides the space in two. When you know which points in your cube are classified as inside and outside, you also know which edges the isosurface intersects. You can end up with anything from zero to five triangles per cube. The positions of the mesh vertices are computed by interpolating across the intersected edges to find the actual intersection points.
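The interpolation at the end is just a linear blend along each intersected edge; a minimal sketch, with GLM for the vector type and illustrative names:

#include <glm/glm.hpp>

// Given the two corner positions and their scalar values, place the mesh vertex
// where the scalar equals the isovalue.
glm::vec3 interpolateEdge(const glm::vec3& p0, const glm::vec3& p1,
                          float s0, float s1, float isovalue)
{
    float t = (isovalue - s0) / (s1 - s0);   // safe: the edge is only processed when it is intersected
    return p0 + t * (p1 - p0);
}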
If you want to represent say an apple with scalar fields, you would either need to get the source data set to plug in to your application, or use a pretty complex implicit function. I recommend getting simple geometric primitives like spheres and tori to work first, and then expand from there.
1) It depends on your implementation. You'll need a data structure where you can look up the values at each corner (vertex) of the voxel or cube. This can be a 3D image (i.e. a 3D texture in OpenGL), a customized array data structure, or any other format you wish.
2) You need to check the vertices of the cube. There are different optimizations on this, but in general, start with the first corner, and just check the values of all 8 corners of the cube.
3) Most (fast) algorithms create a bitmask to use as a lookup table into a static array of options. There are only so many possible options for this.
4) Once you've made the triangles from the triTable, you can use OpenGL to render them.
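As a sketch of steps 2) and 3) together, the bitmask has one bit per cube corner that lies below the isovalue; the corner ordering and below/above convention are assumptions that must match whichever edgeTable/triTable you use (e.g. Paul Bourke's):

// The resulting 0..255 value indexes the lookup tables.
int computeCubeIndex(const float corner[8], float isovalue)
{
    int cubeIndex = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < isovalue)
            cubeIndex |= (1 << i);
    return cubeIndex;   // 0 = fully outside, 255 = fully inside: no triangles either way
}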
Let's say I have point cloud data of an apple. How do I proceed?
This isn't going to work directly with marching cubes. Marching cubes requires voxel data, so you'd need to use some algorithm to put the point cloud data into a cubic volume. Gaussian splatting is an option here.
Normally, if you are working from a point cloud, and want to see the surface, you should look at surface reconstruction algorithms instead of marching cubes.
If you want to learn more, I'd highly recommend reading some books on visualization techniques. A good one is from the Kitware folks - The Visualization Toolkit.
You might want to take a look at VTK. It has a C++ implementation of Marching Cubes, and is fully open sourced.
As requested, here is some sample code implementing the Marching Cubes algorithm (using JavaScript/Three.js for the graphics):
http://stemkoski.github.com/Three.js/Marching-Cubes.html
For more details on the theory, you should check out the article at
http://paulbourke.net/geometry/polygonise/