See what "block" the player is looking at - opengl

I'm creating a game where the world is formed out of cubes (like in Minecraft), but there's just one small problem I can't put my finger on. I've created the world, the player, the camera movement and rotation (glRotatef and glTranslatef). Now I'm stuck at finding out what block the player is looking at.
EDIT: In case I didn't make my question clear enough, I don't understand how to cast the ray to check for collision with the blocks. All the blocks that I'm drawing are stored in a 3D array containing the block id. (I know I need to use octrees eventually, but I just want the algorithm to work first; optimization comes along the way.)

OpenGL is a drawing/rendering API, not some kind of game/graphics engine. You tell it to draw stuff, and that's what it does.
Tests like the one you intend are not covered by OpenGL; you have to implement them yourself or use some library designed for this. In your case you want to test the world against the viewing frustum. The exact block the player is looking at can be found by doing a ray-geometry intersection test, i.e. you cast a ray from your player's position in the direction the player is looking and test which objects intersect that ray. Using a spatial subdivision structure helps speed things up. In the case of a world made of cubes, the easiest and most efficient structure is an octree, i.e. one large cube that gets subdivided into 8 sub-cubes with half the containing cube's edge length; those sub-cubes are then subdivided, and so on.
Traversing such a structure is easily implemented with recursive functions. Don't worry about stack overflow: as little as 10 subdivision levels would already yield (2^10)^3 = 2^30 sub-sub-...-sub-cubes, requiring at least 8 GB of data to build a fully detailed mesh from them, yet 10 levels of function recursion is not very deep.
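A minimal sketch of building that ray, assuming a glRotatef-style camera with pitch about X and yaw about Y (and no roll) as in the question; the exact signs depend on your rotation convention, so adjust to taste:
#include <cmath>

struct Vec3 { float x, y, z; };

// World-space view direction for a camera built as Rx(pitch) * Ry(yaw) * T(-eye),
// looking down -Z in eye space (angles in degrees, as used with glRotatef).
Vec3 lookDirection(float pitchDeg, float yawDeg)
{
    const float d2r = 3.14159265f / 180.0f;
    float pitch = pitchDeg * d2r;
    float yaw   = yawDeg   * d2r;
    Vec3 dir;
    dir.x =  std::cos(pitch) * std::sin(yaw);
    dir.y = -std::sin(pitch);
    dir.z = -std::cos(pitch) * std::cos(yaw);
    return dir;   // already unit length
}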

First imagine a vector from your eye point in the direction of the camera with a length equal to the player's "reach". If I remember correctly the reach in Minecraft is about 4 blocks (or 4 meters). For every block in your world that could intersect that vector (which can be as simple as a 3D loop over the cube of blocks bounded by the min/max x/y/z values of your reach vector), cast a ray at the block (if it's not air) to see if you hit it. Raycasting against an AABB (axis-aligned bounding box) is pretty straightforward and you can Google that algorithm. Now sort the results by distance and return the block that the ray hits first.
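A hedged sketch of that approach in C++: a standard slab test against one block's AABB, then a brute-force loop over every cell the reach vector's bounding box covers. blockAt() is a hypothetical accessor into your 3D block array (0 meaning air), and the ray direction is assumed to be normalized so t is measured in blocks:
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

int blockAt(int x, int y, int z);   // hypothetical: your 3D array lookup, 0 = air

// Standard slab test: returns true and the entry distance t if the ray o + t*dir
// hits the axis-aligned box [bmin, bmax].
bool rayVsAABB(const Vec3& o, const Vec3& dir,
               const Vec3& bmin, const Vec3& bmax, float& t)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float ro[3] = { o.x, o.y, o.z }, rd[3] = { dir.x, dir.y, dir.z };
    const float lo[3] = { bmin.x, bmin.y, bmin.z }, hi[3] = { bmax.x, bmax.y, bmax.z };
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / rd[i];                  // IEEE infinities handle rd[i] == 0
        float t0 = (lo[i] - ro[i]) * inv, t1 = (hi[i] - ro[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;
    }
    t = tmin;
    return true;
}

// Brute-force pick: test every unit cube inside the reach vector's bounding box
// and keep the closest hit (Minecraft's reach is roughly 4 blocks).
bool pickBlock(const Vec3& eye, const Vec3& dir, float reach,
               int& hitX, int& hitY, int& hitZ)
{
    bool found = false;
    float best = reach;
    int x0 = (int)std::floor(std::min(eye.x, eye.x + dir.x * reach));
    int x1 = (int)std::floor(std::max(eye.x, eye.x + dir.x * reach));
    int y0 = (int)std::floor(std::min(eye.y, eye.y + dir.y * reach));
    int y1 = (int)std::floor(std::max(eye.y, eye.y + dir.y * reach));
    int z0 = (int)std::floor(std::min(eye.z, eye.z + dir.z * reach));
    int z1 = (int)std::floor(std::max(eye.z, eye.z + dir.z * reach));
    for (int x = x0; x <= x1; ++x)
        for (int y = y0; y <= y1; ++y)
            for (int z = z0; z <= z1; ++z) {
                if (blockAt(x, y, z) == 0) continue;       // skip air
                Vec3 bmin = { (float)x, (float)y, (float)z };
                Vec3 bmax = { x + 1.0f, y + 1.0f, z + 1.0f };
                float t;
                if (rayVsAABB(eye, dir, bmin, bmax, t) && t <= best) {
                    best = t; hitX = x; hitY = y; hitZ = z; found = true;
                }
            }
    return found;
}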


Clarification about octrees and how they work in a Voxel world

I read about octrees, but I didn't fully understand how they would work or be implemented in a voxel world, where the octree's purpose is to lower the number of voxels you would render by merging repeating voxels into one big "voxel".
Here are the questions I want clarification about:
What type of data structure would you use? How could you turn a 3D array of voxels into a structure where different-sized voxels occupy multiple locations in the array?
What are the nodes and what are they used for?
Does the octree connect the voxels so there are ONLY square shapes, or could it be a rectangle, an L shape, an entire Y column of voxels, or what?
Do the octrees really improve performance of a voxel game? If so usually by how much?
Quick answers:
A tree: each node has 8 children (top-back-left, top-back-right, and so on), down to a certain level. The code for this can get quite complex, especially if the voxels can change at runtime. (A sketch of such a node follows after these answers.)
The type of voxel (colour, material, a list of items)
Yep, cubes only - more specifically 1x1x1, 2x2x2, 4x4x4, 8x8x8 and so on. It must be an entire node. If you really want to you could define some sort of patterns, but then it's no longer an octree.
Yeah, but it depends on your data. Imagine describing 256 identical blocks individually versus describing them once (like air in Minecraft).
I'd start by trying to understand quadtrees first. You can do that on paper, or write a small test program. You'll answer these questions yourself if you experiment.
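To make the first quick answer concrete, here is a minimal sketch of such a node in C++: either a leaf storing one voxel type for its whole cube, or an internal node with 8 children, with eight identical leaves collapsing back into one. VoxelType, voxelAt() and build() are illustrative names, not anything from the question:
#include <array>
#include <cstdint>
#include <memory>

using VoxelType = std::uint8_t;              // 0 = air, otherwise a material id

struct OctreeNode {
    bool leaf = true;
    VoxelType voxel = 0;                     // valid when leaf
    std::array<std::unique_ptr<OctreeNode>, 8> child;  // valid when !leaf
};

// Hypothetical sampler into the raw voxel grid.
VoxelType voxelAt(int x, int y, int z);

// Build the tree for the cube of edge length `size` (a power of two) whose
// minimum corner is (x, y, z).  Identical children collapse into one leaf,
// which is exactly how "256 identical blocks" become a single node.
std::unique_ptr<OctreeNode> build(int x, int y, int z, int size)
{
    auto node = std::make_unique<OctreeNode>();
    if (size == 1) { node->voxel = voxelAt(x, y, z); return node; }

    int h = size / 2;
    node->leaf = false;
    bool uniform = true;
    for (int i = 0; i < 8; ++i) {
        node->child[i] = build(x + (i & 1) * h, y + ((i >> 1) & 1) * h,
                               z + ((i >> 2) & 1) * h, h);
        uniform = uniform && node->child[i]->leaf &&
                  node->child[i]->voxel == node->child[0]->voxel;
    }
    if (uniform) {                           // collapse 8 identical leaves into one
        node->voxel = node->child[0]->voxel;
        node->leaf = true;
        for (auto& c : node->child) c.reset();
    }
    return node;
}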
An octree done correctly can also help you with neighbour searches, which let you determine whether a face is "visible" (i.e. you end up with only the hull of voxels visible). Once you've established your octree you then use it to store your XYZ coords, which you extract into a single array. You then feed this array into your vertex buffer (GL solutions require this), which you can render in chunks as needed (as the camera moves forward etc.).
Octrees also, by their very nature, collapse cubes into bigger ones when neighbouring ones are of the same type... much like Tetris does when you have colours/shapes that "fit" one another. This in turn can reduce your vertex count, and at render time you're really drawing a combination of squares and rectangles.
If done correctly you will end up with a lot of chunks that only have the outward-facing "faces" in their vertex buffers. You then also have to build your own occlusion-culling algorithm, which reduces visibility on top of this, resulting in even less rendering required.
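As a rough sketch of the "only keep the outward faces" idea (independent of the octree itself): a cube face goes into the vertex buffer only if the voxel on the other side of it is empty. blockAt() and addFaceQuad() are hypothetical stand-ins for your own grid accessor and mesh builder:
int  blockAt(int x, int y, int z);                       // 0 = air / outside world
void addFaceQuad(int x, int y, int z, int face);         // append 4 verts for one face

void meshChunk(int cx, int cy, int cz, int chunkSize)
{
    static const int n[6][3] = {                         // the 6 face normals
        { 1,0,0}, {-1,0,0}, {0, 1,0}, {0,-1,0}, {0,0, 1}, {0,0,-1}
    };
    for (int x = cx; x < cx + chunkSize; ++x)
      for (int y = cy; y < cy + chunkSize; ++y)
        for (int z = cz; z < cz + chunkSize; ++z) {
            if (blockAt(x, y, z) == 0) continue;          // nothing to draw here
            for (int f = 0; f < 6; ++f)
                if (blockAt(x + n[f][0], y + n[f][1], z + n[f][2]) == 0)
                    addFaceQuad(x, y, z, f);              // neighbour is air: face is visible
        }
}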
I did an example here:
https://vimeo.com/71330826
Notice how only the outside is being rendered, yet the chunks themselves go all the way down to the bottom even though the chunks' interior faces should cancel each other out (needs more optimisation). Also note how, as the camera turns around, faces are removed from the rendering buffers.

Implementing QuadTree Terrain on a Planet (Geomipmapping)

I have a QuadTree which can be subdivided by placing objects in the nodes. I also have a planet made in OpenGL in the form of a quad sphere. The problem is I don't know how to put them together. How does a QuadTree store information about the planet? Do I store vertices in the leaf QuadTree nodes? And if so, how do I split the vertex data into 4 sets without ruining the texturing and normals? Should I use indices instead?
So my question in short really is:
How do I store my vertex data in a quadtree so that I can split up the terrain on the planet, so that the planet becomes higher detail at closer range? I assume this is done by using the camera as the object that splits the nodes.
I've read many articles and most of them fail to cover this. The quadtree is one of the most important things for my application, as it will allow me to render many planets at the same time while still getting good definition at ground level. A pretty picture of my planet and its HD sun:
A video of the planet can also be found Here.
I've managed to implement a simple quadtree on a flat plane, but I keep getting massive holes as I think I'm getting the positions wrong. It's the last post on here - http://www.gamedev.net/topic/637956-opengl-procedural-planet-generation-quadtrees-and-geomipmapping/ and you can get the src there too. Any ideas how to fix it?
What you're looking for is an algorithm like ROAM (Real-time Optimally Adapting Mesh) to be able to increase or decrease the accuracy of your model based on the distance of the camera. The algorithm will make use of your quadtree then.
Check out this series on gamasutra on how to render a Real-time Procedural Universe.
Edit: the reason why you would use a quadtree with these methods is to minimize the number of vertices in areas where detail is not needed (flat terrain, for example). The quadtree definition on Wikipedia is pretty good; you should use that as a starting point. The goal is to create child nodes in your quadtree where you have changes in your "height" (you could generate the sides of your cube using a heightmap), until you reach a predefined depth. Maybe, as a first pass, you should try avoiding the quadtree and use a simple grid. When you get that working, you "optimize" your process by adding the quadtree.
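A hedged sketch of what such a quadtree node might look like, splitting purely on camera distance rather than height changes (a simplification); emitPatch() is a placeholder for sampling your heightmap and emitting the patch's vertices:
#include <cmath>
#include <memory>

struct QuadNode {
    float cx, cz, size;                       // patch centre and edge length (face-local)
    std::unique_ptr<QuadNode> child[4];
};

void emitPatch(const QuadNode& n);            // placeholder: grid of vertices from your heightmap

void update(QuadNode& n, float camX, float camZ, float minSize)
{
    float dx = camX - n.cx, dz = camZ - n.cz;
    float dist = std::sqrt(dx * dx + dz * dz);

    // Split while the camera is closer than a couple of patch lengths.
    if (dist < n.size * 2.0f && n.size > minSize) {
        float h = n.size * 0.5f, q = n.size * 0.25f;
        for (int i = 0; i < 4; ++i) {
            if (!n.child[i])
                n.child[i].reset(new QuadNode{ n.cx + (i & 1 ? q : -q),
                                               n.cz + (i & 2 ? q : -q), h });
            update(*n.child[i], camX, camZ, minSize);
        }
    } else {
        for (auto& c : n.child) c.reset();    // merge back into a single patch
        emitPatch(n);
    }
}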
To understand how quadtrees and terrain data work together to achieve LOD-based rendering, read this paper. It is easy to understand, with illustrative examples.
I did once implement LOD on a sphere. The idea is to start with a simple dipyramid, the upper pyramid representing the northern hemisphere and the lower one the southern hemisphere. The bases of the pyramids align with the equator; the tips are at the poles.
Then you subdivide each triangle into 4 smaller ones as much as you want by connecting the midpoints of the edges of the triangle.
The "as much as you want" is based on your needs; distance to the camera and object placement could be your triggers for subdivision.
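A small sketch of that subdivision step: split one triangle into four by connecting edge midpoints, then push the new vertices back onto the unit sphere. The recursion depth would be driven by your camera-distance trigger:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Midpoint of an edge, projected back onto the unit sphere.
Vec3 midOnSphere(const Vec3& a, const Vec3& b)
{
    return normalize({ (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f });
}

struct Triangle { Vec3 a, b, c; };

void subdivide(const Triangle& t, int depth, std::vector<Triangle>& out)
{
    if (depth == 0) { out.push_back(t); return; }
    Vec3 ab = midOnSphere(t.a, t.b);
    Vec3 bc = midOnSphere(t.b, t.c);
    Vec3 ca = midOnSphere(t.c, t.a);
    subdivide({ t.a, ab, ca }, depth - 1, out);   // the three corner triangles...
    subdivide({ ab, t.b, bc }, depth - 1, out);
    subdivide({ ca, bc, t.c }, depth - 1, out);
    subdivide({ ab, bc, ca }, depth - 1, out);    // ...and the centre one
}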

Fastest way to perform rotational transformations on a chain of dependent, attached objects

Suppose I have two (two for the example, it will actually be some n > 1) sort of rectangular prisms "attached to each other" such that the 4 vertices on their adjacent faces are the same vertex in memory. So like two wooden blocks, one stacked on the other, with 4 vertices on the bottom, 4 in the middle that are shared between the two, and 4 on the top. Now, I want to be able to first do a specific rotation on the "top" wooden block, as if it were on a hinge that has a centerpoint of those 4 shared vertices.
So like an elbow, let's say it can only flex up to 45 degrees at a specific angle, and to perform the rotation I rotate the 8 vertices that make up the object around that invisible hinge center point. In the process, the 4 shared vertices of the other block get somewhat moved, but since the hinge is the center point among them they aren't getting "translated" away from the bottom block. I guess calling them wooden is counter-intuitive, since they will morph in specific ways, but I was trying to set it up to visualize. Anyway, let's say I want to be able to rotate this bottom block in a different manner, but have the top block act like it is attached. Thus, if the bottom block moves, the top block is swung around with it, but also with whatever flex it has on the hinge between them.
I was considering incrementally doing the transformations either via axis angle or quaternions, starting with the "top most" block and working my way down the dependency chain, performing the rotation on the current block and every vertex on blocks "above" it. However, this would require messing with offsetting all the vertices to put the current hinge as the origin, performing the rotation, then reversing the previous offset, for each step in this chain. Is there a more efficient way of handling this? I mean efficiency in speed, having extra preprocessed data in memory isn't a big deal. There may also come a time when I can't count on having such a linear dependency chain (such as the top block ends up being attached to the bottom block to form a ring, perhaps). What would be the proper way to handle this for these kind of possibilities?
Sounds to me from your description that you basically want something like a long piece of "jello", i.e., if the top section of the block/prism moves, then there is some secondary movement in the rest of the segments of the block/prism-chain, sort of like how moving a chain or some soft-body will create secondary-movements in the rest of the segments that make-up the chain or ring.
If that is the case, then I suggest actually constructing some "bones", where each bone segment starts and ends at the center-point of the 4-vertices that make-up each start and end-face of the prism/blocks. Then you can calculate when you move one segment of the bone-chain, how much the other bones in the chain should move relative to the bone that was moved. From there, you can weight the rest of the vertices in the prism/block against this central "bone" so that they move the appropriate amount as the bone moves. You may also want to average the vertices attached to one "bone" against another bone segment as well so that there is a fall-off in the weight of the attached vertices, creating a smoother movement if you end up with too much pinching at each "joint".
Using bones with the vertices weighed against the bones should reduce the number of rotational transforms you need to calculate. Only the movement of the bone-joints needs the heavy-lifting calculations ... the vertices themselves are simply interpolated from the location of the bones in the chain.
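A hedged sketch of that bone idea, assuming each segment is a hinge (axis + angle) stacked on the previous one: one pass down the chain accumulates each joint's world rotation and position, and skinned vertices are then just a local offset rotated by their joint's world rotation. All names here are illustrative:
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

// Rodrigues' rotation about a unit axis.
Mat3 axisAngle(Vec3 a, float angle)
{
    float c = std::cos(angle), s = std::sin(angle), t = 1.0f - c;
    return {{
        { t*a.x*a.x + c,     t*a.x*a.y - s*a.z, t*a.x*a.z + s*a.y },
        { t*a.x*a.y + s*a.z, t*a.y*a.y + c,     t*a.y*a.z - s*a.x },
        { t*a.x*a.z - s*a.y, t*a.y*a.z + s*a.x, t*a.z*a.z + c     }
    }};
}

Mat3 mul(const Mat3& A, const Mat3& B)
{
    Mat3 R{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                R.m[i][j] += A.m[i][k] * B.m[k][j];
    return R;
}

Vec3 mul(const Mat3& A, const Vec3& v)
{
    return { A.m[0][0]*v.x + A.m[0][1]*v.y + A.m[0][2]*v.z,
             A.m[1][0]*v.x + A.m[1][1]*v.y + A.m[1][2]*v.z,
             A.m[2][0]*v.x + A.m[2][1]*v.y + A.m[2][2]*v.z };
}

struct Bone      { Vec3 hingeAxis; float angle; float length; };
struct JointPose { Mat3 rot; Vec3 pos; };     // world rotation and position of a joint

// One pass down the chain: each bone's rotation stacks on its parent's, so
// rotating a lower bone automatically swings everything above it.
void solveChain(const Bone* bones, int count, Vec3 rootPos, JointPose* out)
{
    Mat3 rot = {{ {1,0,0}, {0,1,0}, {0,0,1} }};
    Vec3 pos = rootPos;
    for (int i = 0; i < count; ++i) {
        rot = mul(rot, axisAngle(bones[i].hingeAxis, bones[i].angle));
        out[i].rot = rot;
        out[i].pos = pos;
        Vec3 along = mul(rot, Vec3{ 0, bones[i].length, 0 });  // bone points along +Y locally
        pos = { pos.x + along.x, pos.y + along.y, pos.z + along.z };
    }
}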
Consider using an existing tool. Have a look at this question about linking rigid bodies:
https://physics.stackexchange.com/questions/19724/how-to-represent-the-effect-of-linking-rigid-bodies-together
The standard way to handle an articulated, single-ended chain is skeletal animation -- using a chain of "bone" elements (defined by a relative translation/rotation relation), with the option of doing linear interpolation based on the bones to determine the position of the "skin" vertices. (Note that you will need to determine the rotation angle of each "joint" to fully define the pose.)
A ring of elements is more difficult to handle, because you can no longer define the rotation of each joint independently of all the others. To solve this problem, set up a physical simulation or other solver which includes all the constraints. Exactly what to do depends on how you need to manipulate the object -- if it's part of a game engine, physical simulation makes sense, but if it's to be hand-animated, you have a wide range of possibilities for semi-automated rigging (keyword: inverse kinematics).

OpenGL GL_SELECT or manual collision detection?

As seen in the image
I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete, move, etc. in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by casting a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing and multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
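A rough sketch of those steps with classic fixed-function calls; objectCount() and drawObjectFlat() are placeholders for your own scene code, and the picking pass assumes a black clear colour so that 0 can mean "nothing":
#include <GL/gl.h>

GLuint objectCount();              // hypothetical: how many pickable objects you have
void   drawObjectFlat(GLuint i);   // hypothetical: draw object i, geometry only, no shading

int pickObjectAt(int mouseX, int mouseY, int viewportHeight)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(mouseX, viewportHeight - mouseY - 1, 1, 1);  // GL's origin is bottom-left
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_LIGHTING);                                // flat colours only
    glDisable(GL_TEXTURE_2D);                              // (also disable multisampling/blending if on)

    for (GLuint i = 0; i < objectCount(); ++i) {
        GLuint id = i + 1;                                 // 0 is reserved for "background"
        glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
        drawObjectFlat(i);
    }

    GLubyte p[3];
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, p);

    glEnable(GL_LIGHTING);                                 // restore state for the real pass
    glDisable(GL_SCISSOR_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // then draw the visible frame as usual

    GLuint id = p[0] | (p[1] << 8) | (p[2] << 16);
    return (int)id - 1;                                    // -1 means nothing under the cursor
}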
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse and map that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera-position-to-mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ... n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (contour edge with the mouse vector) instead of between edge pairs like I said before; do that for all the pairs and vector-add them all up.
At the end find the magnitude of the resulting vector. If the result is zero (taking into account rounding errors) then you're outside the shape - regardless of facing. If you're interested in facing, then instead of the magnitude you can do a dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algo finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all sum up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels themselves, not the space inside the lines bounding them, which is what I read as what you wish to do.
You could use Kos's answer, but in order to render the interior you need to solid-fill it, which would involve converting all of your contours to convex pieces, which is painful. So I think that would work sometimes and give the wrong answer in some cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and do a dot product with the mouse-pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are pierced by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones off the mouse vector by doing the same maths but only on the 4 sides, and if the box isn't hit then the contour cannot be either.
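For comparison, a more conventional way to get the same inside/outside answer (not the summed-cross-product trick described above) is to intersect the mouse ray with the contour's plane and run a standard even-odd point-in-polygon test in 2D; a hedged sketch:
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Newell's method gives a robust normal for a (possibly non-convex) closed contour.
static Vec3 contourNormal(const std::vector<Vec3>& p)
{
    Vec3 n = { 0, 0, 0 };
    for (size_t i = 0, j = p.size() - 1; i < p.size(); j = i++) {
        n.x += (p[j].y - p[i].y) * (p[j].z + p[i].z);
        n.y += (p[j].z - p[i].z) * (p[j].x + p[i].x);
        n.z += (p[j].x - p[i].x) * (p[j].y + p[i].y);
    }
    return n;
}

// True if the ray (origin o, direction d) pierces the interior of the contour.
bool rayHitsContour(const Vec3& o, const Vec3& d, const std::vector<Vec3>& poly)
{
    if (poly.size() < 3) return false;
    Vec3 n = contourNormal(poly);
    float denom = dot(n, d);
    if (std::fabs(denom) < 1e-6f) return false;            // ray parallel to contour plane
    float t = dot(n, sub(poly[0], o)) / denom;
    if (t < 0) return false;                               // contour is behind the camera
    Vec3 hit = { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };

    // Drop the dominant axis of the normal and do the even-odd test in 2D.
    int drop = 0;
    if (std::fabs(n.y) > std::fabs(n.x)) drop = 1;
    if (std::fabs(n.z) > std::fabs(drop == 0 ? n.x : n.y)) drop = 2;
    auto u = [&](const Vec3& v) { return drop == 0 ? v.y : v.x; };
    auto w = [&](const Vec3& v) { return drop == 2 ? v.y : v.z; };

    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        bool crosses = (w(poly[i]) > w(hit)) != (w(poly[j]) > w(hit));
        if (crosses) {
            float x = u(poly[j]) + (w(hit) - w(poly[j])) /
                      (w(poly[i]) - w(poly[j])) * (u(poly[i]) - u(poly[j]));
            if (x > u(hit)) inside = !inside;              // count crossings to one side
        }
    }
    return inside;
}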
The first option is easy to implement and widely used.

Finding object under mouse

I'm developing a game that basically has its entire terrain made out of AABB boxes. I know the vertices, minimum, and maximum of each box. I also set up my camera like this:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(Camera.rotx,1,0,0);
glRotatef(Camera.roty,0,1,0);
glRotatef(Camera.rotz,0,0,1);
glTranslatef(-Camera.x,-Camera.y,-Camera.z);
What I'm trying to do is basically find the cube the mouse is on. I thought about giving the mouse position a forward directional vector and simply stepping along it until the 'mouse bullet' hits something. However, this involves iterating through all objects several times. Is there a way I could do it by only iterating through all the objects once?
Thanks
This is usually referred to as 'picking'. This looks like a good GL-based link.
If that is tldr, then a basic algorithm you could use
sort objects by z (or keep them sorted by z, or depth buffer tricks etc)
iterate and do a bounds test, stopping when you hit the first one.
This is called ray tracing (oops, my mistake, it's actually ray casting). Every physics engine has this functionality. You can look at one of the simplest - ODE - or its derivative, Bullet. They are open source, so you can take out what you don't need. They both have a handy math library that handles all the commonly needed matrix and vertex operations.
They all have demos on how to do exactly this task.
I suggest you consider looking at this issue from a bigger perspective.
The boxes are just points at a lower resolution. The trick is to reduce the resolution of the mouse to figure out which box it is on.
You may have to perform a 2d to 3d conversion (or vice versa). In most games, the mouse lives in a 2d coordinate world. The stuff "under" the mouse is a 2d projection of a 3d universe.
You want to use a 3D picking algorithm. The idea is that you cast a ray from the user's position in the virtual world in the direction of the click. This blog post explains very clearly how to implement such an algorithm. Essentially, your screen coordinates need to be transformed from screen space to world space. There's a website that has a very good description of the various transformations involved, but I can't post the link due to my rank. Search for book of hook's mouse picking algorithm [I do not own the site and I haven't authored the document].
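A sketch of that screen-to-world transformation using the fixed-function matrices: gluUnProject maps window coordinates back through the projection and modelview matrices, and unprojecting at the near and far planes gives two points that define the pick ray:
#include <GL/gl.h>
#include <GL/glu.h>
#include <cmath>

void mouseRay(int mouseX, int mouseY, double rayOrigin[3], double rayDir[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Window space has its origin at the bottom-left; mouse coords are usually top-left.
    double winX = (double)mouseX;
    double winY = (double)(view[3] - mouseY);

    double nearPt[3], farPt[3];
    gluUnProject(winX, winY, 0.0, model, proj, view, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, model, proj, view, &farPt[0],  &farPt[1],  &farPt[2]);

    double dx = farPt[0] - nearPt[0];
    double dy = farPt[1] - nearPt[1];
    double dz = farPt[2] - nearPt[2];
    double len = std::sqrt(dx * dx + dy * dy + dz * dz);

    rayOrigin[0] = nearPt[0]; rayOrigin[1] = nearPt[1]; rayOrigin[2] = nearPt[2];
    rayDir[0] = dx / len;     rayDir[1] = dy / len;     rayDir[2] = dz / len;
}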
Once you have a ray in the desired direction, you need to test it for intersection with the geometry in your world. Since you have AABB boxes throughout, you can use simple vector equations to check which geometry intersects the ray. I would say that approximating your boxes as spheres would make life very easy, since there is a very simple sphere-ray intersection test. So your ray would be described by what you obtain from the first step (the ray constructed above), and then you would need an intersection test. If you're OK with using spheres, the center of the sphere would be the point where you draw your box and the diameter would be the width of your box.
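For the sphere approximation, a minimal sketch of the ray-sphere test: the ray o + t*d (with d normalized) hits a sphere of centre c and radius r when the quadratic |o + t*d - c|^2 = r^2 has a real, non-negative root:
#include <cmath>

struct Vec3 { float x, y, z; };

bool rayHitsSphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t)
{
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    float b  = oc.x * d.x + oc.y * d.y + oc.z * d.z;          // dot(oc, d)
    float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
    float disc = b * b - cc;                                  // quarter of the discriminant
    if (disc < 0.0f) return false;                            // ray misses the sphere
    t = -b - std::sqrt(disc);                                 // nearer intersection
    if (t < 0.0f) t = -b + std::sqrt(disc);                   // origin is inside the sphere
    return t >= 0.0f;
}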
Good Luck!