How to detect if the camera cannot see its target? - OpenGL

I would like to know how I can detect whether an obstacle (the gray cube in the screenshot below) is blocking the camera's view of its target (the pink cube in the screenshot below), so that I can temporarily move the camera closer to the target.

You might look into occlusion queries to solve this problem. Issue a GL_SAMPLES_PASSED query around the draw call for the pink cube; if no samples passed, the pink cube wasn't drawn. Be aware that an object outside the viewing frustum also produces zero samples, which would give you a false positive, so do a frustum-cull check first: only if the object is inside the frustum and the occlusion query still reports zero samples should you move the camera closer.
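A minimal sketch of that idea, assuming a drawPinkCube() helper and a targetInsideFrustum flag produced by your own frustum test (both are placeholders):

```cpp
// Hypothetical sketch: occlusion query wrapped around the target's draw call.
GLuint query;
glGenQueries(1, &query);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawPinkCube();                        // draw only the target object
glEndQuery(GL_SAMPLES_PASSED);

GLuint samplesPassed = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samplesPassed);  // blocks until the result is ready

if (samplesPassed == 0 && targetInsideFrustum) {
    // Target is inside the frustum but fully occluded: move the camera closer.
}
glDeleteQueries(1, &query);
```

In practice you would reuse the query object and read the result a frame later to avoid stalling the pipeline.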

Related

View frustum culling for animated meshes

I implemented frustum culling in my system; it tests the frustum planes against every object's bounding sphere, and it works great. (I find the PlaneVsAabb check unneeded.)
However, the bounding sphere of the mesh is computed for its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can move outside the sphere.
This often results in the mesh getting culled even though some of its vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
(1) For every mesh, in every frame, recalculate its bounding sphere based on the bone transforms. (I have no idea how to start with this...) Could this be too inefficient?
(2) Add a fixed offset to every sphere radius (based on the overall mesh size, maybe?), so there is no chance of the mesh getting culled even when animated.
(1) on its own would indeed be inefficient in real time. However, you can do a mixture of both: compute the largest possible bounding sphere statically, i.e. when you load the mesh, and use that in (2). That guarantees a better result than some arbitrary offset you make up.
(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU each update and then calculate a bounding box or bounding sphere from them. Alternatively, you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
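A minimal sketch of the per-frame locator idea, assuming glm and that boneMatrices / locatorLocalPositions come from your own animation system (both names are placeholders):

```cpp
#include <glm/glm.hpp>
#include <vector>

struct Sphere { glm::vec3 center; float radius; };

// Recompute a rough bounding sphere from a few locator positions
// (e.g. dummy bones placed at extremities such as the sword tip).
Sphere boundingSphereFromLocators(const std::vector<glm::mat4>& boneMatrices,
                                  const std::vector<glm::vec3>& locatorLocalPositions)
{
    std::vector<glm::vec3> points;
    points.reserve(locatorLocalPositions.size());
    for (std::size_t i = 0; i < locatorLocalPositions.size(); ++i)
        points.push_back(glm::vec3(boneMatrices[i] * glm::vec4(locatorLocalPositions[i], 1.0f)));

    // Center = average of the points; radius = distance to the farthest one.
    glm::vec3 center(0.0f);
    for (const glm::vec3& p : points) center += p;
    center /= static_cast<float>(points.size());

    float radius = 0.0f;
    for (const glm::vec3& p : points)
        radius = glm::max(radius, glm::length(p - center));

    return { center, radius };
}
```

For the offline variant you would run the same computation over every frame of every animation at export time and store the per-frame volumes with the clip.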

Rendering Point Sprites across cameras in cube maps

I'm rendering a particle system of vertices, which are then tessellated into quads in a geom shader, and textured/rendered as point sprites. Then they are scaled in size depending on how far away they are from the camera. I'm trying to render out every frame of my scene into cube maps. So essentially I place six cameras into my scene and point them in each direction for the face of the cube and save an image.
My point sprites are of varying sizes. When they get near the border of one camera's view (if they are large enough), they appear in two cameras simultaneously. Since point sprites always face the camera, they are not continuous along the seam when I wrap my cube map back into 3D space. This is especially noticeable when the points are quite close to the camera, as the points are larger and stretch further across into both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points that are near the edge of the camera, because when I put everything back into 3D there would be strange areas where the cloud is more sparsely populated. Another thought I had was to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3D space. I feel like I could manually edit the frames in Photoshop so they look OK, but this would be kind of a pain since it's an animation at 30fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / openGL fwiw.
Instead of making them face each camera, make them face the shared origin of the cameras. I'm not sure this fixes everything, but intuitively it should look close to correct. Maybe this is already what you do, I have no idea.
(I'd like for this to be a comment, but no reputation)
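A minimal sketch of that orientation, done on the CPU for clarity (the same math works in a geometry shader); glm is assumed, and cubemapCenter stands for the shared position of your six cameras:

```cpp
#include <glm/glm.hpp>
#include <array>
#include <cmath>

// Build a particle quad that faces a fixed point (the shared cube-map camera
// position) rather than each face's view plane, so seams line up.
std::array<glm::vec3, 4> buildQuad(const glm::vec3& particlePos,
                                   const glm::vec3& cubemapCenter,
                                   float halfSize)
{
    glm::vec3 toCenter = glm::normalize(cubemapCenter - particlePos);

    // Pick any up vector that is not parallel to the facing direction.
    glm::vec3 up = (std::fabs(toCenter.y) < 0.99f) ? glm::vec3(0, 1, 0) : glm::vec3(1, 0, 0);
    glm::vec3 right = glm::normalize(glm::cross(up, toCenter)) * halfSize;
    up = glm::normalize(glm::cross(toCenter, right)) * halfSize;

    return { particlePos - right - up, particlePos + right - up,
             particlePos + right + up, particlePos - right + up };
}
```

Because every cube-map face then sees the same quad orientation, the sprite should match along the seams; the alpha blending order is a separate issue.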

How would I go about applying a skybox to the world, openGL C++

I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox, I just don't know how to draw it properly so it will fit around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it's only visible from the outside; once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show when I'm inside the cube, and I could make it larger?
From outside the cube looking at it
From inside looking out
I had a similar problem a few weeks back; if you are looking for some pseudo-code, I think I may be able to help. First of all, a cube isn't the best choice here, because the box won't look natural; map the texture onto a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves with your camera
Apply the texture to that sphere; this will give the impression that the sky moves with you
When you are drawing it, disable your z-buffer (and skip frustum culling for it, assuming you're using any culling algorithm); this allows the skybox to be drawn while still ensuring terrain is drawn over the top of it when OpenGL performs its depth tests.
Note: don't forget to re-enable the z-buffer after the skybox has been drawn, otherwise your terrain elements will appear to be outside the sphere and you will only see the skybox.
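A minimal OpenGL sketch of those steps, assuming the legacy fixed-function matrix stack; drawSkyGeometry() and cameraPos are placeholders for your own sky mesh and camera position:

```cpp
// Draw the sky sphere/cube centred on the camera, without writing depth.
glDepthMask(GL_FALSE);        // don't write sky depth, so terrain always draws over it
glDisable(GL_CULL_FACE);      // we are inside the geometry, so skip back-face culling

glPushMatrix();
glTranslatef(cameraPos.x, cameraPos.y, cameraPos.z);  // keep the sky locked to the camera
drawSkyGeometry();            // textured sphere (or inverted cube)
glPopMatrix();

glDepthMask(GL_TRUE);         // re-enable depth writes for the rest of the scene
glEnable(GL_CULL_FACE);
```

With a shader-based pipeline the idea is the same: build the sky's model matrix from the camera position and leave depth writes off while drawing it.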
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo you can find it here.
Check out line 286 in this file to see how the Skybox is rendered, then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere, the main method to be concerned with in the shader file is SetShaderParameters()
In terms of moving the skybox relative to your camera, simply set the skybox's WVP matrix from your camera's, and then tweak the skybox's x, y, z extents to your liking.
Extra: if you are going to implement multiplayer aspects, just disable back-face rendering for the sphere; then each player can see their own skybox but opponents cannot. Alternatively, you can create one large sphere around the whole world.
Hope that helps. If you need any more help, just ask; I know this stuff can be fairly dense at first. :)

OpenGL Perspective Texture Flickering

I have a very simple OpenGL (3.2) setup, no lighting, perspective projection and a simple shader program (applies projection transformation and uses texture2D to read the color from the texture).
The camera is looking down the negative z-axis and I draw a few walls and pillars on the x-y-plane with a texture (http://i43.tinypic.com/2ryszlz.png).
Now I'm moving the camera in the x-y-plane and this is what it looks like:
http://i.imgur.com/VCrNcly.gif.
My question is now: How do I handle the flickering of the wall texture?
Because of the angle the camera views the walls at, the texture gets compressed on screen, so one pixel on the screen actually covers several pixels of the texture, but only one of them is chosen for display. From the information I have access to in the shaders, I don't see how to perform an operation that interpolates over the required colors.
As this looks like a problem nearly every 3D application should have, the solution is probably pretty simple (I hope?).
I can't quite make out the images, but from what you are describing, you are looking for mipmapping. Please google it; it's a very easy and very widely used technique, and you will be able to use it by adding one or two lines to your program. Good luck.
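For reference, a minimal sketch of what those lines typically look like in OpenGL 3.x, assuming wallTexture is your already-uploaded texture object (a placeholder name):

```cpp
glBindTexture(GL_TEXTURE_2D, wallTexture);
glGenerateMipmap(GL_TEXTURE_2D);  // build the mip chain from the base level you uploaded

// Trilinear filtering: blend between mip levels when the texture is minified.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```

Adding anisotropic filtering on top of this helps further at shallow viewing angles.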

Can someone describe the algorithm used by Ken Silverman's Voxlap engine?

From what I gathered he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares, which caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code, but it was difficult for me to understand what was going on. I would like to implement something similar and would like to understand the algorithm for doing so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE (run-length encoded) list of surface voxels for each x,y stack of voxels (with z meaning 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
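A heavily simplified sketch of that per-column idea, reduced to a height map (so no RLE spans or overhangs) and not taken from Voxlap itself; heightAt() and colorAt() are placeholders for your own voxel column data:

```cpp
#include <vector>
#include <cmath>

float    heightAt(float x, float y);   // placeholder: top of the voxel column at (x, y)
unsigned colorAt(float x, float y);    // placeholder: colour of that column's surface

// Ray-cast one vertical screen column front-to-back, filling only the part of
// the column that is still visible (the visible span shrinks as geometry is drawn).
void renderColumn(int screenX, int screenW, int screenH,
                  float camX, float camY, float camZ, float camAngle,
                  std::vector<unsigned>& framebuffer)
{
    const float fov = 1.0f;
    float rayAngle = camAngle + ((screenX / (float)screenW) - 0.5f) * fov;
    float dx = std::cos(rayAngle), dy = std::sin(rayAngle);

    int maxY = screenH;                          // lowest screen row not yet filled
    for (float dist = 1.0f; dist < 1000.0f; dist += 1.0f) {
        float wx = camX + dx * dist, wy = camY + dy * dist;
        float h = heightAt(wx, wy);

        // Project the top of this column to a screen row (simple perspective divide).
        int top = (int)((camZ - h) / dist * 200.0f + screenH * 0.5f);
        if (top < 0) top = 0;

        for (int y = top; y < maxY; ++y)         // draw only the still-visible part
            framebuffer[y * screenW + screenX] = colorAt(wx, wy);

        if (top < maxY) maxY = top;
        if (maxY <= 0) break;                    // whole column covered: stop early
    }
}
```

Voxlap extends this idea with RLE span lists per column (so caves and overhangs work) and, for full 6-DOF, with tilted scanlines as described above.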
I didn't look at the algorithm itself, but I can tell the following based off the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2d square
Yep, that's how ray-tracing works: it doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at the nearby cells and checking which are occupied and which are not, and then performing the lighting calculation from that. Alternatively, each voxel can store a normal along with its color and other material properties.
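A minimal sketch of the occupancy-based normal estimate, assuming an isSolid() lookup into your voxel grid (a placeholder) and glm for the vector math:

```cpp
#include <glm/glm.hpp>

bool isSolid(int x, int y, int z);   // placeholder: occupancy lookup into the voxel grid

// Approximate the surface normal at a voxel from its six neighbours:
// occupied neighbours "push" the normal away from themselves.
glm::vec3 approximateNormal(int x, int y, int z)
{
    float nx = (isSolid(x - 1, y, z) ? 1.0f : 0.0f) - (isSolid(x + 1, y, z) ? 1.0f : 0.0f);
    float ny = (isSolid(x, y - 1, z) ? 1.0f : 0.0f) - (isSolid(x, y + 1, z) ? 1.0f : 0.0f);
    float nz = (isSolid(x, y, z - 1) ? 1.0f : 0.0f) - (isSolid(x, y, z + 1) ? 1.0f : 0.0f);

    glm::vec3 n(nx, ny, nz);
    return (glm::length(n) > 0.0f) ? glm::normalize(n) : glm::vec3(0.0f, 1.0f, 0.0f);
}
```

Sampling a larger neighbourhood (e.g. a 3x3x3 box) gives a smoother normal at the cost of a few more lookups.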