OpenGL: getting clipping planes that will bound the entire scene

I am looking for a way to display my entire scene on the screen. This involves a call to glOrtho() with my clipping plane bounds.
However, the size of my scene is dynamic and as such, I need to find a way to determine a projection box that will contain the whole scene.
Any suggestions?

You will need to know the bounding box of every object in your scene. Then you can keep expanding the scene's bounding box by each object's box until it contains them all. You can see an example of this in OpenSceneGraph using their BoundingBox class.
If you need to get the bounding box for a particular object, you can just store the minimum and maximum values along each axis as you load the model (since bounding boxes are axis aligned).
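For illustration, here is a minimal sketch of that idea in fixed-function OpenGL. The AABB struct and function names are mine, not from any particular library, and the sketch assumes the per-object boxes are already in eye space (so -z points into the screen):

```cpp
#include <GL/gl.h>
#include <algorithm>
#include <limits>
#include <vector>

struct AABB {
    float min[3] = {  std::numeric_limits<float>::max(),
                      std::numeric_limits<float>::max(),
                      std::numeric_limits<float>::max() };
    float max[3] = { -std::numeric_limits<float>::max(),
                     -std::numeric_limits<float>::max(),
                     -std::numeric_limits<float>::max() };

    void expandBy(const AABB& other) {          // grow this box to contain `other`
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], other.min[i]);
            max[i] = std::max(max[i], other.max[i]);
        }
    }
};

// Merge all object boxes into one scene box and set an orthographic projection
// that exactly contains it.
void setOrthoToFit(const std::vector<AABB>& objectBoxes) {
    AABB scene;
    for (const AABB& box : objectBoxes)
        scene.expandBy(box);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(scene.min[0], scene.max[0],     // left, right
            scene.min[1], scene.max[1],     // bottom, top
            -scene.max[2], -scene.min[2]);  // near, far (distances along -z)
    glMatrixMode(GL_MODELVIEW);
}
```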

Related

Retrieving occluded faces given a rectangular region

I am trying to do a click-and-drag selection to select all the visible faces of a model (similar to those in 3D modelling software such as Blender).
Initially I thought of using line intersection to find all the occluded faces in the scene: for every pixel in the viewport, trace a line into the scene and find the first intersection. The list of occluded faces would then be the ones that were never intersected. But after experimenting I realized that this method is very slow.
I heard of another method, which goes something like this:
Assign a unique color to each primitive.
Project all of them onto a virtual plane that coincides with the viewport.
From the projected pixels, if the color corresponding to a primitive is not present, that primitive is occluded.
The problem is that I have no idea how to create such a "virtual" plane without revealing it to the end user. Any help, or a better idea for solving this?
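One common way to realize that "virtual plane" is the back buffer itself: render into it with unique ID colors, read it back with glReadPixels, and never swap buffers, so the user sees nothing. A minimal fixed-function sketch (the function names and the drawing callback are illustrative, not part of any API):

```cpp
#include <GL/gl.h>
#include <functional>
#include <set>
#include <vector>

// Encode a face index as an RGB color (supports up to 2^24 faces).
static void setIdColor(unsigned faceId) {
    glColor3ub( faceId        & 0xFF,
               (faceId >>  8) & 0xFF,
               (faceId >> 16) & 0xFF);
}

// Render every face in its ID color into the back buffer (never swapped, so the
// user never sees it), read the pixels back, and collect the IDs that survived
// the depth test. Faces missing from the returned set are occluded.
std::set<unsigned> visibleFaces(int width, int height, unsigned faceCount,
                                const std::function<void(unsigned)>& drawFace)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);               // white = "no face"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDisable(GL_LIGHTING);                              // IDs must reach the framebuffer unmodified

    for (unsigned i = 0; i < faceCount; ++i) {
        setIdColor(i);
        drawFace(i);                                     // caller-supplied: draw face i
    }

    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<unsigned> visible;
    for (size_t p = 0; p < pixels.size(); p += 3) {
        const unsigned id = pixels[p] | (pixels[p + 1] << 8) | (pixels[p + 2] << 16);
        if (id != 0xFFFFFFu)                             // skip the background color
            visible.insert(id);
    }
    return visible;
}
```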

View frustum culling for animated meshes

I implemented frustum culling in my system: it tests the frustum planes against every object's bounding sphere, and it works great. (I find the PlaneVsAabb check unneeded.)
However, the bounding sphere of the mesh is computed for its bind pose, so when the mesh starts moving (e.g. the player attacks with his sword) some vertices can end up outside the sphere.
This often results in the mesh getting culled even though some vertices should still be rendered (e.g. the player's sword that went outside the sphere).
I could think of two possible solutions for this:
For every mesh in every frame, calculate its new bounding sphere based on the bone changes. (I have no idea how to start with this...) Would this be too inefficient?
Add a fixed offset to every sphere radius (based on the entire mesh size, maybe?), so there is no chance of the mesh being culled even when animated.
Yes, (1) would be inefficient in real time. However, you can do a mixture of both: compute the largest possible bounding sphere statically, i.e. when you load the mesh, and use that in (2). That guarantees a better result than some arbitrary offset you make up.
(1) You can add locators to key elements (e.g. a dummy bone on the tip of the sword) and transform their origins while animating. You can do this on the CPU on each update and then calculate a bounding box or bounding sphere from the transformed points. Alternatively, you can precompute bounding volumes for each frame of the animation offline; Doom 3 uses the second approach.
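A minimal sketch of the locator idea, assuming you already have the locators' world-space positions for the current pose (all type and function names here are illustrative): fit a loose sphere around those points each update and cull against that.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Sphere { Vec3 center; float radius; };

// Fit a sphere around already-transformed locator positions: center at the
// points' average, radius = distance to the farthest point. Not the tightest
// possible sphere, but cheap and good enough for culling.
Sphere fitSphere(const std::vector<Vec3>& points) {
    if (points.empty())
        return { {0.0f, 0.0f, 0.0f}, 0.0f };

    Vec3 c{0.0f, 0.0f, 0.0f};
    for (const Vec3& p : points) { c.x += p.x; c.y += p.y; c.z += p.z; }
    const float n = static_cast<float>(points.size());
    c.x /= n; c.y /= n; c.z /= n;

    float r2 = 0.0f;
    for (const Vec3& p : points) {
        const float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        r2 = std::max(r2, dx * dx + dy * dy + dz * dz);
    }
    return { c, std::sqrt(r2) };
}
```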

Get screen coordinates and size of OpenGL 3D Object after transformation

I have a couple of 3D objects in OpenGL in a Processing sketch and I need to find out whether the mouse is hovering over them. Since the objects are constantly transformed, I can't compare their original coordinates and size to the mouse position. I already found the screenX() and screenY() methods, which return the screen coordinates after transformation and translation, but I would still need the displayed size after rotation.
Determining which object the mouse is over is called picking and there are 2 main approaches:
Color picking. Draw each object using a different color into the back buffer (this is only done when picking, the colored objects are never displayed on screen). Then use glReadPixels to read the pixel under the cursor and check its color to determine which object it is. If the mouse isn't over an object you'll get the background color. More details here: Lighthouse 3D Picking Tutorial, Color Coding
Ray casting. You cast a ray through the cursor location into the scene and check if it intersects any objects. More details here: Mouse picking with ray casting
From your description, option 1 would probably be simpler and do what you need.
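If you go with option 1, the readback for a single cursor position can be as small as this (a hedged sketch; the ID-to-color encoding and the white "background" clear color are assumptions of this example, not part of any API):

```cpp
#include <GL/gl.h>

// After rendering each object in its own ID color into the back buffer,
// read just the one pixel under the cursor. Note the y-flip: OpenGL's window
// origin is bottom-left, while mouse coordinates are usually top-left.
int pickObjectAt(int mouseX, int mouseY, int windowHeight) {
    unsigned char rgb[3];
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, rgb);

    const int id = rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
    return (id == 0xFFFFFF) ? -1 : id;   // -1 = background (assumes white clear color)
}
```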

How to render CGAL objects in OpenGL properly?

I am quite new to CGAL as well as OpenGL. I know that CGAL provides a Qt interface to display objects, but I want to use plain OpenGL, and I am already able to render polyhedra and Nef polyhedra in OpenGL (I referred to the polyhedron demo). The question is how to display polyhedra of different sizes properly in OpenGL. I apply a translation in my program with glTranslatef to view the objects properly, but the same values may not work for every object because of differences in size. Therefore I need to apply translations based on the size of the object. If I could find the longest diagonal of the object, this would be possible by adjusting the parameters I pass to glTranslatef(). Is there any way to do this in CGAL?
Treat your objects as a collection of points, and create a bounding volume from it. The size of the bounding volume should give you the scaling required. For example, you might wish to center the view around the center of the bounding sphere, and scale the view based on its radius.
See the CGAL chapter on bounding volumes.
Also, you probably want to use glScale to scale the view in addition to glTranslate to center it.
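For illustration, a hedged sketch along those lines, assuming a CGAL::Polyhedron_3-like object that exposes points_begin()/points_end() and the old fixed-function matrix stack (the function name and the 2-unit target volume are arbitrary choices of this example):

```cpp
#include <GL/gl.h>
#include <CGAL/number_utils.h>   // CGAL::to_double
#include <algorithm>
#include <cmath>
#include <limits>

// Center the modelview on the object's bounding box and scale it so the box's
// longest diagonal fits a ~2-unit viewing volume.
template <typename Polyhedron>
void centerAndScale(const Polyhedron& P) {
    double lo[3], hi[3];
    for (int i = 0; i < 3; ++i) {
        lo[i] =  std::numeric_limits<double>::max();
        hi[i] = -std::numeric_limits<double>::max();
    }

    // Accumulate the axis-aligned bounding box over all vertex points.
    for (auto it = P.points_begin(); it != P.points_end(); ++it) {
        const double c[3] = { CGAL::to_double(it->x()),
                              CGAL::to_double(it->y()),
                              CGAL::to_double(it->z()) };
        for (int i = 0; i < 3; ++i) {
            lo[i] = std::min(lo[i], c[i]);
            hi[i] = std::max(hi[i], c[i]);
        }
    }

    const double dx = hi[0] - lo[0], dy = hi[1] - lo[1], dz = hi[2] - lo[2];
    const double diagonal = std::sqrt(dx * dx + dy * dy + dz * dz);
    const double scale = (diagonal > 0.0) ? 2.0 / diagonal : 1.0;

    glMatrixMode(GL_MODELVIEW);
    glScaled(scale, scale, scale);          // applied to vertices second: uniform size
    glTranslated(-(lo[0] + hi[0]) * 0.5,    // applied to vertices first: center at the origin
                 -(lo[1] + hi[1]) * 0.5,
                 -(lo[2] + hi[2]) * 0.5);
}
```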

Detect the surfaces of a 3D mesh, selected by mouse

I have an application that is used to display objects in 3D. Now I want to improve it: if I double-click on an area of my mesh, I want to retrieve the edges and surfaces in that area, so that I can then subdivide it. Is it possible to retrieve them? Thanks.
Convert the click on the viewport to a ray in world space.
Then query your scene with the ray to find intersecting objects (e.g. with an axis-aligned bounding box search using your scene's octree, if you have one).
Then, if you need to detect triangles, test the ray against all triangles in the objects found by the scene query. You could optimize this step if necessary by building an octree for the object's mesh. The intersection closest to the ray origin is the hit point.
For each object you can transform the ray into its own local coordinate system.
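As an example of the first step, a ray can be recovered from a mouse click with gluUnProject. This is a hedged sketch: it assumes the fixed-function pipeline, and that the modelview matrix contains only the camera transform at the time of the call, so the unprojected points land in world space.

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

struct Ray { double origin[3]; double dir[3]; };

Ray rayFromClick(int mouseX, int mouseY) {
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Window y is flipped relative to mouse y (OpenGL's origin is bottom-left).
    const double wx = mouseX;
    const double wy = view[3] - 1 - mouseY;

    // Unproject the click on the near plane (winZ = 0) and far plane (winZ = 1).
    double nearPt[3], farPt[3];
    gluUnProject(wx, wy, 0.0, model, proj, view, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(wx, wy, 1.0, model, proj, view, &farPt[0],  &farPt[1],  &farPt[2]);

    Ray r;
    for (int i = 0; i < 3; ++i) {
        r.origin[i] = nearPt[i];
        r.dir[i]    = farPt[i] - nearPt[i];   // not normalized; normalize if needed
    }
    return r;
}
```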