How to render CGAL objects in OpenGL properly? - opengl

I am quite new to CGAL as well as OpenGL. I know that CGAL provides a Qt interface to display objects, but I want to use only OpenGL, and I am able to render polyhedrons and Nef polyhedrons in OpenGL (I referred to the Polyhedron demo). My question is: how do I display polyhedrons of different sizes properly in OpenGL? I apply a translation in my program using glTranslatef to view the objects properly. The problem is that the same translation may not work for every object because of differences in size, so I need to choose the translation based on the size of the object. If I could find the longest diagonal of the object, this would be possible by adjusting the values I pass to glTranslatef(). Is there any way to do this in CGAL?

Treat your objects as a collection of points, and create a bounding volume from it. The size of the bounding volume should give you the scaling required. For example, you might wish to center the view around the center of the bounding sphere, and scale the view based on its radius.
See the chapter on bounding volumes in the CGAL manual.
Also, you probably want to use glScale to scale the view in addition to glTranslate to center it.
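Roughly, in code (an untested sketch; poly is assumed to be a CGAL::Polyhedron_3 over a Cartesian kernel, and the CGAL Bounding Volumes chapter also provides ready-made helpers such as CGAL::bounding_box or a min-sphere if you prefer a bounding sphere):

    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Polyhedron_3.h>
    #include <CGAL/Bbox_3.h>
    #include <GL/gl.h>
    #include <cmath>

    typedef CGAL::Simple_cartesian<double> Kernel;
    typedef CGAL::Polyhedron_3<Kernel>     Polyhedron;

    void center_and_scale_view(const Polyhedron& poly)
    {
        Polyhedron::Point_const_iterator it = poly.points_begin();
        if (it == poly.points_end())
            return;                                   // nothing to draw

        // Accumulate an axis-aligned bounding box over all vertex points.
        CGAL::Bbox_3 bbox = it->bbox();
        for (++it; it != poly.points_end(); ++it)
            bbox = bbox + it->bbox();

        const double cx = 0.5 * (bbox.xmin() + bbox.xmax());
        const double cy = 0.5 * (bbox.ymin() + bbox.ymax());
        const double cz = 0.5 * (bbox.zmin() + bbox.zmax());

        // The box diagonal is the "longest diagonal" asked about above.
        const double dx = bbox.xmax() - bbox.xmin();
        const double dy = bbox.ymax() - bbox.ymin();
        const double dz = bbox.zmax() - bbox.zmin();
        const double diag = std::sqrt(dx * dx + dy * dy + dz * dz);

        // Scale so the object roughly fits a view volume of size 2, then
        // move its center to the origin (translation is applied to the
        // vertices first, then the scale).
        const double s = (diag > 0.0) ? 2.0 / diag : 1.0;
        glScaled(s, s, s);
        glTranslated(-cx, -cy, -cz);
    }

Call this after setting up your modelview matrix and before drawing the polyhedron; the same idea works for a Nef polyhedron if you convert it to a Polyhedron_3 first or iterate its vertices directly.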

Related

In OpenGL, how to clear drawn primitives in a 3D region

Let us say, I draw 3 points with glVertex3f at (0,0,0), (9,0,0) and (10,0,0)
I would like to clear all Vertex3f points in the bounding box region (2,-1,-1) to (15, 1, 1) which should include the last two points.
How does one do this in OpenGL?
Manage your point drawing outside of OpenGL; that is, don't use OpenGL to accomplish this. OpenGL is used for drawing data, not keeping track of it. If you want to get rid of certain objects, don't tell OpenGL to draw them. There are various space-partitioning data structures at your disposal for efficiently finding intersections.
The naive way is to check to see that all the points in your scene are outside that region before you draw them.
A better way is to use a kd-tree or an octree, which cuts the number of comparisons from linear down to roughly logarithmic.
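The naive version is just a filter in your draw loop, something like this rough sketch (Point3, lo, and hi are placeholder names):

    #include <GL/gl.h>
    #include <vector>

    struct Point3 { float x, y, z; };

    // True if p lies inside the axis-aligned box [lo, hi].
    bool inside_box(const Point3& p, const Point3& lo, const Point3& hi)
    {
        return p.x >= lo.x && p.x <= hi.x &&
               p.y >= lo.y && p.y <= hi.y &&
               p.z >= lo.z && p.z <= hi.z;
    }

    void draw_points(const std::vector<Point3>& points,
                     const Point3& lo, const Point3& hi)
    {
        glBegin(GL_POINTS);
        for (std::size_t i = 0; i < points.size(); ++i)
            if (!inside_box(points[i], lo, hi))   // only draw surviving points
                glVertex3f(points[i].x, points[i].y, points[i].z);
        glEnd();
    }

For the example in the question, lo = (2, -1, -1) and hi = (15, 1, 1) would filter out (9, 0, 0) and (10, 0, 0) and leave only the point at the origin.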

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point in a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
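As a simplified variant of the readback step (no full g-buffer, just the depth and color buffers plus gluUnProject to reconstruct the positions), the sketch below shows the idea; width and height are assumed to be your framebuffer size, and the depth buffer is assumed to be cleared to 1.0:

    #include <GL/gl.h>
    #include <GL/glu.h>
    #include <vector>

    struct CloudPoint { double x, y, z; unsigned char r, g, b, a; };

    std::vector<CloudPoint> read_point_cloud(int width, int height)
    {
        std::vector<float> depth(width * height);
        std::vector<unsigned char> color(width * height * 4);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0]);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &color[0]);

        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        std::vector<CloudPoint> cloud;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float z = depth[y * width + x];
                if (z >= 1.0f) continue;              // skip background pixels
                CloudPoint p;
                // Un-project the window-space sample back to object space.
                gluUnProject(x + 0.5, y + 0.5, z, model, proj, viewport,
                             &p.x, &p.y, &p.z);
                const unsigned char* c = &color[4 * (y * width + x)];
                p.r = c[0]; p.g = c[1]; p.b = c[2]; p.a = c[3];
                cloud.push_back(p);
            }
        return cloud;
    }
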
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or equivalently for several rotations of the scene) to be sure of capturing fragments from the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrices math.
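For instance, a rough sketch of applying an OpenGL-style column-major 4x4 matrix (the layout glGetDoublev(GL_MODELVIEW_MATRIX, m) returns) to a point, without touching OpenGL at all; Vec3 is just a placeholder type:

    struct Vec3 { double x, y, z; };

    // Multiply a point (w = 1) by a column-major 4x4 matrix: m[col * 4 + row].
    Vec3 transform(const double m[16], const Vec3& v)
    {
        Vec3 r;
        const double w = m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15];
        r.x = (m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12]) / w;
        r.y = (m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13]) / w;
        r.z = (m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14]) / w;
        return r;
    }

The division by w only matters once a projection matrix is involved; for a plain modelview matrix, w is 1.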
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

directx 9 mouse over an object

When I used to draw things with GDI+, I knew the coordinates of my objects, but now that I'm using meshes in DirectX 9 I have no idea how to get an object's coordinates, so I can't tell whether the mouse is over it. Any idea how to find the coordinates?
You need to cast the mouse position into the world and convert it to world-space coordinates, which then are tested against the various objects. You may be able to find a library to do this for you, I know OpenGL supports picking and most wrappers offer enhanced functions for that, but the principle is:
Find the mouse coordinates in the window. Using those coordinates, cast a ray (whether you actually use a ray in the system or simply do similar math isn't a big deal here) into the world. You'll use the current view matrix ("camera" angle and position) to calculate the direction and origin of the ray. Using that ray, test against your objects, their bounding boxes or geometry (whichever you choose) to find the object. Using the intersection coordinates, find the object that is at that location.
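In OpenGL/GLU terms the ray construction looks roughly like the sketch below (the same math applies in DirectX with D3DXVec3Unproject); un-project the mouse position at the near and far planes and subtract:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void mouse_ray(int mouse_x, int mouse_y,
                   double origin[3], double direction[3])
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        // Window y runs top-down in most toolkits; OpenGL expects bottom-up.
        const double win_y = viewport[3] - mouse_y;

        double nx, ny, nz, fx, fy, fz;
        gluUnProject(mouse_x, win_y, 0.0, model, proj, viewport, &nx, &ny, &nz);
        gluUnProject(mouse_x, win_y, 1.0, model, proj, viewport, &fx, &fy, &fz);

        origin[0] = nx; origin[1] = ny; origin[2] = nz;
        direction[0] = fx - nx; direction[1] = fy - ny; direction[2] = fz - nz;
        // Normalize `direction` if your intersection tests need a unit vector.
    }
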
You can also use the depth buffer for this very easily, if your scene is relatively static. Simply render with a depth texture set as the Z buffer, then use the depth, mouse position and view matrix to find the point of intersection.
It may be possible to do this in reverse, that is, map each object to the appropriate screen coordinates, but you will likely run into issues with depth sorting and overlapping areas. Also, it can be unnecessarily slow mapping every object to window areas each frame.

OpenGL: getting clipping planes that will bound the entire scene

I am looking for a way to display my entire scene on the screen. This involves a call to glOrtho() with my clipping plane bounds.
However, the size of my scene is dynamic and as such, I need to find a way to determine a projection box that will contain the whole scene.
Any suggestions?
You will need to know the bounding boxes of every object in your scene. Then you can keep expanding the scene's bounding box to enclose each object in it. You can see an example of this in OpenSceneGraph using their BoundingBox class.
If you need to get the bounding box for a particular object, you can just store the minimum and maximum values along each axis as you load the model (since bounding boxes are axis aligned).
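As a rough, untested sketch (Box and objects are placeholder names, and the boxes are assumed to already be expressed in eye/camera coordinates so they can be fed to glOrtho directly):

    #include <GL/gl.h>
    #include <algorithm>
    #include <vector>

    struct Box { float min[3], max[3]; };

    // Grow `scene` so that it also encloses `obj`.
    void expand(Box& scene, const Box& obj)
    {
        for (int i = 0; i < 3; ++i) {
            scene.min[i] = std::min(scene.min[i], obj.min[i]);
            scene.max[i] = std::max(scene.max[i], obj.max[i]);
        }
    }

    void set_projection(const std::vector<Box>& objects)
    {
        if (objects.empty()) return;
        Box scene = objects[0];
        for (std::size_t i = 1; i < objects.size(); ++i)
            expand(scene, objects[i]);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        // glOrtho's near/far are distances along -z in eye space,
        // hence the negated z extents.
        glOrtho(scene.min[0], scene.max[0],    // left, right
                scene.min[1], scene.max[1],    // bottom, top
                -scene.max[2], -scene.min[2]); // near, far
    }

You would typically add a small margin to the box so geometry isn't clipped exactly at the edges.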

Drawing "point-like" shapes in OpenGL, indifferent to zoom

I'm working with Qt and QWt3D Plotting tools, and extending them to provide some 3-D and 2-D plotting functionality that I need, so I'm learning some OpenGL in the process.
I am currently able to plot points using OpenGL, but only as circles (or "squares" by turning anti-aliasing off). These points act the way I like - i.e. they don't change size as I zoom in, although their x/y/z locations move appropriately as I zoom, pan, etc.
What I'd like to be able to do is plot points using a myriad of shapes (^, <, >, *, ., etc.). From what I understand of OpenGL (which isn't very much), this is not trivial to accomplish, because OpenGL treats everything as a "real" 3-D object, so zooming in on any OpenGL shape but a "point" changes the object's projected size.
After doing some reading, I think there are (at least) 2 possible solutions to this problem:
Use OpenGL textures. This doesn't seem too difficult, but I believe that the texture images will get larger and smaller as I zoom in - is that correct?
Use OpenGL polygons, lines, etc. and draw *'s, triangles, or whatever. But here again I run into the same problem - how do I prevent OpenGL from re-sizing the "points" as I zoom?
Is the solution to simply bite the bullet and re-draw the whole data set each time the user zooms or pans to make sure that the points stay the same size? Is there some way to just tell OpenGL not to re-calculate an object's size?
Sorry if this is in the OpenGL doc somewhere - I could not find it.
What you want is called a "point sprite." OpenGL 1.4 supports these through the ARB_point_sprite extension.
Try this tutorial
http://www.ploksoftware.org/ExNihilo/pages/Tutorialpointsprite.htm
and see if it's what you're looking for.
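The fixed-function setup is short; a rough sketch (it assumes a 2D texture containing the marker image is already created and bound, and that the ARB_point_sprite enums come from your GL headers or an extension loader):

    #include <GL/gl.h>

    #ifndef GL_POINT_SPRITE_ARB
    #define GL_POINT_SPRITE_ARB  0x8861
    #define GL_COORD_REPLACE_ARB 0x8862
    #endif

    void draw_markers(const float* xyz, int count, float size_in_pixels)
    {
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_POINT_SPRITE_ARB);
        // Generate texture coordinates across each point so the marker
        // texture is stretched over the whole sprite.
        glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);

        // Screen-space size in pixels: it does not change with zoom
        // (implementations clamp the maximum point size).
        glPointSize(size_in_pixels);

        glBegin(GL_POINTS);
        for (int i = 0; i < count; ++i)
            glVertex3fv(xyz + 3 * i);
        glEnd();

        glDisable(GL_POINT_SPRITE_ARB);
    }
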
The scene is re-drawn every time the user zooms or pans, anyway, so you might as well re-calculate the size.
You suggested using a textured poly, or using polygons directly, which sound like good ideas to me. It sounds like you want the plot points to remain in the correct position in the graph as the camera moves, but you want to prevent them from changing size when the user zooms. To do this, just resize the plot point polygons so the ratio between the polygon's size and the distance to the camera remains constant. If you've got a lot of plot points, computing the distance to the camera might get expensive because of the square-root involved, but a lookup table would probably solve that.
In addition to resizing, you'll want to keep the plot points facing the camera, so billboarding is your solution, there.
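The constant-ratio idea boils down to one formula; a sketch (fovy_deg and viewport_height_px are assumed to match your gluPerspective call and window size):

    #include <cmath>

    // World-space size to give a marker polygon so it covers
    // `desired_pixels` on screen, regardless of its distance to the camera.
    double marker_world_size(double distance_to_camera,
                             double desired_pixels,
                             double fovy_deg,
                             double viewport_height_px)
    {
        const double kPi = 3.14159265358979323846;
        // Height of the view frustum at that distance, in world units.
        const double frustum_height =
            2.0 * distance_to_camera * std::tan(fovy_deg * kPi / 360.0);
        // World units per pixel at that distance, times the desired pixel size.
        return desired_pixels * frustum_height / viewport_height_px;
    }
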
An alternative is to project each of the 3D plot point locations to find out their 2D screen coordinates. Then simply render the polygons at those screen coordinates. No re-scaling necessary. However, gluProject is quite slow, so I'd be very surprised if this wasn't orders of magnitude slower than simply rescaling the plot point polygons like I first suggested.
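If you do try that route, the projection step looks roughly like this (after which you would switch to a 2D orthographic projection matching the window and draw the marker at the returned pixel coordinates with a fixed size):

    #include <GL/gl.h>
    #include <GL/glu.h>

    // Project a 3D plot point to window coordinates; returns false if the
    // point is behind the camera or outside the depth range.
    bool plot_point_to_screen(double x, double y, double z,
                              double& win_x, double& win_y)
    {
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble win_z;
        if (gluProject(x, y, z, model, proj, viewport,
                       &win_x, &win_y, &win_z) != GL_TRUE)
            return false;
        return win_z >= 0.0 && win_z <= 1.0;
    }
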
Good luck!
There's no easy way to do what you want to do. You'll have to dynamically resize the primitives you're drawing depending on the camera's current zoom. You can use a technique known as billboarding to make sure that your objects always face the camera.