Pixel-based 3D Visualization - C++

I need to visualize 3D point clouds using C++. I started learning OpenGL, but so far all I find is drawing shapes using vertices.
What if I want the 3D scene to be built using pixels? Does OpenGL support this? If not, what alternatives do I have?

Two approaches:
Render the geometry using GL_POINTS mode. You'll end up with a literal display of the point cloud (i.e. bigger and smaller dots, no edges, no solid faces). This is very simple to implement; see the sketch after this list.
Process your data so that you have solid geometry (i.e. triangles) representing the original shape. There are a couple of algorithms that try to generate a mesh from a 3D bitmap. The most notable are Marching Cubes and Marching Tetrahedra. These are commonly used e.g. in medicine (to create a 3D mesh of an organ after it is scanned by MRI or similar). You'll find plenty of resources for them on Google.
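For the first approach, here is a minimal sketch, assuming a GL context already exists (e.g. created via GLUT or GLFW) and that the cloud is stored as packed x,y,z floats:

```cpp
// Minimal GL_POINTS sketch using a legacy client-side vertex array.
// Assumes a current GL context; `points` holds packed x,y,z floats.
#include <GL/gl.h>
#include <vector>

void drawPointCloud(const std::vector<float>& points)
{
    glPointSize(3.0f);  // render each vertex as a 3-pixel dot
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, points.data());
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(points.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);
}
```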

I think what you are looking for is Point Sprites. There are some examples of Point Sprites and particle clouds on http://www.codesampler.com/oglsrc/oglsrc_6.htm (although I haven't tried these examples myself).
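For reference, the classic fixed-function setup looks roughly like this (a hedged sketch; `pointCount` and the bound particle texture are assumptions):

```cpp
// Hedged sketch: classic point sprites (OpenGL 2.0 fixed-function style).
// Each GL_POINTS vertex is drawn as a screen-aligned textured square.
glEnable(GL_POINT_SPRITE);                             // turn points into sprites
glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); // texcoords per sprite
glEnable(GL_TEXTURE_2D);                               // assumes a texture is bound
glPointSize(16.0f);
glDrawArrays(GL_POINTS, 0, pointCount);                // pointCount is an assumption
glDisable(GL_POINT_SPRITE);
```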

Related

Maximal convex patching in computer graphics

Given a 3D object whose surface is represented as a triangular mesh (a mesh of 3D triangle objects), I need to find the maximal continuous convex patches on the surface of the object.
I am using OpenGL to render the graphics within a C++ program. What kind of methods or algorithms should I use to find the convex patches?
I have to apply different colors to the different convex patches on the object to signify the selection.
Say I have a sphere; then the whole sphere is one maximal convex patch. Any portion of the sphere's surface is a convex patch; by maximal I mean the largest continuous convex patch that can be found. In the rendering, depending on the viewing angle, the maximal convex patches visible to the viewer have to be colored.
Start from any triangle. Traverse its edges and check that the angle between the two adjacent triangles is less than 180 degrees. If it is, add the neighbouring triangle to the current selection and continue expanding.
The check is actually really simple if you use vector geometry. Say A-B is the common edge, with C on the selected side and D on the other. Then just check whether dot(cross(A-B, D-B), cross(A-B, C-B)) < 0.
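A hedged sketch of that test in code (the Vec3 type and its helpers are minimal assumptions):

```cpp
// Hedged sketch of the edge test described above. A and B span the shared
// edge, C is the apex of the already-selected triangle, D the apex of the
// candidate triangle on the other side.
struct Vec3 { float x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b)
{
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the candidate triangle keeps the patch convex across edge A-B.
bool convexAcrossEdge(const Vec3& A, const Vec3& B, const Vec3& C, const Vec3& D)
{
    return dot(cross(A - B, D - B), cross(A - B, C - B)) < 0.0f;
}
```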
Unfortunately OpenGL doesn't help with object algorithms. It only handles converting triangles to pixels.
I need to do it using OpenGL
Then you're out of luck. OpenGL only draws points, lines and triangles. OpenGL is not a 3D modelling library, OpenGL is not a scene graph, OpenGL is not a graphics engine.
It does not do all-purpose geometry processing (it may be possible to use a combination of geometry/tessellation shaders, transform feedback and compute shaders to do it, but it would be very cumbersome to implement).

Merging a Sphere and Cylinder

I want to render a spring using spheres and cylinders. Each cylinder has a sphere at each end, and all the cylinders are placed along the spring's centre line. I could achieve this, and the rendering looks good. I am presently doing it using gluSphere and gluCylinder.
Now when I look at the performance, it's not good; it's very slow. So I want to know if the following are possible:
Is it possible to combine the surfaces of the spheres and cylinders and render only the outer hull, not the inner covered parts of the spheres?
I also read about VBOs. Is it possible to use gluSphere and gluCylinder with VBOs?
I cannot use a display list because the properties of the spring keep changing!
Can anyone suggest a better approach?
You might want to reconsider the way you are drawing springs. In my opinion there are two valid approaches.
Load a spring model using Assimp or some other model-loading library that is easily integrated with OpenGL. Free 3D models can be found at Turbo Squid or through Google's 3D Warehouse (from within Google SketchUp).
Draw the object purely in OpenGL. The idiomatic way to draw this kind of object in the post-fixed-function OpenGL pipeline is by drawing volumetric 3D lines. The more lines you draw, the more curvature you can give your spring, at the expense of rendering time.
For drawing springs I would recommend that you define a set of points (with adjacency) that defines the shape of your spring and draw these points with the primitive type GL_LINE_STRIP_ADJACENCY. Then, in the shader program, use a geometry shader to expand this pixel-wide line strip into a set of volumetric 3D lines composed of triangle strips.
This blog post gives an excellent description of the technique.
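The draw call itself is short; a hedged sketch, where the VAO, the shader program, and `pointCount` are assumed to have been set up elsewhere:

```cpp
// Hedged sketch: issuing the adjacency line strip that the geometry
// shader expands into volumetric segments. Creating the VAO and the
// program (vertex + geometry + fragment shaders) is assumed done already.
glUseProgram(volumetricLineProgram);  // program containing the geometry shader
glBindVertexArray(springVao);         // VAO holding the spring's centre-line points
glDrawArrays(GL_LINE_STRIP_ADJACENCY, 0, pointCount);
```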
Your best bet would probably be to take a quick tutorial in any 3D modeling software (Blender comes to mind) and then model your spring in its rest pose using CSG operations.
This approach not only rids you of redundant primitives but also makes it very easy to use your model with VBOs. All you have to do is parse Blender's output file (easiest would be .obj), retrieving arrays filled with vertex data (positions, normals, possibly texture coordinates).
Lastly, to "animate" your spring, you can use the vertex shader. You just have to pass it another uniform describing how much the spring is deformed and do the rest of the transformation there.
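A hedged sketch of such a vertex shader (embedded as a C++ string); the uniform name and the choice of Y as the spring axis are assumptions:

```cpp
// Hedged sketch: vertex shader that deforms the spring with one uniform.
const char* springVertexShader = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 mvp;            // combined model-view-projection matrix
    uniform float compression;   // 1.0 = rest pose, < 1.0 = compressed
    void main()
    {
        vec3 p = position;
        p.y *= compression;      // assumes the spring axis is +Y
        gl_Position = mvp * vec4(p, 1.0);
    }
)";
```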

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be e.g. a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term, or for "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple from each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
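A hedged sketch of the readback step, assuming the fixed-function matrix stack is in use (a real g-buffer would hand you world-space positions directly, so the depth unproject here is a simplification):

```cpp
// Hedged sketch: read the depth buffer back and unproject every covered
// pixel to a world-space point. Assumes fixed-function matrices are current.
#include <GL/glu.h>
#include <cstddef>
#include <vector>

std::vector<double> framebufferToPointCloud(int width, int height)
{
    std::vector<float> depth(static_cast<std::size_t>(width) * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    std::vector<double> cloud;                       // packed x, y, z triples
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float z = depth[static_cast<std::size_t>(y) * width + x];
            if (z >= 1.0f) continue;                 // background: nothing rendered
            GLdouble wx, wy, wz;
            gluUnProject(x + 0.5, y + 0.5, z, model, proj, viewport,
                         &wx, &wy, &wz);
            cloud.insert(cloud.end(), {wx, wy, wz});
        }
    return cloud;
}
```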
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
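A minimal sketch, assuming the matrix comes in OpenGL's column-major layout (e.g. fetched with glGetDoublev(GL_MODELVIEW_MATRIX, m)); the Point type is an assumption:

```cpp
// Hedged sketch: apply a 4x4 column-major matrix (OpenGL's memory layout)
// to a 3D point on the CPU, no OpenGL calls needed.
struct Point { double x, y, z; };

Point transform(const double m[16], const Point& p)
{
    return { m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
             m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
             m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14] };
}
```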
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built into OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
Edit:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes these objects (i.e. converts them to pixels) to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing those vertices in the first place!

ExtrudeCut in OpenGL

Hi
How can I do an extrude cut (like SolidWorks) on a 3D model?
Is there an easy way, or do I have to do some complex calculations?
What you want to do is part of a discipline called Constructive Solid Geometry (CSG), and it's one of the trickiest subjects in 3D graphics and processing. There are several approaches to tackling the problem:
If you're just interested in rendering CSG in a raytracer, things actually get quite easy: at every ray/surface intersection you increment/decrement a counter. CSG combinations can also be transformed into a surface count. By comparing the ray-intersection counter against the CSG surface count, you can apply the CSG operations along the traced ray (see the sketch after this list).
If you're interested in doing CSG on triangulated models, the most common approach is to build BSP trees from the geometry and apply the CSG operations on the BSPs. From the resulting BSP you then recreate the mesh. This is how it's implemented in mesh-based modellers (take a look at Blender's source code, which does exactly this).
CSG on analytical surfaces is extremely difficult. There are no closed-form solutions for the intersection of curves or curved surfaces. The best approach is to numerically find a number of sampling points in the intersection and fit a curve along the intersection. This can get numerically unstable.
Tessellation-phase processing (this is what I implemented, or maybe even invented, for my 3D engine): when rendering curves or curved patches on 3D hardware, one usually must tessellate them into triangular meshes first. In this tessellation phase you can test whether the edges of a newly created triangle intersect with curves/curved surfaces; use a few iterations of a Newton zero-crossing solver to find the point of intersection of the two curves/surfaces, and store it as a sampling point at the edge for both patches involved (so that the tessellation of the other surface shares its vertex positions with the first surface). After the first tessellation stage, apply a relaxation method (basically a Laplacian) to the vertices while constraining them to the surface (remember that your surfaces are mathematically exact, so it's easy to fiddle with the surface parameters, but use the resulting positions as the metric). It works very well as long as intersections with ordinary triangulated meshes don't have to be considered (each triangle of the mesh would have to be turned into a surface patch, slowing the method down).
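As a hedged illustration of the raytracer idea, here is a minimal sketch for the difference A minus B; the Hit record, the solid ids, and the function name are all assumptions, not an established API:

```cpp
// Hedged sketch of CSG by counting ray/surface crossings (difference A \ B).
// Walk the sorted hits, track the "inside" state per solid, and report the
// first hit where the combined CSG predicate flips: the visible CSG surface.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Hit { float t; int solid; bool entering; };   // assumed hit record

int firstCsgDifferenceHit(std::vector<Hit> hits)     // returns hit index, or -1
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.t < b.t; });
    bool inA = false, inB = false, wasInside = false;
    for (std::size_t i = 0; i < hits.size(); ++i) {
        (hits[i].solid == 0 ? inA : inB) = hits[i].entering;
        bool inside = inA && !inB;                   // the predicate for A \ B
        if (inside != wasInside) return static_cast<int>(i);
        wasInside = inside;
    }
    return -1;                                       // ray misses the CSG solid
}
```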
You tagged this OpenGL, so to get this straight: OpenGL can't help you there, as OpenGL is just drawing triangles, not processing complex geometry.
Citing the OpenGL FAQ:
What is OpenGL?
OpenGL stands for Open Graphics Library. It is an API for doing 3D graphics.
In more specific terms, it is an API that is used to "draw triangles on your scene". In this age of GPUs, it is about talking to the GPU so that it does the job of drawing. It does not deal with file formats. It does not open bmp, png and any image format. It does not open 3d object formats like obj, max, maya. It does not do animation. It does not handle keyboard, mouse and any input devices. It does not create a window, and so on.
All that stuff should be handled by an external library (GLUT is one example that is used for creating and destroying a window and handling mouse and keyboard).
GL has gone through a number of versions.
So the answer is no. Things like extrude cut are complex operations. You have to implement them on your own, or use third-party libraries.

Why is there no circle or ellipse primitive in OpenGL?

Circles are among the most basic geometric entities, yet there are no primitives defined in OpenGL for them, as there are for lines or polygons. Why is that? It's a little annoying to include custom headers for this all the time!
Is there any specific reason they were omitted?
While circles may be basic shapes, they aren't as basic as points, lines or triangles when it comes to rasterisation. The first graphics cards with 3D acceleration were designed to do one thing very well: rasterise triangles (and lines and points, because they were trivial to add). Adding any more complex shapes would have made the cards a lot more expensive while adding only a little functionality.
But there's another reason for not including circles/ellipses: they don't connect. You can't build a 3D model out of them, and you can't connect triangles to them without adding gaps or overlapping parts. So for circles to be useful you'd also need other shapes like curves and more advanced surfaces (e.g. NURBS). Circles alone are only useful as "big points", which can also be done with a quad and a circle-shaped texture, or with triangles.
If you are using "custom headers" for circles, you should be aware that those probably create a triangle model that forms your "circles".
Because historically, video cards have rendered points, lines, and triangles.
You calculate curves using short enough lines so the video card doesn't have to.
Because graphics cards operate on 3-dimensional points, lines and triangles. A circle requires curves or splines. It cannot be perfectly represented by a "normal" 3D primitive, only approximated as an N-gon (so it will look like a circle from a certain distance). If you want a circle, write the routine yourself (it isn't hard to do). Either draw it as an N-gon, or make a square (two triangles) and cut a circle out of it using a fragment shader (you can get a perfect circle this way).
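A minimal sketch of the fragment-shader approach (the shader source as a C++ string; the uv varying and the quad setup are assumptions):

```cpp
// Hedged sketch: fragment shader (as a C++ raw string) that discards
// fragments outside the unit circle on a quad whose uv runs over [-1, 1].
const char* circleFragmentShader = R"(
    #version 330 core
    in vec2 uv;                // interpolated quad coordinates in [-1, 1]
    out vec4 fragColor;
    void main()
    {
        if (dot(uv, uv) > 1.0) // outside the unit circle?
            discard;           // yes: cut this pixel away
        fragColor = vec4(1.0); // no: solid white disc
    }
)";
```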
You could always use gluSphere (if a three-dimensional shape is what you're looking for).
If you want to draw a two-dimensional circle, you're stuck with custom methods. I'd go with a triangle fan.
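A minimal triangle-fan sketch in legacy immediate mode (the function name and parameters are assumptions):

```cpp
// Hedged sketch: approximating a 2D circle as a triangle fan.
#include <GL/gl.h>
#include <cmath>

void drawCircle(float cx, float cy, float radius, int segments)
{
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                    // fan centre
    for (int i = 0; i <= segments; ++i) {  // <= so the fan closes on itself
        float a = 2.0f * 3.14159265f * static_cast<float>(i) / segments;
        glVertex2f(cx + radius * std::cos(a), cy + radius * std::sin(a));
    }
    glEnd();
}
```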
The primitives are called primitives for a reason :)