Merging a Sphere and Cylinder - opengl

I want to render a spring using spheres and cylinders. Each cylinder has a sphere at each end, and all the cylinders are placed along the spring's centre line. I could achieve this, and the rendering looks good. I am presently doing it using gluSphere and gluCylinder.
Now when I look at the performance, it is not good; rendering is very slow. So I want to know if the following are possible:
Is it possible to combine the surfaces of the spheres and cylinders and render only the outer hull, but not the inner parts of the spheres that are covered?
I have also read about VBOs. Is it possible to use gluSphere and gluCylinder with VBOs?
I cannot use a display list because the properties of the spring keep changing.
Can anyone suggest a better approach?

You might want to reconsider the way you are drawing springs. In my opinion there are two valid approaches.
Load a spring model using Assimp or some other model-loading library that is easily integrated with OpenGL. Free 3D models can be found at Turbo Squid or through Google's 3D Warehouse (from within Google Sketch-Up).
Draw the object purely in OpenGL. The idiomatic way to draw this kind of object using the post-fixed-function OpenGL pipeline is by drawing volumetric 3D lines. The more lines you draw, the more curvature you can give your spring, at the expense of rendering time.
For drawing springs I would recommend that you define a set of points (with adjacency) that describe the shape of your spring and draw these points with the primitive type GL_LINE_STRIP_ADJACENCY. Then, in the shader program, use a geometry shader to expand this pixel-thin line strip into a set of volumetric 3D lines composed of triangle strips.
This blog post gives an excellent description of the technique.
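As an illustration only, here is a minimal geometry shader sketch of that idea: each interior segment of a GL_LINE_STRIP_ADJACENCY strip is expanded into a screen-aligned quad. The uHalfWidth uniform name is an assumption, the adjacent vertices are ignored, and the linked post shows how to go further and produce properly joined, shaded volumetric lines.

    // GLSL geometry shader - sketch only, not the full technique
    #version 330 core
    layout(lines_adjacency) in;                 // fed by GL_LINE_STRIP_ADJACENCY
    layout(triangle_strip, max_vertices = 4) out;

    uniform float uHalfWidth;                   // assumed uniform: half thickness

    void main()
    {
        vec4 p1 = gl_in[1].gl_Position;         // segment start (clip space)
        vec4 p2 = gl_in[2].gl_Position;         // segment end   (clip space)

        // Segment direction in normalized device coordinates.
        vec2 dir    = normalize(p2.xy / p2.w - p1.xy / p1.w);
        vec2 offset = vec2(-dir.y, dir.x) * uHalfWidth;

        gl_Position = p1 + vec4(offset * p1.w, 0.0, 0.0); EmitVertex();
        gl_Position = p1 - vec4(offset * p1.w, 0.0, 0.0); EmitVertex();
        gl_Position = p2 + vec4(offset * p2.w, 0.0, 0.0); EmitVertex();
        gl_Position = p2 - vec4(offset * p2.w, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }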

Your best bet would probably be to take a quick tutorial in any 3D modeling software (Blender comes to mind) and then model your spring in its rest pose using CSG operations.
This approach not only rids you of redundant primitives but also makes it very easy to use your model with VBOs. All you have to do is parse the output file from Blender (the easiest format would be .obj), retrieving arrays filled with vertex data (positions, normals, possibly texture coordinates).
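As a rough illustration, a minimal sketch of such a parser might look like the following. It only handles the "v" (position) and "vn" (normal) lines of an .obj export and ignores faces, texture coordinates, and materials, so treat it as a starting point rather than a complete loader.

    // Minimal .obj parsing sketch: collects positions and normals only.
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct ObjData { std::vector<float> positions, normals; };

    ObjData loadObj(const std::string& path)
    {
        ObjData data;
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line))
        {
            std::istringstream s(line);
            std::string tag;
            float x, y, z;
            if (s >> tag >> x >> y >> z)
            {
                if (tag == "v")  data.positions.insert(data.positions.end(), { x, y, z });
                if (tag == "vn") data.normals.insert(data.normals.end(),   { x, y, z });
            }
        }
        return data;
    }

The resulting arrays can then be uploaded to VBOs with glBufferData and drawn with glDrawArrays or glDrawElements.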
Lastly, to "animate" your spring, you can use the vertex shader. You just have to pass it an additional uniform describing how much the spring is deformed and do the rest of the transformation there.
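A minimal vertex shader sketch of that idea, assuming the spring is modelled with its coil axis along +Y in the rest pose; the uniform name uStretch is made up for the example (1.0 = rest length, values above or below stretch or compress the spring):

    // GLSL vertex shader - deformation sketch only
    #version 330 core
    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec3 aNormal;

    uniform mat4  uModelViewProjection;
    uniform float uStretch;          // hypothetical deformation uniform

    out vec3 vNormal;

    void main()
    {
        vec3 p = aPosition;
        p.y *= uStretch;             // deform along the coil axis
        vNormal = aNormal;           // a full version would also correct the normals
        gl_Position = uModelViewProjection * vec4(p, 1.0);
    }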

Related

What is the graphics technique for drawing 3D holes?

How can I draw a circular disc with thickness and then "drill" holes (of any shape) into it at runtime?
The desired outcome would look like CAD drawings without textures.
I am using OpenGL, but I guess this is independent of the graphics API.
I guess what you're after is constructive solid geometry (CSG). Some current graphics/game engines (like Unreal) use it, but most don't do the real thing; they approximate (fake) the results with textures or by swapping the solid geometry for a prepared multi-part model. Another approach would involve voxels, as in Minecraft or Voxatron.
OpenCSG should do what you want.
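A hedged sketch of what typical OpenCSG usage looks like (check the OpenCSG documentation for the exact API of your version): you derive from OpenCSG::Primitive, implement render() with plain OpenGL drawing, tag each primitive as Intersection or Subtraction, let OpenCSG::render() resolve the CSG result in the depth buffer, and then redraw the primitives with GL_EQUAL depth testing to shade the surviving surface.

    #include <GL/gl.h>
    #include <opencsg.h>
    #include <vector>

    // Hypothetical primitive: the actual drawing code goes in render().
    class DiscPrimitive : public OpenCSG::Primitive {
    public:
        DiscPrimitive(OpenCSG::Operation op, unsigned int convexity)
            : OpenCSG::Primitive(op, convexity) {}
        virtual void render() { /* draw the disc or the hole cylinder with plain GL */ }
    };

    void drawDiscWithHoles()
    {
        std::vector<OpenCSG::Primitive*> primitives;
        primitives.push_back(new DiscPrimitive(OpenCSG::Intersection, 1)); // the solid disc
        primitives.push_back(new DiscPrimitive(OpenCSG::Subtraction, 1));  // a hole to drill

        OpenCSG::render(primitives);     // leaves the CSG result in the z-buffer

        glDepthFunc(GL_EQUAL);           // shade only the visible CSG surface
        for (size_t i = 0; i < primitives.size(); ++i)
            primitives[i]->render();
        glDepthFunc(GL_LESS);

        for (size_t i = 0; i < primitives.size(); ++i)
            delete primitives[i];
    }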
Look into the CGAL innards of OpenSCAD if you need the CSG'd geometry itself and not just a rendered image.
This could be an interesting use of geometry shaders: take in the disc geometry, add the extra vertices for the holes, and then pass the result on to the fragment shader.

OpenGL: create complex and smoothed polygons

In my OpenGL project, I want to dynamically create smoothed polygons, similar to this one:
The problem lies mainly in the smoothing process. My procedure up to this point is first to create a VBO with randomly placed vertices.
Then, in my fragment shader (I'm using the programmable pipeline), the smoothing should happen; in other words, the curves should be created out of the previously defined "lines" between the vertices.
And exactly here is the problem: I am not very familiar with those complex mathematical algorithms that would determine whether a point is inside the "smoothed polygon" or not.
First up, you can't really do it in the fragment shader. The fragment shader is limited to setting the final(ish) color of a "pixel" (which is basically, but not exactly, an actual pixel) before it gets written to the screen. It can't create new points on a curve.
This page gives a nice overview of the different algorithms for creating smooth curves.
The general approach is to break a couple of points into multiple points using a geometry shader, and then render them just like a normal polygon. But I don't know the details. Try a Google search for "bezier geometry shader", for example.
Wait, I lie. I found a program here that does it in the fragment shader.
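To make the "bezier geometry shader" idea above concrete, here is a minimal sketch, assuming each GL_LINES_ADJACENCY primitive carries the four control points of one cubic Bezier segment; the geometry shader evaluates the curve and emits a tessellated line strip (the segment count of 16 is arbitrary):

    // GLSL geometry shader - cubic Bezier tessellation sketch
    #version 330 core
    layout(lines_adjacency) in;                 // 4 control points per primitive
    layout(line_strip, max_vertices = 17) out;

    void main()
    {
        for (int i = 0; i <= 16; ++i)
        {
            float t = float(i) / 16.0;
            float u = 1.0 - t;
            gl_Position = u * u * u         * gl_in[0].gl_Position
                        + 3.0 * u * u * t   * gl_in[1].gl_Position
                        + 3.0 * u * t * t   * gl_in[2].gl_Position
                        + t * t * t         * gl_in[3].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }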

Pixel based 3D Visualization

I need to visualize 3D point clouds using C++. I started learning OpenGL, but so far all I can find is drawing shapes using vertices.
What if I want the 3D scene to be built using pixels; does OpenGL support this? If not, what alternatives do I have?
Two approaches:
Render the geometry using GL_POINTS mode. You'll end up with a literal display of a point cloud (i.e. bigger and smaller dots, no edges, no solid faces). This is very simple to implement (see the sketch after this list).
Process your data so that you have solid geometry (i.e. triangles) representing the original shape. There are a couple of algorithms that try to generate a mesh from a 3D bitmap. The most notable are Marching Cubes and Marching Tetrahedra. These are commonly used, e.g. in medicine, to create a 3D mesh of an organ after it has been scanned by MRI or the like. You'll find plenty of resources on them on Google.
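A minimal sketch of the first approach, assuming an OpenGL 3.x context, a flat std::vector of xyz floats, and a trivial shader program already bound; a real application would create the buffers once and reuse them:

    #include <vector>
    #include <GL/glew.h>

    void drawPointCloud(const std::vector<float>& xyz)   // x0,y0,z0, x1,y1,z1, ...
    {
        GLuint vao = 0, vbo = 0;
        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);

        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(float),
                     xyz.data(), GL_STATIC_DRAW);

        glEnableVertexAttribArray(0);                    // attribute 0 = position
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

        glPointSize(2.0f);                               // dot size in pixels
        glDrawArrays(GL_POINTS, 0, (GLsizei)(xyz.size() / 3));

        glBindVertexArray(0);
        glDeleteBuffers(1, &vbo);
        glDeleteVertexArrays(1, &vao);
    }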
I think what you are looking for is Point Sprites. There are some examples of Point Sprites and particle clouds on http://www.codesampler.com/oglsrc/oglsrc_6.htm (although I haven't tried these examples myself).

rendered 3D Scene to point cloud

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array with vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term, or for "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple for each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast. A minimal sketch of the readback step follows this answer.
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
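A minimal sketch of the readback step, assuming the old fixed-function matrix stacks are in use so that gluUnProject can recover object-space coordinates from window coordinates and depth; a full g-buffer approach would instead write positions into an extra render target:

    #include <vector>
    #include <GL/glu.h>

    struct PointSample { double x, y, z; unsigned char r, g, b, a; };

    std::vector<PointSample> readBackPointCloud(int width, int height)
    {
        std::vector<unsigned char> color(width * height * 4);
        std::vector<float>         depth(width * height);

        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, color.data());
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        std::vector<PointSample> cloud;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
            {
                float d = depth[y * width + x];
                if (d >= 1.0f) continue;                 // skip background pixels
                PointSample p;
                gluUnProject(x + 0.5, y + 0.5, d, model, proj, viewport,
                             &p.x, &p.y, &p.z);
                const unsigned char* c = &color[(y * width + x) * 4];
                p.r = c[0]; p.g = c[1]; p.b = c[2]; p.a = c[3];
                cloud.push_back(p);
            }
        return cloud;
    }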
I think you should take your input data and manually multiply it by your transformation and modelview matrices. There is no need to use OpenGL for that, just some vector/matrix math.
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces, and sends to OpenGL, the vertices of the triangles forming polygons and 3D objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

GL_POINT and GL_LINES - real use?

I've been using OpenGL for some time now for making 3D applications, but I never really understood the use of the GL_POINT and GL_LINES primitive drawing types in 3D games in the production phase.
(Where) are point and line primitives in OpenGL still used in modern games?
You know, OpenGL is not just for games, and there are other kinds of programs than just games. Think of CAD programs, or map editors, where wireframes are still very useful.
GL_POINTS is used in games for point sprites (either via the point-sprite functionality or by generating a quad from a point in the geometry shader), both for "sparkle" effects and for volumetric clouds.
Points are also used in some special algorithms just when, well... when points are what's needed. Such as building histograms in the geometry shader, as described in a chapter of one of the later GPU Gems books. Or for GPU instance culling via transform feedback.
GL_LINES has little use in games (it is mostly useful for CAD or modelling apps). Besides not being needed often, when lines are needed you will normally want lines with a thickness greater than 1, which is not well supported (read: fast) on all implementations.
In such a case, one usually draws thick lines with triangle strips.
Whoever said those primitives were used in modern games?
GL_LINES is critical for wireframe views in 3D modeling tools.
(Where) are point and line primitives in OpenGL still used in modern games?
Where do you want them to be used?
Under standard methods, points can be used to build point sprites, which are 2D flat cards that always face the camera and are of a particular size. They are always square in window space. Sadly, the OpenGL specification makes using them somewhat dubious, as point sprites are clipped based on the center of the point, not on the size of the two triangles that are used to render it.
Lines are perfectly reasonable for line drawing. Once upon a time, lines weren't available in consumer hardware, but they have been around for many years now. Of course, antialiased line rendering (GL_LINE_SMOOTH) is another matter.
More important is the interaction of these things with geometry shaders. You can convert points into a quad. Or a triangle. Or whatever you want, really. Each "point" is just an execution of the geometry shader. You can have points that contain the position and radius of a sphere, and the geometry shader can output a window-aligned quad of the appropriate size for the fragment shader to do some ray-tracing logic on.
GL_POINTS just means "one vertex per geometry shader invocation". GL_LINES means "two vertices per geometry shader invocation". How you use them is up to you.
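As a sketch of the point-to-quad idea, here is a minimal geometry shader that expands each GL_POINTS vertex into a view-aligned quad; it assumes the vertex shader left positions in view space, and the uniform names are made up for the example:

    // GLSL geometry shader - point expanded to a camera-facing quad
    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform mat4  uProjection;
    uniform float uHalfSize;         // quad half size in view space

    out vec2 gTexCoord;

    void main()
    {
        vec4 center = gl_in[0].gl_Position;     // assumed to be in view space
        for (int i = 0; i < 4; ++i)
        {
            vec2 corner = vec2((i & 1) == 0 ? -1.0 : 1.0,
                               (i & 2) == 0 ? -1.0 : 1.0);
            gTexCoord   = corner * 0.5 + 0.5;
            gl_Position = uProjection * (center + vec4(corner * uHalfSize, 0.0, 0.0));
            EmitVertex();
        }
        EndPrimitive();
    }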
I'd say for debugging purposes, but that is just from my own perspective.
Some primitives can be used in areas where you don't think they can be applied, such as a particle system.
I agree with Pompe de velo about lines being useful for debugging. They can be useful when debugging AI and collision-detection algorithms, letting you visualize the data that the AI or collision detection is using. Some example uses for AI: lines can show AI paths or path meshes, steering data that the AI is using, or what an AI is aiming at. The same data could be displayed as text, but sometimes it is easier to see it in visual form.
In most cases particles are based on GL_POINTS. Considering that there can be a huge number of particles on the screen, it would be very expensive to use 4 vertices per particle, so GL_POINTS solves this problem.
GL_LINES is good for debugging purposes, and wireframe mode can be used in various cases. As mentioned above, that includes CAD apps, but if you're interested in gamedev use, it's also good for a scene editor.
In terms of collision detection, they come in handy when you want to visualize bounding volumes (boxes, spheres, k-DOPs) and contact manifolds in wireframe mode. Setting the colour of these primitives based on the collision status is incredibly useful as well.
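As a small illustration of that debugging use, here is a sketch that draws an axis-aligned bounding box as 12 GL_LINES segments, coloured by collision state; it uses old fixed-function immediate mode purely for brevity:

    #include <GL/gl.h>

    void drawAABB(float minX, float minY, float minZ,
                  float maxX, float maxY, float maxZ, bool colliding)
    {
        // red when colliding, green otherwise
        glColor3f(colliding ? 1.0f : 0.0f, colliding ? 0.0f : 1.0f, 0.0f);

        const float x[2] = { minX, maxX }, y[2] = { minY, maxY }, z[2] = { minZ, maxZ };
        glBegin(GL_LINES);
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
            {
                glVertex3f(x[0], y[i], z[j]); glVertex3f(x[1], y[i], z[j]); // edges along X
                glVertex3f(x[i], y[0], z[j]); glVertex3f(x[i], y[1], z[j]); // edges along Y
                glVertex3f(x[i], y[j], z[0]); glVertex3f(x[i], y[j], z[1]); // edges along Z
            }
        glEnd();
    }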