Rendered 3D scene to point cloud - OpenGL

Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?

Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term, or for "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple from each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
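For concreteness, a minimal sketch of the readback step, assuming the g-buffer pass wrote world-space position into a GL_RGBA32F colour attachment (xyz in .rgb, coverage in .a); the colour part of the tuple would be read back the same way from a second attachment. Names and formats here are illustrative:

#include <GL/glew.h>
#include <vector>

struct Point { float x, y, z; };

// Read back one GL_RGBA32F attachment and keep only covered samples.
std::vector<Point> readBackPointCloud(GLuint fbo, int width, int height)
{
    std::vector<float> pixels(width * height * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    std::vector<Point> cloud;
    for (int i = 0; i < width * height; ++i) {
        const float* p = &pixels[i * 4];
        if (p[3] > 0.0f)                  // alpha 0 = background, skip it
            cloud.push_back({ p[0], p[1], p[2] });
    }
    return cloud;
}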
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.

I think you should take your input data and multiply it by your modelview and transformation matrices yourself. There's no need to use OpenGL for that, just some vector/matrix math.
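For example, applying a column-major 4x4 matrix (OpenGL's convention) to a point by hand, with w assumed to be 1, is only a few lines (a sketch, not a library):

struct Vec3 { float x, y, z; };

// m is a column-major 4x4 matrix, as glGetFloatv(GL_MODELVIEW_MATRIX, m)
// would return it; v is treated as (x, y, z, 1).
Vec3 transformPoint(const float m[16], const Vec3& v)
{
    return {
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12],
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13],
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14],
    };
}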

If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
Edit:
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3D objects. OpenGL then rasterizes these objects (i.e. converts them to pixels) to form a 2D rendering of the 3D scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want, since you are responsible for producing these vertices in the first place!

Related

Is there a way of keeping track of the relationship between vertices and pixels using either OpenGL or DirectX?

I would like to know if there is a way to generate a single static image of a 3D object (a single object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles defining the object were used to generate each of the pixels in the rendered image. I've cited OpenGL and DirectX because they are widely used graphics APIs; if somebody knows other ways of achieving this at high speed, I would also be interested in their answer. I currently use my own software implementation of the rendering pipeline to keep track of this relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin-rendering techniques) that graphics cards offer.
Thanks very much for your help
Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multisampling, then your multisample buffer for that render target becomes a list of primitives that contribute to each pixel.
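A fragment shader along those lines might look like the following sketch; the output locations are assumptions, and the second output needs an integer render target (e.g. GL_R32I) bound in the FBO:

// Minimal GLSL 3.30 fragment shader (as a C++ raw string) writing colour
// to target 0 and the primitive ID to an integer target at location 1.
const char* fragmentSrc = R"(
    #version 330 core
    in vec4 shadedColor;
    layout(location = 0) out vec4 outColor;
    layout(location = 1) out int  outPrimitive;
    void main() {
        outColor     = shadedColor;
        outPrimitive = gl_PrimitiveID;  // triangle index within the draw call
    }
)";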
Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: you could render the picture as usual to one target, and index the triangles in a second, off-screen target.
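Packing a triangle index into an R8G8B8 colour and recovering it afterwards is just bit shifting (a sketch; lighting, blending, dithering and multisampling must be off so the colours come back exactly):

// Pack a 24-bit triangle index into an RGB colour, and unpack it after
// reading the off-screen buffer back. Sketch only.
void indexToRGB(unsigned index, unsigned char rgb[3])
{
    rgb[0] = (index >> 16) & 0xFF;   // R = high byte
    rgb[1] = (index >> 8)  & 0xFF;   // G = middle byte
    rgb[2] =  index        & 0xFF;   // B = low byte
}

unsigned rgbToIndex(const unsigned char rgb[3])
{
    return ((unsigned)rgb[0] << 16) | ((unsigned)rgb[1] << 8) | rgb[2];
}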

ExtrudeCut in OpenGL

Hi,
How can I do an extrude cut (like in SolidWorks) on a 3D model?
Is there an easy way, or do I have to do some complex calculations?
What you want to do is part of a discipline called Constructive Solid Geometry (CSG), and it's one of the trickiest subjects in 3D graphics and processing. There are several approaches to tackling the problem:
If you're just interested in rendering CSG in a raytracer, things actually get quite easy: at every ray/surface intersection you increment/decrement a counter. CSG combinations can also be expressed as expected surface counts, so by comparing the ray-intersection counter with the CSG surface count you can apply the CSG operations along the traced ray.
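As an illustration of the counting idea, here is a sketch for the intersection of two solids: walk the ray's sorted hit list and report the first point where the ray is inside both operands. The types and names are made up for the example:

#include <vector>
#include <algorithm>

struct Hit { float t; bool entering; int solid; };   // solid: 0 or 1

// Sketch of CSG intersection along one ray: track which solids the ray
// is currently inside; the first hit where it is inside both lies on
// the surface of the CSG result.
float firstIntersectionHit(std::vector<Hit> hits)
{
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.t < b.t; });
    bool inside[2] = { false, false };
    for (const Hit& h : hits) {
        inside[h.solid] = h.entering;
        if (inside[0] && inside[1])      // just entered the overlap region
            return h.t;
    }
    return -1.0f;                        // ray misses the intersection solid
}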
If you're interested in doing CSG on triangulated models, the most common approach is to build BSP trees from the geometry and apply the CSG operations on the BSPs, then recreate the mesh from the resulting BSP. This is how it's implemented in mesh-based modellers (take a look at Blender's source code, which does exactly this).
CSG on analytical surfaces is extremely difficult. There are no closed-form solutions for the intersection of curves or curved surfaces. The best approach is to numerically find a number of sampling points on the intersection and fit a curve to them. This can get numerically unstable.
Tessellation-phase processing (this is what I implemented, or maybe even invented, for my 3D engine): when rendering curves or curved patches on 3D hardware, you usually must tessellate them into triangular meshes first. In this tessellation phase you can test whether the edges of a newly created triangle intersect other curves/curved surfaces; use a few iterations of a Newton zero-crossing solver to find the point of intersection of the two curves/surfaces, and store it as a sampling point on the edge for both patches involved (so that the tessellation of the other surface shares its vertex positions with the first). After the first tessellation stage, use a relaxation method (basically apply a Laplacian) on the vertices while constraining them to the surface (remember that your surfaces are mathematically exact, so it's very easy to fiddle with the surface parameters, using the resulting positions as the metric). It works very well as long as intersections with ordinary triangulated meshes don't have to be considered (each triangle of the mesh would have to be turned into a surface patch, slowing down the method).
You tagged this OpenGL, so to get this straight: OpenGL can't help you here, as OpenGL just draws triangles; it doesn't process complex geometry.
Citing the OpenGL FAQ:

What is OpenGL?

OpenGL stands for Open Graphics Library. It is an API for doing 3D graphics.

In more specific terms, it is an API that is used to "draw triangles on your scene". In this age of GPUs, it is about talking to the GPU so that it does the job of drawing. It does not deal with file formats. It does not open bmp, png and any image format. It does not open 3d object formats like obj, max, maya. It does not do animation. It does not handle keyboard, mouse and any input devices. It does not create a window, and so on.

All that stuff should be handled by an external library (GLUT is one example that is used for creating and destroying a window and handling mouse and keyboard).

GL has gone through a number of versions.
So the answer is no. Things like extrude cut are complex operations. You have to implement them on your own, or use third-party libraries.

OpenGL, applying texture from image to isosurface

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.
Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).
My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture or portion to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.
My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.
To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.
Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?
Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.
Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.
My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.
I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).
Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.
Not sure if any other clarification is necessary, but I'll update the question as needed.
On the texture combination part of the question:
Have you looked into 3D textures? As we're talking marching cubes, I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a 3D texture, and encode each texture coordinate as the usual 2D position plus, as the third coordinate, the texture it references. It works best if your textures are generally of the type where, logically, to transition from one type of pattern to another you have to go through the intermediaries.
An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.
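In OpenGL terms that might look like the following sketch, which assumes the 2D layers are the same size and have been packed contiguously into one buffer; GL_LINEAR filtering across the third coordinate is what does the blending between adjacent layers:

#include <GL/glew.h>

// Upload N stacked same-size 2D layers as one 3D texture (a sketch).
// 'pixels' holds the layers contiguously, layer 0 first.
GLuint makeStackedTexture(int w, int h, int layers, const unsigned char* pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, w, h, layers,
                 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    // Linear filtering across the third coordinate blends adjacent layers.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
// Per vertex, use glTexCoord3f(s, t, layer / (float)(layers - 1)).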
An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.
You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.
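The pass structure could be sketched like this; drawScene and drawFacesUsingTexture are hypothetical stand-ins for your own geometry submission:

#include <GL/glew.h>

// Hypothetical stand-ins for the application's own submission code.
void drawScene();                        // draws every face
void drawFacesUsingTexture(int i);       // draws only faces using texture i

void renderWithAdditivePasses(int numTextures)
{
    // Pass 1: lay down depth only; the colour buffer is left untouched.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawScene();

    // Following passes: redraw identical geometry once per texture,
    // additively blended, only where depth matches the prepass exactly.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);               // depth buffer is already complete
    glDepthFunc(GL_EQUAL);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);         // additive blending
    for (int i = 0; i < numTextures; ++i)
        drawFacesUsingTexture(i);
    glDisable(GL_BLEND);
}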
On the coordinates part:
Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
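For the spherical case, a common simplification of "follow the normal out to the bounding geometry" is to use the normal's direction directly as the point on the bounding sphere and convert it to longitude/latitude (a sketch):

#include <cmath>

// Spherical mapping sketch: treat the normalised vertex normal as a point
// on the bounding sphere and convert it to equirectangular (u, v) in [0, 1].
void sphericalUV(float nx, float ny, float nz, float& u, float& v)
{
    const float pi = 3.14159265358979f;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
    u = 0.5f + std::atan2(nz, nx) / (2.0f * pi);  // longitude
    v = 0.5f - std::asin(ny) / pi;                // latitude
}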
I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.
This is a hard and interesting problem.
The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.
You might want to try making a UV map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (i.e. a triangle strip can't cross over the seam because it's a discontinuity in the texture).
I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).
For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)

3D rendering of a surface from a depthmap

Using stereovision, I am producing depthmaps representing the 3d environment as viewed from a camera. There is one depthmap per "keyframe" associated with a camera position. The goal is to translate those 2d depthmaps into the 3d space (and later merge them to reconstruct the whole environment).
What would be the best (most efficient) way to translate those depthmaps into 3D? Each depthmap is 752x480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what Ogre's Terrain component does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you want to do when you say "merge them to reconstruct the whole environment".
Anyway, in your case you seem set on making them 3D using terrain-heightmap techniques.
Since the depthmap is screen-aligned, you could use a simple screen-space raycasting technique: build a compositor in Ogre3D that takes the depth map and transforms it at the pixels you want.
Translation and rotation from a depthmap may be limited to xy on screen because, as with terrain heightmaps (you cannot have caves using heightmaps), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that helps recover world-space positions from a depth map, but only if you have plenty of information about the camera used to generate it, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
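A sketch of that reconstruction for a pinhole camera, assuming you know the intrinsics used when the depthmap was produced (fx, fy, cx, cy here are the focal lengths and principal point, which you would take from your stereo calibration):

#include <vector>

struct Point3 { float x, y, z; };

// Unproject a depth map into camera-space points using the pinhole model.
// fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
std::vector<Point3> depthToPoints(const float* depth, int w, int h,
                                  float fx, float fy, float cx, float cy)
{
    std::vector<Point3> pts;
    pts.reserve((size_t)w * h);
    for (int v = 0; v < h; ++v)
        for (int u = 0; u < w; ++u) {
            float z = depth[v * w + u];
            if (z <= 0.0f) continue;          // no valid measurement here
            pts.push_back({ (u - cx) * z / fx,
                            (v - cy) * z / fy,
                            z });
        }
    return pts;
}
// Afterwards, apply the keyframe's camera pose (rotation, then translation)
// to move these camera-space points into world space before merging.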

Are there any easy ways to generate OpenGL code for drawing shapes from a GUI?

I have enjoyed learning to use OpenGL under the context of games programming, and I have experimented with creating small shapes. I'm wondering if there are any resources or apps that will generate code similar to the following with a simple paint-like interface.
glColor3f(1.0, 0.0, 0.0);   /* draw in red */
glBegin(GL_LINE_STRIP);     /* connected strip of line segments */
glVertex2f(1, 0);
glVertex2f(2, 3);
glVertex2f(4, 5);
glEnd();
I'm having trouble thinking of the correct dimensions to generate shapes and coming up with the correct co-ordinates.
To clarify, I'm not looking for a program I can just freely draw stuff in and expect it to create good code to use. Just more of a visual way of representing and modifying the sets of coordinates that you need.
I solved this to a degree by drawing a shape in Paint and measuring the distances between the pixels relative to a single point, but it's not that elegant.
It sounds like you are looking for a way to import 2d geometry into your application. The best approach in my opinion would be to develop a content pipeline. It goes something like this:
You would create your content in a 3d modeling program like Google's Sketchup. In your case you would draw 2d shapes using polygons.
You need a conversion tool to get the data out of the original format and into a format that your target application can understand. One way to get polygon and vertex data out of Sketchup is to export to Collada and have your tool read and process it. (The simplest format would be a list of triangles or lines.)
Write a geometry loader in your code that reads the data created by your conversion tool, along with OpenGL code that uses vertex arrays to display the geometry (a sketch follows).
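The loader end can stay very small. For instance, reading a plain "x y per line" text file (a hypothetical output format for the conversion tool) and drawing it with a classic vertex array:

#include <GL/glew.h>
#include <fstream>
#include <vector>

// Read "x y" pairs, one per line, from a hypothetical text format
// produced by the conversion tool.
std::vector<float> loadVertices(const char* path)
{
    std::vector<float> verts;
    std::ifstream in(path);
    float x, y;
    while (in >> x >> y) { verts.push_back(x); verts.push_back(y); }
    return verts;
}

void drawLineStrip(const std::vector<float>& verts)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts.data());
    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)(verts.size() / 2));
    glDisableClientState(GL_VERTEX_ARRAY);
}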
The coordinates you'll use just depend on how you define your viewport and the resolution you're operating in. In fact, you might think about collecting the coordinates of the mouse clicks in whatever arbitrary coordinate system you want and then mapping those coordinates to OpenGL coordinates.
What kind of library are you expecting?
something like
drawSquare(dx,dy);?
drawCircle(radius);?
drawPoly(x1,y1,x2,y2....);?
Isn't that exactly the same as glVertex but with a different name? Where is the abstraction?
I made one of these... it would take a bitmap image and generate geometry from it. Try looking up triangulation.
The first step is generating the edge of the shape, converting it from pixels to vertices and edges: find all the edge pixels and put a vertex at each one, then use either the distance between vertices or (better) the difference in gradient between edges to cull vertices and reduce the poly count of the mesh.
If your shape-drawing program works with vector graphics rather than pixels, i.e. plotting points and having lines drawn between them, then you can skip that first step and you just need to do the triangulation.
The second step, once you have your edges and vertices, is triangulation, in order to generate triangles; ear clipping is a simple method, for instance (see the sketch below).
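A compact ear-clipping sketch for a simple polygon given in counter-clockwise order (illustrative only: no holes, no self-intersections):

#include <vector>

struct P { float x, y; };

// Signed area of triangle (a, b, c); positive when counter-clockwise.
static float cross(const P& a, const P& b, const P& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

static bool pointInTri(const P& p, const P& a, const P& b, const P& c)
{
    float d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);
}

// Ear clipping: returns index triples into the original vertex list.
std::vector<int> earClip(const std::vector<P>& poly)
{
    std::vector<int> idx;                     // indices still in the polygon
    for (int i = 0; i < (int)poly.size(); ++i) idx.push_back(i);
    std::vector<int> tris;
    while (idx.size() > 3) {
        bool clipped = false;
        for (size_t i = 0; i < idx.size(); ++i) {
            int ia = idx[(i + idx.size() - 1) % idx.size()];
            int ib = idx[i];
            int ic = idx[(i + 1) % idx.size()];
            if (cross(poly[ia], poly[ib], poly[ic]) <= 0) continue;  // reflex
            bool ear = true;                  // ear = no other vertex inside
            for (int j : idx)
                if (j != ia && j != ib && j != ic &&
                    pointInTri(poly[j], poly[ia], poly[ib], poly[ic]))
                    { ear = false; break; }
            if (!ear) continue;
            tris.insert(tris.end(), { ia, ib, ic });
            idx.erase(idx.begin() + i);       // clip the ear off
            clipped = true;
            break;
        }
        if (!clipped) break;                  // degenerate input, give up
    }
    if (idx.size() == 3)
        tris.insert(tris.end(), { idx[0], idx[1], idx[2] });
    return tris;
}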
As for the coordinates to use? That's entirely up to you; as others have said, to keep it simple I'd just work in pixel coordinates.
You can then scale and translate as needed to transform the shape for use.