I have been brought in on a project where I need to render a 3D volume from a series of images of the volume. The images have been created by a couple of techniques such that they are vertical slices of the object in question.
The data set is similar to the one in this question, but that asker is looking for a Matlab solution.
The goal is to have this drawing happen in something near real time (>1 Hz update rate), and from my research OpenGL seems to be the fastest option for drawing. Is there a built-in way in OpenGL to render the volume, other than something like the following pseudocode algorithm?
foreach (Image in Folder)
    foreach (Pixel in Image)
        pointColour(Pixel.Colour)
        pointLocation(Pixel.X, Pixel.Y, Image.Z)
        drawPoint()
I am not concerned about interpolating between images; the current spacing is small enough that there is no need for it.
I'm afraid that if you're thinking about volume rendering, you will first need to understand the volume rendering integral, because the resultant color of a pixel on the screen is a function of all the voxels that line up with it for the current viewing angle.
There are two methods to render a volume in real time using conventional graphics hardware:
Render the volume as a set of 2D view-aligned slices that intersect the 3D texture (proxy geometry). Explanation here.
Use a raycaster on programmable graphics hardware; tutorial here.
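For the raycaster route, here is a minimal sketch (not the tutorial's code) of what the fragment shader can look like, stored as a C++ string for glShaderSource. The uniform/varying names are made up, and it assumes the entry and exit points of each ray through the volume's bounding box arrive per fragment in texture space (e.g. from a front-face/back-face pass):

    // Hedged sketch: single-pass, front-to-back raycasting fragment shader.
    const char* raycastFragmentShader = R"GLSL(
    #version 330 core
    uniform sampler3D uVolume;     // the image stack uploaded as a 3D texture
    uniform sampler1D uTransfer;   // transfer function: density -> colour/alpha
    in  vec3 vEntry;               // ray entry point in [0,1]^3 texture space
    in  vec3 vExit;                // ray exit point  in [0,1]^3 texture space
    out vec4 fragColor;

    void main()
    {
        vec3 span  = vExit - vEntry;
        int  steps = int(length(span) * 256.0) + 1;  // tune to your slice count
        vec3 delta = span / float(steps);
        vec3 pos   = vEntry;
        vec4 accum = vec4(0.0);

        // Numerically evaluate the volume rendering integral, front to back,
        // stopping early once the ray is essentially opaque.
        for (int i = 0; i < steps && accum.a < 0.99; ++i) {
            float density = texture(uVolume, pos).r;
            vec4  src     = texture(uTransfer, density);
            accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
            accum.a   += (1.0 - accum.a) * src.a;
            pos       += delta;
        }
        fragColor = accum;
    }
    )GLSL";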
This is not an easy problem to solve, but depending on what you need to do, things might be a little simpler. For example: do you care about having an interactive transfer function? Do you want perspective views, or will an orthographic projection suffice? Are you rendering iso-surfaces? Are you using this only for MPR-type views?
I'm working on a 3D geographical renderer with building models on a terrain surface. These building models are captured through photogrammetry, and a problem we have is that the terrain surface sometimes pokes through the building model since the surface data and building model don't match exactly.
We want to mask away the terrain surface in the area that is covered by the building model's footprint. I've been thinking of using the stencil buffer, maybe extruding some kind of shadow volume from the model and filling the z-buffer with high values in the area covered by the building model's footprint before rendering the model. This would require quite a bit of processing, though, and I'm hoping that there is a smarter and more efficient way of doing things. Another idea is making an orthographic 2D texture of the model rendered from above and using this to fill the z-buffer in some creative way using shaders.
So if anyone has done something similar before or has any ideas, I'd be really glad to hear them :-)
I'm limited to OpenGL ES 3.0, so I can't use geometry shaders or other fancy features.
Cheers,
Thomas
You must know both the terrain mesh, and where the buildings actually are on the terrain. The most obvious fix would be to preprocess the terrain mesh to "flatten" the area around the foundations of each building. This only needs doing once, so it's only a one-off cost rather than a per-frame cost.
I can't think of any immediately obvious neater method; "depth testing, except where you don't want it" doesn't really turn into a tidy algorithm ;)
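A rough sketch of that one-off preprocessing pass, assuming you have each building's footprint as a 2D polygon plus a foundation height, and terrain vertices you can edit (all types and names here are made up):

    #include <vector>

    struct Vec2          { float x, y; };
    struct TerrainVertex { float x, y, z; };     // z = height in this sketch
    struct Footprint
    {
        std::vector<Vec2> polygon;               // building outline in the xy plane
        float foundationHeight;                  // height to clamp the terrain to
    };

    // Standard even-odd point-in-polygon test.
    static bool inside(const std::vector<Vec2>& poly, float x, float y)
    {
        bool in = false;
        for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            if (((poly[i].y > y) != (poly[j].y > y)) &&
                (x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                     (poly[j].y - poly[i].y) + poly[i].x))
                in = !in;
        }
        return in;
    }

    // Push every terrain vertex inside a footprint down to the foundation,
    // so the surface can no longer poke through the building model.
    void flattenTerrain(std::vector<TerrainVertex>& terrain,
                        const std::vector<Footprint>& buildings)
    {
        for (auto& v : terrain)
            for (const auto& b : buildings)
                if (inside(b.polygon, v.x, v.y) && v.z > b.foundationHeight)
                    v.z = b.foundationHeight;
    }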
I have a 2D texture that contains 12x12 slices of a volume laid out in a grid.
What I am doing now is calculating the offset and sampling based on the 3D coordinate inside the volume using HLSL code myself. I have followed the descriptions found here and here, where the first link also talks about 3D sampling from a 2D sliced texture. I have also heard that modern hardware has the ability to sample 3D textures.
That being said, I have not found any description or example code that samples such a 3D texture. What HLSL, or OpenGL, function can I use to sample this flipbook-type texture? If you can, please add a small example snippet with explanations. If you can't, pointing me to one, or to the documentation, would be appreciated. I have found no sampler function where I can provide the number of layers in the U and V directions, so I don't see how it can sample without knowing how many slices there are per axis.
If I am misunderstanding this completely I would also appreciate being told so.
Thank you for your help.
OpenGL has had support for true 3D textures for ages (3D texture support actually appeared back in OpenGL 1.2). With that, you upload your volume not as a "flipbook" but simply as a stack of 2D images, using the function glTexImage3D. In GLSL you then just use the regular texture access function, but with a sampler3D and a 3-component texture coordinate vector (except in older versions of GLSL, i.e. before GLSL 1.50/OpenGL 3, where you use texture3D).
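To make that concrete, here is a minimal sketch; the names are placeholders, and GL_R8/GL_RED assume a single-channel volume on OpenGL 3.0 or newer (older versions would use something like GL_LUMINANCE8 instead):

    GLuint volumeTex;
    glGenTextures(1, &volumeTex);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    // 'voxels' holds width * height * depth single-channel texels, one slice
    // after another: the same data as the 12x12 flipbook, just contiguous.
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8,
                 width, height, depth,       // depth = number of slices (144 here)
                 0, GL_RED, GL_UNSIGNED_BYTE, voxels);

    // GLSL side: no slice counts needed, the hardware filters across slices.
    const char* glslSnippet = R"(
        uniform sampler3D uVolume;
        // ...
        vec4 texel = texture(uVolume, vec3(u, v, w));  // w picks/blends slices
    )";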
I've implemented a volume renderer using ray-casting in CUDA. Now I need to add other 3D objects (like 3D terrain, in my case) to the scene and then make them interact with the volume-render result. For example, when I move the volume-render result so that it overlaps the terrain, I want to modulate it, e.g. clip the overlapping part of the volume-render result.
However, the volume-render result comes from a ray accumulating color, so it is a 2D picture with no depth, and I'm confused about how to implement the interaction. Can somebody give me a hint?
First you render your 3D rasterized objects. Then you take the depth buffer and use it as an additional data source in the volume raycaster, as an extra constraint on the integration limits.
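A sketch of what that boils down to per pixel, with hypothetical helper functions (zNear/zFar are the planes used when rasterizing the terrain, and rayDirEyeZ is the eye-space forward component of the normalized ray direction). Plain C++ here; the same arithmetic drops into a CUDA kernel:

    // Invert the non-linear perspective depth mapping to recover the
    // eye-space distance of the rasterized surface at this pixel.
    // Assumes a standard perspective projection and a [0, 1] depth buffer.
    float eyeDepthFromDepthBuffer(float d, float zNear, float zFar)
    {
        return (zNear * zFar) / (zFar - d * (zFar - zNear));
    }

    // Clamp the ray's exit distance so the colour accumulation stops where
    // the first opaque rasterized surface (e.g. the terrain) begins.
    float clampedExitDistance(float tExit, float surfaceEyeDepth, float rayDirEyeZ)
    {
        float tSurface = surfaceEyeDepth / rayDirEyeZ;  // distance along the ray
        return tSurface < tExit ? tSurface : tExit;
    }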
Actually, I think the result of ray-casting is a 2D image, so it cannot interact with other 3D objects in the usual way. My solution is to take the ray-casting result as a texture and blend it into the 3D scene. If I can control the view position and direction, I can map the ray-casting result to the exact place in the 3D scene. I'm still trying to implement this solution, but I think the idea is sound!
Is there a way to extract a point cloud from a rendered 3D scene (using OpenGL)?
In detail:
The input should be a rendered 3D scene.
The output should be, e.g., a three-dimensional array of vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point in a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
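A sketch of the readback step, assuming the scene was rendered into an FBO whose colour attachment 0 holds the shaded colour and whose colour attachment 1 is a GL_RGBA32F target that the fragment shader filled with world-space position (the "fat pixel" idea); all names here are made up:

    #include <vector>
    // plus your usual OpenGL loader/header

    struct PointSample { float x, y, z, w, r, g, b, a; };

    std::vector<PointSample> readPointCloud(GLuint fbo, int width, int height)
    {
        std::vector<float> pos(width * height * 4);
        std::vector<float> col(width * height * 4);

        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);

        glReadBuffer(GL_COLOR_ATTACHMENT1);              // world-space positions
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pos.data());

        glReadBuffer(GL_COLOR_ATTACHMENT0);              // shaded colours
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, col.data());

        std::vector<PointSample> cloud;
        cloud.reserve(width * height);
        for (int i = 0; i < width * height; ++i) {
            // Skip background pixels: the position pass cleared alpha to 0.
            if (pos[4 * i + 3] == 0.0f)
                continue;
            cloud.push_back({pos[4*i], pos[4*i+1], pos[4*i+2], pos[4*i+3],
                             col[4*i], col[4*i+1], col[4*i+2], col[4*i+3]});
        }
        return cloud;
    }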
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or, equivalently, for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
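For example, something along these lines (using GLM here purely for the matrix/vector types; any 4x4 math code will do):

    #include <glm/glm.hpp>
    #include <vector>

    // Apply the same model-view transform OpenGL would use to each vertex.
    std::vector<glm::vec3> transformVertices(const std::vector<glm::vec3>& in,
                                             const glm::mat4& modelView)
    {
        std::vector<glm::vec3> out;
        out.reserve(in.size());
        for (const auto& v : in)
            out.push_back(glm::vec3(modelView * glm::vec4(v, 1.0f)));
        return out;
    }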
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3d objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2d rendering of the 3d scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want since you are responsible for producing these vertices in the first place!
Using stereo vision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe", associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).
What would be the best (most efficient) way to translate those depthmaps into 3D? Each depthmap is 752x480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what Ogre's Terrain component does, except that I want to be able to place the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3D, so please forgive me if there is a straightforward solution I should know about. If a tool other than Ogre3D is more appropriate for my problem, I'd be happy to learn about it!
It's not clear what you mean by "merge the depthmaps with the environment".
Anyway, in your case, you seem set on making them 3D using terrain-heightmap techniques.
Since the depthmap is screen-aligned, you can use a simple screen-space raycasting technique: build a compositor in Ogre3D that takes the depth map and transforms it per pixel as you need.
Translation and rotation from a depth map may be limited to x/y on screen; as with a terrain heightmap (you cannot have caves in a heightmap), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that helps recover object world-space positions, but only if you have a lot of information about the camera used to generate the depthmap, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
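A small sketch of that position-reconstruction idea on the CPU (using GLM; the matrices must be the ones the camera used when the depthmap was generated):

    #include <glm/glm.hpp>

    // Unproject one depthmap pixel back to a world-space point.
    glm::vec3 worldFromDepth(int px, int py, float depth,     // depth in [0, 1]
                             int width, int height,
                             const glm::mat4& view, const glm::mat4& proj)
    {
        // Pixel -> normalized device coordinates in [-1, 1].
        glm::vec4 ndc((2.0f * px) / width  - 1.0f,
                      (2.0f * py) / height - 1.0f,
                      2.0f * depth - 1.0f,
                      1.0f);

        // Undo projection and view, then the perspective divide.
        glm::vec4 world = glm::inverse(proj * view) * ndc;
        return glm::vec3(world) / world.w;
    }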