Is there any conventional way to do per-voxel shader programming? - opengl

I'm looking for a way to do 3D filters in DirectX or OpenGL shaders, analogous to a Gaussian filter for images. In other words, I want to run processing on every voxel of a 3D texture.
Storing the volume data as a stack of 2D slices might work, but that is not a friendly way to access the volume data and is not easy to write in shaders.
Sorry for my poor English; any reply will be appreciated.
P.S.: CUDA's texture memory can do this job, but my weak GPU only runs at a very low frame rate in debug mode, and I don't know why.

There is a 3D texture target in both Direct3D and OpenGL. Target framebuffers, however, are still 2D, so a compute shader, OpenCL, or DirectCompute may be better suited for pure filtering purposes that don't involve rendering to the screen.
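To make the per-voxel idea concrete, here is the kind of computation such a compute-shader dispatch would perform, sketched on the CPU with NumPy: a separable 3D Gaussian blur done as three 1D passes, one per axis (sigma and radius below are arbitrary example values, not anything the question specifies).

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Sampled 1D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur_3d(volume, sigma=1.0, radius=2):
    """Separable 3D Gaussian blur: three 1D convolution passes, one per
    axis. On the GPU this would be three compute-shader dispatches over
    an image3D, each reading the previous pass's output."""
    k = gaussian_kernel_1d(sigma, radius)
    out = volume.astype(np.float64)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

vol = np.zeros((9, 9, 9))
vol[4, 4, 4] = 1.0  # unit impulse in the middle of the volume
blurred = gaussian_blur_3d(vol, sigma=1.0, radius=2)
print(round(blurred.sum(), 6))  # → 1.0 (normalized kernel preserves mass)
```

The separable formulation is what makes this practical on the GPU: three O(r) passes instead of one O(r³) pass per voxel.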

Related

Masking away an area of a terrain surface in OpenGL

I'm working on a 3D geographical renderer with building models on a terrain surface. These building models are captured through photogrammetry, and a problem we have is that the terrain surface sometimes pokes through the building model since the surface data and building model don't match exactly.
We want to mask away the terrain surface in the area that is covered by the building model footprint. I've been thinking of using the stencil buffer, maybe extruding some kind of shadow volume from the model and filling the z buffer with high values in the area covered by the building model's footprint before rendering the model. This would require quite a bit of processing though, and I'm hoping that there is smarter and more efficient way of doing things. Another idea is making an orthographic 2d texture of the model rendered from above and using this to fill the z-buffer in some creative way using shaders.
So if anyone has done something similar before or has any ideas, I'd be really glad to hear them :-)
I'm limited to OpenGL ES 3.0, so I can't use geometry shaders or other fancy features.
Cheers,
Thomas
You must know both the terrain mesh and where the buildings actually sit on the terrain. The most obvious fix would be to preprocess the terrain mesh to "flatten" the area around the foundations of each building. This only needs doing once, so it's a one-off cost rather than a per-frame cost.
I can't think of an immediately obvious neater method - the need for depth testing, except when you don't want it, doesn't turn into an algorithm very nicely ;)
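A minimal sketch of that one-off preprocessing step, assuming the terrain is a regular heightfield and each building's footprint has been rasterized to a boolean mask (both of those representations are assumptions, not something the question states):

```python
import numpy as np

def flatten_under_footprint(heights, footprint_mask, foundation_z):
    """Clamp every terrain height inside a building's footprint down to
    the foundation height, so the terrain can never poke up through the
    building model. Run once per building at load time, not per frame.
    heights        -- (H, W) terrain heightfield
    footprint_mask -- (H, W) bool, True where the building stands
    foundation_z   -- height of the building's foundation
    """
    out = heights.copy()
    out[footprint_mask] = np.minimum(out[footprint_mask], foundation_z)
    return out

terrain = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
mask = np.array([[False, True],
                 [True, False]])
flat = flatten_under_footprint(terrain, mask, foundation_z=1.5)
print(flat.tolist())  # → [[1.0, 1.5], [1.5, 4.0]]
```

In practice you would also dilate the mask slightly so the clamped region extends a little beyond the walls, hiding any remaining z-fighting at the edges.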

How do I speed up my Offscreen OpenGL pointcloud warp rendering code?

I'm working on a visual odometry algorithm that tracks movement of the camera between images. An integral part of this algorithm is being able to generate incremental dense warped images of a reference image, where each pixel has a corresponding depth (so it can be considered a point cloud of width x height dimensions)
I haven't had much experience working with OpenGL in the past, but having gone through a few tutorials, I managed to set up an offscreen rendering pipeline that takes in a transformation matrix and renders the point cloud from the new perspective. I'm using VBOs to load the data onto the GPU, renderbuffers to render, and glReadPixels() to read into CPU memory.
On my Nvidia card, I can render at ~1 ms per warp. Is that the fastest I can render the data (640x480 3D points)? This step is proving to be a major bottleneck for my algorithm, so I'd really appreciate any performance tips!
(I thought that one optimization could be rendering only in grayscale, since I don't really care about colour, but it seems like internally OpenGL uses colour anyway)
My current implementation is at
https://gist.github.com/icoderaven/1212c7623881d8cd5e1f1e0acb7644fb,
and the shaders at
https://gist.github.com/icoderaven/053c9a6d674c86bde8f7246a48e5c033
Thanks!
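For reference, the per-pixel warp this pipeline evaluates can be sketched in NumPy; the intrinsics and depth values below are made-up placeholders, and the GPU version does the same math per vertex in the shader.

```python
import numpy as np

def warp_points(depth, K, T):
    """Back-project every pixel of a depth image to a 3D point, apply a
    rigid transform T (4x4), and project back with intrinsics K (3x3).
    Returns the warped pixel coordinates as a (2, H*W) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))  # camera space
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])   # homogeneous
    warped = (T @ pts_h)[:3]
    proj = K @ warped
    return proj[:2] / proj[2]  # perspective divide

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
depth = np.full((480, 640), 2.0)      # flat 2 m scene, for illustration
uv = warp_points(depth, K, np.eye(4)) # identity: pixels map to themselves
```

The rasterizer's job on top of this is resolving occlusions (z-buffer) and filling holes, which is exactly why the offscreen GL pass beats a scatter loop on the CPU.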

OpenGL Geometry Extrusion with Geometry Shader

With the GLE Tubing and Extrusion Library (http://www.linas.org/gle/) I am able to extrude 2D contours into 3D objects using OpenGL. The library does all the work on the CPU and uses OpenGL immediate mode.
I guess doing the extrusion on the GPU using geometry shaders might be faster, especially when rendering a lot of geometry. Since I do not yet have any experience with geometry shaders in OpenGL, I would like to know whether that is possible and what I have to pay attention to. Do you think moving those computations to the GPU is a good idea, and will it increase performance? It should also be possible to get the rendered geometry back from the GPU to the CPU, possibly using "render to VBO".
If the geometry indeed changes every frame, you should do it on the GPU.
Keep in mind that almost any solution that doesn't rely on immediate mode will be faster than what you have right now. You might not even have to do it on the GPU.
But maybe you want to use shadow mapping instead, which is more efficient in some cases. It will also make it possible to render shadows for alpha tested objects like grass.
But it seems like you really need the resulting shadow geometry, so I'm not sure if that's an option for you.
Now back to the shadow volumes.
Extracting the shadow silhouette from a mesh using geometry shaders is a pretty complex process. But there's enough information about it on the internet.
Here's an article by Nvidia, which explains the process in detail:
Efficient and Robust Shadow Volumes Using Hierarchical Occlusion Culling and Geometry Shaders.
Here's another approach (from 2003) which doesn't even require geometry shaders, which could be interesting on low-end hardware:
http://de.slideshare.net/stefan_b/shadow-volumes-on-programmable-graphics-hardware
If you don't need the most efficient solution (using the shadow silhouette), you can also simply extrude every triangle of the mesh on its own. This is very easy with a geometry shader. I'd try that first before trying to implement silhouette extraction on the GPU.
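The per-triangle extrusion is simple enough to sketch outside a shader. The geometry the shader would emit for each input triangle looks like this (the light position, triangle, and extrusion distance below are arbitrary example values):

```python
import numpy as np

def extrude_triangle(tri, light_pos, far=1000.0):
    """Per-triangle shadow volume piece: keep the triangle as the near
    cap and push a copy of it away from a point light to form the far
    cap. The three side quads connect matching edges:
    (tri[i], tri[j], far_cap[j], far_cap[i]) for each edge (i, j)."""
    tri = np.asarray(tri, dtype=np.float64)
    dirs = tri - light_pos                                  # light -> vertex
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)     # unit directions
    far_cap = tri + dirs * far
    return tri, far_cap

near, far_cap = extrude_triangle(
    [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
    light_pos=np.array([0.0, 0.0, 0.0]))
```

A real implementation extrudes to infinity with a w=0 homogeneous coordinate rather than a fixed distance, but the direction computation is the same.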
About the "render to VBO" part of your question:
You can capture the primitives emitted by the geometry shader into a buffer object using transform feedback (core since OpenGL 3.0), and then map or copy that buffer back to CPU memory. So "render to VBO" is possible, though the readback itself can be slow.

Render 3D Volume from 2D Image Stack

I have been brought in on a project where I need to render a 3D volume from a series of images of the volume. The images have been created by a couple of techniques such that they are vertical slices of the object in question.
The data set is similar to this question, but the asker is looking for a Matlab solution.
The goal is to have this drawing happen in something near real time (>1 Hz update rate), and from my research OpenGL seems to be the fastest option for drawing. Is there a built-in way in OpenGL to render the volume, other than something like the following pseudocode algorithm?
foreach (Image in Folder)
    foreach (Pixel in Image)
        pointColour(pixelColour)
        pointLocation(Pixel.X, Pixel.Y, Image.Z)
        drawPoint()
I am not concerned about interpolating between images; the current spacing is small enough that there is no need for it.
I'm afraid that if you're thinking about volume rendering, you will first need to understand the volume rendering integral, because the resultant color of a pixel on the screen is a function of all the voxels that line up with it for the current viewing angle.
There are two methods to render a volume in real-time using conventional graphics hardware.
Render the volume as a set of 2D view-aligned slices that intersect the 3D texture (proxy geometry). Explanation here.
Use a raycaster that uses programmable graphics hardware, tutorial here.
This is not an easy problem to solve - but depending on what you need to do things might be a little simpler. For example: Do you care about having an interactive transfer function? Do you want perspective views, or will orthographic projection suffice? Are you rendering iso-surfaces? Are you using this only for MPR-type views?
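To illustrate what the volume rendering integral boils down to in a raycaster, here is its discrete front-to-back form for a single ray, sketched in Python (the sample colors and opacities are arbitrary example values):

```python
def composite_ray(samples, alphas):
    """Front-to-back emission-absorption compositing along one ray --
    the discrete form of the volume rendering integral. `samples` are
    per-sample colors and `alphas` per-sample opacities, both ordered
    from the eye outward."""
    color, transmittance = 0.0, 1.0
    for c, a in zip(samples, alphas):
        color += transmittance * a * c   # this sample's contribution
        transmittance *= (1.0 - a)       # light surviving past it
        if transmittance < 1e-4:         # early ray termination
            break
    return color

# A fully opaque first sample hides everything behind it:
print(composite_ray([0.8, 0.2, 0.9], [1.0, 0.5, 0.5]))  # → 0.8
```

A GPU raycaster runs this loop in the fragment shader, stepping through the 3D texture; the slice-based approach gets the same sum by letting fixed-function alpha blending accumulate one term per slice.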

What is the most efficient way to draw voxels (cubes) in opengl?

I would like to draw voxels using OpenGL, but it doesn't seem to be directly supported. I made a cube-drawing function with 24 vertices (4 vertices per face), but the frame rate drops when drawing 2500 cubes. I was hoping there was a better way. Ideally I would just like to send a position, an edge size, and a color to the graphics card. I'm not sure whether I can do this using GLSL in the vertex or fragment shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as an alternative to draw a cube more quickly? If I used 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was, most likely, interested in how Minecraft does that. For techniques to speed-up Minecraft-style rasterization check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become the reality.
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these only became core relatively recently (OpenGL 3.1), but they have been available as extensions for quite a while longer.
NVIDIA's OpenGL SDK has an example of instanced drawing in OpenGL.
First, you really should be looking at OpenGL 3+ with GLSL; this has been the standard for quite some time. Second, most Minecraft-esque implementations build the mesh on the CPU side. This technique looks at all of the block positions and creates a vertex buffer object containing the triangles of only the exposed faces. The VBO is regenerated only when the voxels change and is persisted between frames. An ideal implementation would also combine coplanar faces of the same texture into larger faces.
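The core of that CPU-side meshing step is deciding which faces are exposed. A minimal sketch, assuming the world chunk is a boolean occupancy grid (a simplification of real block data):

```python
import numpy as np

def exposed_faces(solid):
    """Count the faces of a boolean voxel grid that border empty space --
    exactly the faces a Minecraft-style mesher would put into the VBO.
    Faces between two solid voxels are never visible and are skipped."""
    padded = np.pad(solid, 1, constant_values=False)  # empty border
    count = 0
    for axis in range(3):
        for step in (-1, 1):
            # A face is exposed where a solid voxel's neighbor along
            # this direction is empty.
            neighbor = np.roll(padded, step, axis=axis)
            count += np.count_nonzero(padded & ~neighbor)
    return count

grid = np.zeros((2, 1, 1), dtype=bool)
grid[0, 0, 0] = True
print(exposed_faces(grid))  # → 6 (a lone cube shows all six faces)
```

For a 2x1x1 bar of two voxels this yields 10 faces instead of 12, since the two touching faces are culled; over a dense chunk this interior-face culling is what makes the per-frame triangle count manageable.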