OpenGL/GLSL: What is the best algorithm to render clouds/smoke out of volumetric data?

I would like to render 3D volume data: density (can be mapped to the alpha channel) and temperature (can be mapped to RGB).
Currently I am simulating maximum intensity projection, i.e. rendering the most dense/opaque pixel in the end. But this method loses the depth perception.
I would like to imitate an effect like a fire inside the smoke.
So my question is: what are the techniques in OpenGL to generate images based on the available data?
Any idea is welcome.
Thanks Arman.

I would try a volume ray caster first.
You can google "Volume Visualization With Ray Casting" and that should give you most of what you need. NVidia has a great sample (using OpenGL) of ray casting through a 3D texture.
For your specific implementation, you would just need to keep stepping through the volume, accumulating the temperature until you reach the wanted density.
If your volume doesn't fit in video memory, you can do the ray casting in pieces and then do a composition step.
A quick description of ray casting:
CPU:
1) Render a six-sided cube in world space as the drawing primitive; make sure to use depth culling.
Vertex shader:
2) In the vertex shader, store off the world position of the vertices (this will be interpolated per fragment)
Fragment shader:
3) Use the interpolated position minus the camera position to get the vector of traversal through the volume.
4) Use a while loop to step through the volume from the point on the cube through to the other side. There are 3 ways to know when to end:
A) at each step test if the point is still in the cube.
B) do a ray intersection with cube and calculate the distance between the intersections.
C) do a pre-render of the cube with front-face culling and store the depths into a second texture map, then just sample at the screen pixel to get the distance.
5) accumulate while you loop and set the pixel color (see the fragment shader sketch below).
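As a rough illustration of steps 3-5, here is a minimal single-pass fragment shader sketch. It assumes the cube spans [0,1] so the interpolated world position from step 2 can be used directly as a 3D texture coordinate, that the 3D texture stores temperature color in RGB and density in A, and that the names volumeTex, cameraPos and stepCount are placeholders you would supply yourself:

#version 330 core

uniform sampler3D volumeTex;   // rgb = temperature color, a = density (assumed layout)
uniform vec3 cameraPos;        // camera position, expressed in the cube's [0,1] space
uniform int stepCount;         // e.g. 128..512

in vec3 worldPos;              // interpolated cube position from the vertex shader
out vec4 fragColor;

bool insideCube(vec3 p)        // end-of-ray test, variant A from above
{
    return all(greaterThanEqual(p, vec3(0.0))) && all(lessThanEqual(p, vec3(1.0)));
}

void main()
{
    vec3 dir  = normalize(worldPos - cameraPos);  // step 3: traversal direction
    float dt  = 1.732 / float(stepCount);         // cube diagonal divided into steps
    vec3 p    = worldPos;                         // entry point on the cube
    vec4 acc  = vec4(0.0);                        // accumulated color and opacity

    for (int i = 0; i < stepCount && insideCube(p); i++)
    {
        vec4 s = texture(volumeTex, p);           // sample temperature + density
        acc.rgb += (1.0 - acc.a) * s.a * s.rgb;   // step 5: front-to-back compositing
        acc.a   += (1.0 - acc.a) * s.a;
        if (acc.a > 0.99) break;                  // early ray termination
        p += dir * dt;
    }
    fragColor = acc;
}

In practice the per-step opacity s.a would be rescaled by the step size; the constants here are only placeholders.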

Related

Curved Frosted Glass Shader?

Well, making something transparent isn't that difficult, but I need that transparency to be different based on an object's curve to make it look like it isn't just a flat object. Something like the picture below.
The center is more transparent than the sides of the cylinder; it is more black, which is the background color. Then there is the bezel, which seems to have some sort of specular lighting at the top to make it more shiny, but I'd have no idea how to go about the transparency in that case. Using the normals of the surface relative to the eye position to determine the transparency value? Any help would be appreciated.
(moved comments into answer and added some more details)
Use (Sub Surface) scattering instead of transparency.
You can simplify things a lot, for example by assuming the light source is constant along the whole surface/volume ... so you need just the view-ray integration, not the whole volume integral per ray. I do it in my Atmospheric shader and it still looks pretty awesome, almost indistinguishable from the real thing; see some newer screenshots ... I have compared it to photos from Earth and Mars and the results were pretty close without any REALLY COMPLICATED MATH.
There are several options for achieving this:
Voxel map (volume rendering)
It is easy to implement scattering into a volume rendering engine, but it needs a lot of memory and power.
use 2 depth buffers (front and back face)
This needs 2 passes with face culling on and CW/CCW settings. It is also easy to implement, but it cannot handle multiple objects in the same view along the Z axis of the camera view. The idea is to pass both depth buffers to the shader and integrate the pixel rays along their path, accumulating/absorbing light from the light source. Something like this:
render the geometry to both depth buffers as 2 textures
render a quad covering the whole screen
for each fragment compute the ray line (green)
compute the intersection points in both depth buffers
obtain 'length, ang'
integrate along the length using scattering to compute the pixel color
I use something like this:
vec3 p,p0,p1;                 // p0 front and p1 back face ray/depth buffer intersection points
const int n=16;               // integration steps
vec3 dl=(p1-p0)/float(n);     // integration step vector
vec3 c=background_color;      // start with the color behind the object
float q=abs(dot(normalize(p1-p0),light)); // = |cos(ang)|, normal light shading
vec3 b;                       // per-step absorbed/scattered contribution
int i;
for (p=p1,i=0;i<n;p-=dl,i++)  // p = p1 -> p0 path through the object
{
b=B0.rgb*length(dl);          // B0 is the saturated color of the object
c.r*=1.0-b.r;                 // some light is absorbed
c.g*=1.0-b.g;
c.b*=1.0-b.b;
c+=b*q;                       // some light is scattered in
}                             // here c is the final fragment color
After/during the integration you should normalize the color ... so that the resulting color is saturated around the real view depth of the rendered material. For more information see the Atmospheric scattering link below (this piece of code is extracted from it).
analytical object representation
If you know the surface equation then you can compute the light path intersections inside shader without the need for depth buffers or voxel map. This Simple GLSL Atmospheric shader of mine uses this approach as ellipsoids are really easily handled this way.
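As a hedged sketch of what "analytical" means here (the names and conventions are mine, not taken from the linked shader): for an axis-aligned ellipsoid with semi-axes r you can scale the ray into unit-sphere space and solve a quadratic, which gives both ray/surface intersection parameters directly in the fragment shader:

// Ray/ellipsoid intersection done analytically in the shader.
// o and dir are the ray origin/direction in the ellipsoid's local space (assumption).
bool rayEllipsoid(vec3 o, vec3 dir, vec3 r, out float t0, out float t1)
{
    vec3 os = o / r;                    // scale so the ellipsoid becomes a unit sphere
    vec3 ds = dir / r;
    float a = dot(ds, ds);
    float b = 2.0 * dot(os, ds);
    float c = dot(os, os) - 1.0;
    float disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;       // ray misses the ellipsoid
    float s = sqrt(disc);
    t0 = (-b - s) / (2.0 * a);          // entry point parameter
    t1 = (-b + s) / (2.0 * a);          // exit point parameter
    return true;
}

The two parameters then play the same role as the p0/p1 intersection points in the depth-buffer variant above.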
Ray tracer
If you need precision and cannot use voxel maps, then you can try ray-tracing engines instead. But all scattering renderers/engines (#1, #2, #3 included) are ray tracers anyway... As you can see, all techniques discussed here are the same; the only difference is the method of obtaining the ray/object boundary intersection points.

OpenGL beam spotlight

After reading up on OpenGL and GLSL I was wondering if there were examples out there to make something like this http://i.stack.imgur.com/FtoBj.png
I am particularly interested in the beam and intensity of the light (god rays?).
Does anybody have a good starting point?
OpenGL just draws points, lines and triangles to the screen. It doesn't maintain a scene and the "lights" of OpenGL are actually just a position, direction and color used in the drawing calculations of points, lines or triangles.
That being said, it's actually possible to implement an effect like yours using a fragment shader that implements a variant of the shadow mapping method. The difference would be that instead of determining whether a surface element of a primitive (point, line or triangle) lies in shadow or not, you'd cast rays into a volume and, for every sampling position along the ray, test whether that volume element (voxel) lies in shadow or not; if it's illuminated, add to the ray accumulator.
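A minimal sketch of that idea, assuming you already have a depth map rendered from the spotlight and a per-pixel world position; shadowMap, lightViewProj, lightColor and samples are placeholder names, and the spotlight cone test and depth bias are omitted for brevity:

#version 330 core

uniform sampler2D shadowMap;       // depth map rendered from the spotlight (assumption)
uniform mat4 lightViewProj;        // world space -> light clip space
uniform vec3 cameraPos;
uniform vec3 lightColor;
uniform int samples;               // e.g. 64

in vec3 worldPos;                  // world position of the surface behind this pixel
out vec4 fragColor;

void main()
{
    vec3 ray     = worldPos - cameraPos;
    vec3 stepVec = ray / float(samples);
    vec3 p       = cameraPos;
    float fog    = 0.0;                           // accumulated in-scattered light

    for (int i = 0; i < samples; i++, p += stepVec)
    {
        vec4 lp  = lightViewProj * vec4(p, 1.0);  // sample position in light space
        vec3 ndc = lp.xyz / lp.w * 0.5 + 0.5;
        float occluder = texture(shadowMap, ndc.xy).r;
        if (ndc.z <= occluder)                    // this volume sample is lit
            fog += 1.0 / float(samples);          // crude constant in-scattering
    }
    fragColor = vec4(lightColor * fog, 1.0);      // add/blend onto the lit scene
}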

Mesh and cone intersection algorithm

I am looking for an efficient algorithm for mesh (set of triangles) and cone (given by origin, direction and angle from that direction) intersection. More precisely, I want to find the intersection point which is closest to the cone's origin. For now, all I can think of is to intersect the mesh with several rays from the cone's origin and take the closest point. (Of course some spatial structure will be constructed for the mesh to reject unnecessary intersections.)
Also I found the following algo with brief description:
"Cone to mesh intersection is computed on the GPU by drawing the cone geometry with the mesh and reading the minimum depth value marking the intersection point".
Unfortunately its implementation isn't obvious to me.
So can anyone suggest something more efficient than what I have, or explain in more detail how it can be done on the GPU using OpenGL?
On the GPU I would do it like this:
1) set the view
- to the cone's origin
- directed outwards along the cone's axis
- covering the biggest circle slice
- for an infinite cone use the max Z value of the mesh vertices in the view coordinate system
2) clear buffers
3) draw the mesh
- but in the fragment shader draw only pixels intersecting the cone (see the sketch below):
- |fragment.xyz - screen_middle| <= tan(cone_ang/2) * fragment.z
4) read the z-buffer
5) read the fragments and from the valid (filled) ones select the closest one to the cone's origin
[notes]
if your gfx engine can also handle output values from your fragment shader,
then you can skip bullet 4 and do the min-distance search inside bullet 3 instead of rendering ...
that will speed up the process considerably (you need just a single xyz vector)
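A hedged sketch of the fragment shader in bullet 3, assuming the view matrix places the camera at the cone's origin looking along the cone's axis so a view-space position is available per fragment; coneAng is a placeholder uniform for the full opening angle:

#version 330 core

uniform float coneAng;         // full opening angle of the cone (assumed uniform)

in vec3 viewPos;               // fragment position in view space (camera = cone origin)
out vec4 fragColor;

void main()
{
    float axisDist = length(viewPos.xy);          // distance from the cone's axis
    float z        = abs(viewPos.z);              // distance along the cone's axis
    if (axisDist > tan(coneAng * 0.5) * z)        // outside the cone
        discard;
    // the depth test now keeps the nearest surviving fragment (bullet 4);
    // alternatively, output the distance to the cone's origin and search its minimum
    fragColor = vec4(vec3(length(viewPos)), 1.0);
}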

Raycasting Voxels and OpenGL

I'm currently looking into raycasting and voxels, which is a nice combination. A Voxelrenderer by Sebastian Scholz implements this pretty nicely, but also uses OpenGL. I'm wondering how his formula is working; how can you use OpenGL with raycasting and voxels? Isn't the idea of raycasting that a ray is cast for every pixel (or line, e.g. in Doom) and the result is then drawn?
The mentioned raycaster is a voxel renderer, i.e. a method to visualize volumetric data, like opacities stored in a 3D texture. Doom's raycasting algorithm has another intention: for every pixel on the screen, find the first planar surface of the map and draw the color of that there. The rasterizing capabilities of modern GPUs have made this use of raycasting obsolete.
Visualizing volumetric data in real time is still a task done by special hardware, typically found in medical and geodesic imaging systems. Basically those are huge bulks of RAM (several dozens of GB) holding volumetric RGBA data. Then for every on-screen pixel a ray is cast through the volume and the RGBA data is integrated over that ray. A GPU voxel renderer does the same thing in a fragment shader; pseudocode:
vec4 prev_color = vec4(0.0);
for (i = 0; i < STEPS; i++) {
    p = ray_origin + ray_direction * float(i) * STEP_DELTA;
    voxel = texture3D(volumedata, p);
    prev_color = combine(voxel, prev_color);
}
final_color = finalize(prev_color);
finalize and combine depend on the kind of data and what you want to visualize. For example if you want to integrate the density (like in an X ray image), combine would be a summing operation and finalize a normalization. If you were to visualize a cloud, you'd alpha blend between voxels.
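For example, hedged sketches of what combine and finalize could look like for those two cases (using the same STEPS constant as the loop above; the back-to-front blending order is an assumption):

// X-ray style density integration: sum along the ray, normalize at the end
vec4 combine_xray(vec4 voxel, vec4 prev) { return prev + voxel; }
vec4 finalize_xray(vec4 acc)             { return acc / float(STEPS); }

// Cloud-like visualization: alpha blend the voxels (assuming back-to-front stepping)
vec4 combine_cloud(vec4 voxel, vec4 prev)
{
    return vec4(mix(prev.rgb, voxel.rgb, voxel.a),   // blend color by voxel opacity
                prev.a + (1.0 - prev.a) * voxel.a);  // accumulate opacity
}
vec4 finalize_cloud(vec4 acc)            { return acc; } // nothing extra needed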
Raycasting in a voxel space wouldn't use pixels; that would be inefficient.
You already have an array saying which cells are empty and which ones hold a voxel cube.
So a fast version is to trace a line which checks the emptiness of every voxel in the direction of the line, until it reaches a full voxel.
That would take a few hundred read ops from memory and 2-3 multiplications of the ray vector for every read op.
Reading a billion memory positions of voxels takes about 1 second, so a few hundred would be very fast and always within a frame.
Raycasting often uses optimizations to detect the fractional place in space where something starts: for an analytic object that comes from its math formula, for a mesh you first test its bounding box and then the mesh itself, and for voxels it's just checks of a line through an integer array, progressing until you find a non-void cell (see the stepping sketch below).
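A hedged sketch of that voxel stepping loop, written as GLSL to match the rest of this page; the usampler3D occupancy texture and the fixed step size are assumptions, and a proper DDA would advance cell by cell without ever skipping one:

// Step a ray voxel by voxel through an integer occupancy texture until a non-empty
// cell is hit. origin and dir are given in voxel coordinates.
uniform usampler3D occupancy;      // 0 = empty, != 0 = filled voxel (assumed storage)

bool traceVoxels(vec3 origin, vec3 dir, int maxSteps, out ivec3 hitCell)
{
    vec3 p = origin;
    for (int i = 0; i < maxSteps; i++)
    {
        ivec3 cell = ivec3(floor(p));
        if (texelFetch(occupancy, cell, 0).r != 0u)
        {
            hitCell = cell;                // first non-void voxel along the line
            return true;
        }
        p += dir;                          // simple fixed step along the ray
    }
    return false;                          // only empty space within maxSteps
}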

Mapping from 2D projection back to 3D point cloud

I have a 3D model consisting of point vertices (XYZ) and possibly triangular faces.
Using OpenGL or a camera-view-matrix projection I can project the 3D model to a 2D plane, i.e. a view window or an image with m*n resolution.
The question is how I can determine the correspondence between a pixel from the 2D projection plane and its corresponding vertex (or face) from the original 3D model.
Namely,
what is the closest vertex in the 3D model for a given pixel from the 2D projection?
It sounds like picking in OpenGL or a ray-tracing problem. Is there, however, any easy solution?
With the idea of ray tracing it is actually about finding the first vertex/face intersected by the ray from a viewpoint. Can someone show me some tutorials or examples? I would like to find an algorithm independent of OpenGL.
Hit testing in OpenGL is usually done without raytracing. Instead, as each primitive is rendered, a plane in the output is used to store the unique ID of the primitive. Hit testing is then as simple as reading the ID plane at the cursor location.
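A minimal sketch of such a picking pass; the uniform name objectID and the integer color attachment are assumptions, and you would set the ID per draw call and read it back at the cursor afterwards (e.g. with glReadPixels):

#version 330 core

uniform uint objectID;   // assumed per-object/per-primitive ID, set before each draw

out uint idOut;          // bind this output to an integer (GL_R32UI) color attachment

void main()
{
    idOut = objectID;    // the ID plane; read it back at the cursor position afterwards
}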
My (possibly naive) thought would be to create an array of the vertices, project them, and then sort them by their distance (or distance squared, for speed) to your screen point. The first item in the list will be the closest. That is O(n log n) for n vertices because of the sort.
Edit: Better for speed and memory: simply loop through all vertices and keep track of the vertex whose projection is closest (distance squared) to your viewport pixel. This assumes that you are able to perform the projection yourself, without relying on OpenGL.
For example, in pseudo-code:
function findPointFromViewPortXY( pointOnViewport )
    closestPoint = false
    bestDistance = false
    for (each point in points)
        projectedXY = projectOntoViewport(point)
        distanceSquared = distanceBetween(projectedXY, pointOnViewport)
        if bestDistance==false or distanceSquared<bestDistance
            closestPoint = point
            bestDistance = distanceSquared
    return closestPoint
In addition to Ben Voigt's answer:
If you do a separate pass over pickable objects, then you can set the viewport to contain only a single pixel that you will read.
You can also encode the triangle ID by using a geometry shader (gl_PrimitiveID).
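A hedged sketch of reading that ID in the fragment shader: when no geometry shader is active, gl_PrimitiveID is generated automatically, while with a geometry shader you have to forward gl_PrimitiveIDIn to gl_PrimitiveID yourself; objectID and the two-channel integer attachment are assumptions:

#version 330 core

uniform uint objectID;   // assumed per-object ID, so (object, triangle) pairs are unique

out uvec2 pickOut;       // bind to a GL_RG32UI color attachment

void main()
{
    pickOut = uvec2(objectID, uint(gl_PrimitiveID));   // object ID + triangle ID
}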