Curved Frosted Glass Shader? - glsl

Well, making something transparent isn't that difficult, but I need that transparency to vary with the object's curvature so it doesn't look like just a flat object. Something like the picture below.
The center is more transparent than the sides of the cylinder; it shows more of the black background color. Then there is the bezel, which seems to have some sort of specular lighting at the top to make it shinier, but I'd have no idea how to go about the transparency in that case. Should I use the normals of the surface relative to the eye position to determine the transparency value?
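Roughly what I have in mind is something like the fragment-shader sketch below (the names are just placeholders, and I have no idea if this is even the right approach):
// Sketch: fade opacity with the angle between the surface normal and the
// view direction, so the silhouette of the cylinder looks denser than the
// center. All names here are placeholders, not from any particular engine.
uniform vec3 eyePos;        // camera position in world space
uniform vec4 glassColor;    // tint of the frosted glass
varying vec3 worldPos;      // interpolated surface position
varying vec3 worldNormal;   // interpolated surface normal
void main()
{
    vec3 N = normalize(worldNormal);
    vec3 V = normalize(eyePos - worldPos);  // surface -> eye
    float facing = abs(dot(N, V));          // 1 facing the camera (center), 0 at the silhouette
    float alpha  = mix(0.9, 0.2, facing);   // opaque edges, transparent center
    gl_FragColor = vec4(glassColor.rgb, alpha);
}
Any help would be appreciated.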

(moved comments into answer and added some more details)
Use (Sub Surface) scattering instead of transparency.
You can simplify things a lot, for example by assuming the light source is constant along the whole surface/volume, so you only need the view-ray integration, not the whole volume integral per ray. I do this in my Atmospheric shader and it still looks pretty awesome, almost indistinguishable from the real thing (see some newer screenshots). I have compared it to photos from Earth and Mars and the results were pretty close, without any REALLY COMPLICATED MATH.
There are several options for achieving this:
Voxel map (volume rendering)
It is easy to implement scattering in a volume rendering engine, but it needs a lot of memory and processing power.
use 2 depth buffers (front and back face)
This needs 2 passes with face culling on, switching between CW and CCW front faces. It is also easy to implement, but it cannot handle multiple objects overlapping along the Z axis of the camera view. The idea is to pass both depth buffers to the shader and integrate each pixel's ray along its path, accumulating/absorbing light from the light source. Something like this:
render geometry to both depth buffers as 2 textures.
render quad covering whole screen
for each fragment compute the ray line (green)
compute the intersection points in both depth buffers
obtain 'length,ang'
integrate along the length using scattering to compute pixel color
I use something like this:
vec3 p,p0,p1;       // p0 front and p1 back face ray/depth buffer intersection points
int n=16;           // integration steps
vec3 dp=(p0-p1)/float(n);           // integration step vector (back face -> front face)
float dl=length(p1-p0)/float(n);    // integration step length
vec3 c=background_color;            // start with the color of the background behind the object
float q=abs(dot(normalize(p1-p0),light));   // = abs(cos(ang)), simple shading; light is the normalized light direction
int i;
for (p=p1,i=0;i<n;p+=dp,i++)        // march p from p1 (back face) to p0 (front face) through the object
    {
    vec3 b=B0.rgb*dl;   // B0 is the saturated color of the object
    c*=vec3(1.0)-b;     // some light is absorbed
    c+=b*q;             // some light is scattered in
    }
// here c is the final fragment color
After/during the integration you should normalize the color, so that the resulting color saturates around the real view depth of the rendered material. For more information see the Atmospheric scattering link below (this piece of code is extracted from it).
analytical object representation
If you know the surface equation then you can compute the light path intersections inside the shader, without the need for depth buffers or a voxel map. This Simple GLSL Atmospheric shader of mine uses this approach, as ellipsoids are really easy to handle this way.
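As a minimal sketch of such an analytic intersection (not the exact code from the linked shader), a unit sphere centered at the origin can be intersected directly in the fragment shader; an ellipsoid reduces to this case by scaling, and the resulting points ro+t0*rd and ro+t1*rd play the role of p0 and p1 in the integration loop above:
// Ray / unit-sphere intersection: ro = ray origin, rd = normalized ray direction.
// Returns true and the entry/exit distances t0 <= t1 when the ray hits the sphere.
bool raySphere(vec3 ro, vec3 rd, out float t0, out float t1)
{
    float b = dot(ro, rd);          // half of the quadratic b term
    float c = dot(ro, ro) - 1.0;    // |ro|^2 - r^2 with r = 1
    float d = b*b - c;              // quarter of the discriminant
    if (d < 0.0) return false;      // ray misses the sphere
    d  = sqrt(d);
    t0 = -b - d;                    // entry point (front face)
    t1 = -b + d;                    // exit point (back face)
    return t1 > 0.0;                // at least part of the sphere is in front of the ray origin
}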
Ray tracer
If you need precision and cannot use voxel maps, then you can try ray-tracing engines instead. But all scattering renderers/engines (#1, #2 and #3 included) are ray tracers anyway... As you can see, all the techniques discussed here are the same; the only difference is the method of obtaining the ray/object boundary intersection points.

Related

How to reflect a chrome sphere in a scene with a procedural texture

My scene background is a procedural texture that draws an ocean, or a lava floor, or some such other background. It extends completely under as well, as if you were inside a cubemap. It would be easier if I could assume the view was the same in all directions, but if there's a sun, for example, you cannot.
Now if I wanted to put a chrome sphere in the middle, what does it reflect? Does the sphere see the same thing as the main camera does?
Assume it's expensive to render the background, and I do not want to do it multiple times per frame. I can save a copy to use in the reflection if that helps.
Can someone suggest a general approach? Here's an example of the procedural texture I mean (this is all in the shader, no geometry other than a quad):
https://www.shadertoy.com/view/XtS3DD
To answer your first question: In the real world, the reflection you see in the sphere depends on both the position of the camera, and the position of the sphere itself. However, taking both positions into account is prohibitively expensive for a moving sphere when using cube mapping (the most common approach), since you have to re-render all six faces of the cubemap with each frame. Thus, most games "fake" reality by using a cubemap that is centered about the origin ((0, 0, 0) in world-space) and only rendering static objects (trees, etc.) into the cube map.
Since your background is entirely procedural, you can skip creating cubemap textures. If you can define your procedural background texture as a function of direction (not position!) from the origin, then you can use the normal vector of each point on the sphere, plus the sphere's position, plus the camera position to sample from your background texture.
Here's the formula for it, using some glsl pseudocode:
vec3 N = ...;   // normal vector for the point on the sphere
vec3 V = ...;   // position of the camera
vec3 S = ...;   // position of the point on the sphere
// Reflect the view ray (pointing from the camera to the point on the
// sphere) about the sphere's normal vector.
vec3 ray = reflect(normalize(S - V), N);
vec4 color = proceduralBackgroundFunc(ray);
Above, color is the final output of the shader for point S on the sphere's surface.
Alternatively, you can prerender the background into a cube texture, and then sample from it like so (changing only the last line of code from above):
vec4 color = texture(cubeSample,ray);

OpenGL beam spotlight

After reading up on OpenGL and GLSL, I was wondering if there are examples out there for making something like this: http://i.stack.imgur.com/FtoBj.png
I am particularly interested in the beam and the intensity of the light (god rays?).
Does anybody have a good starting point?
OpenGL just draws points, lines and triangles to the screen. It doesn't maintain a scene and the "lights" of OpenGL are actually just a position, direction and color used in the drawing calculations of points, lines or triangles.
That being said, it's actually possible to implement an effect like yours using a fragment shader that implements a variant of the shadow mapping method. The difference is that instead of determining whether a surface element of a primitive (point, line or triangle) lies in shadow or not, you cast rays into a volume and, for every sampling position along the ray, test whether that volume element (voxel) lies in shadow or not; if it is illuminated, you add its contribution to the ray accumulator.
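A heavily simplified fragment-shader sketch of that idea might look like the following; the uniform names are made up for illustration, and visible() is a placeholder for whatever shadow-map lookup you use:
uniform vec3  camPos;       // camera position in world space
uniform vec3  lightPos;     // spotlight position in world space
uniform float density;      // how "dusty" the air is
varying vec3  worldPos;     // fragment position on some proxy geometry (e.g. a cone around the beam)
// Placeholder: replace with a shadow-map lookup that returns 1.0 when the
// sample point is lit by the spotlight and 0.0 when it is occluded.
float visible(vec3 p) { return 1.0; }
void main()
{
    vec3  rd = normalize(worldPos - camPos);
    float t  = 0.0, dt = 0.1;                   // fixed-size steps along the view ray
    float glow = 0.0;
    for (int i = 0; i < 64; i++)                // march through the light volume
    {
        vec3  p   = camPos + rd * t;
        float att = 1.0 / (1.0 + dot(p - lightPos, p - lightPos));  // 1/r^2 falloff
        glow += visible(p) * att * density * dt;                    // accumulate in-scattered light
        t += dt;
    }
    gl_FragColor = vec4(vec3(glow), 1.0);       // additive beam, to be blended over the scene
}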

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine, following this tutorial. It works fine; I don't run into any trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning face culling and the camera position, precisely explained here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have 3 advantages:
No cull-face issue
No camera-position-inside-light-sphere issue
Much more efficient (the number of vertices is severely reduced + no stencil test)
Are there any disadvantages using this technique?
My second question deals with implementing the mentioned method. The circle's center position could easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now how to calculate the scaling of the resulting circle?
It should depend on the distance (camera to light) and somehow on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those polygons in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to. This data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether A and C should be illuminated or not, even if the circle was projected to a correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane; in this case none of the fragments will be generated and the light will "disappear".
Lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole-screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which cost will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the vector length from the projected center. It must be a point on the border in screen space, obviously.
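A sketch of that calculation in GLSL; the uniform names are assumptions, and offsetting the border point sideways in view space is just the simplest way to land on the screen-space silhouette:
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 lightPos;          // light-sphere center in world space
uniform float lightRadius;      // light-sphere radius
// Returns the light circle's center (xy) and radius (z) in normalized device coordinates.
vec3 lightCircle()
{
    vec4 centerV = viewMatrix * vec4(lightPos, 1.0);            // center in view space
    vec4 edgeV   = centerV + vec4(lightRadius, 0.0, 0.0, 0.0);  // point on the sphere, offset sideways
    vec4 centerC = projectionMatrix * centerV;
    vec4 edgeC   = projectionMatrix * edgeV;
    vec2 c = centerC.xy / centerC.w;                            // perspective division
    vec2 e = edgeC.xy / edgeC.w;
    return vec3(c, length(e - c));                              // radius measured in NDC units
}
Note that under perspective projection the silhouette of a sphere is really an ellipse, so this radius is only an approximation.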

Doubts in RayTracing with GLSL

I am trying to develop a basic ray tracer. So far I have calculated the intersection with a plane and Blinn-Phong shading. I am working on a 500*500 window and my primary ray generation code is as follows:
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
I am also not sure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in the OpenGL code using glVertex2f?
vec3 rayDirection = vec3( gl_FragCoord.x-250.0,gl_FragCoord.y-250.0 , 10.0);
Now I am not sure whether the above method is right or wrong. Please give me some insights.
There's no right or wrong with projections. You could just as well map viewport pixels to azimuth and elevation angles. Actually your way of doing this is not bad at all. I'd just pass the viewport dimensions in an additional uniform, instead of hardcoding them, and normalize the vector. The Z component effectively works like a focal length.
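With the viewport size in a uniform, the ray generation could look roughly like this (the uniform name is just an example):
uniform vec2 resolution;    // viewport size in pixels, e.g. vec2(500.0, 500.0)
vec3 primaryRay()
{
    vec2 p = gl_FragCoord.xy - 0.5 * resolution;    // pixel position relative to the screen center
    float focal = 250.0;                            // acts like a focal length; larger = narrower field of view
    return normalize(vec3(p, focal));               // normalized ray direction in view space
}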
I am also not sure whether we need to construct geometry in the OpenGL code while ray tracing in GLSL. For example, if I am trying to raytrace a plane, do I need to construct the plane in the OpenGL code using glVertex2f?
Raytracing works on a global description containing the full scene. OpenGL primitives, however, are purely local, i.e. just individual triangles, lines or points, and OpenGL doesn't maintain a scene database. So geometry passed using the usual OpenGL drawing functions cannot be raytraced (at least not that way).
This is about the biggest obstacle to doing raytracing with GLSL: you somehow need to deliver the whole scene as some freely accessible buffer.
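For a handful of analytic objects, one common workaround is to upload the scene as uniforms, e.g. an array of spheres; a toy sketch of that idea (the names and layout are just an example):
uniform vec4 spheres[8];    // per sphere: xyz = center, w = radius
uniform int  sphereCount;   // how many entries of the array are actually used
// Returns the distance along the ray to the closest sphere hit, or -1.0 on a miss.
float traceScene(vec3 ro, vec3 rd, out int hitIndex)
{
    float best = -1.0;
    hitIndex = -1;
    for (int i = 0; i < 8; i++)
    {
        if (i >= sphereCount) break;
        vec3  oc = ro - spheres[i].xyz;
        float b  = dot(oc, rd);
        float c  = dot(oc, oc) - spheres[i].w * spheres[i].w;
        float d  = b*b - c;
        if (d < 0.0) continue;          // ray misses this sphere
        float t = -b - sqrt(d);         // nearest of the two intersections
        if (t > 0.0 && (best < 0.0 || t < best)) { best = t; hitIndex = i; }
    }
    return best;
}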
It is possible to use Ray Marching to render certain types of complex scenes in a single fragment shader. Here are some examples: (use Chrome or FireFox, requires WebGL)
Gift boxes: http://glsl.heroku.com/e#820.2
Torus Journey: http://glsl.heroku.com/e#794.0
Christmas tree: http://glsl.heroku.com/e#729.0
Modutropolis: http://glsl.heroku.com/e#327.0
The key to making this stuff work is writing "distance functions" that tell the ray marcher how far it is from the surface of an object. For more info on distance functions, see:
http://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm
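The core of such a ray marcher is only a few lines; here is a minimal sketch with a single sphere distance function:
// Signed distance from point p to a sphere of radius 1 at the origin.
float sdSphere(vec3 p) { return length(p) - 1.0; }
// March along the ray until we get close enough to a surface (hit)
// or have travelled too far (miss). Returns the travelled distance, or -1.0.
float rayMarch(vec3 ro, vec3 rd)
{
    float t = 0.0;
    for (int i = 0; i < 64; i++)
    {
        float d = sdSphere(ro + rd * t);    // distance to the nearest surface
        if (d < 0.001) return t;            // close enough: treat as a hit
        t += d;                             // it is safe to advance by that distance
        if (t > 100.0) break;               // give up: the ray left the scene
    }
    return -1.0;                            // miss
}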

OpenGL/GLSL: What is the best algorithm to render clouds/smoke out of volumetric data?

I would like to render 3D volume data: density (which can be mapped to the alpha channel) and temperature (which can be mapped to RGB).
Currently I am simulating maximum intensity projection, i.e. rendering the most dense/opaque sample in the end. But this method loses the depth perception.
I would like to imitate the effect like a fire inside the smoke.
So my question is: what are the techniques in OpenGL to generate images based on the available data?
Any idea is welcome.
Thanks Arman.
I would try a volume ray caster first.
You can google "Volume Visualization With Ray Casting" and that should give you most of what you need. NVidia has a great sample (using OpenGL) of ray casting through a 3D texture.
For your specific case, you would just need to keep stepping through the volume, accumulating the temperature until you reach the wanted density.
If your volume doesn't fit in video memory, you can do the ray casting in pieces and then do a composition step.
A quick description of ray casting:
CPU:
1) Render a six-sided cube in world space as the drawing primitive; make sure to use depth culling.
Vertex shader:
2) In the vertex shader, store off the world position of the vertices (this will be interpolated per fragment).
Fragment shader:
3) Use the interpolated position minus the camera position to get the vector of traversal through the volume.
4) Use a while loop to step through the volume from the point on the cube through to the other side. There are 3 ways to know when to end:
A) at each step, test whether the point is still inside the cube.
B) do a ray intersection with the cube and calculate the distance between the intersections.
C) do a prerender of the cube with front-face culling and store the depths in a second texture map, then just sample it at the screen pixel to get the distance.
5) Accumulate while you loop and set the pixel color (steps 3 to 5 are sketched below).
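Putting steps 3 to 5 together, the fragment shader ends up looking roughly like the sketch below; the uniform/varying names are made up, the step size and loop count are arbitrary, and the mapping from density/temperature to color is up to you:
uniform sampler3D volumeTex;    // rgb = temperature mapped to color, a = density
uniform vec3 camPos;            // camera position in the volume's local (texture) space
varying vec3 cubePos;           // interpolated cube position from the vertex shader, in [0,1]^3
void main()
{
    vec3 dir = normalize(cubePos - camPos);     // traversal direction through the volume (step 3)
    vec3 p   = cubePos;                         // start on the front face of the cube
    vec4 acc = vec4(0.0);                       // accumulated color and opacity
    for (int i = 0; i < 128; i++)               // step through the volume (step 4)
    {
        // end condition A: stop when the sample point leaves the unit cube
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0)))) break;
        vec4 s = texture3D(volumeTex, p);
        // front-to-back compositing: only add what is not yet occluded (step 5)
        acc.rgb += (1.0 - acc.a) * s.a * s.rgb;
        acc.a   += (1.0 - acc.a) * s.a;
        if (acc.a > 0.99) break;                // early out once the volume is effectively opaque
        p += dir * 0.01;                        // advance along the ray
    }
    gl_FragColor = acc;
}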