I am trying to create the effect of water surface thickness with a vertex-fragment shader.
I am in a 3D game environment, but it's a scrolling view, so effectively a "2D" view.
Here is a good tutorial on creating such an effect in true 2D using a fragment shader.
But I don't think it can be used in my case.
For the moment I only have a plane where I apply refraction.
I want to apply the water thickness effect on top of that, but I don't know how to do it.
I am not trying to create water deformation/displacement with the vertex shader for the moment; that is not the point.
I don't know if it's possible with a simple quad, or whether I should use an object like this instead.
Here are some examples.
Thanks a lot !
[EDIT] Added the Rayman water effect as a better reference for the effect.
I am trying to create a 2D water effect with a vertex-fragment shader on a simple quad.
Your first misconception is thinking in 2D. What you see in your right picture is the interaction of light with a 2D surface in 3D space. A simple quad will not suffice.
For water you need some surface displacement. You can either simulate this by solving some wave equation, or use a Fourier-transform-based approach; I suggest the second. Next, render your scene "regular" for everything above the water, then "murky and refracted" for everything below the water line. Render both to textures.
Then you render the water surface. When looking at the air→water interface (i.e. from above), use a Fresnel reflection term, i.e. mix between the top reflection and the see-through refraction depending on the angle of incidence, and for too shallow an angle emulate Brewster reflection. For the water→air interface (i.e. from below) you do something similar, only you don't need the Fresnel term, just the Brewster term, to account for total internal reflection.
Since you do all the mixing in the fragment shader, you don't need blending, hence no need to sort draw operations by water depth.
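As a hedged sketch of that fragment-shader mixing for the air→water case, using Schlick's approximation for the Fresnel term (all texture and varying names below are assumptions, not from the answer above):
#version 330 core
uniform sampler2D reflectionTex;   // scene rendered "regular", mirrored about the water plane
uniform sampler2D refractionTex;   // scene rendered "murky and refracted" below the water line
in  vec3 viewDir;                  // from the surface point towards the camera
in  vec3 surfNormal;
in  vec2 screenUV;
out vec4 fragColor;

void main() {
    vec3  V  = normalize(viewDir);
    vec3  N  = normalize(surfNormal);
    // Schlick's approximation of the Fresnel reflectance for air -> water (F0 ~ 0.02).
    float F0 = 0.02;
    float F  = F0 + (1.0 - F0) * pow(1.0 - max(dot(N, V), 0.0), 5.0);
    vec3  reflected = texture(reflectionTex, screenUV).rgb;
    vec3  refracted = texture(refractionTex, screenUV).rgb;
    // Grazing angles are reflection-dominated, straight-down views are see-through.
    fragColor = vec4(mix(refracted, reflected, F), 1.0);
}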
Yes, rendering water is not trivial.
Related
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically, you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D, but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push this forward a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even "extruding" a 2D texture through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the intersection/surface you're drawing on (often called triplanar mapping).
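A small sketch of that three-texture weighting (triplanar mapping) in a fragment shader; texture and varying names are assumptions, and a purely procedural 3D colour function could replace the texture lookups:
#version 330 core
uniform sampler2D texX, texY, texZ;   // one 2D texture per world axis
in  vec3 worldPos;
in  vec3 worldNormal;
out vec4 fragColor;

void main() {
    // Blend weights from the surface orientation; sharpened and normalised.
    vec3 w = pow(abs(normalize(worldNormal)), vec3(4.0));
    w /= (w.x + w.y + w.z);
    vec3 cX = texture(texX, worldPos.yz).rgb;   // texture "extruded" along X
    vec3 cY = texture(texY, worldPos.xz).rgb;   // along Y
    vec3 cZ = texture(texZ, worldPos.xy).rgb;   // along Z
    fragColor = vec4(cX * w.x + cY * w.y + cZ * w.z, 1.0);
}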
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes light the further the light has to travel through it. Think of many alpha-transparent surfaces; take the limit and you have an exponential. The light remaining is close to 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
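A minimal fragment shader built around those two lines, with assumed names for the input textures and uniforms:
#version 330 core
uniform sampler2D sceneTex;     // the refracted underwater scene, rendered to texture
uniform sampler2D depthTex;     // per-pixel water depth/thickness (see the next paragraph)
uniform vec3  WaterColor;
uniform float WaterDensity;
in  vec2 uv;
out vec4 fragColor;

void main() {
    float WaterDepth    = texture(depthTex, uv).r;
    vec3  Absorbance    = WaterColor * WaterDensity * -WaterDepth;
    vec3  Transmittance = exp(Absorbance);                 // Beer's law
    fragColor = vec4(texture(sceneTex, uv).rgb * Transmittance, 1.0);
}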
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending, using a shader that writes the distance to a floating-point texture. Then switch to subtractive blending and render all the front faces (or the water surface). You're left with a texture containing the distances/depths for the above equation.
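A sketch of that distance-writing pass; the blend state is shown as comments and the varying name is an assumption:
#version 330 core
// Pass 1: additive blending (glBlendFunc(GL_ONE, GL_ONE), glBlendEquation(GL_FUNC_ADD)),
//         draw back faces / seabed into a floating-point render target.
// Pass 2: switch to glBlendEquation(GL_FUNC_REVERSE_SUBTRACT) and draw the front faces /
//         water surface; the target then holds (back distance - front distance).
in  vec3 viewPos;     // eye-space position interpolated from the vertex shader
out vec4 fragColor;

void main() {
    fragColor = vec4(length(viewPos));   // distance from the camera to this fragment
}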
Volume Rendering
Combining the two ideas, the material is both a transparent solid and one whose colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. At each step you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
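As a hedged sketch of such a brute-force march with absorption (texture, uniform and varying names are assumptions; a real implementation would compute the ray entry/exit points properly):
#version 330 core
uniform sampler3D volumeTex;    // rgb = colour, a = density (an assumed layout)
uniform vec3  rayOrigin;        // camera position in the volume's [0,1]^3 texture space
uniform float stepSize;
in  vec3 rayDirIn;              // per-pixel ray direction from the vertex shader
out vec4 fragColor;

void main() {
    vec3  dir = normalize(rayDirIn);
    vec3  colour = vec3(0.0);
    float transmittance = 1.0;                       // how much light is left
    for (int i = 0; i < 256; ++i) {
        vec3 p = rayOrigin + dir * (stepSize * float(i));
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                                   // marched out of the volume
        vec4 s = texture(volumeTex, p);
        float stepTrans = exp(-s.a * stepSize);      // Beer's law over one step
        colour += transmittance * (1.0 - stepTrans) * s.rgb;
        transmittance *= stepTrans;
        if (transmittance < 0.01) break;             // effectively opaque, stop early
    }
    fragColor = vec4(colour, 1.0 - transmittance);
}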
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced where the thickness of the medium is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
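A minimal sketch of that kind of depth-based darkening in a fragment shader, in the spirit of classic exponential fog (uniform and varying names are assumptions):
#version 330 core
uniform vec3  fogColor;      // murky water / wall colour
uniform float fogDensity;
in  vec3 viewPos;            // eye-space position of the fragment
in  vec3 shadedColor;        // whatever lighting you computed upstream (assumed varying)
out vec4 fragColor;

void main() {
    float dist      = length(viewPos);
    float fogFactor = exp(-fogDensity * dist);   // 1 right at the camera, towards 0 far away
    fragColor = vec4(mix(fogColor, shadedColor, fogFactor), 1.0);
}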
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but it generally needs simplification and approximations to be useful.
I'm trying to create an OpenGL application with water waves and refraction. I need to either cast rays from the sun and then from the camera and figure out where they intersect, or start from the ocean floor and figure out in which direction(s), if any, I have to go in order to hit the sun or the camera. I'm kind of stuck; can anyone give me an entry point into either OpenGL ray casting or a crash course in advanced geometry? I don't want the ocean floor to be at a constant depth, and I don't want the water waves to be simple sinusoidal waves.
First things first: the effect you're trying to achieve can be implemented using OpenGL, but it is not a feature of OpenGL. OpenGL by itself is just a sophisticated triangle-to-screen drawing API. You take some input data and write a program that performs relatively simple rasterizing drawing operations based on that data using the OpenGL API. Shaders give you some leeway; you can even implement a raytracer in the fragment shader.
In your case that means you must implement some algorithm that generates the picture you intend. For water it must be some kind of raytracer, or a fake refraction method, to get the effect of looking into the water. The caustics require either a full-featured photon mapper, or you can settle for a fake effect based on the 2nd derivative of the water surface.
There is a WebGL demo, rendering stunningly good looking, interactive water: http://madebyevan.com/webgl-water/ And here's a video of it on YouTube http://www.youtube.com/watch?v=R0O_9bp3EKQ
This demo uses true raytracing (the water surface, the sphere and the pool are raytraced), the caustics are a "fake caustics" effect, based on projecting the 2nd derivative of the water surface heightmap.
There's nothing very OpenGL-specific about this.
Are you talking about caustics? Here's another good Gamasutra article.
Reflections are normally achieved by reflecting the camera in the plane of the mirror and rendering to a texture; you can apply distortion and then use it to texture the water surface. This only works well for small waves.
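A rough sketch of sampling such a planar reflection texture with a small ripple distortion (all names are assumptions):
#version 330 core
uniform sampler2D reflectionTex;   // scene rendered from the camera mirrored about the water plane
uniform sampler2D normalMap;       // animated ripple normals
uniform float     distortionScale; // keep small, e.g. 0.02, so the cheat stays plausible
in  vec4 clipPos;                  // clip-space position passed from the vertex shader
in  vec2 waterUV;                  // UVs scrolling over the water plane
out vec4 fragColor;

void main() {
    vec2 screenUV = clipPos.xy / clipPos.w * 0.5 + 0.5;          // project into the texture
    vec2 ripple   = texture(normalMap, waterUV).xy * 2.0 - 1.0;  // [-1,1] distortion vector
    fragColor     = texture(reflectionTex, screenUV + ripple * distortionScale);
}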
What you're after here is lots of little ways to cheat :-)
Technically, all you perceive is the result of light waves/photons bouncing off surfaces and propagating through media. For the "real deal" you'll have to trace the light directly from the Sun, with each ray following the path:
hit the water surface
refract + reflect: the reflected part goes into the camera(*), the refracted part goes further
hit the ocean bottom
reflect
hit the water surface from beneath
reflect + refract: the refracted part gets out of the water and hits the camera(*), the reflected part goes back to the ocean bottom, reflects again, and so on
(*) Actually, most of the rays will miss the camera, but tracing enough rays for some of them to hit it would be overly expensive, so this is a cheat.
Do this for at least three wavelengths - "red", "green" and "blue". Each of them will refract and reflect differently. You'll get the whole picture by combining the three.
Then you just create a texture with the rays that got into the camera and overlay it in OpenGL.
That's a straightforward, simple and very computationally expensive way that gives an approximation to the physics behind the caustics.
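As a hedged illustration of the three-wavelength idea, a far cheaper screen-space cheat is to refract with a slightly different index of refraction per colour channel (the texture names, IOR values and offset trick are all assumptions):
#version 330 core
uniform sampler2D refractionTex;   // scene below the surface, rendered to a texture
uniform float     refractScale;    // how far the refraction may shift the lookup
in  vec3 viewDir;                  // from the surface point towards the camera
in  vec3 surfNormal;
in  vec2 screenUV;
out vec4 fragColor;

void main() {
    vec3 I = normalize(-viewDir);                  // incident direction
    vec3 N = normalize(surfNormal);
    // Slightly different air-to-water IOR per channel fakes dispersion.
    vec2 offR = refract(I, N, 1.0 / 1.331).xy * refractScale;
    vec2 offG = refract(I, N, 1.0 / 1.336).xy * refractScale;
    vec2 offB = refract(I, N, 1.0 / 1.343).xy * refractScale;
    fragColor = vec4(texture(refractionTex, screenUV + offR).r,
                     texture(refractionTex, screenUV + offG).g,
                     texture(refractionTex, screenUV + offB).b, 1.0);
}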
I want to render a fire effect in OpenGL based on a particle simulation. I have hundreds of particles which have a position and a temperature (and therefore a color), as well as all their other properties. Simply rendering a solidSphere using GLUT doesn't look very realistic, as the particles are spread too wide. How can I draw the fire based on the particles' information?
If you are just trying to create a realistic fire effect I would use some kind of pre-existing library, as recommended in other answers. But it seems to me that you are after a display of the simulation itself.
A direct solution worth trying might be to replace your current spheres with billboards (i.e. graphic images that always face the camera) which are solid white in the middle and fade to transparent towards the edges, obviously positioning and colouring the images according to your particles.
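A minimal sketch of such a billboard's fragment shader with a radial fade (the varyings are assumptions; the particle colour would come from its temperature):
#version 330 core
in  vec2 quadUV;        // 0..1 across the billboard quad
in  vec4 particleColor; // colour derived from the particle's temperature
out vec4 fragColor;

void main() {
    float d     = length(quadUV - vec2(0.5));       // 0 at the centre, ~0.7 in the corners
    float alpha = 1.0 - smoothstep(0.0, 0.5, d);    // solid centre, transparent edge
    fragColor   = vec4(particleColor.rgb, particleColor.a * alpha);
}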
A better solution, I feel, is to approach the flame as a set of 2D grids on which you can control the transparency and colour of each vertex. One could do this in OpenGL by constructing a plane from quads and using your particle system to calculate (via interpolation from the nearest particles you have) the colour and transparency of each vertex. OpenGL will interpolate each pixel between vertices for you and give you a smooth-looking picture of the 'average particles in the area'.
You probably want to use a particle system to render a fire effect, here's a NeHe tutorial on how to do just that: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=19
From what I gathered he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. This caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like the algorithm to do so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each x,y stack of voxels (if z means 'up'). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines which are tilted in screen space.
I didn't look at the algorithm itself, but I can tell the following based off the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to actually smooth the data somehow (trace against a smoothed energy function) to make them look smoother.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
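As a hedged, heavily simplified illustration of "the first voxel hit wins" (a fixed-step march through a 3D occupancy texture, not the RLE span technique described above; all names are assumptions):
#version 330 core
uniform sampler3D occupancyTex;   // > 0.5 where a voxel is solid
uniform vec3  rayOrigin;          // camera position in [0,1]^3 texture space
uniform float stepSize;
in  vec3 rayDirIn;
out vec4 fragColor;

void main() {
    vec3 dir = normalize(rayDirIn);
    for (int i = 0; i < 512; ++i) {
        vec3 p = rayOrigin + dir * (stepSize * float(i));
        if (any(lessThan(p, vec3(0.0))) || any(greaterThan(p, vec3(1.0))))
            break;                                   // left the volume: background
        if (texture(occupancyTex, p).r > 0.5) {      // first solid voxel along the ray
            fragColor = vec4(p, 1.0);                // e.g. visualise the hit position
            return;
        }
    }
    fragColor = vec4(0.0);
}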
and lighting
It's possible to approximate the local normal by looking at nearby cells and checking which are occupied and which are not, then performing the lighting calculation. Alternatively, each voxel can store the normal along with its color or other material properties.
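A small sketch of the first option, estimating the normal from neighbouring cells by central differences of an occupancy field (function and names are assumptions):
// The occupancy gradient points from empty space towards solid voxels,
// so the outward surface normal is its negation.
vec3 estimateNormal(sampler3D occupancyTex, vec3 p, float voxelSize) {
    float dx = texture(occupancyTex, p + vec3(voxelSize, 0.0, 0.0)).r
             - texture(occupancyTex, p - vec3(voxelSize, 0.0, 0.0)).r;
    float dy = texture(occupancyTex, p + vec3(0.0, voxelSize, 0.0)).r
             - texture(occupancyTex, p - vec3(0.0, voxelSize, 0.0)).r;
    float dz = texture(occupancyTex, p + vec3(0.0, 0.0, voxelSize)).r
             - texture(occupancyTex, p - vec3(0.0, 0.0, voxelSize)).r;
    return normalize(vec3(-dx, -dy, -dz));
}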
I am trying to write optimized code that renders a 3D scene with OpenGL onto a sphere and then displays the unwrapped sphere on the screen, i.e. produces a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots around it, so as to approximate spherical quads with planar tiles of the frustum. Then I can use these as textures to apply to a distorted planar patch.
This seems to me a pretty tedious approach. I wonder if there is a way to tackle this using shaders or some GPU-smart method.
Thank you
S.
I can give you two solutions.
The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This will handle all the needed math in hardware for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion in the corners). In most cases it should be enough, though.
After this, you render a quad to the screen, and in a shader you map your UV coordinates to xyz direction vectors using straightforward spherical mapping. The hardware will compute for you which face of the cubemap to take, and at which UV.
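A minimal sketch of that second pass, following the question's convention that the x axis is the polar angle and the y axis the azimuth (the cubemap name and the exact axis convention are assumptions):
#version 330 core
uniform samplerCube envCube;   // the cubemap the scene was rendered into
in  vec2 uv;                   // 0..1 across the full-screen quad
out vec4 fragColor;

void main() {
    const float PI = 3.14159265358979;
    float polar   = uv.x * PI;          // 0 at one pole, PI at the other
    float azimuth = uv.y * 2.0 * PI;    // once around the equator
    vec3 dir = vec3(sin(polar) * cos(azimuth),
                    cos(polar),
                    sin(polar) * sin(azimuth));
    fragColor = texture(envCube, dir);  // the HW picks the cube face and its UV
}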
The second is more or less the same, but with a custom deformation and less hardware support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and make 6 passes. The rendering pass is the same, but this time you're on your own to choose the right texture and compute the UVs.
By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and considerable complexity. Just project the planar image mathematically and be done with it.
You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.