I'm trying to create an OpenGL application with water waves and refraction. I need to either cast rays from the sun and then from the camera and figure out where they intersect, or I need to start from the ocean floor and figure out in which direction(s), if any, I have to go in order to hit the sun or the camera. I'm kind of stuck; can anyone give me an entry point into either OpenGL ray casting or a crash course in advanced geometry? I don't want the ocean floor to be at a constant depth, and I don't want the water waves to be simple sinusoidal waves.
First things first: the effect you're trying to achieve can be implemented using OpenGL, but it is not a feature of OpenGL. OpenGL by itself is just a sophisticated triangle-to-screen drawing API. You've got some input data, and you write a program that performs relatively simple rasterizing drawing operations based on that data using the OpenGL API. Shaders give you some room to move; you can even implement a raytracer in the fragment shader.
In your case that means you must implement some algorithm that generates the picture you intend. For water it must be some kind of raytracer, or a fake refraction method, to get the effect of looking into the water. The caustics require either a full-featured photon mapper, or you settle for a fake effect based on the 2nd derivative of the water surface.
There is a WebGL demo, rendering stunningly good looking, interactive water: http://madebyevan.com/webgl-water/ And here's a video of it on YouTube http://www.youtube.com/watch?v=R0O_9bp3EKQ
This demo uses true raytracing (the water surface, the sphere and the pool are raytraced), the caustics are a "fake caustics" effect, based on projecting the 2nd derivative of the water surface heightmap.
There's nothing very OpenGL-specific about this.
Are you talking about caustics? Here's another good Gamasutra article.
Reflections are normally achieved by reflecting the camera in the plane of the mirror and rendering to a texture; you can apply distortion and then use it to texture the water surface. This only works well for small waves.
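A minimal fragment-shader sketch of that idea, assuming the mirrored scene has already been rendered into a texture and a normal map supplies the distortion (all names here are illustrative, not from any particular engine):

// reflectionTex: the scene rendered from the mirrored camera
// normalMap: perturbs the lookup to fake wave distortion
uniform sampler2D reflectionTex;
uniform sampler2D normalMap;
varying vec4 clipPos;    // water-surface vertex position in clip space
varying vec2 waveUV;     // texture coordinates on the water surface

void main() {
    vec2 screenUV = clipPos.xy / clipPos.w * 0.5 + 0.5;               // project into the reflection texture
    vec2 distortion = (texture2D(normalMap, waveUV).xy - 0.5) * 0.05; // small, tunable offset
    gl_FragColor = texture2D(reflectionTex, screenUV + distortion);
}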
What you're after here is lots of little ways to cheat :-)
Technically, all you perceive is the result of light waves/photons bouncing off surfaces and propagating through media. For the "real deal" you'd have to trace the light directly from the Sun, with each ray following this path:
hits the water surface
refracts + reflects; the reflected part goes into the camera (*), the refracted part goes further
hits the ocean bottom
reflects
hits the water surface from beneath
reflects + refracts; the refracted part gets out of the water and hits the camera (*), the reflected part goes back down to the ocean bottom, reflects again, and so on
(*) Actually, almost all of the rays will miss the camera, and tracing enough of them for a useful image would be overly expensive, so this step is a cheat.
Do this for at least three wavelengths - "red", "green" and "blue". Each of them will refract and reflect differently. You'll get the whole picture by combining the three.
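If it helps, here is a small GLSL sketch of the per-wavelength split at the surface; the three eta values are only ballpark air-to-water ratios I'm assuming for dispersion, not measured constants:

// I = incoming ray direction, N = surface normal; both assumed normalized.
void splitRay(in vec3 I, in vec3 N,
              out vec3 reflected,
              out vec3 refractedR, out vec3 refractedG, out vec3 refractedB)
{
    reflected  = reflect(I, N);              // the same for every wavelength
    refractedR = refract(I, N, 1.0 / 1.331); // slightly different index of refraction
    refractedG = refract(I, N, 1.0 / 1.333); // per colour channel gives the
    refractedB = refract(I, N, 1.0 / 1.336); // dispersion ("rainbow") effect
    // Trace each refracted direction separately and combine the three results
    // into the final red/green/blue contributions.
}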
Then you just create a texture with the rays that got into the camera and overlay it in OpenGL.
That's a straightforward, simple and very computationally expensive way to get an approximation of the physics behind the caustics.
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent because of culling and the way the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push it forwards a bit so it isn't clipped) and colour it based on a 3D function. Now it'll look properly solid and move as the player does, not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a common example). Even simply "extruding" a 2D texture through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the intersection/surface you're drawing on, i.e. triplanar mapping.
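Here's a rough sketch of that three-texture weighting (the sampler and varying names are just placeholders):

uniform sampler2D texX, texY, texZ;   // one texture per projection axis
varying vec3 worldPos;
varying vec3 worldNormal;

void main() {
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);                  // weights sum to 1
    vec4 cx = texture2D(texX, worldPos.yz);  // projection along X
    vec4 cy = texture2D(texY, worldPos.xz);  // projection along Y
    vec4 cz = texture2D(texZ, worldPos.xy);  // projection along Z
    gl_FragColor = cx * w.x + cy * w.y + cz * w.z;
}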
Detecting the intersection between the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer, and make sure to draw everything as solid, non-self-intersecting geometry. A simple idea might be to draw back faces only, after drawing everything with front faces. Wherever you can see back faces, that part of the near plane must be inside something. For those pixels you can calculate the near-clipping-plane position in world space and apply the 3D colouring. I suspect there are faster ways than drawing everything twice, though.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes light the further the light has to travel through it. Think of many alpha-transparent surfaces, take the limit, and you get an exponential. The light remaining is close to 1/exp(dist), i.e. exp(-dist). Google "Beer's Law". From there,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;   // per-channel absorption over the path length
vec3 Transmittance = exp(Absorbance);                        // fraction of light that survives (Beer's law)
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending, using a shader that writes the distance to a floating-point texture. Then switch to subtractive blending and render all the front faces (or the water surface). You're left with a texture containing the distances/depths for the above equation.
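A sketch of the distance-writing pass (GLSL), under the assumption that the vertex shader passes the eye-space position through; the blending setup lives on the host side and is only described in the comments:

// Render back faces (or the seabed) with additive blending into a float texture,
// then front faces (or the water surface) with glBlendEquation(GL_FUNC_REVERSE_SUBTRACT).
// The texture then holds the per-pixel thickness to plug in as WaterDepth above.
varying vec3 eyePos;    // interpolated eye-space vertex position

void main() {
    gl_FragColor = vec4(length(eyePos));   // distance from the camera to this fragment
}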
Volume Rendering
Combining the two ideas: the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render it is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function along the way. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even intervals. At each step you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
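A minimal brute-force march in GLSL might look like this, assuming the volume lives in a 3D texture and the ray has already been set up in the volume's texture space (names and step counts are arbitrary):

uniform sampler3D volumeTex;   // rgb = colour, a = density
varying vec3 rayStart;         // entry point into the volume, in texture space
varying vec3 rayDir;           // normalized march direction, in texture space

void main() {
    const int   STEPS    = 128;
    const float stepSize = 1.0 / 128.0;
    vec3  color         = vec3(0.0);
    float transmittance = 1.0;            // how much light is still left
    vec3  p = rayStart;
    for (int i = 0; i < STEPS; ++i) {
        vec4  s      = texture3D(volumeTex, p);
        float absorb = exp(-s.a * stepSize);              // Beer's law over one step
        color += transmittance * (1.0 - absorb) * s.rgb;  // colour contributed by this step
        transmittance *= absorb;
        p += rayDir * stepSize;
        if (transmittance < 0.01) break;                  // early out once nearly opaque
    }
    gl_FragColor = vec4(color, 1.0 - transmittance);
}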
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced the thicker the medium is.
But you can fake this by making some assumptions and making the interior geometry (under the water or inside the wall) darker, either by reducing the lighting or by using darker colors. If you care about the depth effect, look at OpenGL fog.
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but it generally needs simplification and approximation to be useful.
I am trying to create the effect of the water surface thickness with a vertex-fragment shader.
I am in a 3D game environment, but it's a scrolling view, so effectively a "2D" view.
Here is a good tutorial on creating such an effect in true 2D using a fragment shader.
But this can't be used in my case I think.
For the moment I only have a plane where I apply refraction.
And I want to apply the water thickness effect. But I don't know how to do it.
I am not trying to create any water deformation/displacement with the vertex shader for the moment; that is not the point.
I don't know if it's possible with a simple quad; maybe I should use an object like this instead.
Here are some examples.
Thanks a lot !
[EDIT] Added Rayman water effect to have a better reference of the effect.
I am trying to create a 2D water effect with a vertex-fragment shader on a simple quad.
Your first misconception is thinking in 2D. What you see in the right picture is the interaction of light with a 2D surface in 3D space. A simple quad will not suffice.
For water you need some surface displacement. You can either simulate this by solving a wave equation, or you can use a Fourier-transform-based approach; I suggest the second. Next you render your scene "regular" for everything above the water, then "murky and refracted" for everything below the water line. Render both to textures.
Then you render the water surface. When looking at the air→water interface (i.e. from above), use a Fresnel reflection term, i.e. mix between the top reflection and the see-through refraction depending on the angle of incidence. For the water→air interface (i.e. from below) you do something similar, but you also have to account for total internal reflection once the angle of incidence exceeds the critical angle.
Since you do all mixing in the fragment shader, you don't need blending, hence no need to sort drawing operations for the water depth.
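A sketch of that mixing step, using Schlick's approximation in place of the full Fresnel equations (texture and varying names are mine; reflectionTex/refractionTex stand for the two renders described above):

uniform sampler2D reflectionTex;   // scene reflected about the water plane
uniform sampler2D refractionTex;   // the "murky and refracted" below-water render
varying vec3 viewDir;              // surface-to-camera direction
varying vec3 surfNormal;           // displaced water-surface normal
varying vec2 screenUV;             // this fragment's position in the screen-sized textures

void main() {
    float cosTheta = clamp(dot(normalize(viewDir), normalize(surfNormal)), 0.0, 1.0);
    float R0 = 0.02;                                            // reflectance of water at normal incidence
    float fresnel = R0 + (1.0 - R0) * pow(1.0 - cosTheta, 5.0); // Schlick's approximation
    vec4 refl = texture2D(reflectionTex, screenUV);
    vec4 refr = texture2D(refractionTex, screenUV);
    gl_FragColor = mix(refr, refl, fresnel);
}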
Yes, rendering water is not trivial.
Given a naive take on 3D graphics rendering, it seems that stereo 3D rendering should be essentially transparent to the developer and be entirely a feature of the graphics hardware and drivers. Whenever an OpenGL window is displaying a scene, it takes the geometry, lighting, camera, texture etc. information to render a 2D image of the scene.
Adding stereo 3D to the scene seems to essentially imply using two laterally offset cameras where there was originally one, with all other scene variables staying the same. The only additional information would then be how far apart to make the cameras and how far out to make their central rays converge. Given this, it would seem trivial to take a GL command sequence and interleave the appropriate commands at the driver level to drive a stereo rendering.
It seems, though, that applications need to be specially written to make use of special 3D hardware architectures, which makes implementing this cumbersome and prohibitive. Is this what we should expect of stereo 3D implementations going forward, or am I glossing over too many important details?
In my specific case we are using a .net OpenGL viewport control. I originally hoped that simply having stereo enabled hardware and drivers would be enough to enable stereo 3D.
Your assumptions are wrong. OpenGL does not "take geometry, lighting, camera and texture information to render a 2D image". OpenGL takes commands to manipulate its state machine and commands to execute draw calls.
As Nobody mentions in his comment, the core profile does not even care about transformations at all. The only thing it really provides you with now is a way to feed arbitrary data to a vertex shader and an arbitrary 3D cube to render into. Whether that corresponds to the actual view or not, GL does not care, nor should it.
Mind you, some people have noticed that a driver can try to guess what's the view and what's not, and this is what the nvidia driver tries to do when doing automatic stereo rendering. This requires some specific guess-work, which amounts to actual analysis of game rendering to tweak the algorithms so that the driver guesses right. So it's typically a per-title, in-driver change. And some developers have noticed that the driver can guess wrong, and when that happens, it starts to get confusing. See some first-hand account of those questions.
I really recommend you read that presentation, because it makes some further points as to where the camera should be pointing towards (should the 2 view directions be parallel and such).
Also, it turns out that stereo essentially costs twice the rendering for everything that is view dependent. Some developers (including, for example, the Crytek guys; see Part 2) figured out that, to a great extent, you can do a single render and fudge the picture with additional data to generate the left- and right-eye pictures.
The amount of work saved there is worth a lot by itself, which is a strong incentive for developers to handle this themselves.
Stereo 3D rendering is unfortunately more complex than just adding a lateral camera offset.
You can create stereo 3D from an original "mono" rendered frame plus the depth buffer. Given the range of (real-world) depths in the scene, each depth-buffer value tells you how far away the corresponding pixel is. Given a desired eye-separation value, you can then slide each pixel left or right depending on its distance; a rough sketch follows below.
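A very rough gather-style sketch of that reprojection in GLSL (colorTex/depthTex are the mono render and its depth buffer; the disparity model is deliberately simplistic and the names are mine):

uniform sampler2D colorTex;
uniform sampler2D depthTex;
uniform float eyeOffset;   // +/- for the left/right eye, in texture-space units
varying vec2 uv;

void main() {
    float z = texture2D(depthTex, uv).r;        // nonlinear depth in [0, 1]
    float disparity = eyeOffset * (1.0 - z);    // nearer pixels shift more (crude model)
    gl_FragColor = texture2D(colorTex, vec2(uv.x + disparity, uv.y));
}

A real implementation scatters pixels and fills the holes that open up behind foreground objects. But...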
Do you want parallel axis stereo (offset asymmetrical frustums) or 'toe in' stereo where the two cameras eventually converge? If the latter, you will want to tweak the camera angles scene by scene to avoid 'reversing' bits of geometry beyond the convergence point.
For objects very close to the viewer, the left and right eyes see quite different images of the same object, even down to the left eye seeing one side of the object and the right eye the other side - but the mono view will have averaged these out to just the front. If you want an accurate stereo 3D image, it really does have to be rendered from different eye viewpoints. Does this matter? FPS shooter game, probably not. Human surgery training simulator, you bet it does.
Similar problem if the viewer tilts their head to one side, so one eye is higher than the other. Again, probably not important for a game, really important for the surgeon.
Oh, and do you have anti-aliasing or transparency in the scene? Now you've got a pixel which really represents two pixel values at different depths. Move an anti-aliased pixel sideways and it probably looks worse because the 'underneath' color has changed. Move a mostly-transparent pixel sideways and the rear pixel will be moving too far.
And what do you do with gunsight crosses and similar HUD elements? If they were drawn with depth buffer disabled, the depth buffer values might make them several hundred metres away.
Given all these potential problems, OpenGL sensibly does not try to say how stereo 3D rendering should be done. In my experience modifying an OpenGL program to render in stereo is much less effort than writing it in the first place.
Shameless self promotion: this might help
http://cs.anu.edu.au/~Hugh.Fisher/3dteach/stereo3d-devel/index.html
From what I gathered he used sparse voxel octrees and raycasting. It doesn't seem like he used OpenGL or Direct3D, and when I look at the game Voxelstein it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares. That caught me off guard; I'm not sure how he is doing that without OpenGL or Direct3D.
I tried to read through the source code but it was difficult for me to understand what was going on. I would like to implement something similar and would like the algorithm to do so.
I'm interested in how he performed rendering, culling, occlusion, and lighting. Any help is appreciated.
The algorithm is closer to ray-casting than ray-tracing. You can get an explanation from Ken Silverman himself here:
https://web.archive.org/web/20120321063223/http://www.jonof.id.au/forum/index.php?topic=30.0
In short: on a grid, store an RLE list of surface voxels for each x,y stack of voxels (if z means "up"). Assuming 4 degrees of freedom, ray-cast across it for each vertical line on the screen, and maintain a list of visible spans which is clipped as each cube is drawn. For 6 degrees of freedom, do something similar but with scanlines that are tilted in screen space.
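For illustration only, here is a generic 3D-DDA grid walk in GLSL; this is NOT Silverman's RLE span method, just the basic "visit only the voxels the ray actually crosses" idea, and the occupancy texture is an assumption of mine:

uniform sampler3D voxels;    // assumed occupancy volume, one texel per cell
uniform vec3 gridSize;       // number of cells along each axis

bool isSolid(ivec3 cell) {
    return texture3D(voxels, (vec3(cell) + 0.5) / gridSize).r > 0.5;
}

// Assumes dir has no zero components.
bool castRay(vec3 origin, vec3 dir, out ivec3 hitCell) {
    ivec3 cell     = ivec3(floor(origin));
    ivec3 cellStep = ivec3(sign(dir));
    vec3  tDelta   = abs(1.0 / dir);   // t needed to cross one cell along each axis
    vec3  tMax     = (vec3(cell) + max(vec3(cellStep), vec3(0.0)) - origin) / dir;
    for (int i = 0; i < 256; ++i) {    // hard cap on the number of steps
        if (isSolid(cell)) { hitCell = cell; return true; }
        if (tMax.x < tMax.y && tMax.x < tMax.z) { cell.x += cellStep.x; tMax.x += tDelta.x; }
        else if (tMax.y < tMax.z)               { cell.y += cellStep.y; tMax.y += tDelta.y; }
        else                                    { cell.z += cellStep.z; tMax.z += tDelta.z; }
    }
    hitCell = cell;
    return false;
}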
I didn't look at the algorithm itself, but I can tell the following based on the screenshots:
it appears that miniature cubes are actually being drawn instead of just a bunch of 2D squares
Yep, that's how ray-tracing works. It doesn't draw 2D squares, it traces rays. If you trace your rays against many miniature cubes, you'll see many miniature cubes. The scene is represented by many miniature cubes (voxels), hence you see them when you look up close. It would be nice to smooth the data somehow (trace against a smoothed energy function) to make them look less blocky.
I'm interested in how he performed rendering
by ray-tracing
culling
no need for culling when ray-tracing, particularly in a voxel scene. As you move along the ray you check only the voxels that the ray intersects.
occlusion
voxel-voxel occlusion is handled naturally by ray-tracing; it would return the first voxel hit, which is the closest. If you draw sprites you can use a Z-buffer generated by the ray-tracer.
and lighting
It's possible to approximate the local normal by looking at nearby cells, checking which are occupied and which are not, and then performing the lighting calculation with that normal. Alternatively, each voxel can store a normal along with its color or other material properties.
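A sketch of the first option, assuming the voxel occupancy is available as a 3D texture (the texture name and cell size are placeholders): take central differences of the occupancy around the hit point and use the gradient as the normal.

uniform sampler3D occupancyTex;   // assumed: 1.0 where a voxel is filled, 0.0 where empty
uniform vec3 cellSize;            // size of one cell in texture coordinates

vec3 estimateNormal(vec3 p) {
    float dx = texture3D(occupancyTex, p + vec3(cellSize.x, 0.0, 0.0)).r
             - texture3D(occupancyTex, p - vec3(cellSize.x, 0.0, 0.0)).r;
    float dy = texture3D(occupancyTex, p + vec3(0.0, cellSize.y, 0.0)).r
             - texture3D(occupancyTex, p - vec3(0.0, cellSize.y, 0.0)).r;
    float dz = texture3D(occupancyTex, p + vec3(0.0, 0.0, cellSize.z)).r
             - texture3D(occupancyTex, p - vec3(0.0, 0.0, cellSize.z)).r;
    return normalize(-vec3(dx, dy, dz));   // gradient points into the solid, so flip it
}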
The Processing project website has an example of implementing a 3D textured sphere with rotational capabilities. I'm trying to understand the code, but I'm having trouble comprehending many of the code blocks since I don't have a background in graphics.
Any higher-level explanation of what each block is trying to accomplish, perhaps referencing the relevant algorithm, would allow me to read up on the concepts and better understand the implementation.
After just a few minutes looking at the code, I'd say the draw() function is called by the Processing runtime system each time the image should be redrawn. This just paints a black background, then renders the globe with the renderGlobe() function.
The renderGlobe() function sets up the environment for drawing the globe: calculating the position, turning on lights, setting the texture to IMAGE, etc. Then it calls texturedSphere() to draw the globe. After that, it cleans up and adjusts the position variables for the next time through.
The initializeSphere() function calculates the vertex locations for the sphere. This is simple trigonometry.
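If it helps, the trigonometry in question boils down to the standard parametric sphere (the symbols below are mine, not the sketch's variable names): for each latitude angle phi and longitude angle theta,

x = r * cos(phi) * cos(theta)
y = r * cos(phi) * sin(theta)
z = r * sin(phi)

with phi stepped from -90° to +90° and theta from 0° to 360° at the resolution the sketch chooses. Note that Processing's y axis points down, so the example's axes may be permuted relative to this.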
The texturedSphere() function draws the sphere. First it draws the southern cap, which is really a cone, a very flat cone. Next it draws rings for each section of the sphere, and then tops it off with another cone for the northern cap.
Although I haven't gone through the Processing learning materials, the headings indicate that if you start from the beginning, and try everything in order, you'll easily understand this code.