Light Propagation Volumes interpolation - opengl

I am currently implementing LPV in my engine.
As mentioned here, I am supposed to interpolate the values from the octree to obtain smooth colors.
For example, check out this picture:
it shows no interpolation (left) and interpolation (right).
At the moment I only pick the values from the 3D texture (octree), and I obtain the pixelated result:
How could I do interpolation to have smooth colors?

I think there is no real issue here. A single LPV doesn't work over large distances.
For radiosity in large scenes, good options are:
Cascaded LPVs in combination with SSGI, as in CryEngine
Monte Carlo path tracing
Voxel cone tracing
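For reference, the smooth result in pictures like that typically comes from trilinear filtering: sampling the eight cells surrounding the lookup point and blending them by distance (this is what `GL_LINEAR` on a 3D texture does in hardware). A minimal sketch of the math, assuming a hypothetical `grid` array of cell colors:

```python
import numpy as np

def trilinear_sample(grid, x, y, z):
    """Sample a 3D grid at a fractional position by blending the
    eight surrounding cells (trilinear filtering, i.e. what
    GL_LINEAR on a 3D texture does for you in hardware)."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    c = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # weight falls off linearly with distance on each axis
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                c = c + w * grid[x0 + dx, y0 + dy, z0 + dz]
    return c
```

In practice you would not do this manually: setting the 3D texture's min/mag filter to `GL_LINEAR` gives the same blend for free.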

Related

Interpolate color between voxels

I have a 3D texture containing voxels and I am ray tracing; every time I hit a voxel I display its color. The result is nice, but you can clearly see the individual blocks separated from one another. I would like a smooth color transition from one voxel to the next, so I was thinking of doing interpolation.
My problem is that when I hit a voxel I am not sure which neighbouring voxels to extract the colors from, because I don't know whether the voxel is part of a wall parallel to some axis, part of a floor, or an isolated part of the scene. Ideally I would have to fetch, for every voxel, its 26 neighbours, but that can be quite expensive. Is there any fast, approximate solution for this?
PS: I noticed that in Minecraft there are smooth shadows that form when voxels are placed near each other; maybe that uses a technique that could be adapted for this purpose?
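Regarding the Minecraft reference: its smooth lighting is usually described as averaging, at each face vertex, only the few cells that touch that vertex, rather than gathering all 26 neighbours per voxel. A minimal sketch under that assumption (the `light` array and indexing convention are hypothetical):

```python
import numpy as np

def corner_light(light, x, y, z):
    """Minecraft-style smooth lighting sketch: the light at a face
    vertex is the average of the 2x2 block of cells sharing the
    lattice corner (x, y) in the air layer z above the face --
    4 cheap samples per vertex instead of 26 per voxel."""
    block = light[x:x + 2, y:y + 2, z]
    return float(block.mean())
```

Interpolating these per-vertex values across the face then gives smooth gradients between blocks without any per-pixel neighbour search.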

2D Water top surface profile

I am trying to create the effect of water-surface thickness with a vertex-fragment shader.
I am in a 3D game environment, but it's a scrolling view, so effectively a "2D" view.
Here is a good tutorial for creating such an effect in true 2D using a fragment shader.
But I don't think it can be used in my case.
For the moment I only have a plane where I apply refraction,
and I want to apply the water thickness effect on top of it, but I don't know how.
I am not trying to create water deformation/displacement with the vertex stage for the moment; that is not the point.
I don't know if it's possible with a simple quad; maybe I should use an object like this instead.
Here are some examples.
Thanks a lot !
[EDIT] Added the Rayman water effect as a better reference for what I am after.
I am trying to create a 2D water effect with a vertex-fragment shader on a simple quad.
Your first misconception is thinking in 2D. What you see in your right picture is the interaction of light with a 2D surface in 3D space. A simple quad will not suffice.
For water you need some surface displacement. You can either simulate this by solving a wave equation, or use a Fourier-transform-based approach; I suggest the second. Next, you render your scene "regular" for everything above the water, then "murky and refracted" for everything below the water line. Render both to textures.
Then you render the water surface. When looking at the air→water interface (i.e. from above), use a Fresnel reflection term, i.e. mix between the top reflection and the see-through image depending on the angle of incidence. For the water→air interface (i.e. from below) you do something similar, except that past the critical angle you must account for total internal reflection.
Since you do all mixing in the fragment shader, you don't need blending, and hence there is no need to sort draw operations by water depth.
Yes, rendering water is not trivial.
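The Fresnel mix factor mentioned above is commonly computed with Schlick's approximation. A small sketch of the formula (the indices of refraction for air and water are the usual textbook values, and the shader mixing shown in the comment is illustrative):

```python
import math

def schlick_fresnel(cos_theta, n1=1.0, n2=1.33):
    """Schlick's approximation of Fresnel reflectance: the fraction
    of light reflected at the interface, as a function of the cosine
    of the angle of incidence. n1/n2 default to air/water."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2       # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# In the fragment shader the mix would then be, schematically:
#   color = f * reflected_color + (1.0 - f) * refracted_color
```

At grazing angles the factor goes to 1 (pure reflection), head-on it drops to about 2% for water, which is what gives the characteristic look of a water surface.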

OpenGL lighting question?

Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make this look better (to convey depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. How can I enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors, and you have no surfaces. However, @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially like a hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them... can you adjust the density of the contours? E.g. one contour line per 5ft height difference instead of per 1ft, or whatever the units are. Depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
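The altitude-based coloring suggested above can be as simple as normalizing each contour's height into [0, 1] and mapping it through a color ramp. A minimal sketch (the blue→green→red ramp is just one illustrative choice):

```python
def altitude_color(z, z_min, z_max):
    """Map an altitude to an RGB color on a blue -> green -> red ramp,
    so contours at similar heights get similar colors and depth
    becomes readable at a glance."""
    t = (z - z_min) / (z_max - z_min)   # normalize altitude to [0, 1]
    t = min(max(t, 0.0), 1.0)
    if t < 0.5:                         # blue -> green over the lower half
        s = t * 2.0
        return (0.0, s, 1.0 - s)
    s = (t - 0.5) * 2.0                 # green -> red over the upper half
    return (s, 1.0 - s, 0.0)
```

Each line strip would then be drawn with `glColor` (or a per-vertex color attribute) taken from its contour height; substituting distance-to-viewer for `z` gives the fog-like variant instead.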
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh: set up four corner points at zero height at (or beyond) the bounds, then drop the contours into the mesh and have it triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but libraries for it do exist. The best I know of are the ones based on the Guibas and Stolfi papers, since they are pretty optimal.
To generate the normals you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them to glNormal.
In the old days you would bake the result into a display list, but the newer way is to use a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraws, but that's a bit of a black art: good for games, not so good for CAD.
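The cross-product step above can be sketched as follows; this assumes counter-clockwise winding, which determines which way the normal faces:

```python
import numpy as np

def triangle_normal(a, b, c):
    """Unit normal of triangle (a, b, c), wound counter-clockwise:
    the cross product of two edge vectors, renormalized before
    being handed to glNormal."""
    n = np.cross(np.asarray(b) - np.asarray(a),
                 np.asarray(c) - np.asarray(a))
    return n / np.linalg.norm(n)
```

Smoothing then amounts to averaging these face normals over the faces sharing each vertex and renormalizing the result.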

GPU Render onto sphere

I am trying to write optimized code that renders a 3D scene using OpenGL onto a sphere and then displays the unwrapped sphere on the screen, i.e. produces a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots around it, so as to approximate spherical quads with planar tiles of the frustum. Then I can use these as textures to apply to a distorted planar patch.
This seems like a pretty tedious approach to me. I wonder if there is a way to take this on using shaders or some other GPU-friendly method.
Thank you
S.
I can give you two solutions.
The first is to do a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This handles all the needed math in hardware for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion near the corners). In most cases it should be enough, though.
After this, you render a quad to the screen, and in a shader you map your UV coordinates to xyz vectors using straightforward spherical mapping. The hardware will compute for you which side of the cubemap to sample, and at which UV.
The second is more or less the same, but with a custom deformation and less hardware support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and make six passes. The rendering pass is the same, but this time you are on your own to choose the right texture and compute the UVs.
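The spherical mapping in the quad pass could look like the sketch below, matching the asker's convention of one axis for azimuth and one for polar angle (the axis orientation itself is an assumption):

```python
import math

def uv_to_direction(u, v):
    """Map a screen-quad UV in [0,1]^2 to a 3D lookup direction:
    u spans the azimuth (0..2*pi), v the polar angle (0..pi).
    The resulting vector is what the shader feeds to the cubemap
    lookup, which picks the face and face-local UV itself."""
    azimuth = u * 2.0 * math.pi
    polar = v * math.pi
    x = math.sin(polar) * math.cos(azimuth)
    y = math.sin(polar) * math.sin(azimuth)
    z = math.cos(polar)
    return (x, y, z)
```

In GLSL this is a few lines in the fragment shader followed by a `textureCube`-style lookup with the computed direction.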
By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and considerable complexity. Just project the planar image mathematically and be done with it.
You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.

CLOD Planet wide texturing in OpenGL

I'm finishing up on a 3D planet with ROAM (continuous level of detail).
My goal now is to have good quality render using textures.
I'm trying to find a way to use a tiling system (small, good-quality textures combined), but in a way that takes advantage of my CLOD mesh.
Current algorithms (from what I've found) using these tiling systems produce one huge texture and then directly apply it. That is not what I want: the planet is very big, and I want more power than simply increasing the texture size.
Is there any known algorithm/OpenGL feature for this kind of thing?
I don't know much about shaders, but is it possible to create one that paints an object on its own? I mean, not supplying texcoords, but computing the right color for every pixel (not vertex) of the mesh.
PS: My world is built using Perlin noise, so I can get the height at any world point (a height map with infinite resolution).
You have used 3D Perlin noise for the terrain, so why not generate the texture as well? Generally, programs like Terragen, Vistapro and the like use altitude to randomly select a range of colors from the palette, modify that color based on slope, and perhaps add detail from smaller textures based on both slope and altitude. In your case, distance could also modulate the detail. For that matter, 2D Perlin noise would work well for the detail texture.
Have you modified the heightmap at all? Something like an ocean would be hard to achieve with pure 3D Perlin noise, but flattening everything below a certain altitude and applying a nice algorithmic ocean texture (properly tuned 2D Perlin noise with transparency below a certain level) would look good.
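The altitude-then-slope scheme described above can be sketched per pixel like this; the colors, thresholds, and the idea of blending toward rock on steep slopes are all illustrative choices, not anything prescribed by the answer:

```python
def terrain_color(height, slope, sea_level=0.0, snow_line=0.8):
    """Pick a base color from altitude (water / grass / snow), then
    blend toward grey rock as the slope steepens. `slope` is assumed
    normalized to [0, 1]; all thresholds are hypothetical."""
    if height <= sea_level:
        base = (0.1, 0.3, 0.7)          # water
    elif height >= snow_line:
        base = (0.95, 0.95, 0.95)       # snow
    else:
        base = (0.2, 0.6, 0.2)          # grass
    rock = (0.5, 0.45, 0.4)
    k = min(max(slope, 0.0), 1.0)       # steeper -> rockier
    return tuple((1 - k) * b + k * r for b, r in zip(base, rock))
```

Evaluated in a fragment shader from the same noise function that built the terrain, this colors every pixel procedurally, with no giant pre-baked texture and no texcoords needed.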