Atmospheric scattering and sky geometry - OpenGL

I'm trying to implement atmospheric scattering in my graphics (game) engine based on the GPU Gems article: link. The example implementation from that article uses a skydome. My scene is different - I don't render
a whole Earth with an atmosphere that can also be seen from space, but a finite flat (rectangular)
area with objects on it, for example a race track. In fact this is the most common scenario in many games. Now
I wonder how to render a sky in such a case:
1. What kind of geometry should I use: a skydome, a skybox, or a full-screen quad? With the quad I have to
move almost all calculations to the fragment shader, but I don't know if that makes sense in terms
of quality/performance.
2. How should I place the sky geometry in the scene? My idea:
I have a hemisphere (skydome) geometry with radius = 1, centered at vec3(0, 0, 0) in object space.
Those vertices are sent to the atmospheric scattering vertex shader:
layout(location=0) in vec3 inPosition;
Next, in the vertex shader I transform the vertex this way:
v3Pos = inPosition * 0.25f + 10.0f;
The uniforms are: v3CameraPos = vec3(0.0f, 10.0f, 0.0f), fInnerRadius = 10.0f, fCameraHeight = 10.0f.
That gives the correct inner/outer radius proportion (10/10.25), right? I also send to the vertex shader a model matrix which sets the position
of the hemisphere to the position of the moving camera, vec3(myCamera.x, myCamera.y, myCamera.z):
vec4 position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);
gl_Position = position.xyww; // always fails depth test.
The hemisphere moves together with the camera (it encloses only some space around the camera, with radius = 1),
but it also always fails the depth test.
Unfortunately the sky color I get is not correct: screen1
3. What about a "sky curve"? Here is a picture which demonstrates what I mean: image1
How should I set up the sky curve?
Edit 1 - debugging: In the vertex shader I assigned to v3Pos the position of the "highest" vertex of the hemisphere:
vec3 v3Pos = vec3(0.0f, 10.25f, 0.0f);
Now the whole sky has the color of that vertex:
screen2

Here is my simplified atmospheric scattering:
https://stackoverflow.com/a/19659648/2521214
It works both outside and inside the atmosphere.
For realistic scattering you need to compute the integral along the view ray through the atmosphere,
so you need to know how thick the atmosphere is in each direction. Your terrain is a 'flat QUAD +/- some bumps', but you must know where on Earth it is located: position (x,y,z) and normal (nx,ny,nz). For the atmosphere you can use a sphere or, like me, an ellipsoid. Unless you want to implement the whole scattering process (integration through a volume, not just along a curve), what you really need is just the ray length from your camera to the end of the atmosphere.
You can also use a cube-map texture with precomputed atmosphere edge distances, because movement inside your flat area does not change them much.
You also need the Sun position.
This can be simplified to just rotating the Sun direction around the Earth's axis, one revolution per day; you can also implement the seasonal trajectory shift.
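A minimal sketch of that simplification (the fTimeOfDay uniform, the +Y axis choice and the reference direction are made-up assumptions for illustration, not part of the linked answer):

// Hedged sketch: rotate a reference sun direction around the planet's axis
// (assumed to be +Y here), one full revolution per day.
uniform float fTimeOfDay;                        // 0.0 .. 1.0 covers one day
const vec3 v3SunRef = vec3(0.0, 0.0, 1.0);       // sun direction at fTimeOfDay = 0.0
vec3 sunDirection()
{
    float a = 6.2831853 * fTimeOfDay;            // angle in radians
    mat3 rotY = mat3(cos(a), 0.0, -sin(a),       // column-major rotation about Y
                     0.0,    1.0,  0.0,
                     sin(a), 0.0,  cos(a));
    return normalize(rotY * v3SunRef);
}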
Now, just before rendering, fill the screen with the sky.
You can do it all inside shaders; one quad is enough.
In the fragment shader: get the atmosphere thickness from the cube map, or from the intersection of the pixel's direction with the atmosphere sphere/ellipsoid, and then do the integration (full or simplified). Output the color to the screen pixel.
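For the 'intersection with the atmosphere sphere' variant, a minimal ray/sphere sketch (the function and the assumption of a camera inside a sphere of radius fOuterRadius centered at the planet's origin are mine, not part of the linked answer; the ellipsoid case and the scattering integral itself are left out):

// Hedged sketch: length of the view ray from the camera to the edge of a
// spherical atmosphere of radius fOuterRadius centered at the origin.
// Assumes the camera is inside the atmosphere and returns the exit distance.
float atmosphereThickness(vec3 v3CameraPos, vec3 v3Dir, float fOuterRadius)
{
    vec3  d = normalize(v3Dir);
    float b = dot(v3CameraPos, d);
    float c = dot(v3CameraPos, v3CameraPos) - fOuterRadius * fOuterRadius;
    float disc = b * b - c;                      // half-b form of the quadratic
    if (disc < 0.0) return 0.0;                  // ray misses the sphere entirely
    return -b + sqrt(disc);                      // far intersection = exit distance
}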

It really doesn't matter what geometry you use as long as (a) it covers the screen and (b) you feed the fragment shader a reasonable value for v3Direction. In any case, in a typical on-the-earth game like one with a race track, the sky will be behind all other objects, so accurate Z is not such a big deal.
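For the full-screen-quad route, a minimal sketch of producing a per-pixel v3Direction (invViewProj is an assumed uniform equal to inverse(ProjectionMatrix * ViewMatrix), and the quad corners are given directly in NDC; this is an illustration, not the GPU Gems code):

#version 330 core
// Hedged sketch: unproject the quad corners so the interpolated v3Direction
// can drive the scattering fragment shader. invViewProj is an assumed uniform.
layout(location = 0) in vec2 inNdc;              // quad corners in [-1, 1]
uniform mat4 invViewProj;                        // inverse(Projection * View)
out vec3 v3Direction;
void main()
{
    vec4 nearPt = invViewProj * vec4(inNdc, -1.0, 1.0);  // point on the near plane
    vec4 farPt  = invViewProj * vec4(inNdc,  1.0, 1.0);  // point on the far plane
    v3Direction = farPt.xyz / farPt.w - nearPt.xyz / nearPt.w;
    gl_Position = vec4(inNdc, 1.0, 1.0);                 // z == w keeps it on the far plane
}

The fragment shader then just normalizes the interpolated v3Direction before using it.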

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks boxy, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u' = 4/pi * atan(u)
v' = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
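For completeness, a rough sketch of that suggestion: sample an equidistant cylindrical strip by direction and cap it above/below a fixed latitude (horizon_sampler, capLatitude and the cap colors are made-up placeholders, not something from your project):

#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform sampler2D horizon_sampler;               // assumed: the hand-drawn horizon strip
const float capLatitude = 1.0;                   // radians; where the strip ends
void main(void) {
    vec3 d = normalize(cube_edge);
    float lon = atan(d.z, d.x);                  // longitude in [-pi, pi]
    float lat = asin(clamp(d.y, -1.0, 1.0));     // latitude in [-pi/2, pi/2]
    if (lat > capLatitude)
        color = vec3(0.4, 0.6, 1.0);             // solid sky cap
    else if (lat < -capLatitude)
        color = vec3(0.2, 0.2, 0.2);             // solid ground cap
    else
        color = texture(horizon_sampler,
                        vec2(lon / 6.2831853 + 0.5,
                             lat / (2.0 * capLatitude) + 0.5)).rgb;
}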
Your problem is that the size of the geometry on which the environment is placed is too small. You are not looking at the environment, but at the inside of a small cube that you are sitting in. The environment map should behave as if you are always at the center of the map and the environment is infinitely far away.

I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.) Note that the depth test then has to pass at a depth of exactly 1.0, so use glDepthFunc(GL_LEQUAL).

It is quite sufficient to draw a cube and wrap the environment onto it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.

Bypass classical deferred shading light volumes

I would like to "bypass" the classical light volume approach of deferred lighting.
Usually, when you want to affect pixels within a pointlight volume, you can simply render a sphere mesh.
I would like to try another way to do that. The idea is to render a cube which encompasses the sphere; the cube is circumscribed about the sphere, so each face's center is a point on the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere on your screen) if you had rendered the sphere instead.
So the main problem is to know which fragments will have to be discarded.
How could I do that:
Into the fragment shader, I have my "camera" world coordinates, my fragment world coordinates, my sphere world center, and my sphere radius.
Thus I have the straight line whose direction vector is defined by the camera and fragment world-space points.
And I can build my sphere equation.
Finally I can know if the line intersect the sphere.
Is it correct to say that, from my point of view, if the line intersects the sphere, then this fragment must be considered a highlighted fragment (a fragment that would have been rendered if I had rendered a sphere instead)?
Thus the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere.
So what?
The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.
So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.
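For illustration, a hedged sketch of an attenuation term that reaches exactly zero at the light's radius (the quadratic falloff is just one reasonable choice, not the only one):

// Hedged sketch: attenuation that hits 0.0 at the light's maximum reach, so
// fragments outside the sphere contribute nothing no matter which proxy
// geometry (sphere, cube or full-screen quad) generated them.
float attenuation(float dist, float lightRadius)
{
    float x = clamp(1.0 - dist / lightRadius, 0.0, 1.0);
    return x * x;                                // smooth falloff, 0.0 at and beyond the radius
}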
However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:
Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader. It's not like your VS is doing a lot of work or anything.
Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:
float A = dot(cam_position, cam_position);
float B = -2.0 * dot(cam_position, cam_sphere_center);
float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
float Discriminant = (B * B) - (4.0 * A * C);
If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.
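Put together, the top of the light-volume fragment shader could look roughly like this (variable names are illustrative; the g-buffer reads and the actual lighting math are omitted):

#version 330 core
// Hedged sketch: discard before any g-buffer reads when the view ray through
// this fragment cannot hit the light sphere. cam_position is the interpolated
// camera-space position of the cube's surface, passed from the vertex shader.
in vec3 cam_position;
uniform vec3 cam_sphere_center;                  // light position in camera space
uniform float radius;                            // light's maximum reach
out vec4 fragColor;
void main()
{
    float A = dot(cam_position, cam_position);
    float B = -2.0 * dot(cam_position, cam_sphere_center);
    float C = dot(cam_sphere_center, cam_sphere_center) - radius * radius;
    if (B * B - 4.0 * A * C < 0.0)
        discard;                                 // the ray misses the light volume
    // ... reconstruct the position from the g-buffer and do the usual lighting,
    // then write the result to fragColor ...
    fragColor = vec4(0.0);
}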

How to reflect a chrome sphere in a scene with a procedural texture

My scene background is a procedural texture that draws an ocean, or a lava floor, or some such other background. It extends completely underneath as well, as if you were inside a cubemap. It would be easier if I could assume the view was the same in all directions, but if there's a sun, for example, you cannot.
Now if I wanted to put a chrome sphere in the middle, what does it reflect? Does the sphere see the same thing as the main camera does?
Assume it's expensive to render the background, and I do not want to do it multiple times per frame. I can save a copy to use in the reflection if that helps.
Can someone suggest a general approach? Here's an example of the procedural texture I mean (this is all in the shader, no geometry other than a quad):
https://www.shadertoy.com/view/XtS3DD
To answer your first question: In the real world, the reflection you see in the sphere depends on both the position of the camera, and the position of the sphere itself. However, taking both positions into account is prohibitively expensive for a moving sphere when using cube mapping (the most common approach), since you have to re-render all six faces of the cubemap with each frame. Thus, most games "fake" reality by using a cubemap that is centered about the origin ((0, 0, 0) in world-space) and only rendering static objects (trees, etc.) into the cube map.
Since your background is entirely procedural, you can skip creating cubemap textures. If you can define your procedural background texture as a function of direction (not position!) from the origin, then you can use the normal vector of each point on the sphere, plus the sphere's position, plus the camera position, to sample your background texture.
Here's the formula for it, using some glsl pseudocode:
vec3 N = normal vector for point on sphere
vec3 V = position of camera
vec3 S = position of point on sphere
vec3 ray = normalize(reflect(S - V, N));
// Reflect the vector pointing from the camera to the point on the sphere
// about the sphere's normal vector at that point.
vec4 color = proceduralBackgroundFunc(ray);
Above, color is the final output of the shader for point S on the sphere's surface.
Alternatively, you can prerender the background into a cube texture, and then sample from it like so (changing only the last line of code from above):
vec4 color = texture(cubeSample,ray);
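For reference, a minimal fragment-shader sketch tying the pieces together (worldNormal, worldPos, cameraPos and the placeholder body of proceduralBackgroundFunc are assumptions for illustration, not your actual Shadertoy code):

#version 330 core
// Hedged sketch: per-fragment reflection lookup for the chrome sphere.
in vec3 worldNormal;                             // from the sphere's vertex shader
in vec3 worldPos;                                // world-space position of the point
uniform vec3 cameraPos;                          // world-space camera position
out vec4 fragColor;
vec4 proceduralBackgroundFunc(vec3 dir)          // placeholder; replace with your ocean/lava code
{
    return vec4(mix(vec3(0.1, 0.3, 0.6), vec3(0.8, 0.9, 1.0), dir.y * 0.5 + 0.5), 1.0);
}
void main()
{
    vec3 N = normalize(worldNormal);
    vec3 I = normalize(worldPos - cameraPos);    // from the camera to the surface point
    vec3 ray = reflect(I, N);                    // mirror direction to look up
    fragColor = proceduralBackgroundFunc(ray);
}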

3D Volume rendering and multiple point of view occlusion

I have W x H x D volumetric data that is zero everywhere except for little spherical volumes containing 1.
I have written the shader to extract the "intersection" of that 3D volume with a generic object made of vertices.
Vertex shader
varying vec3 textureCoordinates;
uniform float objectSize;
uniform vec3 objectTranslation;
void main()
{
    vec4 v = gl_Vertex;
    textureCoordinates = vec3( ((v.xz-objectTranslation.xz)/objectSize+1.0)*0.5, ((v.y-objectTranslation.y)/objectSize+1.0)*0.5);
    gl_Position = gl_ModelViewProjectionMatrix*v;
}
Fragment shader
varying vec3 textureCoordinates;
uniform sampler3D volumeSampler;
void main()
{
    vec4 uniformColor = vec4(1.0,1.0,0.0,1.0); //it's white
    if ( textureCoordinates.x <=0.0 || textureCoordinates.x >= 1.0 || textureCoordinates.z <= 0.0 || textureCoordinates.z >= 1.0)
        gl_FragColor = vec4(0.0,0.0,0.0,1.0); //Can be uniformColor to color again the thing
    else
        gl_FragColor = uniformColor*texture3D(volumeSampler, textureCoordinates);
}
In the OpenGL program, I'm looking at the centered object, with those almost-spherical white patches on it, from the (0,100,0) eye position, but I want that for another viewer at (0,0,0) the spheres that lie on the same line of sight are correctly occluded, so that only the parts that I underlined in red in the picture are emitted.
Is this an application of raycasting or similar?
It seems what you want is occlusion culling. You have two main options to implement it:
Using GPU occlusion queries
This is essentially about asking the hardware whether any fragments of an object would be drawn; if not, you can cull the object.
Occlusion queries count the number of fragments (or samples) that pass the depth test, which is useful to determine visibility of objects.
This algorithm is more complex than can be explained here; here is an excellent Nvidia article on the topic.
Using CPU ray casting
This simply checks each object (or possibly its bounding volume); if a ray hits the object, then it possibly hides other objects behind it. The objects need to be spatially sorted using an octree or a BSP tree, so you don't end up checking every object, and you only check objects near the camera.
For more on culling techniques check my answer here.
Is this an application of raycasting or similar?
This is in essence the ray-tracing shadow algorithm: once you've hit a (visible) surface with your view ray, you take that point as the origin for a trace toward the other point (a light source or whatever), and if you can reach that point without "bumping" into something else, you use that information as further input into the rendering calculations.
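In your setup that amounts to marching the 3D texture from the shaded point toward the second viewpoint; a rough sketch (the viewer position mapped into texture space, the step count and the 0.5 threshold are illustrative assumptions):

// Hedged sketch: treat the second viewpoint (the (0,0,0) eye, mapped into the
// volume's [0,1]^3 texture space) as the target and march the 3D texture from
// the shaded point toward it; hitting another filled sphere on the way means
// this fragment is occluded for that viewer.
uniform sampler3D volumeSampler;
uniform vec3 viewerTexCoord;                     // second viewpoint in texture space
float visibilityToViewer(vec3 fromTexCoord)
{
    const int STEPS = 64;
    vec3 stepVec = (viewerTexCoord - fromTexCoord) / float(STEPS);
    vec3 p = fromTexCoord + stepVec;             // skip the starting voxel itself
    for (int i = 1; i < STEPS; ++i) {
        if (texture3D(volumeSampler, p).r > 0.5)
            return 0.0;                          // blocked: occluded from that view
        p += stepVec;
    }
    return 1.0;                                  // clear line of sight
}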

OpenGL - Create a border over a textured polygon

I'm working with cocos2d-x 2.0.4. I illustrate what I am trying to do through these two images.
What I want to do is to create a blurred border, or a border with a gradient on it, programmatically. I have two ideas for doing that, but I'm not sure they are the correct way. The first solution would be to triangulate the polygon containing only the blurred color (a concave polygon with a hole in this case) and render color on it with a gradient: vertices on the outside of the polygon would be full alpha and vertices on the inside zero alpha. The interpolation would then do the job of the gradient.
The second solution would be to do it inside the shader itself. All I need is to calculate the distance from a pixel to the closest edge of the polygon. Then, under a certain threshold, I give the pixel a white color with an alpha value depending on that distance (the shorter the distance, the higher the alpha).
Anyway, I am very new to OpenGL and I am afraid that the second solution will end up with a big processing time, as I have to calculate the distance for every pixel of the polygon. What do you think about this? Any ideas that tend to confirm my guesses, or am I completely wrong about this?
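For what it's worth, a rough sketch of what that second idea could look like in a fragment shader (edges passed as uniforms, names and the MAX_EDGES limit are made up; this only scales to polygons with few vertices):

// Hedged sketch: fade alpha by the distance from the fragment to the nearest
// polygon edge, so pixels close to an edge get the border color.
#define MAX_EDGES 16
uniform vec2  edgeA[MAX_EDGES];                  // edge start points (same space as fragPos)
uniform vec2  edgeB[MAX_EDGES];                  // edge end points
uniform int   edgeCount;
uniform float borderWidth;                       // distance below which the border shows
varying vec2  fragPos;                           // fragment position from the vertex shader
float distToSegment(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
    return length(p - (a + t * ab));
}
void main()
{
    float d = 1.0e6;
    for (int i = 0; i < MAX_EDGES; ++i) {
        if (i >= edgeCount) break;
        d = min(d, distToSegment(fragPos, edgeA[i], edgeB[i]));
    }
    float alpha = 1.0 - smoothstep(0.0, borderWidth, d);   // closer to an edge -> higher alpha
    gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);             // white border fading inward
}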
EDIT:
The solution I finally chose was to use the bisector of every angle in the polygon (easy to calculate with 3 consecutive vertices) and take a point on that bisector that becomes a vertex of the inner polygon. Then I take alternately an outer polygon vertex and an inner polygon vertex to build an array of vertices that fits GL_TRIANGLE_STRIP. I put the image below to make it easier to understand.
Will a rim lighting shader do what you want? Link to an example
Example code for a GLSL rim lighting shader:
const float rimStart = 0.5f;
const float rimEnd = 1.0f;
const float rimMultiplier = 1.0f;
vec3 rimColor = vec3(1.0f, 1.0f, 1.0f);
float NormalToCam = 1.0 - dot(normalize(outNormal), normalize(camPos - vertexWorldPos.xyz));
float rim = smoothstep(rimStart, rimEnd, NormalToCam) * rimMultiplier;
outColor.rgb += (rimColor * rim);
In order to make this look right from any viewpoint in a 3D scene you will need to perform some silhouetting. This essentially involves using a geometry shader to determine what edges of an object have an adjacent face that is facing the screen and an adjacent face that is not facing the screen. I believe this can be achieved by testing if the dot product between one adjacent face normal and your camera direction is <= 0 while the dot product of the other adjacent face normal and your camera direction is > 0.
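The edge test itself can be as small as this (a hedged helper, assuming the geometry shader has both adjacent face normals and the direction toward the camera available for each edge):

// Hedged sketch: an edge belongs to the silhouette when one adjacent face
// points toward the camera and the other points away from it.
bool isSilhouetteEdge(vec3 faceNormalA, vec3 faceNormalB, vec3 toCamera)
{
    float a = dot(faceNormalA, toCamera);
    float b = dot(faceNormalB, toCamera);
    return (a > 0.0) != (b > 0.0);               // opposite signs -> silhouette edge
}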
Once you know all the edges that outline your polygon at a certain viewing angle, you can tessellate the polygon defined by that border into triangle strips (still in the geometry shader). Then you pass a color per vertex to your fragment shader, where all vertices lying on the border get the border color at full alpha and non-border vertices get a color at zero alpha. The fragment shader will interpolate from the border color to the zero-alpha center color at intermediate fragments, giving you the gradient you want. Your total approach should be something like this:
Draw object with non-border shader program as the background color.
Enable alpha blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
Draw object with silhouetting program determining the edges that make up the borders with the border color, and drawing non-border points as zero alpha.
glDisable(GL_BLEND);