How to implement a ground fog GLSL shader - opengl

I'm trying to implement a ground fog shader for my terrain rendering engine.
The technique is described in this article: http://www.iquilezles.org/www/articles/fog/fog.htm
The idea is to consider the ray going from the camera to the fragment and integrate the fog density function along this ray.
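In symbols (my notation, not the article's): with a fog density d(y) = a * exp(-b * y), a camera at height oz, a unit view ray whose vertical component is rz, and a fragment at distance t, the integrated density along the ray is

integral over s from 0 to t of a * exp(-b * (oz + rz * s)) ds  =  a * exp(-b * oz) * (1 - exp(-b * rz * t)) / (b * rz)

which is where the formula in my shader below comes from.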
Here's my shader code:
#version 330 core

in vec2 UV;
in vec3 posw;

out vec3 color;

uniform sampler2D tex;
uniform vec3 ambientLightColor;
uniform vec3 camPos;

const vec3 FogBaseColor = vec3(1., 1., 1.);

void main()
{
    vec3 light = ambientLightColor;
    vec3 TexBaseColor = texture(tex, UV).rgb;

    //***************************FOG********************************************
    vec3 camFrag = posw - camPos;
    float distance = length(camFrag);
    float a = 0.02;
    float b = 0.01;
    float fogAmount = a * exp(-camPos.z*b) * ( 1.0-exp( -distance*camFrag.z*b ) ) / (b*camFrag.z);
    color = mix( light*TexBaseColor, light*FogBaseColor, fogAmount );
}
The first thing is that I don't understand how to choose a and b, and what their physical roles are in the fog density function.
Then, the result is not what I expect…
I do get ground fog, but the transition of fogAmount from 0 to 1 is always centered at the camera altitude. I've tried a lot of different a and b values, but whenever I don't have a transition at the camera altitude, the terrain is either fully fogged or not fogged at all.
I checked the data I use and everything's correct:
camPos.z is the altitude of my camera
camFrag.z is the vertical component of the vector going from the camera to the fragment
I can't understand which part of the equation causes this.
Any idea about this?
EDIT: Here's the effect I'm looking for:
[two reference images of the desired ground fog effect]

This is a pretty standard application of atmospheric scattering.
It is usually discussed under the umbrella of volumetric lighting, which involves the transmittance of light through different media (e.g. smoke, air, water). In modern shader-based graphics this can be achieved in real time using ray-marching or, if there is only one uniform participating medium (as there is in this case -- the fog only applies to air), simplified to an integration over some distance.
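Purely to illustrate what ray-marching through the medium means, here is a generic sketch (illustrative code only, not the article's method, assuming a z-up world and an exponential height density a * exp(-b * z)):

// Accumulate optical depth by stepping along the view ray and sampling the density.
float fogOpticalDepth(vec3 rayOri, vec3 rayDir, float dist, float a, float b)
{
    const int STEPS = 32;
    float stepLen = dist / float(STEPS);
    float depth = 0.0;
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = rayOri + rayDir * (float(i) + 0.5) * stepLen;   // midpoint of each step
        depth += a * exp(-b * p.z) * stepLen;                    // density falls off with altitude
    }
    return depth;   // a fog factor would then be 1.0 - exp(-depth)
}

The formula discussed below is simply the closed-form solution of this integral for that particular density function.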
Ordinarily you would ray-march through the participating medium in order to determine the properties of light transfer, but this application is simplified by assuming a medium with well-defined distribution characteristics, and that is where the coefficients you are confused about come from. The amount of fog varies exponentially with distance, which is what b controls; it likewise varies with altitude (not shown in the equation directly below).
   
fogAmount = 1.0 - exp(-distance * b)
(source: iquilezles.org)
What this article introduces to the discussion, however, is a pair of poorly named coefficients, a and b. These control in-scattering and extinction. The author repeatedly refers to the extinction coefficient as "extintion", which really makes no sense to me - hopefully this is just because English is not the author's native language. Extinction can be thought of as how quickly light is absorbed, and it describes the opacity of a medium. If you want a more theoretical basis for all of this, have a look at the following paper.
With this in mind, take another look at the code from your article:
vec3 applyFog( in vec3 rgb,       // original color of the pixel
               in float distance, // camera to point distance
               in vec3 rayOri,    // camera position
               in vec3 rayDir )   // camera to point vector
{
    float fogAmount = c * exp(-rayOri.y*b) * (1.0 - exp(-distance*rayDir.y*b)) / rayDir.y;
    vec3 fogColor = vec3(0.5, 0.6, 0.7);
    return mix( rgb, fogColor, fogAmount );
}
You can see that c in this code is actually a from the original equation.
More importantly, there is an additional expression here:
exp(-rayOri.y * b)
This additional expression controls the density with respect to altitude. Judging by your implementation of the shader, you have not implemented that second expression correctly: camFrag.z is very likely not altitude, but rather depth. Furthermore, I do not understand why you are multiplying it by the b coefficient.
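For what it's worth, here is a minimal sketch (my code, not the article's or yours) of what the fog block in your fragment shader could look like if it mirrored the article's formula with your z-up convention, assuming posw.z really is world-space altitude:

vec3  camFrag = posw - camPos;
float dist    = length(camFrag);
vec3  rayDir  = camFrag / dist;     // normalized camera-to-fragment direction
float c = 0.02;                     // overall fog amount (in-scattering)
float b = 0.01;                     // extinction: how fast the density falls off with altitude
float fogAmount = c * exp(-camPos.z * b)
                * (1.0 - exp(-dist * rayDir.z * b)) / rayDir.z;
fogAmount = clamp(fogAmount, 0.0, 1.0);
color = mix(light * TexBaseColor, light * FogBaseColor, fogAmount);

Note that rayDir.z goes to zero for horizontal view rays, so a real implementation would have to guard that division.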

I found a method that gives the result I was looking for.
The method is described in this article of Eric Lengyel: http://www.terathon.com/lengyel/Lengyel-UnifiedFog.pdf
It explains how to create a fog layer with density and altitude parameters. You can fly through it, and it progressively blends all the geometry above the fog.

Related

Is it possible to use a shader to map 3d coordinates with Mercator-like projection?

The background:
I am writing some terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: I should just compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;

out vec3 normal;
out vec3 colour2;

vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}

void main(void)
{
    normal = vec3(0, 0, 1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below):
Saw-tooth pattern where a triangle crosses the cut meridian
The polar region is not well defined
3D case (typical shader): [image]
2D case (above shader): [image]
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better - it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!

Shadertoy - fragCoord vs iResolution vs fragColor

I'm fairly new to Shadertoy and GLSL in general. I have successfully duplicated numerous Shadertoy shaders into Blender without actually knowing how it all works. I have looked for tutorials but I'm more of a visual learner.
If someone could explain, or even better provide some images, describing the difference between fragCoord, iResolution, and fragColor, that would be great!
I'm mainly interested in the numbers. Because I use Blender, I'm used to the canvas going from 0 to 1, or from -1 to 1.
This one in particular has me a bit confused.
vec2 u = (fragCoord - iResolution.xy * .5) / iResolution.y * 8.;
I can't reproduce the remaining code in Blender without knowing the coordinate system.
Any help would be greatly appreciated!
It is normal that you cannot reproduce this code in Blender without knowing the coordinate system.
The Shadertoy documentation states:
Image shaders implement the mainImage() function to generate procedural images by calculating a color for each pixel in the image. This function is invoked once in each pixel and the host application must provide the appropriate input data and retrieve the output color to assign it to the corresponding pixel on the screen. The signature of this function is:
void mainImage( out vec4 fragColor, in vec2 fragCoord );
where fragCoord contains the coordinates of the pixel for which the shader must calculate a color. These coordinates are counted in pixels with values from 0.5 to resolution-0.5 over the entire rendering surface and the resolution of this surface is transmitted to the shader via the uniform iResolution variable.
Let me explain.
The iResolution variable is a uniform vec3 which contains the dimensions of the window and is sent to the shader with some OpenGL code.
The fragCoord variable is a built-in variable that contains the coordinates of the pixel where the shader is being applied.
More concretely, for a 640×360 viewport:
fragCoord: a vec2 whose values range from 0 to 640 on the X axis and from 0 to 360 on the Y axis
iResolution: holds the window size, so its X value is 640 and its Y value is 360 (it is actually a vec3; see the swizzling answer below)
A quick note on how vectors work in OpenGL:
If you also have a hard time understanding how vectors work in OpenGL, I highly recommend reading the answer by Homan below; it is a very useful introduction to GLSL swizzling.
This image was calculated with the following code:
// Normalized pixel coordinates (between 0 and 1)
vec2 uv = fragCoord/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (0,0) in the lower-left to (1,1) in the upper-right. This is the default lower-left window space set by OpenGL.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 and 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position
vec3 col = vec3(uv.x,uv.y,0);
// Output to screen
fragColor = vec4(col,1.0);
The output ranges from (-0.5,-0.5) in the lower-left to (0.5,0.5) in the upper-right, because in the first step we subtract half of the window size (iResolution.xy * 0.5) from each pixel coordinate (fragCoord). You can see the effect in the way the red and green values don't kick into visibility until much later.
You might also want to normalize only the y axis by changing the first step to
vec2 uv = (fragCoord - iResolution.xy * 0.5) / iResolution.y;
Depending on your purpose, the image can look distorted if you normalize both axes, so normalizing by the height only is a possible strategy. This is essentially what the line from your question does, just scaled by 8 afterwards.
This image was calculated with the following code:
// Normalized pixel coordinates (between -0.5 to 0.5)
vec2 uv = (fragCoord - iResolution.xy * 0.5)/iResolution.xy;
// Set R and G values based on position using ceil() function
// The ceil() function returns the smallest integer that is greater than or equal to the uv value
vec3 col = vec3(ceil(uv.x),ceil(uv.y),0);
// Output to screen
fragColor = vec4(col,1.0);
The ceil() function allows us to see that the center of the image is at (0, 0).
As for the second part of the Shadertoy documentation:
The output color is returned in fragColor as a four-component vector, the last component being ignored by the client. The result is retrieved in an "out" variable in anticipation of the future addition of several rendering targets.
Really, all this means is that fragColor contains four values that are passed to the next stage in the rendering pipeline. You can find more about in and out variables here.
The values in fragColor determine the color of the pixel where the shader is being applied.
If you want to learn more about shaders these are some good starting places:
the book of shader - uniform
learn OpenGL - shader
Not to take away from the accepted answer, which is very thorough, but in case anyone else was confused about the types: iResolution is a 'uniform highp 3-component vector of float'... so actually a vec3? That's why we see in examples that fragCoord (actually a vec2) is divided by iResolution.xy (the .xy gives us a vec2). But what is this '.xy' thing? Is it a method? An attribute or property? With some random googling I found out that the '.xy' tacked on at the end is called 'swizzling':
https://www.khronos.org/opengl/wiki/Data_Type_(GLSL)#Vectors
(for convenience the gist of it is here below)
Swizzling
You can access the components of vectors using the following syntax:
vec4 someVec;
someVec.x + someVec.y;
This is called swizzling. You can use x, y, z, or w, referring to the
first, second, third, and fourth components, respectively.
The reason it has that name "swizzling" is because the following syntax is entirely valid:
vec2 someVec;
vec4 otherVec = someVec.xyxx;
vec3 thirdVec = otherVec.zyy;
You can use any combination of up to 4 of the letters to create a vector (of the same basic type) of that length. So otherVec.zyy is a vec3, which is how we can initialize a vec3 value with it. Any combination of up to 4 letters is acceptable, so long as the source vector actually has those components. Attempting to access the 'w' component of a vec3 for example is a compile-time error.
Swizzling also works on l-values (left values?):
vec4 someVec;
someVec.wzyx = vec4(1.0, 2.0, 3.0, 4.0); // Reverses the order.
someVec.zx = vec2(3.0, 5.0); // Sets the 3rd component of someVec to 3.0 and the 1st component to 5.0
However, when you use a swizzle as a way of setting component values, you cannot use the same swizzle component twice. So someVec.xx = vec2(4.0, 4.0); is not allowed.
Additionally, there are 3 sets of swizzle masks. You can use xyzw, rgba (for colors), or stpq (for texture coordinates). These three sets have no actual difference; they're just syntactic sugar. You cannot combine names from different sets in a single swizzle operation. So ".xrs" is not a valid swizzle mask.
In OpenGL 4.2 or ARB_shading_language_420pack, scalars can be swizzled as well. They obviously only have one source component, but it is legal to do this:
float aFloat;
vec4 someVec = aFloat.xxxx;
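To tie that back to the original question, a tiny sketch (my own, with hypothetical variable names) of what the swizzle buys you:

vec2 res = iResolution.xy;    // iResolution is a vec3; .xy extracts (width, height) as a vec2
vec2 uv  = fragCoord / res;   // normalized 0..1 coordinates, as in the accepted answer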
// -1 to 1
vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.xy;
vec3 col = vec3(uv.x, uv.y, 0.0);
fragColor = vec4(col, 1.0);

Atmospheric light scattering implementation

I'm trying to implement atmospheric scattering in OpenGL. I'm using this "paper" as a tutorial:
http://developer.amd.com/wordpress/media/2012/10/GDC_02_HoffmanPreetham.pdf
However, I have some difficulties understanding certain points and figuring out some constants.
Basically I have to implement these formulas:
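For reference, as far as I understand the slides, these are the extinction and in-scattering terms that my shader below tries to implement (my transcription, so treat it with care):

Fex(s) = exp( -(βR + βM) * s )
Lin(s, θ) = ( βR(θ) + βM(θ) ) / ( βR + βM ) * Esun * ( 1 - exp( -(βR + βM) * s ) )
L(s, θ) = L0 * Fex(s) + Lin(s, θ)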
Firstly, I don't know if s is the distance from the eye to the dome, or the distance from the eye to the light source (here the sun).
The same goes for the angle theta: I can't figure out if it's the angle from the ground to the sun, or from the ground to the dome position the eye is looking at.
Secondly, in this slide:
It tells me the blue color of the sky will appear. I know it's because of Rayleigh scattering, but there is something I can't understand. All the calculations in the formulas above give me a scalar: so how will the white light of the sun, which is basically vec3(1,1,1), become blue when I multiply it by scalars? It will only stay in gray scale, because the result will be, for example, vec3(0.8,0.8,0.8). I mean, if different sky colors are supposed to appear, I must multiply the sun light by a vec3 to change the RGB values differently.
Now I encountered some difficulties to implement my shader.
Here is the code for the sky shader:
#version 330

in vec3 vpoint;
in vec2 vtexcoord;

out vec2 uv;
out vec3 atmos;

uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
mat4 MVP = P*V*M;

//uniform vec3 lpos;
vec3 lpos = vec3(100, 0, 0);
uniform vec3 cpos;

vec3 br = vec3(5.5e-6, 13.0e-6, 22.4e-6);
vec3 bm = vec3(21e-6);
float g = -0.75f;
vec3 Esun = vec3(2000, 2000, 2000);

vec3 Br(float theta) {
    return 3/(16*3.14) * br * (1 + cos(theta)*cos(theta));
}

vec3 Bm(float theta) {
    return 1/(4*3.14) * bm * ((1 - g)*(1 - g)) / (pow(1 + g*g - 2*g*cos(theta), 3/2));
}

vec3 atmospheric(float theta, float s) {
    return (Br(theta)*Bm(theta))/(br+bm) * Esun * (1 - exp(-(br+bm)*s));
}

void main() {
    gl_Position = MVP * vec4(vpoint, 1.0);
    uv = vtexcoord;
    vec3 domePos = vec3(M * vec4(vpoint, 1.0));
    vec3 ldir = lpos - domePos;
    float s = length(domePos - cpos);
    float theta = acos(dot(normalize(ldir - domePos), normalize(domePos - cpos)*vec3(1, 1, 0)));
    atmos = atmospheric(theta, s) * 1000000 * 5;
}
I don't get what I expected; here is what I get:
I only have the blue, and no reddish sunset, yet the sun is low, and according to the different tutorials I have seen I should see some reddish color appear when the sun goes low.
Warning: I'm not an expert in this field, so take this with a grain of salt.
This pretty much says it all.
s is the distance between the vertex/pixel and the camera.
θ is the angle between the sun and the line of sight.
In order to compute θ you need to know the "yellow line" and the "line of sight".
The latter is ordinary shader math; the former is just a way to express how high on the sky the sun is. You can model it as a ray from the sun to a point on the ground.
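As a minimal sketch (my code, reusing the variable names from the shader above, so treat the setup as an assumption), s and θ could be computed like this:

vec3 domePos = vec3(M * vec4(vpoint, 1.0));                    // world-space position on the sky dome
float s      = length(domePos - cpos);                         // distance from the eye to that point
vec3 viewDir = normalize(domePos - cpos);                      // line of sight: eye -> dome point
vec3 sunDir  = normalize(lpos - cpos);                         // direction from the eye toward the sun
float theta  = acos(clamp(dot(viewDir, sunDir), -1.0, 1.0));   // angle between sun and line of sight

For a sun that is effectively at infinity, sunDir would simply be a constant unit direction instead of being derived from lpos.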
All the formulas above give you vectors.
L0 is a vector.
Esun is also a vector.
The slides basically say that physical concepts like radiance and irradiance (Esun) are continuous over the spectrum, and that one should really use a spectral power distribution to describe lights and colors.
A faster approach, however, is to do the math at only three points of the spectrum: the wavelengths for R, G and B.
In practice this means that Esun is a vector describing the irradiance of the sun at the three RGB wavelengths.
The blue of the sky comes from the parameter βR which depends on θ which depends on the "line of sight" which depends on the altitude of the fragment of the sky being coloured.

openGL: Losing details from a loaded obj file

I'm loading a .obj file into my program (without a .mtl file).
In the vertex shader, I have this:
#version 330
layout(location = 0) in vec3 in_position;
layout(location = 1) in vec3 in_color;
and my vertex structure looks like this:
struct VertexFormat {
    glm::vec3 position;
    glm::vec3 color;
    glm::vec3 normal;
    glm::vec2 texcoord;

    VertexFormat()
        : position(0.0f), color(0.0f), normal(0.0f), texcoord(0.0f) {}

    VertexFormat(glm::vec3 _position, glm::vec3 _normal, glm::vec2 _texcoord, glm::vec3 _color) {
        position = _position;
        normal = _normal;
        texcoord = _texcoord;
        // color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));
        color = normal;
    }
};
Because I don't have a .mtl file, the color attribute depends on the other vertex attributes.
If I let color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));, the object loses some of its details (a human face, for example, ends up looking like just an ellipsoid).
This does not happen when I let color = normal;.
I want the color to not depend only on the normal attribute because then every object is colored as a rainbow.
Any idea why this happens, and how I can make it work?
EDIT:
This is an object with color = normal:
And this is with color = glm::vec3(texcoord, cos(texcoord.x + texcoord.y));:
The only thing changed between the two pictures is that I commented out color = normal; and uncommented the other line.
In your comment you wrote
I would prefer to not use lighting at all. I don't understand why without lighting the first works (shows the details), while the other one doesn't
Perceived details depend on the color contrast in the final picture. The stronger the contrast, the stronger the detail (there's a strong relation to so called spatial frequencies as well).
Anyway, creases, edges, bulges, etc. in the mesh create a strong, position-dependent local variation of the surface normal, which is what you see. In mathematical terms you could write this as
|| ∂/∂r n(r) ||
where n denotes the normal and r denotes the position; this quantity becomes very large at creases and the like.
The variation of a color c(r) that depends only on the position, however, would be
|| ∂/∂r c(r) ||
But since c(r) depends only on r and not on any local surface features, it varies only smoothly, so the local spatial variation in color is smooth as well, i.e. it has no strong features.
Essentially it means that you can make details visible only based on derivatives of surface features such as the normals.
The easiest way to do this is to use illumination. But you can use other methods as well: for example, you can calculate the local variation of the normals (giving you the curvature of the surface) and make more strongly curved areas brighter, or you can perform post-processing on the screen-space geometry, applying something like a first or second order gradient filter.
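To illustrate the illumination route, here is a minimal fragment-shader sketch (my own code with assumed inputs, not something from your project): a single fixed light direction is enough to make creases and bulges visible through the normals.

#version 330
in vec3 normal;    // interpolated surface normal (assumed varying from the vertex shader)
in vec3 vcolor;    // the per-vertex color you already pass through
out vec4 fragColor;

// A fixed, hypothetical light direction; a light placed at the camera would work too.
const vec3 lightDir = normalize(vec3(0.3, 0.4, 1.0));

void main()
{
    // Simple Lambert term: detail appears because the normal varies at creases and bulges.
    float diffuse = max(dot(normalize(normal), lightDir), 0.0);
    fragColor = vec4(vcolor * diffuse, 1.0);
}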
But you will not get around applying some math to it; there's no such thing as a free meal. Also, don't expect people to write code for you without being clear about what you actually want.

Blending lightmap with diffuse texture

I'm using C++, OpenGL 4.0 and the GLSL shading language.
I'm wondering how to correctly blend diffuse texture with lightmap texture.
Let's assume that we have a room. Every object has a diffuse texture and a lightmap. On every forum like gamedev.net or Stack Overflow, people say that those textures should be multiplied, and in most cases it gives good results. But sometimes an object is very close to the light source (for example a white bulb). For such close objects the light source generates a white lightmap, and when we multiply the diffuse texture with a white lightmap, we just get the original diffuse texture color.
But if the light source is close to an object, the color of the light should be dominant.
It means that if a strong white light is close to a red wall, then some part of that wall should be white, not red!
I think I need something more than just one lightmap. The lightmap doesn't carry information about light intensity, so the brightest result possible is just the maximum diffuse color.
Maybe I should have two textures - a shadowmap and a lightmap? Then the equation would look like this:
vec3 color = shadowmapColor * diffuseTextureColor + lightmapColor;
Is it good approach?
Generally speaking, if you're still using lightmaps, you are probably also not using HDR rendering. And without that, what you want is not particularly reasonable. Unless your light map provides the light intensity as an HDR floating-point value (perhaps in a GL_R11F_G11F_B10F or GL_RGBA16F format), this is not going to work very well.
And of course, you'll have to do the usual stuff that you do with HDR, such as tone mapping and so forth.
Lastly, your additive equation makes no sense. If the light map color represents the diffuse interaction between the light and the surface, then simply adding the light map color doesn't mean anything. The standard diffuse lighting equation is C * (dot(N, L) * I * D), where I is the light intensity, D is the distance attenuation factor, and C is the diffuse color. The value from the lightmap is presumably the parenthesized quantity. So adding it doesn't make sense.
It still needs to be multiplied with the surface's diffuse color. Any over-brightening will be due to the effective intensity of the light as a function of D.
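For concreteness, here is a minimal fragment-shader sketch of that multiplication (my code, with hypothetical sampler and varying names, assuming an HDR-capable pipeline):

uniform sampler2D DiffuseTex;   // hypothetical names
uniform sampler2D LightMap;     // ideally an HDR format such as GL_RGBA16F
in vec2 uv;
in vec2 lightmapUV;
out vec4 FragColor;

void main()
{
    vec3 albedo   = texture(DiffuseTex, uv).rgb;         // C: the surface's diffuse color
    vec3 lighting = texture(LightMap, lightmapUV).rgb;   // the baked dot(N, L) * I * D term
    vec3 hdr      = albedo * lighting;                   // may exceed 1.0 near a bright light
    FragColor     = vec4(hdr, 1.0);                      // tone mapping would bring this back into range
}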
What you need is the distance (or to save some sqrt-ing, the squared distance) of the light source to the fragment being illuminated. Then you can, in the simplest case, interpolate linearly between the light map and light source contributions:
The distance is a simple calculation which can be done per vertex in your vertex shader:
in vec4 VertexPosition;      // let's assume world space for simplicity
uniform vec4 LightPosition;  // world space - might also be part of a uniform block etc.
out float LightDistance;     // pass the distance to the fragment shader

// other stuff you need here ....

void main()
{
    // do stuff
    LightDistance = length(VertexPosition - LightPosition);
}
In your fragment shader, you use the distance to compute an interpolation factor between the light source and lightmap contributions:
in float LightDistance;
const float MAX_DISTANCE = 10.0;
uniform sampler2D LightMap;

// other stuff ...

out vec4 FragColor;

void main()
{
    vec4 LightContribution;
    // calculate illumination (including shadow map evaluation) here
    // and store it in LightContribution

    vec4 LightMapContribution = texture(LightMap, /* tex coords here */);

    // The following DistanceFactor will map distances in the range [0, MAX_DISTANCE] to
    // [0, 1]. The idea is that at LightDistance >= MAX_DISTANCE, the light source
    // doesn't contribute anymore.
    float DistanceFactor = min(1.0, LightDistance / MAX_DISTANCE);

    // linearly interpolate between LightContribution and LightMapContribution
    vec4 FinalContribution = mix(LightContribution, LightMapContribution, DistanceFactor);

    FragColor = WhatEverColor * vec4(FinalContribution.xyz, 1.0);
}
HTH.
EDIT: To factor in Nicol Bolas' remarks, I assume that the LightMap stores the contribution encoded as an RGB color, one value per channel. If you actually have a single-channel lightmap which only stores monochromatic contributions, you'll have to either use the surface color, use the color of the light source, or reduce the light source contribution to a single channel.
EDIT2: Although this works mathematically, it's definitely not physically sound. You might need some correction of the final contribution to make it at least physically plausible. If you're only aiming for a visual effect, you can simply play around with correction factors until you're satisfied with the result.