I'm trying to implement atmospheric scattering in OpenGL. I'm using this "paper" as a tutorial:
http://developer.amd.com/wordpress/media/2012/10/GDC_02_HoffmanPreetham.pdf
However, I have some difficulties understanding certain points and figuring out some of the constants.
Basically, I have to implement these formulas:
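As far as I can tell, these are the Rayleigh and Mie phase functions together with the in-scattering equation, roughly:

F_r(\theta) = \frac{3}{16\pi}\,(1 + \cos^2\theta)

F_M(\theta) = \frac{1}{4\pi}\,\frac{(1-g)^2}{(1 + g^2 - 2g\cos\theta)^{3/2}}

L_{in}(s,\theta) = \frac{\beta_R F_r(\theta) + \beta_M F_M(\theta)}{\beta_R + \beta_M}\; E_{sun}\;\left(1 - e^{-(\beta_R + \beta_M)s}\right)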
Firstly, I don't know whether s is the distance from the eye to the dome, or the distance from the eye to the light source (here, the sun).
Same for the angle theta: I can't figure out whether it's the angle from the ground to the sun, or from the dome position the eye is looking at to the sun.
Secondly, in this slide:
It tells me the blue color of the sky will appear. I know it's because of Rayleigh scattering, but there is something I can't understand. All the calculations in the formulas above give me a scalar, so how does the white light of the sun, which is basically vec3(1,1,1), become blue when I multiply it by scalars? It would only stay in grayscale; I would get something like vec3(0.8,0.8,0.8) as a result. I mean, if a different sky color is supposed to appear, I must multiply the sunlight by a vec3 that changes the RGB values differently.
I also encountered some difficulties implementing my shader.
Here is the code for the sky shader:
#version 330

in vec3 vpoint;
in vec2 vtexcoord;

out vec2 uv;
out vec3 atmos;

uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
mat4 MVP = P*V*M;

//uniform vec3 lpos;
vec3 lpos = vec3(100,0,0);
uniform vec3 cpos;

vec3 br = vec3(5.5e-6, 13.0e-6, 22.4e-6);
vec3 bm = vec3(21e-6);
float g = -0.75f;
vec3 Esun = vec3(2000,2000,2000);

vec3 Br(float theta){
    return 3/(16*3.14) * br * (1+cos(theta)*cos(theta));
}

vec3 Bm(float theta){
    return 1/(4*3.14) * bm * ((1 - g)*(1 - g))/(pow(1+g*g-2*g*cos(theta),3/2));
}

vec3 atmospheric(float theta, float s){
    return (Br(theta)*Bm(theta))/(br+bm) * Esun * (1- exp( -(br+bm)*s ));
}

void main() {
    gl_Position = MVP * vec4(vpoint, 1.0);
    uv = vtexcoord;

    vec3 domePos = vec3(M*vec4(vpoint,1.0));
    vec3 ldir = lpos - domePos;
    float s = length(domePos-cpos);
    float theta = acos(dot(normalize(ldir-domePos), normalize(domePos-cpos)*vec3(1,1,0)));

    atmos = atmospheric(theta,s)*1000000*5;
}
I don't get what I expected; here is what I get:
I only get the blue, and no reddish sunset, yet the sun is low, and according to the different tutorials I have seen, I should see some reddish color appear when the sun goes low.
Warning: I'm not an expert in this field, so take this with a grain of salt.
This pretty much says it all.
s is the distance between the vertex/pixel and the camera.
θ is the angle between the sun and the line of sight.
In order to compute θ you need to know the "yellow line" and the "line of sight".
The latter is ordinary shader math; the former is just a way to express how high in the sky the sun is. You can model it as a ray from the sun to a point on the ground.
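In shader terms this is something like the following (just a sketch, using the names from your shader, and assuming cpos, domePos and lpos are all in the same world space):

    vec3 viewDir = normalize(domePos - cpos);  // line of sight: from the camera to the dome vertex
    vec3 sunDir  = normalize(lpos - cpos);     // direction towards the sun
    float theta  = acos(dot(viewDir, sunDir)); // angle between the sun and the line of sight
    float s      = length(domePos - cpos);     // distance between the vertex and the camera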
All the formulas above give you vectors.
L0 is a vector.
Esun is also a vector.
The slides basically say that physical quantities like radiance and irradiance (Esun) are continuous over the spectrum, and that one should use a spectral power distribution to describe lights and colors.
A faster approach, however, is to do the math only at three points of the spectrum, the ones for the R, G and B wavelengths.
In practice this means that Esun is a vector describing the irradiance of the sun for the three RGB wavelengths.
The blue of the sky comes from the coefficient βR, which is different for each wavelength, combined with the angle θ, which depends on the "line of sight", which in turn depends on the altitude of the sky fragment being coloured.
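To make the scalar-versus-vec3 point from the question concrete, here is a small sketch (not the exact constants from the paper) of how a per-channel βR turns white sunlight into a coloured result:

    vec3  betaR = vec3(5.5e-6, 13.0e-6, 22.4e-6); // Rayleigh coefficients for the R, G, B wavelengths
    vec3  Esun  = vec3(1.0, 1.0, 1.0);            // "white" sunlight
    float s     = 50000.0;                        // some path length through the atmosphere, in metres
    // exp() works component-wise, so each channel decays at its own rate
    // and the result is tinted (blue-ish here), not gray:
    vec3 inscatter = Esun * (1.0 - exp(-betaR * s));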
The background:
I am writing a terrain visualiser and I am trying to decouple the rendering from the terrain generation.
At the moment, the generator returns some array of triangles and colours, and these are bound in OpenGL by the rendering code (using OpenTK).
So far I have a very simple shader which handles the rotation of the sphere.
The problem:
I would like the application to be able to display the results either as a 3D object, or as a 2D projection of the sphere (let's assume Mercator for simplicity).
I had thought this would be simple: I just compile an alternative shader for such cases. So, I have a vertex shader which almost works:
precision highp float;

uniform mat4 projection_matrix;
uniform mat4 modelview_matrix;

in vec3 in_position;
in vec3 in_normal;
in vec3 base_colour;

out vec3 normal;
out vec3 colour2;

vec3 fromSphere(in vec3 cart)
{
    vec3 spherical;
    spherical.x = atan(cart.x, cart.y) / 6;
    float xy = sqrt(cart.x * cart.x + cart.y * cart.y);
    spherical.y = atan(xy, cart.z) / 4;
    spherical.z = -1.0 + (spherical.x * spherical.x) * 0.1;
    return spherical;
}

void main(void)
{
    normal = vec3(0,0,1);
    normal = (modelview_matrix * vec4(in_normal, 0)).xyz;
    colour2 = base_colour;
    //gl_Position = projection_matrix * modelview_matrix * vec4(fromSphere(in_position), 1);
    gl_Position = vec4(fromSphere(in_position), 1);
}
However, it has a couple of obvious issues (see images below)
Saw-tooth pattern where triangle crosses the cut meridian
Polar region is not well defined
3D case (Typical shader):
2D case (above shader)
Both of these seem to reduce to the statement "A triangle in 3-dimensional space is not always even a single polygon on the projection". (... and this is before any discussion about whether great circle segments from the sphere are expected to be lines after projection ...).
(The 1+x^2 term in z is already a hack to make it a little better: it ensures the projection is not flat, so that any stray edges (i.e. ones that straddle the cut meridian) are safely behind the image.)
The question: Is what I want to achieve possible with a VertexShader / FragmentShader approach? If not, what's the alternative? I think I can re-write the application side to pre-transform the points (and cull / add extra polygons where needed) but it will need to know where the cut line for the projection is — and I feel that this information is analogous to the modelViewMatrix in the 3D case... which means taking this logic out of the shader seems a step backwards.
Thanks!
I'm trying to apply per-pixel lighting in my 3D engine, but I'm having some trouble understanding what can be wrong with my geometry. I'm a beginner in OpenGL, so please bear with me if my question sounds stupid; I'll explain as best as I can.
My vertex shader:
#version 400 core

layout(location = 0) in vec3 position;
in vec2 textureCoordinates;
in vec3 normal;

out vec2 passTextureCoordinates;
out vec3 normalVectorFromVertex;
out vec3 vectorFromVertexToLightSource;
out vec3 vectorFromVertexToCamera;

uniform mat4 transformation;
uniform mat4 projection;
uniform mat4 view;
uniform vec3 lightPosition;

void main(void) {
    vec4 mainPosition = transformation * vec4(position, 1.0);
    gl_Position = projection * view * mainPosition;
    passTextureCoordinates = textureCoordinates;
    normalVectorFromVertex = (transformation * vec4(normal, 1.0)).xyz;
    vectorFromVertexToLightSource = lightPosition - mainPosition.xyz;
}
My fragment-shader:
#version 400 core

in vec2 passTextureCoordinates;
in vec3 normalVectorFromVertex;
in vec3 vectorFromVertexToLightSource;

layout(location = 0) out vec4 out_Color;

uniform sampler2D textureSampler;
uniform vec3 lightColor;

void main(void) {
    vec3 versor1 = normalize(normalVectorFromVertex);
    vec3 versor2 = normalize(vectorFromVertexToLightSource);
    float dotProduct = dot(versor1, versor2);
    float lighting = max(dotProduct, 0.0);
    vec3 finalLight = lighting * lightColor;
    out_Color = vec4(finalLight, 1.0) * texture(textureSampler, passTextureCoordinates);
}
The problem: whenever I multiply my transformation matrix with the normal vector using a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), the resulting vector gets messed up. By the time the pipeline reaches the fragment shader, the dot product between the normal and the vector that goes from the vertex to the light source is apparently <= 0, indicating that the light source is at an angle >= π/2, and therefore all my pixels output rgb(0,0,0,1).
But for a reason that I cannot understand geometrically, if I calculate transformation * vec4(normal, 1.0), the lighting appears to work more or less fine, except for some extremely weird behaviour, like 'reacting' to distance. I mean, with this very simple lighting technique the vertex brightness should be completely agnostic to distance: taking distance into account would imply using the vectors' lengths, but I'm normalizing them before applying the dot product, so this is not expected at all.
One thing that is clearly wrong to me is that my transformation matrix has the translation component applied before I multiply the normal vectors, which will "move and point" the normals in the direction of the translation, which is wrong. Still, I'm not sure if I should be getting these results. Any insights are appreciated.
Whenever I multiply my transformation matrix with the normal vector using a homogeneous coordinate of 0.0, like so: transformation * vec4(normal, 0.0), the resulting vector gets messed up
What if you have non-uniform scaling in that transformation matrix?
Imagine a flat square surface, all normals are pointing up. Now you scale that surface to stretch in the horizontal direction: what would happen to normals?
If you don't remove the scaling part from your transformation matrix (or compensate for it) before applying it to the normals, the normals will get skewed. After all, you only care about the object's orientation when considering the normals; the scale of the object is irrelevant to where the surface is pointing.
Or think about a circle:
You need to apply the inverse transpose of the model-view matrix to the normals to avoid scaling them while transforming them. Another SO question discusses it, as well as this video from Jaime King teaching Graphics with OpenGL.
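Using the uniform and attribute names from your vertex shader, that boils down to something like this (a sketch; in practice you would usually compute the normal matrix once on the CPU and pass it in as a uniform rather than inverting a matrix per vertex):

    // mat3() drops the translation column; the inverse transpose undoes non-uniform scaling
    mat3 normalMatrix = transpose(inverse(mat3(transformation)));
    normalVectorFromVertex = normalize(normalMatrix * normal);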
Additional resources on transforming normals:
LearnOpenGL: Basic Lighting
Lighthouse3d.com: The Normal Matrix
First of all, I must apologize for posting yet another question on this subject (there are a lot already!). I did search for other related questions and answers, but unfortunately none of them showed me the solution. Now I'm desperate! :D
It is worth mentioning that the code posted below gives a satisfying 'bumpy' effect. It is the lighting of the scene that seems to be wrong.
The scene: is dead simple! A cube in the center, a light source rotating around it (parallel to the ground) and above.
My approach is to start from my basic light shader, which gives me adequate outputs (or so I think!). The first step is to modify it to do the calculations in tangent space, then use the normal extracted from a texture.
I tried to comment the code nicely, but in short I have two questions:
1) Doing only basic lighting (no normal mapping), I expect the scene to look exactly the same, with or without transforming my vectors into tangent space with the TBN matrix. Am I wrong?
2) Why do I get incorrect lighting?
A couple of screenshots to give you an idea (EDITED): following LJ's comment, I am no longer summing normals and tangents per vertex/face. Interestingly, it highlights the issue (see the captures; I have marked how the light moves).
Basically it is as if the cube were rotated 90 degrees to the left, or as if the light were turning vertically instead of horizontally.
Result with normal mapping:
Version with simple light:
Vertex shader:
// Information about the light.
// Here we care essentially about light.Position, which
// is set to be something like vec3(cos(x)*9, 5, sin(x)*9)
uniform Light_t Light;
uniform mat4 W; // The model transformation matrix
uniform mat4 V; // The camera transformation matrix
uniform mat4 P; // The projection matrix
in vec3 VS_Position;
in vec4 VS_Color;
in vec2 VS_TexCoord;
in vec3 VS_Normal;
in vec3 VS_Tangent;
out vec3 FS_Vertex;
out vec4 FS_Color;
out vec2 FS_TexCoord;
out vec3 FS_LightPos;
out vec3 FS_ViewPos;
out vec3 FS_Normal;
// This method calculates the TBN matrix:
// I'm sure it is not optimized vertex shader code,
// to have this seperate method, but nevermind for now :)
mat3 getTangentMatrix()
{
// Note: here I must say am a bit confused, do I need to transform
// with 'normalMatrix'? In practice, it seems to make no difference...
mat3 normalMatrix = transpose(inverse(mat3(W)));
vec3 norm = normalize(normalMatrix * VS_Normal);
vec3 tang = normalize(normalMatrix * VS_Tangent);
vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
tang = normalize(tang - dot(tang, norm) * norm);
return transpose(mat3(tang, btan, norm));
}
void main()
{
// Set the gl_Position and pass color + texcoords to the fragment shader
gl_Position = (P * V * W) * vec4(VS_Position, 1.0);
FS_Color = VS_Color;
FS_TexCoord = VS_TexCoord;
// Now here we start:
// This is where supposedly, multiplying with the TBN should not
// change anything to the output, as long as I apply the transformation
// to all of them, or none.
// Typically, removing the 'TBN *' everywhere (and not using the normal
// texture later in the fragment shader) is exactly the code I use for
// my basic light shader.
mat3 TBN = getTangentMatrix();
FS_Vertex = TBN * (W * vec4(VS_Position, 1)).xyz;
FS_LightPos = TBN * Light.Position;
FS_ViewPos = TBN * inverse(V)[3].xyz;
// This line is actually not needed when using the normal map:
// I keep the FS_Normal variable for comparison purposes,
// when I want to switch to my basic light shader effect.
// (see later in fragment shader)
FS_Normal = TBN * normalize(transpose(inverse(mat3(W))) * VS_Normal);
}
And the fragment shader:
struct Textures_t
{
    int SamplersCount;
    sampler2D Samplers[4];
};

struct Light_t
{
    int Active;
    float Ambient;
    float Power;
    vec3 Position;
    vec4 Color;
};

uniform mat4 W;
uniform mat4 V;
uniform Textures_t Textures;
uniform Light_t Light;

in vec3 FS_Vertex;
in vec4 FS_Color;
in vec2 FS_TexCoord;
in vec3 FS_LightPos;
in vec3 FS_ViewPos;
in vec3 FS_Normal;

out vec4 frag_Output;

vec4 getPixelColor()
{
    return Textures.SamplersCount >= 1
        ? texture2D(Textures.Samplers[0], FS_TexCoord)
        : FS_Color;
}

vec3 getTextureNormal()
{
    // FYI: the normal texture is always at index 1
    vec3 bump = texture(Textures.Samplers[1], FS_TexCoord).xyz;
    bump = 2.0 * bump - vec3(1.0, 1.0, 1.0);
    return normalize(bump);
}

vec4 getLightColor()
{
    // This is the one line that changes between my basic light shader
    // and the normal mapping one:
    // - If I don't do 'TBN *' earlier and use FS_Normal here,
    //   the lighting seems fine (see second screenshot)
    // - If I do multiply by TBN (including on FS_Normal), I would expect
    //   the same result as without multiplying ==> not the case: it looks
    //   very similar to the result with normal mapping
    //   (just has no bumpy effect of course)
    // - If I use the normal texture (along with TBN of course), then I get
    //   the result you see in the first screenshot.
    vec3 N = getTextureNormal(); // Instead of 'normalize(FS_Normal);'

    // Everything from here on is the same as my basic light shader
    vec3 L = normalize(FS_LightPos - FS_Vertex);
    vec3 E = normalize(FS_ViewPos - FS_Vertex);
    vec3 R = normalize(reflect(-L, N));

    // Ambient color: light color times ambient factor
    vec4 ambient = Light.Color * Light.Ambient;

    // Diffuse factor: product of Normal to Light vectors
    // Diffuse color: light color times the diffuse factor
    float dfactor = max(dot(N, L), 0);
    vec4 diffuse = clamp(Light.Color * dfactor, 0, 1);

    // Specular factor: product of reflected to camera vectors
    // Note: applies only if the diffuse factor is greater than zero
    float sfactor = 0.0;
    if(dfactor > 0)
    {
        sfactor = pow(max(dot(R, E), 0.0), 8.0);
    }

    // Specular color: light color times specular factor
    vec4 specular = clamp(Light.Color * sfactor, 0, 1);

    // Light attenuation: square of the distance moderated by light's power factor
    float atten = 1 + pow(length(FS_LightPos - FS_Vertex), 2) / Light.Power;

    // The fragment color is a factor of the pixel and light colors:
    // Note: attenuation only applies to diffuse and specular components
    return getPixelColor() * (ambient + (diffuse + specular) / atten);
}

void main()
{
    frag_Output = Light.Active == 1
        ? getLightColor()
        : getPixelColor();
}
That's it! I hope you have enough information and of course, your help will be greatly appreciated! :) Take care.
I am experiencing a very similar problem, and I cannot explain why the lighting doesn't work right, but I can answer your first question and at the very least explain how I somehow got lighting working acceptably (though your problem may not necessarily be the same as mine).
Firstly, in theory, if your tangents and bitangents are calculated correctly, then you should get exactly the same lighting result when doing the calculation in tangent space with a tangent-space normal of [0,0,1].
Secondly, while it is common knowledge that you should transform your normals from model to camera space by multiplying by the inverse transpose of the model-view matrix, as explained by this tutorial, I found that the problem with the lighting being transformed wrong can be solved if you transform the normal and tangent by the model-view matrix rather than the inverse transpose of the model-view matrix. I.e. use normalMatrix = mat3(W); instead of normalMatrix = transpose(inverse(mat3(W)));.
In my case this did "fix" the problems with the light, but I don't know why, and I make no guarantee that it does not (in fact I assume that it does) introduce other problems with the shading.
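For reference, the "textbook" construction I compared against looks roughly like this (just a sketch, reusing your attribute and uniform names; the one real difference from your getTangentMatrix() is that the bitangent is rebuilt from the re-orthogonalised tangent):

    mat3 normalMatrix = transpose(inverse(mat3(W))); // or mat3(W), as discussed above
    vec3 N = normalize(normalMatrix * VS_Normal);
    vec3 T = normalize(normalMatrix * VS_Tangent);
    T = normalize(T - dot(T, N) * N);                // Gram-Schmidt: make T perpendicular to N
    vec3 B = cross(N, T);                            // bitangent from the corrected tangent
    mat3 TBN = transpose(mat3(T, B, N));             // world space -> tangent space

    // Every vector the fragment shader compares against the normal-map normal
    // has to go through the same matrix:
    FS_Vertex   = TBN * (W * vec4(VS_Position, 1.0)).xyz;
    FS_LightPos = TBN * Light.Position;
    FS_ViewPos  = TBN * inverse(V)[3].xyz;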
In per-fragment point lighting, what role does a vertex normal play in point light calculations?
My understanding is that the brightness of a fragment is based solely on its distance from the light source, and that a directional vector would be irrelevant.
Example Shader
precision mediump float;

uniform vec3 u_LightPos;

varying vec3 v_Position;
varying vec4 v_Color;
varying vec3 v_Normal;

void main()
{
    float distance = length(u_LightPos - v_Position);
    vec3 lightVector = normalize(u_LightPos - v_Position);
    float diffuse = max(dot(v_Normal, lightVector), 0.1);
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    gl_FragColor = v_Color * diffuse;
}
Looks like you're using Lambertian diffuse lighting. In this case, the surface brightness is proportional to the intensity of incoming light (the inverse square term in your equation) and additionally to the cosine of the angle between the surface normal and the direction of the incoming light (the dot product in your equation).
This makes a lot of sense after some thought. Consider the two extremes: you would expect a piece of paper held perpendicular to the beam of a flashlight to appear much brighter than one held parallel to the beam.
For more information, see this section of the relevant wikipedia page.
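To see the role the normal plays in your own shader, compare a distance-only version with the Lambertian one (a sketch, using your variable names):

    // Distance only: every fragment at a given distance is equally bright,
    // no matter which way the surface faces.
    float distanceOnly = 1.0 / (1.0 + (0.25 * distance * distance));

    // Lambertian: additionally weighted by the cosine of the angle between
    // the surface normal and the direction towards the light.
    float lambert = max(dot(v_Normal, lightVector), 0.0);
    float diffuse = lambert * (1.0 / (1.0 + (0.25 * distance * distance)));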
I recently wrote a Phong shader in GLSL as part of a school assignment. I started with tutorials, then played around with the code until I got it working. It works perfectly fine as far as I can tell, but there's one line in particular I wrote where I don't understand why it works.
The vertex shader:
#version 330

layout (location = 0) in vec3 Position; // Vertex position
layout (location = 1) in vec3 Normal;   // Vertex normal

out vec3 Norm;
out vec3 Pos;
out vec3 LightDir;

uniform mat3 NormalMatrix; // ModelView matrix without the translation component, and inverted
uniform mat4 MVP;          // ModelViewProjection Matrix
uniform mat4 ModelView;    // ModelView matrix
uniform vec3 light_pos;    // Position of the light

void main()
{
    Norm = normalize(NormalMatrix * Normal);
    Pos = Position;
    LightDir = NormalMatrix * (light_pos - Position);
    gl_Position = MVP * vec4(Position, 1.0);
}
The fragment shader:
#version 330

in vec3 Norm;
in vec3 Pos;
in vec3 LightDir;

layout (location = 0) out vec4 FragColor;

uniform mat3 NormalMatrix;
uniform mat4 ModelView;

void main()
{
    vec3 normalDirCameraCoords = normalize(Norm);
    vec3 vertexPosLocalCoords = normalize(Pos);
    vec3 lightDirCameraCoords = normalize(LightDir);

    float dist = max(length(LightDir), 1.0);
    float intensity = max(dot(normalDirCameraCoords, lightDirCameraCoords), 0.0) / pow(dist, 1.001);

    vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
    float intSpec = max(dot(h, normalDirCameraCoords), 0.0);
    vec4 spec = vec4(0.9, 0.9, 0.9, 1.0) * (pow(intSpec, 100) / pow(dist, 1.2));

    FragColor = max((intensity * vec4(0.7, 0.7, 0.7, 1.0)) + spec, vec4(0.07, 0.07, 0.07, 1.0));
}
So I'm doing the method where you calculate the half vector between the light vector and the camera vector, then dot it with the normal. That's all good. However, I do two things that are strange.
Normally, everything is done in eye coordinates. However, Position, which I pass from the vertex shader to the fragment shader, is in local coordinates.
This is the part that baffles me. On the line vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords); I'm subtracting the vertex position in local coordinates from the light vector in camera coordinates. This seems utterly wrong.
In short, I understand what this code is supposed to be doing, and how the half vector method of phong shading works.
But why does this code work?
EDIT: The starter code we were provided is open source, so you can download the completed project and look at it directly if you'd like. The project is for VS 2012 on Windows (you'll need to set up GLEW, GLM, and freeGLUT), and should work on GCC with no code changes (maybe a change or two to the makefile library paths).
Note that in the source files, "light_pos" is called "gem_pos", as our light source is the little gem you move around with WSADXC. Press M to get Phong with multiple lights.
The reason this works is happenstance, but it's interesting to see why it still works.
Phong shading is three techniques in one
With Phong shading, we have three terms: specular, diffuse, and ambient; these three terms represent the three techniques used in Phong shading.
None of these terms strictly requires a particular vector space; you can make Phong shading work in world, local, or camera space as long as you are consistent. Eye space is usually used for lighting, as it is easier to work with and the conversions are simple.
But what if you are at origin? Now you are multiplying by zero; it's easy to see that there's no difference between any of the vector spaces at origin. By coincidence, at origin, it doesn't matter what vector space you are in; it'll work.
vec3 h = normalize(lightDirCameraCoords - vertexPosLocalCoords);
Notice that it's basically subtracting 0; this is the only time the local-space value is used, and it's used in the one place where it can do the least damage. Since the object is at the origin, all its vertices should be at or very close to the origin as well. At the origin, the approximation is exact; all vector spaces converge. Very close to the origin, it's very close to exact; even if we used exact reals, it'd be a very small divergence, but we don't use exact reals, we use floats, compounding the issue.
Basically, you got lucky; this wouldn't work if the object wasn't at origin. Try moving it and see!
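If you want it to keep working when the object moves away from the origin, the usual fix is to pass the eye-space position down instead of the local one (a sketch, using the names from your shaders):

    // Vertex shader: eye-space position instead of the raw local position
    Pos = (ModelView * vec4(Position, 1.0)).xyz;

    // Fragment shader: the camera sits at the origin in eye space, so
    // -normalize(Pos) really is the view direction, and the existing
    // half-vector line becomes a proper Blinn-Phong half vector:
    vec3 viewDir = normalize(-Pos);
    vec3 h = normalize(lightDirCameraCoords + viewDir);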
Also, you aren't using Phong shading; you are using Blinn-Phong shading (that's the name for the replacement of reflect() with a half vector, just for reference).
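For reference, the two specular variants side by side (a sketch; viewDir here is the normalised direction towards the camera and shininess is the specular exponent, neither of which appears under those names in the code above):

    // Classic Phong: reflect the light direction around the normal
    vec3 r = reflect(-lightDirCameraCoords, normalDirCameraCoords);
    float phongSpec = pow(max(dot(r, viewDir), 0.0), shininess);

    // Blinn-Phong: half vector between the light and view directions
    vec3 h = normalize(lightDirCameraCoords + viewDir);
    float blinnSpec = pow(max(dot(h, normalDirCameraCoords), 0.0), shininess);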