After changing my current deferred renderer to use a logarithmic depth buffer, I cannot work out, for the life of me, how to reconstruct world-space depth from the depth buffer values.
When I had the OpenGL default z/w depth written, I could easily calculate this value by transforming from window space to NDC space and then performing the inverse perspective transformation.
I did this all in the second pass fragment shader:
uniform sampler2D depth_tex;
uniform mat4 inv_view_proj_mat;
in vec2 uv_f;
vec3 reconstruct_pos(){
float z = texture(depth_tex, uv_f).r;
vec4 pos = vec4(uv_f, z, 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
and got a result that looked pretty correct:
But now the road to a simple z value is not so easy (doesn't seem like it should be so hard either).
My vertex shader for my first pass with log depth:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 uv;
uniform mat4 mvp_mat;
uniform float FC;
out vec2 uv_f;
out float logz_f;
out float FC_2_f;
void main(){
gl_Position = mvp_mat * vec4(pos, 1.0);
logz_f = 1.0 + gl_Position.w;
gl_Position.z = (log2(max(1e-6, logz_f)) * FC - 1.0) * gl_Position.w;
FC_2_f = FC * 0.5;
}
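The multiplication by gl_Position.w cancels in the hardware perspective divide, so the NDC depth this produces is

$$z_{\text{ndc}} = \frac{\texttt{gl\_Position.z}}{\texttt{gl\_Position.w}} = \log_2(1 + w)\cdot FC - 1$$

where w is the clip-space w coordinate.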
And my fragment shader:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
// other uniforms and output variables
in vec2 uv_f;
in float logz_f;
in float FC_2_f;
void main(){
gl_FragDepth = log2(logz_f) * FC_2_f;
}
I have tried a few different approaches to get back the z-position correctly, all failing.
If I redefine my reconstruct_pos in the second pass to be:
vec3 reconstruct_pos(){
vec4 pos = vec4(uv_f, get_depth(), 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
This is my current attempt at reconstructing Z:
uniform float FC;
float get_depth(){
float log2logz_FC_2 = texture(depth_tex, uv_f).r;
float logz = pow(2.0, log2logz_FC_2 / (FC * 0.5));
float pos_z = log2(max(1e-6, logz)) * FC - 1.0; // pos.z
return pos_z;
}
Explained:
log2logz_FC_2: the value written to the depth buffer, i.e. log2(1.0 + gl_Position.w) * (FC / 2)
logz: simply 1.0 + gl_Position.w
pos_z: the value of gl_Position.z before the perspective divide
return value: gl_Position.z
Of course, that's just my working. I'm not sure what these values actually hold in the end, because I think I've screwed up some of the math or not correctly understood the transformations going on.
What is the correct way to get my world-space Z position from this logarithmic depth buffer?
In the end I was going about this all wrong. The way to get the world-space position back using a log buffer is:
Retrieve depth from the texture
Reconstruct gl_Position.w
Linearize the reconstructed depth
Transform to world space
Here's my implementation in GLSL:
in vec2 uv_f;
uniform float nearz;
uniform float farz;
uniform mat4 inv_view_proj_mat;
float linearize_depth(in float depth){
float a = farz / (farz - nearz);
float b = farz * nearz / (nearz - farz);
return a + b / depth;
}
float reconstruct_depth(){
float depth = texture(depth_tex, uv_f).r;
return pow(2.0, depth * log2(farz + 1.0)) - 1.0;
}
vec3 reconstruct_world_pos(){
vec4 wpos =
inv_view_proj_mat *
(vec4(uv_f, linearize_depth(reconstruct_depth()), 1.0) * 2.0 - 1.0);
return wpos.xyz / wpos.w;
}
Which gives me the same result (but with better precision) as when I was using the default OpenGL depth buffer.
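One consistency note (an assumption implied by reconstruct_depth rather than stated explicitly above): this inversion only recovers gl_Position.w if the first pass set

$$FC = \frac{2}{\log_2(farz + 1)}$$

so that the depth buffer holds $d = \log_2(1 + w) / \log_2(farz + 1) \in [0, 1]$, and reconstruct_depth computes $2^{\,d \cdot \log_2(farz + 1)} - 1 = w$, i.e. gl_Position.w.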
I'm creating a terrain mesh, and following this SO answer I'm trying to migrate my CPU-computed normals to a shader-based version, in order to improve performance by reducing my mesh resolution and using a normal map computed in the fragment shader.
I'm using MapBox height maps for the terrain data. Tiles look like this:
And elevation at each pixel is given by the following formula:
const elevation = -10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1);
My original code first creates a dense mesh (256*256 squares of 2 triangles) and then computes triangle and vertex normals. To get a visually satisfying result I was dividing the elevation by 5000 to match the tile's width and height in my scene (in the future I'll do a proper computation to display the real elevation).
I was drawing with these simple shaders:
Vertex shader:
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
attribute vec3 a_Position;
attribute vec3 a_Normal;
attribute vec2 a_TextureCoordinates;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
v_Normal = vec3(u_View * u_Model * vec4(a_Normal, 0.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
uniform sampler2D texture;
void main() {
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(v_Normal, lightVector), 0.1);
highp vec4 textureColor = texture2D(texture, v_TextureCoordinates);
gl_FragColor = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It was slow but gave visually satisfying results:
Now, I removed all the CPU based normals computation code, and replaced my shaders by those:
Vertex shader:
#version 300 es
precision highp float;
precision highp int;
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
in vec3 a_Position;
in vec2 a_TextureCoordinates;
out vec3 v_Position;
out vec2 v_TextureCoordinates;
out mat4 v_Model;
out mat4 v_View;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Model = u_Model;
v_View = u_View;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
#version 300 es
precision highp float;
precision highp int;
in vec3 v_Position;
in vec2 v_TextureCoordinates;
in mat4 v_Model;
in mat4 v_View;
uniform sampler2D u_dem;
uniform sampler2D u_texture;
out vec4 color;
const vec2 size = vec2(2.0,0.0);
const ivec3 offset = ivec3(-1,0,1);
float getAltitude(vec4 pixel) {
float red = pixel.x;
float green = pixel.y;
float blue = pixel.z;
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1)) * 6.0; // Why * 6 and not / 5000 ??
}
void main() {
float s01 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.xy));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
float s12 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yz));
vec3 va = (vec3(size.xy, s21 - s01));
vec3 vb = (vec3(size.yx, s12 - s10));
vec3 normal = normalize(cross(va, vb));
vec3 transformedNormal = normalize(vec3(v_View * v_Model * vec4(normal, 0.0)));
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(transformedNormal, lightVector), 0.1);
highp vec4 textureColor = texture(u_texture, v_TextureCoordinates);
color = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It now loads nearly instantly, but something is wrong:
in the fragment shader I had to multiply the elevation by 6 rather than divide by 5000 to get something close to my original code
the result is not as good. Especially when I tilt the scene, the shadows are very dark (the more I tilt the darker they get):
Can you spot what causes that difference?
EDIT: I created two JSFiddles:
first version with CPU computed vertices normals: http://jsfiddle.net/tautin/tmugzv6a/10
second version with GPU computed normal map: http://jsfiddle.net/tautin/8gqa53e1/42
The problem appears when you play with the tilt slider.
There were three problems I could find.
One you saw and fixed by trial and error: the scale of your height calculation was wrong. On the CPU, your color coordinates vary from 0 to 255, but in GLSL, texture values are normalized to the range 0 to 1, so the correct height calculation is:
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0)) / Z_SCALE;
But for this shader's purposes, the -10000.0 doesn't matter, so you can do:
return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / Z_SCALE;
The second problem is that the scale of your x and y coordinates was also wrong. In the CPU code the distance between two neighboring points is (SIZE * 2.0 / (RESOLUTION + 1)), but on the GPU you had set it to 1. The correct way to define your size variable is:
const float SIZE = 2.0;
const float RESOLUTION = 255.0;
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
Notice that I increased the resolution to 255 because I assume this is what you want (the texture resolution minus one). Also, this is needed to match the value of offset, which you defined as:
const ivec3 offset = ivec3(-1,0,1);
To use a different RESOLUTION value, you will have to adjust offset accordingly, e.g. for RESOLUTION == 127, offset = ivec3(-2,0,2), i.e. the offset must be <real texture resolution>/(RESOLUTION + 1), which limits the possibilities for RESOLUTION, since offset must be an integer.
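For instance, a sketch of the matching constants for a halved mesh resolution, assuming the same 256-pixel tiles as above:
const float SIZE = 2.0;
const float RESOLUTION = 127.0;
// offset = <real texture resolution> / (RESOLUTION + 1) = 256 / 128 = 2
const ivec3 offset = ivec3(-2, 0, 2);
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);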
The third problem is that you used a different normal calculation algorithm on the GPU, which strikes me as having lower resolution than the one used on the CPU, because you use the four outer pixels of a cross but ignore the central one. It seems that this is not the full story, but I can't explain why they are so different. I tried to implement the exact CPU algorithm as I thought it should be, but it yielded different results. Instead, I had to use the following algorithm, which is similar but not exactly the same, to get an almost identical result (if you increase the CPU resolution to 255):
float s11 = getAltitude(texture(u_dem, v_TextureCoordinates));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
vec3 va = (vec3(size.xy, s21 - s11));
vec3 vb = (vec3(size.yx, s10 - s11));
vec3 normal = normalize(cross(va, vb));
This is the original CPU solution, but with RESOLUTION=255: http://jsfiddle.net/k0fpxjd8/
This is the final GPU solution: http://jsfiddle.net/7vhpuqd8/
I'm trying to implement this version of SSAO, following this tutorial:
http://www.learnopengl.com/#!Advanced-Lighting/SSAO
Here is what I end up with for my render textures.
When I move the camera, the shadows seem to follow.
Seems like I am missing some kind of matrix multiplication with the camera.
CODE
gBuffer Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
out vec3 position;
out vec3 normal;
uniform mat4 m;
uniform mat4 v;
uniform mat4 p;
uniform mat4 n;
void main()
{
vec4 viewPos = v * m * vec4(vertexPosition, 1.0f);
position = viewPos.xyz;
gl_Position = p * viewPos;
normal = vec3(n * vec4(vertexNormal, 0.0f));
}
gBuffer Fragment
#version 330 core
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec3 gNormal;
layout (location = 2) out vec4 gColor;
in vec3 position;
in vec3 normal;
const float NEAR = 0.1f;
const float FAR = 50.0f;
float LinearizeDepth(float depth)
{
float z = depth * 2.0f - 1.0f;
return (2.0 * NEAR * FAR) / (FAR + NEAR - z * (FAR - NEAR));
}
void main()
{
gPosition.xyz = position;
gPosition.a = LinearizeDepth(gl_FragCoord.z);
gNormal = normalize(normal);
gColor.rgb = vec3(1.0f);
}
SSAO Vertex
#version 330 core
layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec2 texCoords;
out vec2 UV;
void main(){
gl_Position = vec4(vertexPosition, 1.0f);
UV = texCoords;
}
SSAO Fragment
#version 330 core
out float FragColor;
in vec2 UV;
uniform sampler2D gPositionDepth;
uniform sampler2D gNormal;
uniform sampler2D texNoise;
uniform vec3 samples[32];
uniform mat4 projection;
// parameters (you'd probably want to use them as uniforms to more easily tweak the effect)
int kernelSize = 32;
float radius = 1.0;
// tile noise texture over screen based on screen dimensions divided by noise size
const vec2 noiseScale = vec2(1024.0f/4.0f, 1024.0f/4.0f);
void main()
{
// Get input for SSAO algorithm
vec3 fragPos = texture(gPositionDepth, UV).xyz;
vec3 normal = texture(gNormal, UV).rgb;
vec3 randomVec = texture(texNoise, UV * noiseScale).xyz;
// Create TBN change-of-basis matrix: from tangent-space to view-space
vec3 tangent = normalize(randomVec - normal * dot(randomVec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 TBN = mat3(tangent, bitangent, normal);
// Iterate over the sample kernel and calculate occlusion factor
float occlusion = 0.0;
for(int i = 0; i < kernelSize; ++i)
{
// get sample position
vec3 sample = TBN * samples[i]; // From tangent to view-space
sample = fragPos + sample * radius;
// project sample position (to sample texture) (to get position on screen/texture)
vec4 offset = vec4(sample, 1.0);
offset = projection * offset; // from view to clip-space
offset.xyz /= offset.w; // perspective divide
offset.xyz = offset.xyz * 0.5 + 0.5; // transform to range 0.0 - 1.0
// get sample depth
float sampleDepth = -texture(gPositionDepth, offset.xy).w; // Get depth value of kernel sample
// range check & accumulate
float rangeCheck = smoothstep(0.0, 1.0, radius / abs(fragPos.z - sampleDepth ));
occlusion += (sampleDepth >= sample.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / kernelSize);
FragColor = occlusion;
}
I've read around and saw that someone with a similar issue passed the view matrix into the SSAO shader and multiplied the sampleDepth:
float sampleDepth = (viewMatrix * -texture(gPositionDepth, offset.xy)).w;
But it seems like it just makes things worse.
Here's another view from up top where you can see the shadows move with the camera:
If I position my camera in certain ways, things line up:
Although I can only assume the value of your normal matrix n in the gBuffer vertex shader, it seems like you don't store your normals in view space but in world space. Since the SSAO calculations are done in screen space, this could (at least partially) explain the unexpected behavior. In that case, you need to multiply your normals by the view matrix v either before storing them in the gBuffer (potentially more efficient, but may interfere with your other shading calculations) or after retrieving them.
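A minimal sketch of the first option, assuming n was built from the model matrix alone (this would replace the normal line at the end of the gBuffer vertex shader):
// Build the normal matrix from the view-model matrix so the stored
// normal lands in view space, matching the view-space positions.
// Computing inverse() per vertex is wasteful; uploading this matrix
// from the CPU as the uniform n would be the usual choice.
mat3 normalMatrix = transpose(inverse(mat3(v * m)));
normal = normalize(normalMatrix * vertexNormal);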
I am working on the beginnings of omnidirectional shadow mapping in my engine. For now I am only producing one shadowmap as a test. I am getting an odd result when using my current shaders. Here is a screenshot which shows the problem:
I am using a near value of 0.5 and a far value of 5.0 in the projection matrix for the shadowmap render. As near as I can tell, any value with a light-space z larger than my far plane distance is being computed by my fragment shader as in shadow.
This is my fragment shader:
in vec2 st;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowmapTexture;
uniform mat4 invProj;
uniform mat4 lightProj;
uniform vec3 lightPosition;
out vec3 color;
void main () {
vec3 clipSpaceCoords;
clipSpaceCoords.xy = st.xy * 2.0 - 1.0;
clipSpaceCoords.z = texture(depthTexture, st).x * 2.0 - 1.0;
vec4 position = invProj * vec4(clipSpaceCoords,1.0);
position.xyz /= position.w;
vec4 lightSpace = lightProj * vec4(position.xyz,1.0);
lightSpace.xyz /= lightSpace.w;
lightSpace.xyz = lightSpace.xyz * 0.5 + 0.5;
float lightDepth = texture(shadowmapTexture, lightSpace.xy).x;
vec3 normal = texture(normalTexture, st).xyz;
vec3 diffuse;
float shadowFactor = 1.0;
if(lightSpace.w > 0.0 && lightSpace.z > lightDepth+0.0042) {
shadowFactor = 0.2;
}
else {
float k = 0.00001;
vec3 distanceToLight = lightPosition - position.xyz;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
}
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz*shadowFactor+diffuse, gamma);
}
How can I fix this issue (other than increasing my far plane distance)?
One other question, as this is the first time I have attempted shadowmapping: am I doing the lighting in relation to the shadows correctly?
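For what it's worth, the symptom is consistent with lightSpace.z landing above 1.0 beyond the light's far plane, while the shadow map can store at most 1.0, so the depth comparison always reports shadow there. A minimal guard illustrating that mechanism (a sketch, not necessarily the whole fix):
// Only shadow fragments the shadow map can actually see: beyond the light's
// far plane, lightSpace.z exceeds 1.0 but the stored depth saturates at 1.0.
if (lightSpace.z <= 1.0 && lightSpace.w > 0.0 && lightSpace.z > lightDepth + 0.0042) {
    shadowFactor = 0.2;
}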
I am attempting to reconstruct my fragment's position from a depth value stored in a GL_DEPTH_ATTACHMENT. To do this, I linearize the depth, then multiply the depth by a ray from the camera position to the corresponding point on the far plane.
This method is the second one described here. In order to get the ray from the camera to the far plane, I retrieve rays to the four corners of the far plane, pass them to my vertex shader, then interpolate into the fragment shader. I am using the following code to get the rays from the camera to the far plane's corners in world space.
std::vector<float> Camera::GetFlatFarFrustumCorners() {
// rotation is the orientation of my camera in a quaternion.
glm::quat inverseRotation = glm::inverse(rotation);
glm::vec3 localUp = glm::normalize(inverseRotation * glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec3 localRight = glm::normalize(inverseRotation * glm::vec3(1.0f, 0.0f, 0.0f));
float farHeight = 2.0f * tan(90.0f / 2) * 100.0f;
float farWidth = farHeight * aspect;
// 100.0f is the distance to the far plane. position is the location of the camera in word space.
glm::vec3 farCenter = position + glm::vec3(0.0f, 0.0f, -1.0f) * 100.0f;
glm::vec3 farTopLeft = farCenter + (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farTopRight = farCenter + (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
glm::vec3 farBottomLeft = farCenter - (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farBottomRight = farCenter - (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
return {
farTopLeft.x, farTopLeft.y, farTopLeft.z,
farTopRight.x, farTopRight.y, farTopRight.z,
farBottomLeft.x, farBottomLeft.y, farBottomLeft.z,
farBottomRight.x, farBottomRight.y, farBottomRight.z
};
}
Is this a correct way to retrieve the corners of the far plane in world space?
When I use these corners with my shaders, the results are incorrect, and what I get seems to be in view space. These are the shaders I am using:
Vertex Shader:
layout(location = 0) in vec2 vp;
layout(location = 1) in vec3 textureCoordinates;
uniform vec3 farFrustumCorners[4];
uniform vec3 cameraPosition;
out vec2 st;
out vec3 frustumRay;
void main () {
st = textureCoordinates.xy;
gl_Position = vec4 (vp, 0.0, 1.0);
frustumRay = farFrustumCorners[int(textureCoordinates.z)-1] - cameraPosition;
}
Fragment Shader:
in vec2 st;
in vec3 frustumRay;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform vec3 cameraPosition;
uniform vec3 lightPosition;
out vec3 color;
void main () {
// Far and near distances; Used to linearize the depth value.
float f = 100.0;
float n = 0.1;
float depth = (2 * n) / (f + n - (texture(depthTexture, st).x) * (f - n));
vec3 position = cameraPosition + (normalize(frustumRay) * depth);
vec3 normal = texture(normalTexture, st).xyz;
float k = 0.00001;
vec3 distanceToLight = lightPosition - position;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
vec3 diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz+diffuse, gamma);
//color = texture(colorTexture, st);
//colour.r = (2 * n) / (f + n - texture( tex, st ).x * (f - n));
//colour.g = (2 * n) / (f + n - texture( tex, st ).y* (f - n));
//colour.b = (2 * n) / (f + n - texture( tex, st ).z * (f - n));
}
This is what my scene's lighting looks like under these shaders:
I am pretty sure that this is the result of either my reconstructed position being completely wrong, or it being in the wrong space. What is wrong with my reconstruction, and what can I do to fix it?
What you will first want to do is develop a temporary addition to your G-Buffer setup that stores the initial position of each fragment in world/view space (really, whatever space you are trying to reconstruct here). Then write a shader that does nothing but reconstruct these positions from the depth buffer. Set everything up so that half of your screen displays the original G-Buffer and the other half displays your reconstructed position. You should be able to immediately spot discrepancies this way.
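As an illustration, a minimal sketch of that comparison pass (gStoredPos is a hypothetical debug attachment holding the positions written while building the G-Buffer; reconstruct_world_pos is a stand-in for whatever reconstruction is under test):
#version 150 core
in vec2 st;
uniform sampler2D gStoredPos;
uniform sampler2D depthTexture;
out vec3 color;
// Stand-in; substitute the real reconstruction being debugged.
vec3 reconstruct_world_pos () {
    return texture(depthTexture, st).xxx; // placeholder, not a real reconstruction
}
void main () {
    vec3 stored  = texture(gStoredPos, st).xyz;
    vec3 rebuilt = reconstruct_world_pos();
    // Left half: ground truth; right half: reconstruction. A visible seam
    // at the split means the wrong space, scale, or linearization.
    color = (st.x < 0.5) ? stored : rebuilt;
}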
That said, you might want to take a look at an implementation I have used in the past to reconstruct (object space) position from the depth buffer. It basically gets you into view space first, then uses the inverse modelview matrix to go to object space. You can adjust it for world space trivially. It is probably not the most flexible implementation, what with FOV being hard-coded and all, but you can easily modify it to use uniforms instead...
Trimmed down fragment shader:
flat in mat4 inv_mv_mat;
in vec2 uv;
...
float linearZ (float z)
{
#ifdef INVERT_NEAR_FAR
const float f = 2.5;
const float n = 25000.0;
#else
const float f = 25000.0;
const float n = 2.5;
#endif
return n / (f - z * (f - n)) * f;
}
vec4
reconstruct_pos (float depth)
{
depth = linearZ (depth);
vec4 pos = vec4 (uv * depth, -depth, 1.0);
vec4 ret = (inv_mv_mat * pos);
return ret / ret.w;
}
It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this:
#version 150 core
in vec4 vtx_pos;
in vec2 vtx_st;
uniform mat4 modelview_mat; // Matrix used when the G-Buffer was built
uniform mat4 camera_matrix; // Matrix used to stretch the G-Buffer over the viewport
uniform float buffer_res_x;
uniform float buffer_res_y;
out vec2 tex_st;
flat out mat4 inv_mv_mat;
out vec2 uv;
// Hard-Coded 45 degree FOV
//const float fovy = 0.78539818525314331; // NV pukes on the line below!
//const float fovy = radians (45.0);
//const float tan_half_fovy = tan (fovy * 0.5);
const float tan_half_fovy = 0.41421356797218323;
float aspect = buffer_res_x / buffer_res_y;
vec2 inv_focal_len = vec2 (tan_half_fovy * aspect,
tan_half_fovy);
const vec2 uv_scale = vec2 (2.0, 2.0);
const vec2 uv_translate = vec2 (1.0, 1.0);
void main (void)
{
inv_mv_mat = inverse (modelview_mat);
tex_st = vtx_st;
gl_Position = camera_matrix * vtx_pos;
uv = (vtx_st * uv_scale - uv_translate) * inv_focal_len;
}
Depth range inversion is something you might find useful for deferred shading: normally a perspective depth buffer gives you more precision than you need at close range and not enough far away for quality reconstruction. If you flip things on their head by inverting the depth range, you can even things out a little bit while still using the hardware depth buffer. This is discussed in detail here.
I'm currently trying to convert a shader by Sean O'Neil to version 330 so I can try it out in an application I'm writing. I'm having some issues with deprecated functions, so I replaced them, but I'm almost completely new to GLSL, so I probably made a mistake somewhere.
Original shaders can be found here:
http://www.gamedev.net/topic/592043-solved-trying-to-use-atmospheric-scattering-oneill-2004-but-get-black-sphere/
My horrible attempt at converting them:
Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexNormal_modelspace;
uniform vec3 v3CameraPos; // The camera's current position
uniform vec3 v3LightPos; // The direction vector to the light source
uniform vec3 v3InvWavelength; // 1 / pow(wavelength, 4) for the red, green, and blue channels
uniform float fCameraHeight; // The camera's current height
uniform float fCameraHeight2; // fCameraHeight^2
uniform float fOuterRadius; // The outer (atmosphere) radius
uniform float fOuterRadius2; // fOuterRadius^2
uniform float fInnerRadius; // The inner (planetary) radius
uniform float fInnerRadius2; // fInnerRadius^2
uniform float fKrESun; // Kr * ESun
uniform float fKmESun; // Km * ESun
uniform float fKr4PI; // Kr * 4 * PI
uniform float fKm4PI; // Km * 4 * PI
uniform float fScale; // 1 / (fOuterRadius - fInnerRadius)
uniform float fScaleDepth; // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
uniform float fScaleOverScaleDepth; // fScale / fScaleDepth
const int nSamples = 2;
const float fSamples = 2.0;
invariant out vec3 v3Direction;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightPosition_worldspace;
out vec4 dgl_SecondaryColor;
out vec4 dgl_Color;
float scale(float fCos)
{
float x = 1.0 - fCos;
return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}
void main(void)
{
//gg_FrontColor = vec3(1.0, 0.0, 0.0);
//gg_FrontSecondaryColor = vec3(0.0, 1.0, 0.0);
// Get the ray from the camera to the vertex, and its length (which is the far point of the ray passing through the atmosphere)
vec3 v3Pos = vertexPosition_modelspace;
vec3 v3Ray = v3Pos - v3CameraPos;
float fFar = length(v3Ray);
v3Ray /= fFar;
// Calculate the ray's starting position, then calculate its scattering offset
vec3 v3Start = v3CameraPos;
float fHeight = length(v3Start);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
float fStartAngle = dot(v3Ray, v3Start) / fHeight;
float fStartOffset = fDepth*scale(fStartAngle);
// Initialize the scattering loop variables
gl_FrontColor = vec4(0.0, 0.0, 0.0, 0.0);
gl_FrontSecondaryColor = vec4(0.0, 0.0, 0.0, 0.0);
float fSampleLength = fFar / fSamples;
float fScaledLength = fSampleLength * fScale;
vec3 v3SampleRay = v3Ray * fSampleLength;
vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;
// Now loop through the sample rays
vec3 v3FrontColor = vec3(0.2, 0.1, 0.0);
for(int i=0; i<nSamples; i++)
{
float fHeight = length(v3SamplePoint);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
float fLightAngle = dot(v3LightPos, v3SamplePoint) / fHeight;
float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
float fScatter = (fStartOffset + fDepth*(scale(fLightAngle) - scale(fCameraAngle)));
vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
v3SamplePoint += v3SampleRay;
}
// Finally, scale the Mie and Rayleigh colors and set up the varying variables for the pixel shader
gl_FrontSecondaryColor.rgb = v3FrontColor * fKmESun;
gl_FrontColor.rgb = v3FrontColor * (v3InvWavelength * fKrESun);
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
v3Direction = v3CameraPos - v3Pos;
dgl_SecondaryColor = gl_FrontSecondaryColor;
dgl_Color = gl_FrontColor;
}
Fragment shader:
#version 330 core
out vec4 dgl_FragColor;
uniform vec3 v3LightPos;
uniform float g;
uniform float g2;
invariant in vec3 v3Direction;
in vec4 dgl_SecondaryColor;
in vec4 dgl_Color;
uniform mat4 MV;
void main (void)
{
float fCos = dot(v3LightPos, v3Direction) / length(v3Direction);
float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
dgl_FragColor = dgl_Color + fMiePhase * dgl_SecondaryColor;
dgl_FragColor.a = dgl_FragColor.b;
}
I wrote a function to render a sphere, and I'm trying to render this shader onto an inverted version of it. The sphere works completely fine, with normals and all. My problem is that the sphere gets rendered all black, so the shader is not working.
Edit: Got the sun to draw, but the sky is still all black.
This is how I'm trying to render the atmosphere inside my main rendering loop.
glUseProgram(programAtmosphere);
glBindTexture(GL_TEXTURE_2D, 0);
//######################
glUniform3f(v3CameraPos, getPlayerPos().x, getPlayerPos().y, getPlayerPos().z);
glm::vec3 lightDirection = lightPos/length(lightPos);
glUniform3f(v3LightPos, lightDirection.x , lightDirection.y, lightDirection.z);
glUniform3f(v3InvWavelength, 1.0f / pow(0.650f, 4.0f), 1.0f / pow(0.570f, 4.0f), 1.0f / pow(0.475f, 4.0f));
glUniform1fARB(fCameraHeight, 10.0f+length(getPlayerPos()));
glUniform1fARB(fCameraHeight2, (10.0f+length(getPlayerPos()))*(10.0f+length(getPlayerPos())));
glUniform1fARB(fInnerRadius, 10.0f);
glUniform1fARB(fInnerRadius2, 100.0f);
glUniform1fARB(fOuterRadius, 10.25f);
glUniform1fARB(fOuterRadius2, 10.25f*10.25f);
glUniform1fARB(fKrESun, 0.0025f * 20.0f);
glUniform1fARB(fKmESun, 0.0015f * 20.0f);
glUniform1fARB(fKr4PI, 0.0025f * 4.0f * 3.141592653f);
glUniform1fARB(fKm4PI, 0.0015f * 4.0f * 3.141592653f);
glUniform1fARB(fScale, 1.0f / 0.25f);
glUniform1fARB(fScaleDepth, 0.25f);
glUniform1fARB(fScaleOverScaleDepth, 4.0f / 0.25f );
glUniform1fARB(g, -0.990f);
glUniform1f(g2, -0.990f * -0.990f);
Any ideas?
Edit: updated the code, and added a picture.
I think the problem there is that you write to 'FragColor', which may be a 'dead end' output variable in the fragment shader, since one must explicitly bind it to a color number before linking the program:
glBindFragDataLocation(programAtmosphere,0,"FragColor");
or using this in a shader:
layout(location = 0) out vec4 FragColor;
You may try to use the builtin out vars instead: gl_FragColor, which is an alias for gl_FragData[0] and therefore the same as the binding above.
EDIT: Forgot to say, when using the deprecated builtins, you must have a compatibility declaration:
#version 330 compatibility
EDIT 2: To test the binding, I'd write a constant color to it, to rule out possible calculation errors, since these may not yield the expected result because of errors or zero input.
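Something along these lines (a sketch):
#version 330 core
layout(location = 0) out vec4 FragColor;
void main (void)
{
    // If the sphere shows up magenta, the output binding works and the
    // black result comes from the scattering math or its inputs instead.
    FragColor = vec4(1.0, 0.0, 1.0, 1.0);
}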