I am attempting to reconstruct my fragment's position from a depth value stored in a GL_DEPTH_ATTACHMENT. To do this, I linearize the depth, then multiply it by a ray from the camera position to the corresponding point on the far plane.
This method is the second one described here. To get the ray from the camera to the far plane, I retrieve rays to the four corners of the far plane, pass them to my vertex shader, and then interpolate them in the fragment shader. I am using the following code to get the rays from the camera to the far plane's corners in world space.
std::vector<float> Camera::GetFlatFarFrustumCorners() {
// rotation is the orientation of my camera in a quaternion.
glm::quat inverseRotation = glm::inverse(rotation);
glm::vec3 localUp = glm::normalize(inverseRotation * glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec3 localRight = glm::normalize(inverseRotation * glm::vec3(1.0f, 0.0f, 0.0f));
float farHeight = 2.0f * tan(90.0f / 2) * 100.0f;
float farWidth = farHeight * aspect;
// 100.0f is the distance to the far plane. position is the location of the camera in world space.
glm::vec3 farCenter = position + glm::vec3(0.0f, 0.0f, -1.0f) * 100.0f;
glm::vec3 farTopLeft = farCenter + (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farTopRight = farCenter + (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
glm::vec3 farBottomLeft = farCenter - (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farBottomRight = farCenter - (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
return {
farTopLeft.x, farTopLeft.y, farTopLeft.z,
farTopRight.x, farTopRight.y, farTopRight.z,
farBottomLeft.x, farBottomLeft.y, farBottomLeft.z,
farBottomRight.x, farBottomRight.y, farBottomRight.z
};
}
Is this a correct way to retrieve the corners of the far plane in world space?
When I use these corners with my shaders, the results are incorrect, and what I get seems to be in view space. These are the shaders I am using:
Vertex Shader:
layout(location = 0) in vec2 vp;
layout(location = 1) in vec3 textureCoordinates;
uniform vec3 farFrustumCorners[4];
uniform vec3 cameraPosition;
out vec2 st;
out vec3 frustumRay;
void main () {
st = textureCoordinates.xy;
gl_Position = vec4 (vp, 0.0, 1.0);
frustumRay = farFrustumCorners[int(textureCoordinates.z)-1] - cameraPosition;
}
Fragment Shader:
in vec2 st;
in vec3 frustumRay;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform vec3 cameraPosition;
uniform vec3 lightPosition;
out vec3 color;
void main () {
// Far and near distances; Used to linearize the depth value.
float f = 100.0;
float n = 0.1;
float depth = (2 * n) / (f + n - (texture(depthTexture, st).x) * (f - n));
vec3 position = cameraPosition + (normalize(frustumRay) * depth);
vec3 normal = texture(normalTexture, st).xyz;
float k = 0.00001;
vec3 distanceToLight = lightPosition - position;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
vec3 diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz+diffuse, gamma);
//color = texture(colorTexture, st);
//colour.r = (2 * n) / (f + n - texture( tex, st ).x * (f - n));
//colour.g = (2 * n) / (f + n - texture( tex, st ).y* (f - n));
//colour.b = (2 * n) / (f + n - texture( tex, st ).z * (f - n));
}
This is what my scene's lighting looks like under these shaders:
I am pretty sure that this is the result of either my reconstructed position being completely wrong, or it being in the wrong space. What is wrong with my reconstruction, and what can I do to fix it?
What you will first want to do is develop a temporary addition to your G-Buffer setup that stores the initial position of each fragment in world/view space (really, whatever space you are trying to reconstruct here). Then write a shader that does nothing but reconstruct these positions from the depth buffer. Set everything up so that half of your screen displays the original G-Buffer position and the other half displays your reconstructed position. You should be able to immediately spot discrepancies this way.
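For example, the comparison pass can be as simple as something like this (just a sketch; positionTexture is an assumed name for the temporary attachment, and reconstruct_pos() stands for whatever reconstruction you are testing, assumed here to return a vec3):
uniform sampler2D positionTexture; // temporary G-Buffer attachment holding the real positions
in vec2 st;
out vec3 color;
void main () {
    vec3 stored        = texture(positionTexture, st).xyz;
    vec3 reconstructed = reconstruct_pos();   // your reconstruction under test
    // Left half shows the stored positions, right half the reconstruction;
    // any discrepancy shows up as a visible seam or color shift at the split.
    color = (st.x < 0.5) ? stored : reconstructed;
}
You may want to scale or abs() the positions into a 0..1 range so they are actually visible as colors.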
That said, you might want to take a look at an implementation I have used in the past to reconstruct (object space) position from the depth buffer. It basically gets you into view space first, then uses the inverse modelview matrix to go to object space. You can adjust it for world space trivially. It is probably not the most flexible implementation, what with FOV being hard-coded and all, but you can easily modify it to use uniforms instead...
Trimmed down fragment shader:
flat in mat4 inv_mv_mat;
in vec2 uv;
...
float linearZ (float z)
{
#ifdef INVERT_NEAR_FAR
const float f = 2.5;
const float n = 25000.0;
#else
const float f = 25000.0;
const float n = 2.5;
#endif
return n / (f - z * (f - n)) * f;
}
vec4
reconstruct_pos (float depth)
{
depth = linearZ (depth);
vec4 pos = vec4 (uv * depth, -depth, 1.0);
vec4 ret = (inv_mv_mat * pos);
return ret / ret.w;
}
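For reference, the uv * depth trick works because of how a standard perspective projection maps view space to NDC. With a vertical FOV of fovy and aspect ratio a:
x_view = x_ndc * a * tan(fovy / 2) * (-z_view)
y_view = y_ndc * tan(fovy / 2) * (-z_view)
linearZ returns the positive view-space distance (i.e. -z_view), so multiplying the precomputed uv (NDC scaled by inv_focal_len) by it recovers the view-space position, and inv_mv_mat then carries that on to object space.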
It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this:
#version 150 core
in vec4 vtx_pos;
in vec2 vtx_st;
uniform mat4 modelview_mat; // Matrix used when the G-Buffer was built
uniform mat4 camera_matrix; // Matrix used to stretch the G-Buffer over the viewport
uniform float buffer_res_x;
uniform float buffer_res_y;
out vec2 tex_st;
flat out mat4 inv_mv_mat;
out vec2 uv;
// Hard-Coded 45 degree FOV
//const float fovy = 0.78539818525314331; // NV pukes on the line below!
//const float fovy = radians (45.0);
//const float tan_half_fovy = tan (fovy * 0.5);
const float tan_half_fovy = 0.41421356797218323;
float aspect = buffer_res_x / buffer_res_y;
vec2 inv_focal_len = vec2 (tan_half_fovy * aspect,
tan_half_fovy);
const vec2 uv_scale = vec2 (2.0, 2.0);
const vec2 uv_translate = vec2 (1.0, 1.0);
void main (void)
{
inv_mv_mat = inverse (modelview_mat);
tex_st = vtx_st;
gl_Position = camera_matrix * vtx_pos;
uv = (vtx_st * uv_scale - uv_translate) * inv_focal_len;
}
Depth range inversion is something you might find useful for deferred shading. Normally a perspective depth buffer gives you more precision than you need at close range and not enough far away for quality reconstruction. If you flip things on their head by inverting the depth range, you can even things out a little while still using the hardware depth buffer. This is discussed in detail here.
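Host-side, the inverted range takes very little (a sketch; this assumes you also build the projection matrix with near and far swapped, matching the INVERT_NEAR_FAR branch above):
glClearDepth(0.0);        // the cleared ("far") value is now 0 instead of 1
glDepthFunc(GL_GREATER);  // nearer fragments now have the larger depth value
// With GL 4.5 / ARB_clip_control and a floating-point depth buffer you can go further:
// glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);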
I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere facing away from the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color; //color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix; //just the camera matrix
out vec4 col; // color of vertex
out vec4 posi; // position of vertex
void main() {
vec4 p = view_matrix * vec4(position.xyz, 1.0);
gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
gl_Position = p;
col = color;
posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col; // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
vec3 oc = ro - sph.xyz;
float b = dot( oc, rd );
float c = dot( oc, oc ) - sph.w*sph.w;
float h = b*b - c;
if( h<0.0 ) return -1.0;
return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
return normalize(pos-sph.xyz);
}
void main() {
vec4 c = clamp(col, 0.0, 1.0);
vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
vec3 ro = vec3(0.0, 0.0, -960.0 );
vec3 rd = normalize(vec3(p.x, p.y,960.0));
vec3 lig = normalize(vec3(0.6,0.3,0.1));
vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
float t = sphIntersect(ro, rd, k);
vec3 ps = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
if(t < 0.0) c = vec4(1.0);
else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
f_color = c;
gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (uniform) so you can create the ray direction and start position for each fragment. Something like my vertex shader in here:
Reflection and refraction impossible without recursive ray tracing?
Create BBOX in Geometry shader and convert your POINT to QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
Raytrace the sphere in the fragment shader
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). In case of no intersection you have to discard; the fragment!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard; fragments with no intersection, so you overwrite already-rendered content around the most recently rendered spheres with the background color, leaving only a single sphere visible per QUAD area no matter how many spheres are really there...
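Putting those points together, the fragment stage ends up roughly like this (a sketch reusing the question's names; proj_matrix is an assumed uniform holding the same projection used for the vertices, and the * 0.5 + 0.5 mapping assumes the default depth range):
float t = sphIntersect(ro, rd, k);            // k = sphere center (xyz) + radius (w)
if (t < 0.0) discard;                         // no hit: leave color and depth untouched
vec3 ps  = ro + t * rd;                       // closer intersection point
vec3 nor = sphNormal(ps, k);
vec4 clip = proj_matrix * vec4(ps, 1.0);      // depth of the actual hit point
gl_FragDepth = (clip.z / clip.w) * 0.5 + 0.5; // so overlapping spheres depth-test correctly
f_color = vec4(col.rgb * clamp(dot(nor, lig), 0.0, 1.0), 1.0);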
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniform structs can be used.
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2/l2);
float radius = p.w * sqrt(k);
if(l2 < r2) {
p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
radius = p.w;
k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for whatever 2D shape you want the billboard's vertices to form, as sketched below. Don't forget to multiply each vertex by a perspective matrix afterwards to get its final position. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
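For example, a geometry shader could expand each point into such a quad (a sketch; vs_p, vs_hx, vs_hy and proj_matrix are assumed names for the values the vertex stage above produces and for your projection matrix):
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
in vec4 vs_p[];     // shifted sphere center in camera space (already scaled by k)
in vec3 vs_hx[];    // horizontal basis vector
in vec3 vs_hy[];    // vertical basis vector
uniform mat4 proj_matrix;
out vec2 quad_uv;   // handy for the ray setup in the fragment shader
void main() {
    const vec2 c[4] = vec2[4](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                              vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i) {
        quad_uv = c[i];
        vec3 corner = vs_p[0].xyz + c[i].x * vs_hx[0] + c[i].y * vs_hy[0];
        gl_Position = proj_matrix * vec4(corner, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}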
After changing my current deferred renderer to use a logarithmic depth buffer, I cannot work out, for the life of me, how to reconstruct the world-space depth from the depth buffer values.
When I had the default OpenGL z/w depth written, I could easily calculate this value by transforming from window space to NDC space and then performing the inverse perspective transformation.
I did this all in the second pass fragment shader:
uniform sampler2D depth_tex;
uniform mat4 inv_view_proj_mat;
in vec2 uv_f;
vec3 reconstruct_pos(){
float z = texture(depth_tex, uv_f).r;
vec4 pos = vec4(uv_f, z, 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
and got a result that looked pretty correct:
But now the road to a simple z value is not so easy (it doesn't seem like it should be so hard, either).
My vertex shader for my first pass with log depth:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 uv;
uniform mat4 mvp_mat;
uniform float FC;
out vec2 uv_f;
out float logz_f;
out float FC_2_f;
void main(){
gl_Position = mvp_mat * vec4(pos, 1.0);
logz_f = 1.0 + gl_Position.w;
gl_Position.z = (log2(max(1e-6, logz_f)) * FC - 1.0) * gl_Position.w;
FC_2_f = FC * 0.5;
}
And my fragment shader:
#version 330 core
#extension GL_ARB_shading_language_420pack : require
// other uniforms and output variables
in vec2 uv_f;
in float logz_f;
in float FC_2_f;
void main(){
gl_FragDepth = log2(logz_f) * FC_2_f;
}
I have tried a few different approaches to get back the z-position correctly, all failing.
If I redefine my reconstruct_pos in the second pass to be:
vec3 reconstruct_pos(){
vec4 pos = vec4(uv_f, get_depth(), 1.0) * 2.0 - 1.0;
pos = inv_view_proj_mat * pos;
return pos.xyz / pos.w;
}
This is my current attempt at reconstructing Z:
uniform float FC;
float get_depth(){
float log2logz_FC_2 = texture(depth_tex, uv_f).r;
float logz = pow(2, log2logz_FC_2 / (FC * 0.5));
float pos_z = log2(max(1e-6, logz)) * FC - 1.0; // pos.z
return pos_z;
}
Explained:
log2logz_FC_2: the value written to depth buffer, so log2(1.0 + gl_Position.w) * (FC / 2)
logz: simply 1.0 + gl_Position.w
pos_z: the value of gl_Position.z before the perspective divide
return value: gl_Position.z
Of course, that's just my working. I'm not sure what these values actually hold in the end, because I think I've screwed up some of the math or not correctly understood the transformations going on.
What is the correct way to get my world-space Z position from this logarithmic depth buffer?
In the end I was going about this all wrong. The way to get the world-space position back using a log buffer is:
Retrieve the depth from the texture
Reconstruct gl_Position.w
Linearize the reconstructed depth
Transform to world space
Here's my implementation in glsl:
in vec2 uv_f;
uniform float nearz;
uniform float farz;
uniform mat4 inv_view_proj_mat;
float linearize_depth(in float depth){
float a = farz / (farz - nearz);
float b = farz * nearz / (nearz - farz);
return a + b / depth;
}
float reconstruct_depth(){
float depth = texture(depth_tex, uv_f).r;
return pow(2.0, depth * log2(farz + 1.0)) - 1.0;
}
vec3 reconstruct_world_pos(){
vec4 wpos =
inv_view_proj_mat *
(vec4(uv_f, linearize_depth(reconstruct_depth()), 1.0) * 2.0 - 1.0);
return wpos.xyz / wpos.w;
}
Which gives me the same result (but with better precision) as when I was using the default OpenGL depth buffer.
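For reference, reconstruct_depth is just the inverse of what the first pass wrote. Assuming FC = 2.0 / log2(farz + 1.0) (which is what makes gl_FragDepth = log2(1.0 + w) * FC / 2 land in [0, 1]), the stored depth d relates to the clip-space w as:
d = log2(1 + w) / log2(farz + 1)   =>   w = 2^(d * log2(farz + 1)) - 1
linearize_depth then maps that w (the positive view-space distance) back onto the conventional hyperbolic 0..1 depth, after which the * 2.0 - 1.0 and inverse view-projection steps are the same as with a regular depth buffer.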
I want to implement screen-space fluid rendering, but I am having trouble getting the depth information.
The depth image I get is too white. I think this is because the depth is not linear. When I look at the image very closely it looks reasonable and I can see the depth change. Within a single circle, the colour at the centre is black and becomes whiter and whiter towards the edge.
So I linearize the depth. It looks better, but the depth still does not change within a single circle. Here is my vertex shader:
#version 400
uniform float pointScale;
layout(location=0) in vec3 position;
uniform mat4 viewMat, projMat, modelMat;
uniform mat3 normalMat;
out vec3 fs_PosEye;
out vec4 fs_Color;
void main()
{
vec3 posEye = (viewMat * vec4(position.xyz, 1.0f)).xyz;
float dist = length(posEye);
gl_PointSize = 1.0f * (pointScale/dist);
fs_PosEye = posEye;
fs_Color = vec4(0.5,0.5,0.5,1.0f);
gl_Position = projMat * viewMat * vec4(position.xyz, 1.0);
}
here is my fragment shader:
#version 400
uniform mat4 viewMat, projMat, modelMat;
uniform float pointScale; // scale to calculate size in pixels
in vec3 fs_PosEye;
in vec4 fs_Color;
out vec4 out_Color;
out vec4 out_Position;
float linearizeDepth(float exp_depth, float near, float far) {
return (2 * near) / (far + near - exp_depth * (far - near));
}
void main()
{
// calculate normal from texture coordinates
vec3 N;
N.xy = gl_PointCoord.xy* 2.0 - 1;
float mag = dot(N.xy, N.xy);
if (mag > 1.0) discard; // kill pixels outside circle
N.z = sqrt(1.0-mag);
//calculate depth
vec4 pixelPos = vec4(fs_PosEye + normalize(N)*1.0f,1.0f);
vec4 clipSpacePos = projMat * pixelPos;
//clipSpacePos.z = linearizeDepth(clipSpacePos, 1, 400);
float depth = clipSpacePos.z / clipSpacePos.w;
//depth = linearizeDepth(depth, 1, 100);
gl_FragDepth = depth;
float diffuse = max(0.0, dot(N, vec3(0.0,0.0,1.0)));
out_Color = diffuse * vec4(0.0f, 0.0f, 1.0f, 1.0f);
out_Color = vec4(N, 1.0);
out_Color = vec4(depth, depth, depth, 1.0);
}
I want to interpolate based on the normal after linearizing.
I'm currently trying to convert a shader by Sean O'Neil to version 330 so I can try it out in an application I'm writing. I'm having some issues with deprecated functions, so I replaced them, but I'm almost completely new to GLSL, so I probably made a mistake somewhere.
Original shaders can be found here:
http://www.gamedev.net/topic/592043-solved-trying-to-use-atmospheric-scattering-oneill-2004-but-get-black-sphere/
My horrible attempt at converting them:
Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 2) in vec3 vertexNormal_modelspace;
uniform vec3 v3CameraPos; // The camera's current position
uniform vec3 v3LightPos; // The direction vector to the light source
uniform vec3 v3InvWavelength; // 1 / pow(wavelength, 4) for the red, green, and blue channels
uniform float fCameraHeight; // The camera's current height
uniform float fCameraHeight2; // fCameraHeight^2
uniform float fOuterRadius; // The outer (atmosphere) radius
uniform float fOuterRadius2; // fOuterRadius^2
uniform float fInnerRadius; // The inner (planetary) radius
uniform float fInnerRadius2; // fInnerRadius^2
uniform float fKrESun; // Kr * ESun
uniform float fKmESun; // Km * ESun
uniform float fKr4PI; // Kr * 4 * PI
uniform float fKm4PI; // Km * 4 * PI
uniform float fScale; // 1 / (fOuterRadius - fInnerRadius)
uniform float fScaleDepth; // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
uniform float fScaleOverScaleDepth; // fScale / fScaleDepth
const int nSamples = 2;
const float fSamples = 2.0;
invariant out vec3 v3Direction;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightPosition_worldspace;
out vec4 dgl_SecondaryColor;
out vec4 dgl_Color;
float scale(float fCos)
{
float x = 1.0 - fCos;
return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}
void main(void)
{
//gg_FrontColor = vec3(1.0, 0.0, 0.0);
//gg_FrontSecondaryColor = vec3(0.0, 1.0, 0.0);
// Get the ray from the camera to the vertex, and its length (which is the far point of the ray passing through the atmosphere)
vec3 v3Pos = vertexPosition_modelspace;
vec3 v3Ray = v3Pos - v3CameraPos;
float fFar = length(v3Ray);
v3Ray /= fFar;
// Calculate the ray's starting position, then calculate its scattering offset
vec3 v3Start = v3CameraPos;
float fHeight = length(v3Start);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fCameraHeight));
float fStartAngle = dot(v3Ray, v3Start) / fHeight;
float fStartOffset = fDepth*scale(fStartAngle);
// Initialize the scattering loop variables
gl_FrontColor = vec4(0.0, 0.0, 0.0, 0.0);
gl_FrontSecondaryColor = vec4(0.0, 0.0, 0.0, 0.0);
float fSampleLength = fFar / fSamples;
float fScaledLength = fSampleLength * fScale;
vec3 v3SampleRay = v3Ray * fSampleLength;
vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;
// Now loop through the sample rays
vec3 v3FrontColor = vec3(0.2, 0.1, 0.0);
for(int i=0; i<nSamples; i++)
{
float fHeight = length(v3SamplePoint);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
float fLightAngle = dot(v3LightPos, v3SamplePoint) / fHeight;
float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
float fScatter = (fStartOffset + fDepth*(scale(fLightAngle) - scale(fCameraAngle)));
vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
v3SamplePoint += v3SampleRay;
}
// Finally, scale the Mie and Rayleigh colors and set up the varying variables for the pixel shader
gl_FrontSecondaryColor.rgb = v3FrontColor * fKmESun;
gl_FrontColor.rgb = v3FrontColor * (v3InvWavelength * fKrESun);
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
v3Direction = v3CameraPos - v3Pos;
dgl_SecondaryColor = gl_FrontSecondaryColor;
dgl_Color = gl_FrontColor;
}
Fragment shader:
#version 330 core
out vec4 dgl_FragColor;
uniform vec3 v3LightPos;
uniform float g;
uniform float g2;
invariant in vec3 v3Direction;
in vec4 dgl_SecondaryColor;
in vec4 dgl_Color;
uniform mat4 MV;
void main (void)
{
float fCos = dot(v3LightPos, v3Direction) / length(v3Direction);
float fMiePhase = 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos*fCos) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
dgl_FragColor = dgl_Color + fMiePhase * dgl_SecondaryColor;
dgl_FragColor.a = dgl_FragColor.b;
}
I wrote a function to render a sphere, and I'm trying to render this shader onto an inverted version of it. The sphere works completely fine, with normals and all. My problem is that the sphere gets rendered all black, so the shader is not working.
Edit: Got the sun to draw, but the sky is still all black.
This is how I'm trying to render the atmosphere inside my main rendering loop.
glUseProgram(programAtmosphere);
glBindTexture(GL_TEXTURE_2D, 0);
//######################
glUniform3f(v3CameraPos, getPlayerPos().x, getPlayerPos().y, getPlayerPos().z);
glm::vec3 lightDirection = lightPos/length(lightPos);
glUniform3f(v3LightPos, lightDirection.x , lightDirection.y, lightDirection.z);
glUniform3f(v3InvWavelength, 1.0f / pow(0.650f, 4.0f), 1.0f / pow(0.570f, 4.0f), 1.0f / pow(0.475f, 4.0f));
glUniform1fARB(fCameraHeight, 10.0f+length(getPlayerPos()));
glUniform1fARB(fCameraHeight2, (10.0f+length(getPlayerPos()))*(10.0f+length(getPlayerPos())));
glUniform1fARB(fInnerRadius, 10.0f);
glUniform1fARB(fInnerRadius2, 100.0f);
glUniform1fARB(fOuterRadius, 10.25f);
glUniform1fARB(fOuterRadius2, 10.25f*10.25f);
glUniform1fARB(fKrESun, 0.0025f * 20.0f);
glUniform1fARB(fKmESun, 0.0015f * 20.0f);
glUniform1fARB(fKr4PI, 0.0025f * 4.0f * 3.141592653f);
glUniform1fARB(fKm4PI, 0.0015f * 4.0f * 3.141592653f);
glUniform1fARB(fScale, 1.0f / 0.25f);
glUniform1fARB(fScaleDepth, 0.25f);
glUniform1fARB(fScaleOverScaleDepth, 4.0f / 0.25f );
glUniform1fARB(g, -0.990f);
glUniform1f(g2, -0.990f * -0.990f);
Any ideas?
Edit: updated the code, and added a picture.
I think the problem is that you write to 'FragColor', which may be a 'dead end' output variable in the fragment shader, since one must explicitly bind it to a color number before linking the program:
glBindFragDataLocation(programAtmosphere,0,"FragColor");
or using this in a shader:
layout(location = 0) out vec4 FragColor;
You may try to use the built-in output variables instead: gl_FragColor, which is an alias for gl_FragData[0] and therefore the same as the binding above.
EDIT: Forgot to say: when using the deprecated built-ins, you must have a compatibility declaration:
#version 330 compatibility
EDIT 2: To test the binding, I'd write a constant color to it, to rule out possible calculation errors, since those may not yield the expected result because of errors or zero input.
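Concretely, that test could look like this on the application side (a sketch; the output in your posted fragment shader is called dgl_FragColor, so bind that name, and note the binding only takes effect when the program is (re)linked):
glBindFragDataLocation(programAtmosphere, 0, "dgl_FragColor");
glLinkProgram(programAtmosphere);
Then temporarily end the fragment shader with dgl_FragColor = vec4(1.0, 0.0, 1.0, 1.0); if the sphere turns magenta, the output binding is fine and the problem is in the scattering math or its inputs.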
I am writing a GLSL shader that simulates chromatic aberration for simple objects. I am staying OpenGL 2.0 compatible, so I use the built-in OpenGL matrix stack. This is the simple vertex shader:
uniform vec3 cameraPos;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
normal = gl_NormalMatrix * gl_Normal;
gl_Position = ftransform();
}
The cameraPos uniform is the position of the camera in model space, as one might imagine. Here is the fragment shader:
const float etaR = 1.14;
const float etaG = 1.12;
const float etaB = 1.10;
const float fresnelPower = 2.0;
const float F = ((1.0 - etaG) * (1.0 - etaG)) / ((1.0 + etaG) * (1.0 + etaG));
uniform samplerCube environment;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec3 i = normalize(incident);
vec3 n = normalize(normal);
float ratio = F + (1.0 - F) * pow(1.0 - dot(-i, n), fresnelPower);
vec3 refractR = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaR), 1.0));
vec3 refractG = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaG), 1.0));
vec3 refractB = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaB), 1.0));
vec3 reflectDir = vec3(gl_TextureMatrix[0] * vec4(reflect(i, n), 1.0));
vec4 refractColor;
refractColor.ra = textureCube(environment, refractR).ra;
refractColor.g = textureCube(environment, refractG).g;
refractColor.b = textureCube(environment, refractB).b;
vec4 reflectColor;
reflectColor = textureCube(environment, reflectDir);
vec3 combinedColor = mix(refractColor, reflectColor, ratio).rgb;
gl_FragColor = vec4(combinedColor, 1.0);
}
The environment is a cube map that is rendered live from the drawn object's environment.
Under normal circumstances, the shader behaves (I think) as expected, yielding this result:
However, when the camera is rotated 180 degrees around its target, so that it now points at the object from the other side, the refracted/reflected image gets warped like so (this happens gradually for angles between 0 and 180 degrees, of course):
Similar artifacts appear when the camera is lowered/raised; it only seems to behave 100% correctly when the camera is directly over the target object (pointing towards negative Z, in this case).
I am having trouble figuring out which transformation in the shader that is responsible for this warped image, but it should be something obvious related to how cameraPos is handled. What is causing the image to warp itself in this way?
This looks suspect to me:
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
Is your cameraPos defined in world space? You're subtracting a view-space vector (position) from a supposedly world-space cameraPos vector. You either need to do the calculation in world space or in view space, but you can't mix them.
To do this correctly in world space you'll have to upload the model matrix separately to get the world space incident vector.
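A minimal sketch of the world-space variant, assuming you add a separate modelMatrix uniform and pass cameraPos in world space (both are assumptions, not part of the original shader):
uniform mat4 modelMatrix;   // assumed: the model-to-world matrix, uploaded separately
uniform vec3 cameraPos;     // now interpreted as a world-space position
varying vec3 incident;
varying vec3 normal;
void main(void) {
    vec4 worldPos = modelMatrix * gl_Vertex;
    incident = worldPos.xyz / worldPos.w - cameraPos;
    // Fine for rigid transforms; use the inverse-transpose of modelMatrix if there
    // is non-uniform scaling.
    mat3 modelRot = mat3(modelMatrix[0].xyz, modelMatrix[1].xyz, modelMatrix[2].xyz);
    normal = modelRot * gl_Normal;
    gl_Position = ftransform();
}
Note that the fragment shader's gl_TextureMatrix[0] would then be applied to world-space vectors, so it may need adjusting to match.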