Getting World Position from Depth Buffer Value - c++

I've been working on a deferred renderer to do lighting with, and it works quite well, although it currently relies on a position buffer in my G-buffer. Lighting is done in world space.
I have tried to implement an algorithm to reconstruct the world-space positions from the depth buffer and the texture coordinates instead, but with no luck.
My vertex shader is nothing particularly special, but this is the part of my fragment shader in which I (attempt to) calculate the world space position:
// Inverse projection matrix
uniform mat4 projMatrixInv;
// Inverse view matrix
uniform mat4 viewMatrixInv;
// texture position from vertex shader
in vec2 TexCoord;
... other uniforms ...
void main() {
    // Recalculate the fragment position from the depth buffer
    float Depth = texture(gDepth, TexCoord).x;
    vec3 FragWorldPos = WorldPosFromDepth(Depth);

    ... fun lighting code ...
}
// Linearizes a Z buffer value
float CalcViewZ(float depth) {
    const float zFar = 100.0;
    const float zNear = 0.1;

    float linear = zNear / (zFar - depth * (zFar - zNear)) * zFar;

    // bias it from [0, 1] to [-1, 1]
    return (linear * 2.0) - 1.0;
}
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float ViewZ = CalcViewZ(depth);

    // Get clip space
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, ViewZ, 1);

    // Clip space -> View space
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    // View space -> World space
    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I still have my position buffer, and I sample it to compare against the calculated position; if the reconstruction were correct, the difference would be black everywhere:
vec3 actualPosition = texture(gPosition, TexCoord).rgb;
vec3 difference = abs(FragWorldPos - actualPosition);
FragColour = vec4(difference, 0.0);
However, what I get is nowhere near the expected result, and of course, lighting doesn't work:
(Try to ignore the blur around the boxes; I was messing around with something else at the time.)
What could cause these issues, and how could I get the position reconstruction from depth working successfully? Thanks.

You are on the right track, but you have not applied the transformations in the correct order.
A quick recap of what you need to accomplish here might help:
1. Given texture coordinates in [0, 1] and depth in [0, 1], calculate the clip-space position
   - Do not linearize the depth buffer
   - Output: w = 1.0 and x, y, z in [-w, w]
2. Transform from clip space to view space (reverse projection)
   - Use the inverse projection matrix
   - Perform the perspective divide
3. Transform from view space to world space (reverse viewing transform)
   - Use the inverse view matrix
The following changes should accomplish that:
// this is supposed to get the world position from the depth buffer
vec3 WorldPosFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

    return worldSpacePosition.xyz;
}
I would consider changing the name of CalcViewZ (...) though; that name is very misleading. Consider calling it something more appropriate, like CalcLinearZ (...).
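If you prefer, the two inverse matrices can also be folded into a single uniform (a sketch; viewProjMatrixInv is assumed to be set to inverse(projection * view) on the CPU side, which is not part of the original shader):
uniform mat4 viewProjMatrixInv;   // assumed: inverse(projection * view)

vec3 WorldPosFromDepth(float depth) {
    vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 worldSpacePosition = viewProjMatrixInv * clipSpacePosition;

    // A single perspective division recovers the world-space point.
    return worldSpacePosition.xyz / worldSpacePosition.w;
}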

Related

Drawing a sphere normal map in the fragment shader

I'm trying to draw a simple sphere with normal mapping in the fragment shader with GL_POINTS. At present, I simply draw one point on the screen and apply a fragment shader to "spherify" it.
However, I'm having trouble colouring the sphere correctly (or at least I think I am). It seems that I'm calculating the z correctly but when I apply the 'normal' colours to gl_FragColor it just doesn't look quite right (or is this what one would expect from a normal map?). I'm assuming there is some inconsistency between gl_PointCoord and the fragment coord, but I can't quite figure it out.
Vertex shader
precision mediump float;
attribute vec3 position;
void main() {
    gl_PointSize = 500.0;
    gl_Position = vec4(position.xyz, 1.0);
}
fragment shader
precision mediump float;
void main() {
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;
    float z = sqrt(1.0 - (pow(x, 2.0) + pow(y, 2.0)));

    vec3 position = vec3(x, y, z);
    float mag = dot(position.xy, position.xy);
    if(mag > 1.0) discard;

    vec3 normal = normalize(position);
    gl_FragColor = vec4(normal, 1.0);
}
Actual output:
Expected output:
The color channels are clamped to the range [0, 1]. (0, 0, 0) is black and (1, 1, 1) is completely white.
Since the normal vector is normalized, its components are in the range [-1, 1].
To get the expected result you have to map the normal vector from the range [-1, 1] to [0, 1]:
vec3 normal_col = normalize(position) * 0.5 + 0.5;
gl_FragColor = vec4(normal_col, 1.0);
If you use the absolute value instead, then a positive and a negative value of the same magnitude get the same color representation. The intensity of the color increases with the magnitude of the value:
vec3 normal_col = abs(normalize(position));
gl_FragColor = vec4(normal_col, 1.0);
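For context, here is roughly what the question's fragment shader looks like with that remap applied (a sketch; the discard test is also moved ahead of the square root, which is a small reordering of the original code rather than part of the answer):
precision mediump float;

void main() {
    float x = gl_PointCoord.x * 2.0 - 1.0;
    float y = gl_PointCoord.y * 2.0 - 1.0;

    // Discard fragments outside the unit circle before taking the square root,
    // so sqrt() never sees a negative argument.
    float mag = x * x + y * y;
    if (mag > 1.0) discard;

    float z = sqrt(1.0 - mag);
    vec3 normal = normalize(vec3(x, y, z));

    // Map the normal from [-1, 1] to [0, 1] for display.
    gl_FragColor = vec4(normal * 0.5 + 0.5, 1.0);
}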
First of all, the normal facing the camera, [0, 0, -1], should map to the RGB values [0.5, 0.5, 1.0]. You have to rescale things to move those negative values into the range [0, 1].
Second, the normals of a sphere do not change linearly, but follow a sine wave. So you need some trigonometry here. It makes sense to me to start with the perpendicular normal [0, 0, -1] and then rotate that normal by an angle, because that angle is what changes linearly.
As a result of playing around with this, I came up with the following:
http://glslsandbox.com/e#50268.3
which uses some rotation function from here: https://github.com/yuichiroharai/glsl-y-rotate

OpenGL Computing Normals and TBN Matrix from Depth Buffer (SSAO implementation)

I'm implementing SSAO in OpenGL, following this tutorial: John Chapman SSAO
Basically, the technique described uses a hemispheric kernel oriented along the fragment's normal. The view-space z position of each sample is then compared to its screen-space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up inside geometry, so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation, with two small differences:
I'm not using a noise texture to rotate my kernel, so I have banding artifacts; that's fine for now.
I don't have access to a buffer with per-pixel normals, so I have to compute my normal and TBN matrix using only the depth buffer.
The algorithm seems to be working: I can see fragments being occluded, BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK, but something must be wrong, as my kernel doesn't seem to be properly aligned, causing samples to end up inside the faces.
Screenshots are with a kernel of 8 samples and a radius of 0.1. The first is the result of the SSAO pass only, and the second one is a debug render of the generated normals.
Here is the code for the function that computes the Normal and TBN Matrix
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 x = vec3(uv.x, 0., ld);
    vec3 y = vec3(0., uv.y, ld);
    x = dFdx(x);
    y = dFdy(y);
    x = normalize(x);
    y = normalize(y);
    vec3 normal = normalize(cross(x, y));
    return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"
in vec2 vertTexcoord;
uniform sampler2D depthTex;
const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];
// Kernel Radius in view space (meters)
const float KERNEL_RADIUS = .1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;
out vec4 FragColor;
void main()
{
    // Get the current depth of the current pixel from the depth buffer (stored in the red channel)
    float originDepth = texture(depthTex, vertTexcoord).r;

    // Debug linear depth. Depth buffer is in the range [0, 1];
    float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);

    // Compute the view space position of this point from its depth value
    vec4 viewport = vec4(0,0,1,1);
    vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse, viewport, vertTexcoord, originDepth);

    mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
    vec3 normal = lookAt[2];

    float occlusion = 0.;
    for (int i=0; i<MAX_KERNEL_SIZE; i++)
    {
        // We align the kernel hemisphere on the fragment normal by multiplying all samples by the TBN
        vec3 samplePosition = lookAt * gKernel[i].xyz;

        // We want the sample position in view space, and we scale it with the kernel radius
        samplePosition = originPosition + samplePosition * KERNEL_RADIUS;

        // Now we need to get the sample position in screen space
        vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
        sampleOffset = cameraProjectionMatrix * sampleOffset;
        sampleOffset.xyz /= sampleOffset.w;

        // Now to get the depth buffer value at the projected sample position
        sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;

        // Now we can get the linear depth of the sample
        float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);

        // Now we need to do a range check to make sure that objects
        // outside of the kernel radius are not taken into account
        float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;

        // If the fragment depth is in front, then it's occluding
        occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
    }

    occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
    FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
    // Compute the normal and TBN matrix
    float ld = -getLinearDepth(depthTex, uv);
    vec3 a = vec3(uv, ld);
    vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
    vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
    //x = dFdx(x);
    //y = dFdy(y);
    //x = normalize(x);
    //y = normalize(y);
    vec3 normal = normalize(cross(x - a, y - a));
    vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
    vec3 second_axis = cross(first_axis, normal);
    return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems: you are using texture coordinates in combination with the linear depth. Imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But then imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference from the neighbouring pixel, the surface far away from the camera will have a greater linear depth difference over one pixel than the surface close to the camera, while the uv.x / uv.y derivatives will have the same value. That means you will get different normals depending on the distance from the camera.
The solution is to calculate the view-space coordinate and use its derivatives to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    float ld = -getLinearDepth(depthTex, uv);

    /// I assume ld is negative for fragments in front of the camera,
    /// not sure how getLinearDepth is implemented
    vec3 z_scaled_view = (view / view.z) * ld;

    return z_scaled_view;
}

mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
    vec3 viewPos = viewFromDepth(depthTex, uv, view);

    vec3 view_normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
    vec3 first_axis = cross(view_normal, vec3(1.0, 0.0, 0.0));
    vec3 second_axis = cross(first_axis, view_normal);

    return mat3(view_normal, normalize(first_axis), normalize(second_axis));
}
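For completeness, one way to obtain the view vector this function expects might be to unproject the fragment's texture coordinate with the question's cameraProjectionMatrixInverse uniform (a sketch; viewRayFromUV is a hypothetical helper, and any point along the view ray works because viewFromDepth rescales it to the sampled depth):
vec3 viewRayFromUV(in vec2 uv)
{
    // Unproject to a point on the near plane in view space; the direction from
    // the camera (at the origin in view space) through this point is the view ray.
    vec4 clip = vec4(uv * 2.0 - 1.0, -1.0, 1.0);
    vec4 viewPos = cameraProjectionMatrixInverse * clip;
    return viewPos.xyz / viewPos.w;
}

// In main(), the TBN could then be built as:
// mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord,
//                                         viewRayFromUV(vertTexcoord));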

How do I convert gl_FragCoord to a world space point in a fragment shader?

My understanding is that you can convert gl_FragCoord to a point in world coordinates in the fragment shader if you have the inverse of the view-projection matrix, the screen width, and the screen height. First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates by dividing by the width and height respectively, then scaling and offsetting them into the range [-1, 1]. Next, you transform by the inverse view-projection matrix to get a world-space point, which you can use only after dividing by the w component.
Below is the fragment shader code I have that isn't working. Note inverse_proj is actually set to the inverse view projection matrix:
#version 450
uniform mat4 inverse_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2,
        (gl_FragCoord.y / screen_height - 0.5) * 2,
        0,
        1);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_proj * ndc;
    vec3 view = (1 / ndc.w * clip).xyz;

    // ...
}
First, you convert gl_FragCoord.x and gl_FragCoord.y from screen space to normalized device coordinates
While simultaneously ignoring the fact that NDC space is three-dimensional (as is window space). You also forgot that the transformation from clip-space to NDC space involved a division, which you did not undo. Well, you did kinda try to undo it, but after transforming by the inverse clip transformation.
Undoing the vertex post-processing transformations uses all four components of gl_FragCoord (though you could make do with just 3). The first step is undoing the viewport transform, which requires access to the parameters given to glViewport and glDepthRange.
That gives you the NDC coordinate. Then you have to undo the perspective divide. gl_FragCoord.w is given the value 1/clipW. And clipW was the divisor in that operation. So you divide by gl_FragCoord.w to get back into clip space.
From there, you can multiply by the inverse of the projection matrix. Though if you want world-space, the projection matrix you invert must be a world-to-projection, rather than just pure projection (which is normally camera-to-projection).
In-code:
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;
vec4 eyePos = invPersMatrix * clipPos;
Where viewport is a uniform containing the four parameters specified by the glViewport function, in the same order as given to that function.
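Following the note above about world space, the last line could instead multiply by the inverse of the full world-to-projection matrix (a sketch; invViewProjMatrix is an assumed uniform set to inverse(projection * view), not something from the answer's code):
// invViewProjMatrix = inverse(projection * view), an assumed uniform.
vec4 worldPos = invViewProjMatrix * clipPos;

// clipPos already carries the correct w, so worldPos.w comes out at ~1.0;
// the divide just makes the expression robust.
vec3 worldPosition = worldPos.xyz / worldPos.w;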
I figured out the problems with my code. First, as Nicol pointed out, gl_FragCoord.z (depth) needs to be converted from screen coordinates to NDC. Also, there was a mistake in the original code where I wrote 1 / ndc.w * clip instead of clip / clip.w.
As noted by BDL, it would be more efficient to pass the world position as a varying to the fragment shader. However, the code below is a short way to achieve the desired result entirely in the fragment shader (e.g. for screen-space programs that don't have a world position per fragment and you want the view vector per fragment).
#version 450
uniform mat4 inverse_view_proj;
uniform float screen_width;
uniform float screen_height;
out vec4 fragment;
void main()
{
    // Convert screen coordinates to normalized device coordinates (NDC)
    vec4 ndc = vec4(
        (gl_FragCoord.x / screen_width - 0.5) * 2.0,
        (gl_FragCoord.y / screen_height - 0.5) * 2.0,
        (gl_FragCoord.z - 0.5) * 2.0,
        1.0);

    // Convert NDC through inverse clip coordinates to view coordinates
    vec4 clip = inverse_view_proj * ndc;
    vec3 vertex = (clip / clip.w).xyz;

    // ...
}
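As a usage example for the screen-space case mentioned above, the per-fragment view vector could then be formed from the reconstructed point (a sketch; camera_position is an assumed uniform holding the world-space camera position, which is not part of the original shader):
// Direction from the camera to the reconstructed world-space point.
vec3 view_dir = normalize(vertex - camera_position);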

SSAO not displaying correct results, mostly no visible occlusion

I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
    vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position (to sample texture) (to get position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }

    occlusion = 1.0 - (occlusion / 64.0f);
    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result, however, is not pleasing. The occlusion buffer is mostly white and doesn't show any occlusion. If I move really close to an object, I can see some weird noise-like results, as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment, I simply take those and multiply them by the view matrix to transform them to view space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to the [-1, 1] range, and multiply them by transpose(inverse(mat3(..))) of the view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent space to view space (the samples are random in [-1, 1] on the x and y axes and [0, 1] on the z axis) and translate them to the fragment's current view-space position (origin).
I then sample from the linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and accumulate occlusion. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.

OpenGL, target spot-light "following me around the room"!

I'm implementing a target spotlight. I have the light cone, fall-off, and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
    // Compute vertex normal in eye space.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;

    // Compute position in eye space.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);

    // Compute vector between light and vertex.
    attrib_Fragment_Light = Light_Position - position.xyz;

    // Compute spot-light cone direction vector.
    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);

    // Compute vector from eye to vertex.
    attrib_Fragment_Eye = -position.xyz;

    // Output texture coord.
    attrib_Fragment_Texture = attrib_Texture;

    // Return position.
    gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at of course). Both position and lookAt are already in eye space. I computed eye space CPU-side by subtracting the camera position from them both.
In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform the vector as well and if so by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
    // Compute N dot L.
    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);

    float NdotL = clamp(dot(L,N), 0.0, 1.0);
    float NdotH = clamp(dot(N,H), 0.0, 1.0);

    // Compute ambient term.
    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;

    // Diffuse.
    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;

    // Specular.
    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;

    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
    float d = length(-attrib_Fragment_Light);
    float attenuation = smoothstep( Light_Attenuation_Max,
                                    Light_Attenuation_Min,
                                    d);

    // Adjust attenuation based on light cone.
    vec3 S = normalize(attrib_Fragment_Light_Direction);
    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);

    // Final colour.
    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye-space CPU-side. So no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, the light is in eye space, the vertex is transformed into eye space, and the lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks here like you're subtracting Light_Position, which I assume you want to be a world space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable with the Model matrix, not the ModelView matrix, and using this vector as the basis for your light computation.
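A minimal sketch of what that might look like in the vertex shader, assuming a separate Model_World uniform and a world-space Light_Position (neither exists under those exact names in the original shader):
// Transform the vertex into world space only (no view transform).
vec4 worldPosition = Model_World * vec4(attrib_Position, 1.0);

// Light_Position must also be a world-space point for this subtraction to be meaningful.
attrib_Fragment_Light = Light_Position - worldPosition.xyz;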
You can't compute the eye-space position by just subtracting the camera position; you have to multiply by the modelview matrix.
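To illustrate the difference (a sketch with assumed names viewMatrix, lightPosWorld, and cameraPosition, none of which come from the code above):
// Correct: the full view transform applies the camera's rotation as well as its translation.
vec3 lightPosEye = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz;

// Incorrect: subtracting only undoes the translation and ignores the camera's
// orientation, which is exactly what makes the light appear to follow the camera.
vec3 lightPosWrong = lightPosWorld - cameraPosition;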