How to change vertex position using GLSL - C++

I am trying to move an object depending on the camera position. Here is my vertex shader:
uniform mat4 osg_ViewMatrixInverse;
void main(){
    vec4 position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;
    if (camPos.z > 1000.0)
        position.z = position.z + 1.0;
    if (camPos.z > 5000.0)
        position.z = position.z + 10.0;
    if (camPos.z < 300.0)
        position.z = position.z + 300.0;
    gl_Position = position;
}
But when the camera's vertical position is less than 300 or more than 1000, the model simply disappears, though in the second case it should be moved by just one unit. I read that inside the shader the coordinates are different from world coordinates, which is why I am multiplying by the Projection and ModelView matrices to get world coordinates. Maybe I am wrong at this point? Forgive me if it's a simple question, but I couldn't find the answer.
UPDATE: camPos is in world coordinates, but position is not. Maybe it has to do with the fact that I am using osg_ViewMatrixInverse (passed by OpenSceneGraph) to get the camera position and the built-in gl_ProjectionMatrix and gl_ModelViewMatrix to get the vertex coordinates? How do I translate position into world coordinates?

The problem is that you are transforming the position into clip coordinates (by multiplying gl_Vertex by the projection and modelview matrices), then performing a world-coordinate operation on those clip coordinates, which does not give the results you want.
Simply perform your transformations before you multiply by the modelview and projection matrices.
uniform mat4 osg_ViewMatrixInverse;
void main() {
    vec4 position = gl_Vertex;
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;
    if (camPos.z > 1000.0)
        position.z = position.z + 1.0;
    if (camPos.z > 5000.0)
        position.z = position.z + 10.0;
    if (camPos.z < 300.0)
        position.z = position.z + 300.0;
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * position;
}

gl_Position is in clip space; each coordinate you output must lie between -gl_Position.w and gl_Position.w, or it will be clipped. If all of the coordinates for a primitive fall outside this range, then nothing will be drawn. The reason is that after the vertex shader completes, OpenGL divides the clip-space coordinates by w to produce normalized device coordinates (NDC) in the range [-1,1]. Anything outside this volume will not be on screen.
What you should actually do here is add these coordinates to your object-space position and then perform the transformation from object-space to clip-space. Colonel Thirty Two's answer already does a very good job of showing how to do this; I just wanted to explain exactly why you should not apply this offset to the clip-space coordinates.

Figured it out:
uniform mat4 osg_ViewMatrixInverse;
uniform mat4 osg_ViewMatrix;
void main(){
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;
    vec4 position_in_view_space = gl_ModelViewMatrix * gl_Vertex;
    vec4 position_in_world_space = osg_ViewMatrixInverse * position_in_view_space;
    if (camPos.z > 1000.0)
        position_in_world_space.z = position_in_world_space.z + 700.0;
    if (camPos.z > 5000.0)
        position_in_world_space.z = position_in_world_space.z + 1000.0;
    if (camPos.z < 300.0)
        position_in_world_space.z = position_in_world_space.z + 200.0;
    position_in_view_space = osg_ViewMatrix * position_in_world_space;
    vec4 position_in_object_space = gl_ModelViewMatrixInverse * position_in_view_space;
    gl_Position = gl_ModelViewProjectionMatrix * position_in_object_space;
}
One needs to transform gl_Vertex (which is in object-space coordinates) into world coordinates via view-space coordinates (maybe there is a direct conversion I don't see); then one can modify them and transform back into object-space coordinates.
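For what it's worth, a more direct route does exist: since ModelView = View * Model, the model (object-to-world) matrix can be reconstructed in the shader as osg_ViewMatrixInverse * gl_ModelViewMatrix. The sketch below is untested and only assumes the same OSG-provided uniforms used above; it should be equivalent to the version that round-trips through view and object space:
uniform mat4 osg_ViewMatrix;
uniform mat4 osg_ViewMatrixInverse;
void main() {
    vec3 camPos = osg_ViewMatrixInverse[3].xyz;
    // modelMatrix = inverse(view) * (view * model) = object-to-world
    mat4 modelMatrix = osg_ViewMatrixInverse * gl_ModelViewMatrix;
    vec4 position_in_world_space = modelMatrix * gl_Vertex;
    if (camPos.z > 1000.0)
        position_in_world_space.z += 700.0;
    if (camPos.z > 5000.0)
        position_in_world_space.z += 1000.0;
    if (camPos.z < 300.0)
        position_in_world_space.z += 200.0;
    // world -> clip, without going back through object space
    gl_Position = gl_ProjectionMatrix * osg_ViewMatrix * position_in_world_space;
}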

Related

GBUFFER Decal Projection and scaling

I have been working on projecting decals onto anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start by trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
This is all while in perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and, with it, calculate the position in world space using the inverse of the projection matrix.
Next, I translate this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1,1,1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the correct size and rotation of the screen point would be handled correctly, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility
out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;
uniform mat4 decal_matrix;
void main(void)
{
    gl_Position = decal_matrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
    positionWS = decal_matrix * gl_Vertex;
    positionSS = gl_Position;
    matPrjInv = inverse(gl_ModelViewProjectionMatrix);
    invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;
uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;
in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat; // inverse decal matrix
in mat4 matPrjInv; // inverse projection matrix
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}
vec2 postProjToScreen(vec4 position)
{
    vec2 screenPos = position.xy / position.w;
    return 0.5 * (vec2(screenPos.x, screenPos.y) + 1.0);
}
void main(){
    // Calculate UVs
    vec2 UV = postProjToScreen(positionSS);
    // Sample the depth from the depth sampler
    float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;
    // Recreate the world position out of the screen coordinates and the depth sample
    vec4 ScreenPosition;
    ScreenPosition.xy = UV * 2.0 - 1.0;
    ScreenPosition.z = Depth;
    ScreenPosition.w = 1.0;
    // Transform position from screen space to world space
    vec4 WorldPosition = matPrjInv * ScreenPosition;
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0;
    // Transform to the decal's original position and size (1 x 1 x 1 cube)
    WorldPosition = invd_mat * WorldPosition;
    clip(WorldPosition.xyz);
    // Get UVs for the textures
    WorldPosition.xy += 0.5;
    WorldPosition.y *= -1.0;
    vec4 bump = texture2D(normalMap, WorldPosition.xy);
    gColor = texture2D(colorMap, WorldPosition.xy);
    // Going to have to do decals in 2 passes..
    // Blend doesn't work with GBUFFER.
    // Lots more to sort out.
    gNormal.xyz = bump.xyz;
    gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals, much larger than what my shader is creating!
I have tried creating a new matrix using the decal and the projection matrix to construct a sort of "look at" matrix and translate the screen position into the decal's post-transform state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that translating using the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
Updated the code for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip function so it works correctly. As it is, this code now works. I will update this more if needed so others don't have to go through the pain I did to get it working.
Some issues remain, such as decals lying over each other: the one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them onto the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE)
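For context, a minimal sketch of that ordering (plain OpenGL calls; drawDecals() is a hypothetical stand-in for whatever issues the decal draw calls):
glDepthMask(GL_FALSE);   // decals stop writing depth, so they cannot occlude each other
drawDecals();            // hypothetical helper: issue the decal draw calls here
glDepthMask(GL_TRUE);    // restore depth writes for the rest of the scene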
OK, I'm so excited: I found the issue.
I updated the code above again.
I had a mistake in what I was sending to the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

Explanation of the working principle of OpenGL [closed]

I'm trying to understand how coding in OpenGL works. I found this code on the internet and I want to understand it clearly.
For my vertex shader I have:
uniform vec3 fvLightPosition;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
uniform mat4 extra;
attribute vec3 rm_Binormal;
attribute vec3 rm_Tangent;
uniform float fSinTime0_X;
uniform float fCosTime0_X;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex * extra;
    Texcoord = gl_MultiTexCoord0.xy;
    Texcoordcut = gl_MultiTexCoord0.xy;
    vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
    vec3 rotationLight = vec3(fCosTime0_X, 0.0, fSinTime0_X);
    ViewDirection = -fvObjectPosition.xyz;
    LightDirection = (-rotationLight) * gl_NormalMatrix;
}
And for my fragment shader, where I used white areas of the picture to create a hole in it (white texels are discarded):
uniform vec4 fvAmbient;
uniform vec4 fvSpecular;
uniform vec4 fvDiffuse;
uniform float fSpecularPower;
uniform sampler2D baseMap;
uniform sampler2D bumpMap;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
void main( void )
{
    vec3 fvLightDirection = normalize( LightDirection );
    vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
    float fNDotL = dot( fvNormal, fvLightDirection );
    vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
    vec3 fvViewDirection = normalize( ViewDirection );
    float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );
    vec4 fvBaseColor = texture2D( baseMap, Texcoord );
    vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
    vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
    vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );
    if (fvBaseColor == vec4(1.0, 1.0, 1.0, 1.0)) {
        discard;
    } else {
        gl_FragColor = fvTotalDiffuse + fvTotalSpecular;
    }
}
Can somebody explain to me in detail what everything does? I understand the basic idea of it, but often not why it is needed or what happens when you use other variables. What happens is that the light around the teapot appears and disappears over time. How is this linked to the cosine and sine variables? What if I want the light to come from above and move toward the bottom of the teapot?
Also,
What does this line mean?
vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
And why is there a minus sign before the variable?
ViewDirection = - fvObjectPosition.xyz;
Why do we use a negative rotationLight?
LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Why do they use * 2.0 - 1.0 for calculating the normal vector? Couldn't that be done with Normal = normalize( gl_NormalMatrix * gl_Normal ); instead?
vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
Too lazy to fully analyze the code without the proper context of what you are sending to the shaders ... but your sub-questions are easy enough:
What do this lines mean? vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
This converts gl_Vertex (polygon edge points) from the object/model coordinate system to the camera coordinate system. In other words, it applies all the rotations and translations of your vertices. The z axis is the camera view axis, pointing to or from the screen, and the x,y axes are the same as the screen's. No projections/clippings/clampings are applied yet! The resulting point is stored in the fvObjectPosition 4D vector (x,y,z,w). I strongly recommend you read Understanding 4x4 homogenous transform matrices; the sub-links there are also worth looking into.
And why is here a minus before the variable? ViewDirection = - fvObjectPosition.xyz;
Most likely because you need the direction from the surface to the camera, so direction_from_surface = camera_pos - surface_pos. As your surface_pos is already in the camera coordinate system, the camera position in those coordinates is (0,0,0), so the result is direction_from_surface = (0,0,0) - surface_pos = -surface_pos. Alternatively, you got a negative-Z-axis view direction (it depends on the format of your matrices); it is hard to determine without background info.
Why do we use a negative rotationLight? LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Most likely for the same reasons as bullet 2.
Why do they use *2.0)-1.0 for calculating the normalvector?
The shader uses normal/bump mapping, which means you have a texture with normal vectors encoded as RGB. As RGB textures are clamped to the range <0,1> and normal vector coordinates are in the range <-1,+1>, you just need to rescale the texel, so:
RGB*2.0 is in range <0,2>
RGB*2.0-1.0 is in range <-1,+1>
This gives you the normal vector in the polygon's coordinate system, so you need to convert it into the coordinate system your equations work in, usually global world space or camera space. The normalize is not necessary if your normal/bump map is already normalized. Normal textures are distinctive in color ...
flat surface has normal=(0.0,0.0,+1.0) so in RGB it would be (0.5,0.5,1.0)
That is the common bluish/magenta color often seen in textures (see the link above).
But yes, you can use Normal = normalize( gl_NormalMatrix * gl_Normal );
But that will eliminate the bump/normal map and you would get just flat surfaces instead. Something like this:
GL+GLSL normal shading complete example (C++)
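As an aside, the vertex shader in the question already declares rm_Tangent and rm_Binormal but never uses them. A hedged sketch of how they could be used to rotate the bump-map normal into eye space (a standard TBN construction, not code from the question):
// vertex shader: build a tangent-binormal-normal basis in eye space
varying mat3 TBN;
// inside main():
TBN = mat3(normalize(gl_NormalMatrix * rm_Tangent),
           normalize(gl_NormalMatrix * rm_Binormal),
           normalize(gl_NormalMatrix * gl_Normal));

// fragment shader: decode the RGB texel and rotate it into eye space
vec3 n_tangent = texture2D(bumpMap, Texcoord).xyz * 2.0 - 1.0;
vec3 fvNormal  = normalize(TBN * n_tangent);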
Light
vec3(fCosTime0_X, 0, fSinTime0_X) looks like the light direction. This one rotates around the y axis. If you want to change the light direction to something else, just make it a uniform and pass it directly to the shader instead of fCosTime0_X, fSinTime0_X.
How is this correctly linked with the cosinus and sinus variables?
You can send data to a shader uniform variable via the glUniform functions. For example: in your vertex shader you have 2 float values, so you would call glUniform1f twice, each time with a different location and a different value.
Or you can pack the float variables into one vec2 variable like so:
uniform vec2 fSinValues; and fill it with glUniform2f(location, sinVal, cosVal);
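A hedged host-side sketch of both options (C++; program and t are assumed to be the linked shader program and the current time in seconds, a GL context is current, and glUseProgram(program) has already been called):
#include <cmath>

GLint locSin = glGetUniformLocation(program, "fSinTime0_X");
GLint locCos = glGetUniformLocation(program, "fCosTime0_X");
glUniform1f(locSin, std::sin(t));   // one float per call ...
glUniform1f(locCos, std::cos(t));

// ... or both values packed into the vec2 suggested above
GLint locSinCos = glGetUniformLocation(program, "fSinValues");
glUniform2f(locSinCos, std::sin(t), std::cos(t));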
What if I want the light to come from above and move toward the bottom of the teapot?
If you want your light to rotate in a different direction, just pass the sin and cos values into different coordinate components, right here: vec3 rotationLight = vec3(fCosTime0_X, fSinTime0_X, 0);

Phong Illumination in OpenGL website

I was reading through the following Phong illumination shader from opengl.org:
Phong Illumination in Opengl.org
The vertex and fragment shaders were as follows:
vertex shader:
varying vec3 N;
varying vec3 v;
void main(void)
{
    v = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
varying vec3 N;
varying vec3 v;
void main (void)
{
    vec3 L = normalize(gl_LightSource[0].position.xyz - v);
    vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
    vec3 R = normalize(-reflect(L, N));
    // calculate Ambient Term:
    vec4 Iamb = gl_FrontLightProduct[0].ambient;
    // calculate Diffuse Term:
    vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N, L), 0.0);
    Idiff = clamp(Idiff, 0.0, 1.0);
    // calculate Specular Term:
    vec4 Ispec = gl_FrontLightProduct[0].specular
                 * pow(max(dot(R, E), 0.0), 0.3 * gl_FrontMaterial.shininess);
    Ispec = clamp(Ispec, 0.0, 1.0);
    // write Total Color:
    gl_FragColor = gl_FrontLightModelProduct.sceneColor + Iamb + Idiff + Ispec;
}
I was wondering about the way he calculates the viewer vector, v. By multiplying the vertex position with gl_ModelViewMatrix, the result will be in view space (and view coordinates are rotated most of the time compared to world coordinates).
So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct because their coordinates are not in the same space. Am I right about this?
So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct because their coordinates are not in the same space. Am I right about this?
No.
The gl_LightSource[0].position.xyz is not the value you set GL_POSITION to. The GL will automatically multiply the position by the current GL_MODELVIEW matrix at the time of the glLight() call. Lighting calculations are done completely in eye space in fixed-function GL. So both V and N have to be transformed to eye space, and gl_LightSource[].position will already be transformed to eye-space, so the code is correct and is actually not mixing different coordinate spaces.
The code you are using relies on deprecated functionality, using lots of the old fixed-function features of the GL, including that particular one. In modern GL, those builtin uniforms and attributes do not exist; you have to define your own, and you can interpret them as you like.
You of course could also ignore that convention and still use a different coordinate space for the lighting calculation with the builtins, and interpret gl_LightSource[].position differently by simply choosing some other matrix when setting a position (typically, the light's world space position is set while the GL_MODELVIEW matrix contains only the view transformation, so that the eye-space light position for some world-stationary light source emerges, but you can do whatever you like). However, the code as presented is meant to work as some "drop-in" replacement for the fixed-function pipeline, so it will interpret those builtin uniforms and attributes in the same way the fixed-function pipeline did.
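To make that concrete, here is a hedged sketch of the usual fixed-function setup (the names eyeX, centerX, etc. and the light coordinates are illustrative): the light position is given in world space, but because GL_MODELVIEW holds only the view transform at that moment, the GL stores it in eye space, which is exactly what gl_LightSource[0].position delivers to the shader:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, 0.0, 1.0, 0.0); // view transform only

// GL_POSITION is multiplied by the *current* GL_MODELVIEW matrix right here,
// so the light position is stored (and later read by the shader) in eye space.
const GLfloat lightPosWorld[4] = { 10.0f, 20.0f, 5.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);

// per-object model transforms are pushed after this point and do not affect the light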

Creating a rectangular light source in OpenGL?

I am trying to create a rectangular, sharp-edged light source in OpenGL for an application. My idea is to create a spot light and somehow mask the shape of the shade into a rectangle; the mask, of course, has to be invisible to the camera. When I tried to implement this idea, it turned out that OpenGL simply skips rendering objects outside the camera, although a light source outside the camera is still valid. This has prevented me from creating the effect I wanted, and I am wondering if any of you have come across similar problems before.
To make my question more specific, consider the following case of my question:
spot light at 0,0,5
target object at 0,0,0
mask object (a simple quad parallel to x-axis) at 0,0,3.
When the camera is at 0,0,4, light passes through the mask object and leaves a rectangular shape on the target object (which is what I wanted), but I can also see the mask object! (while I need the mask object to be invisible)
When I move the camera closer to the target object, say to 0,0,2, the mask object is behind the camera and therefore invisible. However, since it's invisible, OpenGL stops rendering it, so the mask object no longer has any effect on the target object, and the light shade is still round!
My guess would be to start from a spot light, but separating the angle calculation:
* Project the L vector on the YZ plane to calculate the angle on the X axis
* Project the L vector on the XZ plane to calculate the angle on the Y axis
A very naive implementation of this could be (GLSL):
varying vec3 v_V; // World-space position
varying vec3 v_N; // World-space normal
uniform float time; // global time in seconds since shaderprogram link
uniform vec2 uSpotSize; // Spot size, on X and Y axes
void main()
{
    // Light world-space position (animated; computed here because global
    // initializers must be constant expressions in GLSL)
    vec3 lp = vec3(0.0, 0.0, 7.0 + cos(time) * 5.0);
    vec3 lz = vec3(0.0, 0.0, -1.0); // Light direction (Z vector)
    float lr = 3.0; // Light radius (for attenuation calculation)
    // Calculate L, the vector from model surface to light
    vec3 L = lp - v_V;
    // Project L on the YZ / XZ plane
    vec3 LX = normalize(vec3(L.x, 0.0, L.z));
    vec3 LY = normalize(vec3(0.0, L.y, L.z));
    // Calculate the angle on X and Y axis using the projected vectors just above
    float ax = dot(LX, -lz);
    float ay = dot(LY, -lz);
    // Light attenuation
    float d = distance(lp, v_V);
    float attenuation = 1.0 / (1.0 + (2.0/lr)*d + (1.0/(lr*lr))*d*d);
    float shaded = max(0.0, dot(normalize(v_N), normalize(L))) * attenuation;
    if (ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
        gl_FragColor = vec4(shaded); // Inside the light influence zone, light it up!
    else
        gl_FragColor = vec4(0.1); // Outside the light influence zone.
}
Again, this is very naive. For instance, the X/Y projection is done in world-space. If you want to be able to rotate the light rectangle, you might have to introduce a vector pointing to the right of the light.
Thus, you'll be able to get the fragment coordinate in the light's coordinate frame, and with this, you can decide whether to shade the fragment or not.
One solution might be adapting the calculations used for projective texture lookups to simulate a rectangular light source. You did not specify which OpenGL version you're using, but projective texture lookups can even be achieved with the fixed-function pipeline, although they're arguably easier to do in a shader.
Of course, this would not simulate a rectangular area light source, just a point light source that is constrained to a rectangular region.
Using this approach, you'd have to specify view & projection matrices for the light source, where the view matrix is essentially generated by a 'look at' from the light position along its direction, and the projection matrix encodes a perspective projection with your desired horizontal & vertical 'field of view'.
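As an illustration only (assuming GLM on the application side, which the question does not mention; lightPos, lightDir and aspect are placeholders), those two matrices could be built like this:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// view matrix: "look at" along the light's direction from its position
glm::mat4 viewTf_lightCam = glm::lookAt(lightPos,
                                        lightPos + lightDir,
                                        glm::vec3(0.0f, 1.0f, 0.0f));  // up vector

// projection matrix: the field of view and aspect ratio set the size of the lit rectangle
glm::mat4 projectiveTf_lightCam = glm::perspective(glm::radians(30.0f),  // vertical FOV
                                                   aspect,               // width / height
                                                   0.1f, 100.0f);        // near / far of the light frustum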
If you just want a rectangular area, you don't even need a texture; a simple vertex/fragment shader pair could look like this:
(the vertex shader basically transforms the position to the light's clip space; the fragment shader performs the clipping and computes a Lambert shading if the fragment is inside the light frustum)
#version 330 core
layout ( location = 0 ) in vec3 vertexPosition;
layout ( location = 1 ) in vec3 vertexNormal;
layout ( location = 3 ) in vec3 vertexDiffuse;
uniform mat4 modelTf;
uniform mat3 normalTf;
uniform mat4 viewTf; // view matrix for render camera
uniform mat4 projectiveTf; // projection matrix for render camera
uniform mat4 viewTf_lightCam; // view matrix of light source
uniform mat4 projectiveTf_lightCam; // projective matrix of light source
uniform vec4 lightPosition_worldSpace;
out vec3 diffuseColor;
out vec3 normal_worldSpace;
out vec3 toLight_worldSpace;
out vec4 position_lightClipSpace;
void main()
{
    diffuseColor = vertexDiffuse;
    vec4 vertexPosition_worldSpace = modelTf * vec4( vertexPosition, 1.0 );
    normal_worldSpace = normalTf * vertexNormal;
    toLight_worldSpace = normalize( lightPosition_worldSpace - vertexPosition_worldSpace ).xyz;
    position_lightClipSpace = projectiveTf_lightCam * viewTf_lightCam * vertexPosition_worldSpace;
    gl_Position = projectiveTf * viewTf * vertexPosition_worldSpace;
}
#version 330 core
layout ( location=0 ) out vec4 fragColor;
in vec3 diffuseColor;
in vec3 normal_worldSpace;
in vec3 toLight_worldSpace;
in vec4 position_lightClipSpace;
uniform vec3 ambientLight;
void main()
{
    // clipping against the light frustum
    bool isInsideX = ( position_lightClipSpace.x <= position_lightClipSpace.w && position_lightClipSpace.x >= -position_lightClipSpace.w );
    bool isInsideY = ( position_lightClipSpace.y <= position_lightClipSpace.w && position_lightClipSpace.y >= -position_lightClipSpace.w );
    bool isInsideZ = ( position_lightClipSpace.z <= position_lightClipSpace.w && position_lightClipSpace.z >= -position_lightClipSpace.w );
    bool isInside = isInsideX && isInsideY && isInsideZ;
    vec3 N = normalize( normal_worldSpace );
    vec3 L = normalize( toLight_worldSpace );
    vec3 lightColor = isInside ? max( dot( N, L ), 0.0 ) * vec3( 0.99, 0.66, 0.33 ) : vec3( 0.0 );
    fragColor = vec4( clamp( ( ambientLight + lightColor ) * diffuseColor, vec3( 0.0 ), vec3( 1.0 ) ), 1.0 );
}
There are a lot of good papers on this; Brian Karis wrote about it in 2013 (in regard to UE4) here:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
And more recently Michal Drobot wrote an article about area lights in GPU Pro 5.
If you are using a metalness workflow you can also crank up the roughness as an approximation to area lighting, a technique introduced by Tri-Ace:
http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

Issues Transforming to Eye Space

I've been trying to get all of my lights into eye space for the GLSL shaders I'm using, but I'm missing something. I have no idea what I'm missing. Here's my shader code, just in case it's causing the problem...
varying vec3 normal, lightDir;
uniform vec3 lightPos;
//gl_Normal: Object Space
//gl_Vertex: Object Space
//lightDir: Eye Space
void main()
{
    vec4 vert;
    normal = gl_NormalMatrix * gl_Normal;
    vert = gl_ModelViewMatrix * gl_Vertex;
    lightDir = normalize(vec3(vec4(lightPos, 1.0) - vert));
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
If it isn't that, then it must be the way I'm transforming the light position CPU side, so here's what I'm doing...
eye = inverse(camera->climb(root));
glMultMatrixf(value_ptr(eye));
glUniform3fv(sLight, 1, value_ptr(vec3(eye * light->climb(root) * vec4())));
Everything else in my program is working perfectly, but there's something I'm not spotting here. NOTE: camera->climb(root) yields the transformation of the camera's scene node in world space. light->climb(root) yields the transformation of the light's scene node in world space.
EDIT: The exact symptoms I'm having are that my light always appears to be at the origin in eye space (in the same location as the camera).
Moving the answer from the comments:
The origin coordinate that you multiply to get your light's eye-space position should be vec4(0,0,0,1) instead of vec4(0,0,0,0).
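Applied to the code in the question, the last line would become something like this (a sketch keeping the question's GLM-style helpers):
// w = 1.0 marks a point; w = 0.0 is a direction, which drops the translation
// and is why the light appeared stuck at the eye-space origin.
glUniform3fv(sLight, 1,
             value_ptr(vec3(eye * light->climb(root) * vec4(0.0f, 0.0f, 0.0f, 1.0f))));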