Phong Illumination in OpenGL website - C++

I was reading through the following Phong illumination shader, which is hosted on opengl.org:
Phong Illumination in Opengl.org
The vertex and fragment shaders were as follows:
vertex shader:
varying vec3 N;
varying vec3 v;
void main(void)
{
v = vec3(gl_ModelViewMatrix * gl_Vertex);
N = normalize(gl_NormalMatrix * gl_Normal);
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment shader:
varying vec3 N;
varying vec3 v;
void main (void)
{
vec3 L = normalize(gl_LightSource[0].position.xyz - v);
vec3 E = normalize(-v); // we are in Eye Coordinates, so EyePos is (0,0,0)
vec3 R = normalize(-reflect(L,N));
//calculate Ambient Term:
vec4 Iamb = gl_FrontLightProduct[0].ambient;
//calculate Diffuse Term:
vec4 Idiff = gl_FrontLightProduct[0].diffuse * max(dot(N,L), 0.0);
Idiff = clamp(Idiff, 0.0, 1.0);
// calculate Specular Term:
vec4 Ispec = gl_FrontLightProduct[0].specular
* pow(max(dot(R,E),0.0),0.3*gl_FrontMaterial.shininess);
Ispec = clamp(Ispec, 0.0, 1.0);
// write Total Color:
gl_FragColor = gl_FrontLightModelProduct.sceneColor + Iamb + Idiff + Ispec;
}
I was wondering about the way he calculates the viewer vector v. By multiplying the vertex position with gl_ModelViewMatrix, the result is in view (eye) space, and view space is usually rotated compared to world space.
So, we cannot simply subtract the light position from v to calculate the L vector, because they are not in the same coordinate system. Also, the result of the dot product between L and N won't be correct, because their coordinates are not in the same space. Am I right about this?

So, we cannot simply subtract the light position from v to calculate
the L vector, because they are not in the same coordinate system.
Also, the result of the dot product between L and N won't be correct,
because their coordinates are not in the same space. Am I right about this?
No.
The gl_LightSource[0].position.xyz is not the value you set GL_POSITION to. The GL automatically multiplies the position by the current GL_MODELVIEW matrix at the time of the glLight() call. In fixed-function GL, lighting calculations are done entirely in eye space, so both v and N have to be transformed to eye space, and gl_LightSource[].position is already stored in eye space. The code is therefore correct: it is not mixing different coordinate spaces.
The code you are using relies on deprecated functionality and many of the old fixed-function features of the GL, including this particular convention. In modern GL, those built-in uniforms and attributes do not exist; you have to define your own, and you can interpret them however you like.
You could of course ignore that convention and use a different coordinate space for the lighting calculation even with the built-ins, by interpreting gl_LightSource[].position differently and simply choosing some other matrix when setting the position. (Typically, the light's world-space position is set while the GL_MODELVIEW matrix contains only the view transformation, so that the eye-space position of a world-stationary light source emerges, but you can do whatever you like.) However, the code as presented is meant to work as a "drop-in" replacement for the fixed-function pipeline, so it interprets those built-in uniforms and attributes the same way the fixed-function pipeline did.
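For comparison, here is a minimal sketch of the modern-GL equivalent of that convention; the uniform and attribute names below are my own, not from the article. The application is assumed to transform the light's world-space position into eye space itself and upload it, which is what glLight() used to do with the current GL_MODELVIEW matrix.
#version 330 core
// Sketch only: lightPosEye is assumed to be uploaded by the application,
// already multiplied by the view matrix, mirroring what glLight() did
// with GL_MODELVIEW in the fixed-function pipeline.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
uniform mat4 modelView;
uniform mat4 projection;
uniform mat3 normalMatrix; // transpose(inverse(mat3(modelView)))
uniform vec3 lightPosEye;  // light position, already in eye space
out vec3 vEye;             // eye-space vertex position
out vec3 vNormal;          // eye-space normal
out vec3 vLightDir;        // eye-space direction towards the light
void main()
{
    vEye = vec3(modelView * vec4(inPosition, 1.0));
    vNormal = normalize(normalMatrix * inNormal);
    vLightDir = normalize(lightPosEye - vEye); // both operands are in eye space
    gl_Position = projection * vec4(vEye, 1.0);
}
The fragment shader then evaluates the Phong terms from vEye, vNormal and vLightDir exactly as above, with the eye position at the origin.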

Related

How to handle lighting (ambient, diffuse, specular) for point spheres in OpenGL

Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several millions), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame do I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip-space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think with several million particles this would decrease performance drastically, so better to do it with sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
lightDir = projection * view * vec4(lightPos - position, 1.0f);
viewDir = projection * view * vec4(viewPos - position, 1.0f);
viewPosition = projection * view * vec4(lightPos, 1.0f);
posClip = projection * view * vec4(position, 1.0f);
gl_Position = posClip;
gl_PointSize = radius;
vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4s which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader would be wrong?
And the fragment shader, where the normal reconstruction and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
// AMBIENT LIGHT
float ambientStrength = 0.0;
vec3 ambient = ambientStrength * lightColor;
// Normal calculation done in clip space (first from texture coords (gl_PointCoord, 0 to 1) to NDC (-1 to 1))
vec3 normal;
normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
float mag = dot(normal.xy, normal.xy); // squared length; comparing it with 1.0 needs no sqrt
if (mag > 1.0) // discard fragments outside sphere
discard;
normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
// DIFFUSE LIGHT
float diff = max(0.0, dot(vec3(lightDir), normal));
vec3 diffuse = diff * lightColor;
// SPECULAR LIGHT
float specularStrength = 0.1;
vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
vec3 reflectDir = reflect(-vec3(lightDir), normal);
float shininess = 64;
float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
vec3 specular = specularStrength * spec * lightColor;
vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but there are still diffuse and specular effects going on. This side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 with the fragment shader and I don't know where it comes from, since the shader program actually compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
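As a minimal sketch of that idea for your setup (the names centerView and lightPosView are mine, the other names come from your vertex shader): keep the lighting inputs in view space and use clip space only for gl_Position.
#version 330 core
// Sketch only (vertex stage): hand view-space quantities to the fragment
// shader and reserve clip space for gl_Position alone.
layout (location = 0) in vec3 position;
uniform mat4 view;
uniform mat4 projection;
uniform vec3 lightPos; // world-space light position
uniform float radius;
out vec3 centerView;   // sphere center in view space
out vec3 lightPosView; // light position in view space
void main()
{
    centerView = vec3(view * vec4(position, 1.0));
    lightPosView = vec3(view * vec4(lightPos, 1.0));
    gl_Position = projection * vec4(centerView, 1.0);
    gl_PointSize = radius;
}
The fragment stage can then build its lighting vectors from centerView and lightPosView instead of from clip-space positions.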
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation is a better choice. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
vec4 toPixel; //fragment coordinate in particle coordinates
vec4 cam; //camera position in particle coordinates
vec4 color; //sphere color
float radius; //sphere radius
} frag;
uniform mat4 p; //projection matrix
void main(void)
{
vec3 v = frag.toPixel.xyz - frag.cam.xyz;
vec3 e = frag.cam.xyz;
float ev = dot(e, v);
float vv = dot(v, v);
float ee = dot(e, e);
float rr = frag.radius * frag.radius;
float radicand = ev * ev - vv * (ee - rr);
if(radicand < 0)
discard;
float rt = sqrt(radicand);
float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
float lambda2 = (-ev + rt) / vv; //second intersection on the ray
if(lambda2 < lambda) //if the first intersection is behind the camera
discard;
vec3 hit = lambda * v; //intersection point
vec3 normal = (frag.cam.xyz + hit) / frag.radius;
vec4 proj = p * vec4(hit, 1); //intersection point in clip space
gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
vec3 vNormalized = -normalize(v);
float nDotL = dot(vNormalized, normal);
vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
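If you do need the divided value for an interpolated clip-space varying, you have to divide by w yourself. A tiny sketch using the posClip varying from your shaders:
#version 330 core
// Sketch only: manual perspective division of an interpolated clip-space
// varying; the GPU only does this automatically for gl_Position.
in vec4 posClip;
out vec4 fragColor;
void main()
{
    vec3 ndc = posClip.xyz / posClip.w;     // normalized device coordinates, [-1, 1]
    fragColor = vec4(ndc * 0.5 + 0.5, 1.0); // visualize as a color
}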
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
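Applied to the last line of your fragment shader, that separation would look roughly like this (a sketch reusing your variable names):
// Sketch: ambient and diffuse are tinted by the surface color, while the
// specular highlight keeps the light color.
vec4 albedo = colormap(vertexColor / 8.0);
vFragColor = vec4(albedo.rgb * (ambient + diffuse) + specular, albedo.a);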

GLSL3 Tangent space coordinates and normal mapping

First of all, I must apologize for posting yet another question on this subject (there are a lot already!). I did search for other related questions and answers, but unfortunately none of them showed me the solution. Now I'm desperate! :D
It is worth mentioning that the code posted below gives a satisfying 'bumpy' effect. It is the lighting of the scene that seems to be wrong.
The scene is dead simple: a cube in the center, with a light source above it rotating around it (parallel to the ground).
My approach is to start from my basic light shader, which gives me adequate outputs (or so I think!). The first step is to modify it to do the calculations in tangent space, then use the normal extracted from a texture.
I tried to comment the code nicely, but in short I have two questions:
1) Doing only basic lighting (no normal mapping), I expect the scene to look exactly the same, with or without transforming my vectors into tangent space with the TBN matrix. Am I wrong?
2) Why do I get incorrect lighting?
A couple of screenshots to give you an idea (EDITED): following LJ's comment, I am no longer summing normals and tangents per vertex/face. Interestingly, it highlights the issue (see the capture; I have marked how the light moves).
Basically it is as if the cube were rotated 90 degrees to the left, or as if the light were turning vertically instead of horizontally.
Result with normal mapping:
Version with simple light:
Vertex shader:
// Information about the light.
// Here we care essentially about light.Position, which
// is set to be something like vec3(cos(x)*9, 5, sin(x)*9)
uniform Light_t Light;
uniform mat4 W; // The model transformation matrix
uniform mat4 V; // The camera transformation matrix
uniform mat4 P; // The projection matrix
in vec3 VS_Position;
in vec4 VS_Color;
in vec2 VS_TexCoord;
in vec3 VS_Normal;
in vec3 VS_Tangent;
out vec3 FS_Vertex;
out vec4 FS_Color;
out vec2 FS_TexCoord;
out vec3 FS_LightPos;
out vec3 FS_ViewPos;
out vec3 FS_Normal;
// This method calculates the TBN matrix:
// I'm sure it is not optimized vertex shader code,
// to have this separate method, but never mind for now :)
mat3 getTangentMatrix()
{
// Note: here I must say am a bit confused, do I need to transform
// with 'normalMatrix'? In practice, it seems to make no difference...
mat3 normalMatrix = transpose(inverse(mat3(W)));
vec3 norm = normalize(normalMatrix * VS_Normal);
vec3 tang = normalize(normalMatrix * VS_Tangent);
vec3 btan = normalize(normalMatrix * cross(VS_Normal, VS_Tangent));
tang = normalize(tang - dot(tang, norm) * norm);
return transpose(mat3(tang, btan, norm));
}
void main()
{
// Set the gl_Position and pass color + texcoords to the fragment shader
gl_Position = (P * V * W) * vec4(VS_Position, 1.0);
FS_Color = VS_Color;
FS_TexCoord = VS_TexCoord;
// Now here we start:
// This is where supposedly, multiplying with the TBN should not
// change anything to the output, as long as I apply the transformation
// to all of them, or none.
// Typically, removing the 'TBN *' everywhere (and not using the normal
// texture later in the fragment shader) is exactly the code I use for
// my basic light shader.
mat3 TBN = getTangentMatrix();
FS_Vertex = TBN * (W * vec4(VS_Position, 1)).xyz;
FS_LightPos = TBN * Light.Position;
FS_ViewPos = TBN * inverse(V)[3].xyz;
// This line is actually not needed when using the normal map:
// I keep the FS_Normal variable for comparison purposes,
// when I want to switch to my basic light shader effect.
// (see later in fragment shader)
FS_Normal = TBN * normalize(transpose(inverse(mat3(W))) * VS_Normal);
}
And the fragment shader:
struct Textures_t
{
int SamplersCount;
sampler2D Samplers[4];
};
struct Light_t
{
int Active;
float Ambient;
float Power;
vec3 Position;
vec4 Color;
};
uniform mat4 W;
uniform mat4 V;
uniform Textures_t Textures;
uniform Light_t Light;
in vec3 FS_Vertex;
in vec4 FS_Color;
in vec2 FS_TexCoord;
in vec3 FS_LightPos;
in vec3 FS_ViewPos;
in vec3 FS_Normal;
out vec4 frag_Output;
vec4 getPixelColor()
{
return Textures.SamplersCount >= 1
? texture2D(Textures.Samplers[0], FS_TexCoord)
: FS_Color;
}
vec3 getTextureNormal()
{
// FYI: the normal texture is always at index 1
vec3 bump = texture(Textures.Samplers[1], FS_TexCoord).xyz;
bump = 2.0 * bump - vec3(1.0, 1.0, 1.0);
return normalize(bump);
}
vec4 getLightColor()
{
// This is the one line that changes between my basic light shader
// and the normal mapping one:
// - If I don't do 'TBN *' earlier and use FS_Normal here,
// the enlightenment seems fine (see second screenshot)
// - If I do multiply by TBN (including on FS_Normal), I would expect
// the same result as without multiplying ==> not the case: it looks
// very similar to the result with normal mapping
// (just has no bumpy effect of course)
// - If I use the normal texture (along with TBN of course), then I get
// the result you see in the first screenshot.
vec3 N = getTextureNormal(); // Instead of 'normalize(FS_Normal);'
// Everything from here on is the same as my basic light shader
vec3 L = normalize(FS_LightPos - FS_Vertex);
vec3 E = normalize(FS_ViewPos - FS_Vertex);
vec3 R = normalize(reflect(-L, N));
// Ambient color: light color times ambient factor
vec4 ambient = Light.Color * Light.Ambient;
// Diffuse factor: product of Normal to Light vectors
// Diffuse color: light color times the diffuse factor
float dfactor = max(dot(N, L), 0);
vec4 diffuse = clamp(Light.Color * dfactor, 0, 1);
// Specular factor: product of reflected to camera vectors
// Note: applies only if the diffuse factor is greater than zero
float sfactor = 0.0;
if(dfactor > 0)
{
sfactor = pow(max(dot(R, E), 0.0), 8.0);
}
// Specular color: light color times specular factor
vec4 specular = clamp(Light.Color * sfactor, 0, 1);
// Light attenuation: square of the distance moderated by light's power factor
float atten = 1 + pow(length(FS_LightPos - FS_Vertex), 2) / Light.Power;
// The fragment color is a factor of the pixel and light colors:
// Note: attenuation only applies to diffuse and specular components
return getPixelColor() * (ambient + (diffuse + specular) / atten);
}
void main()
{
frag_Output = Light.Active == 1
? getLightColor()
: getPixelColor();
}
That's it! I hope you have enough information and of course, your help will be greatly appreciated! :) Take care.
I am experiencing a very similar problem, and I cannot explain why the lighting doesn't work right, but I can answer your first question and at the very least explain how I somehow got lighting working acceptably (though your problem may not necessarily be the same as mine).
Firstly, in theory, if your tangents and bitangents are calculated correctly, then you should get exactly the same lighting result when doing the calculation in tangent space with a tangent-space normal of [0, 0, 1].
Secondly, while it is common knowledge that you should transform your normals from model to camera space by multiplying by the inverse transpose of the model-view matrix, as explained by this tutorial, I found that the problem with the lighting being transformed wrongly can be solved if you transform the normal and tangent by the model-view matrix rather than by the inverse transpose of the model-view matrix. I.e. use normalMatrix = mat3(W); instead of normalMatrix = transpose(inverse(mat3(W)));.
In my case this did »fix« the problems with the light, but I don't know why it fixed it, and I make no guarantee that it does not (in fact I assume that it does) introduce other problems with the shading.
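For reference, a commonly used TBN construction looks like the sketch below. This is the textbook version, not something I have verified against your scene; it reuses your W, VS_Normal and VS_Tangent and derives the bitangent after the Gram-Schmidt step:
// Sketch: conventional world-to-tangent-space matrix, built per vertex.
mat3 getTangentMatrix()
{
    mat3 normalMatrix = transpose(inverse(mat3(W)));
    vec3 N = normalize(normalMatrix * VS_Normal);
    vec3 T = normalize(normalMatrix * VS_Tangent);
    T = normalize(T - dot(T, N) * N); // Gram-Schmidt re-orthogonalization
    vec3 B = cross(N, T);             // bitangent from the transformed vectors
    return transpose(mat3(T, B, N));  // rows T, B, N: world -> tangent space
}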

SSAO not displaying correct results, mostly no visible occlusion

I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;
// Fragment's view space position and normal
vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
normal = normalize(normal * 2.0 - 1.0);
normal = normalize(viewNormal * normal); // Normal from world to view-space
// Use change-of-basis matrix to reorient sample kernel around origin's normal
vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
// Loop through the sample kernel
float occlusion = 0.0;
for(int i = 0; i < 64; ++i)
{
// get sample position
vec3 sample = tbn * samples[i]; // From tangent to view-space
sample = sample * radius + origin;
// project sample position (to sample texture) (to get position on screen/texture)
vec4 offset = vec4(sample, 1.0);
offset = projection * offset;
offset.xy /= offset.w;
offset.xy = offset.xy * 0.5 + 0.5;
// get sample depth
float sampleDepth = texture(texPosDepth, offset.xy).w;
// range check & accumulate
// float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
}
occlusion = 1.0 - (occlusion / 64.0f);
gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result is however not pleasing. The occlusion buffer is mostly all white and doesn't show any occlusion. However, if I move really close to an object I can see some weird noise-like results as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to [-1,1] range and multiply them with transpose(inverse(mat3(..))) of view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
noise = glm::normalize(noise);
ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent to view space (samples are random between [-1, 1] on the x/y axes and [0, 1] on the z axis) and translate them to the fragment's current view-space position (origin).
I then sample from the linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and accumulate the occlusion. Note that I do not perform a range check, since I don't believe that is the cause of this behavior and I'd rather keep it as minimal as possible for now.
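For reference, the range check hinted at by the commented-out line would look roughly like this inside the sample loop (a sketch only, keeping the existing depth comparison):
// Sketch: ignore samples whose stored depth is far outside the kernel
// radius, so distant geometry does not darken the fragment.
float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0) * rangeCheck;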
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.

GLSL point light shader moving with camera

I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears as if the light is moving as the camera's position is being translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain why these shaders aren't working as intended and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
color = vec3(0.4);
normal = normalize(gl_NormalMatrix * gl_Normal);
vertexPos = gl_ModelViewMatrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
vec3 lightPos = vec3(4.0);
vec3 lightColor = vec3(0.75);
vec3 lightDir = lightPos - vertexPos.xyz;
float lightDist = length(lightDir);
float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
float diffuse = max(0.0, dot(normal, lightDir));
vec3 ambient = vec3(0.4, 0.4, 0.4);
vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical value here, lightPos, was being compared against vertexPos, which you have expressed in eye (camera) space (this happened because its original world-space form was multiplied by the modelview matrix). Eye space stays with the camera, not with anything in the 3D world. So to have a light source that doesn't move relative to some absolute point in world space (like [4.0, 4.0, 4.0]), you could just leave your object's points in that space, as you found out.
But getting rid of modelview is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to perform the modelview multiplication on both vertexPos AND lightPos. This way you're treating lightPos as a quantity originally given in world space, but doing the comparison in eye space. The math that derives light intensities from the normals works out the same in either space, and you'll get a correct-looking light source.
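A sketch of that in the question's legacy-style GLSL, assuming (as in the original setup) that the modelview matrix holds only the camera transform when this geometry is drawn:
// Vertex shader sketch: transform the world-space light position with the
// same modelview matrix as the vertex, so both end up in eye space.
varying vec3 color, normal;
varying vec4 vertexPos;
varying vec3 lightPosEye;
void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    lightPosEye = vec3(gl_ModelViewMatrix * vec4(4.0, 4.0, 4.0, 1.0));
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
The fragment shader then computes lightDir = lightPosEye - vertexPos.xyz, so both operands live in the same space and the light no longer follows the camera.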

GLSL Phong Light, Camera Problem

I just started writing a Phong shader.
vert:
varying vec3 normal, eyeVec;
#define MAX_LIGHTS 8
#define NUM_LIGHTS 3
varying vec3 lightDir[MAX_LIGHTS];
void main() {
gl_Position = ftransform();
normal = gl_NormalMatrix * gl_Normal;
vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
eyeVec = -vVertex.xyz;
int i;
for (i=0; i<NUM_LIGHTS; ++i){
lightDir[i] = vec3(gl_LightSource[i].position.xyz - vVertex.xyz);
}
}
I know that I need to pass the camera position as a uniform, but how, and where do I put this value?
PS: I'm using OpenGL 2.0.
You don't need to pass the camera position, because, well, there is no camera in OpenGL.
Lighting calculations are performed in eye space, i.e. after the multiplication with the modelview matrix, which also performs the "camera positioning". So actually you already have the right things in place. Using ftransform() is a little inefficient, as you're doing half of what it does again (gl_ModelViewMatrix * gl_Vertex); you can get the same result as ftransform() by computing gl_Position = gl_ProjectionMatrix * vVertex.
So if your lights seem to move when your "camera" transforms, you're not transforming the light's positions properly. Either precompute the transformed light positions, or transform them in the shader as well. With shaders it's more a matter of chosen convention than a hard constraint.
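A minimal sketch of reusing the eye-space vertex instead of ftransform() in the vertex shader above:
// Sketch: compute the eye-space position once and reuse it for gl_Position,
// instead of letting ftransform() redo the modelview multiplication.
vec4 vVertex = gl_ModelViewMatrix * gl_Vertex;
eyeVec = -vVertex.xyz;
gl_Position = gl_ProjectionMatrix * vVertex; // same result as ftransform()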