Instancing using more values than uniforms can store - c++

I am fairly new to OpenGL and trying to achieve instancing using uniform arrays. However, the number of instances I am trying to draw is larger than the MAX_UNIFORM_LOCATIONS limit:
QOpenGLShader::link: error: count of uniform locations > MAX_UNIFORM_LOCATIONS (262148 > 98304)
error: Too many vertex shader default uniform block components
error: Too many vertex shader uniform components
What other approaches are there that will work with that large a number of objects? This is my shader code so far:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform vec3 positions[262144];
void main() {
vec3 t = vec3(positions[gl_InstanceID].x, positions[gl_InstanceID].y, positions[gl_InstanceID].z);
float val = 0;
mat4 wm = myMatrix * mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, t.z, 1) * worldMatrix;
color = vec3(0.4, 1.0, 0);
vert = vec3(wm * vertex);
vertNormal = mat3(transpose(inverse(wm))) * normal;
gl_Position = projMatrix * camMatrix * wm * vertex;
}
If it should matter, I am using QOpenGLExtraFunctions.

There are many alternatives for overcoming the limits of uniform storage.
UBOs, for example, usually have a larger storage capacity than non-block uniforms. In your case, though, that probably won't work: storing 262,144 vec4s (vec3s are padded to vec4 alignment in a UBO) will require more storage than most implementations allow UBOs to provide. What you need is unbounded storage.
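If you want to verify this at runtime, you can query the relevant limits and compare them against what you need. A minimal sketch using plain GL calls (with QOpenGLExtraFunctions you would invoke these through your functions object; the variable names are just for illustration):
GLint maxUboBytes = 0;
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxUboBytes); // commonly in the 16KB-64KB range
// 262,144 positions stored as vec4 (std140 pads vec3 to 16 bytes anyway):
const GLsizeiptr neededBytes = 262144 * 4 * sizeof(GLfloat); // 4 MB
if (neededBytes > maxUboBytes) {
    // too big for a UBO; use one of the unbounded options below
}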
Instanced Arrays
Instanced arrays use the instanced rendering mechanism to automatically fetch vertex attributes based on the instance index. This requires a small change to your VAO setup.
Your shader would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 position;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
void main() {
vec3 t = position;
/*etc*/
}
Here, the shader itself never uses gl_InstanceID. That happens automatically based on your VAO.
That setup code would have to include the following:
glBindBuffer(GL_ARRAY_BUFFER, buffer_containing_instance_data);
glEnableVertexAttribArray(2); // don't forget this, or attribute 2 is never fetched
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glVertexAttribDivisor(2, 1);
This code assumes that the instance data is at the start of the buffer and is tightly packed, three floats per value. Since you're using vertex attributes, you can apply the usual vertex attribute compression techniques to them.
The last call, to glVertexAttribDivisor, is what tells OpenGL to move to the next value in the array once per instance, rather than based on the vertex's index.
Note that by using instanced arrays, you also gain the ability to use the baseInstance glDraw* calls. The baseInstance in OpenGL is only respected by instanced arrays; gl_InstanceID is never affected by it.
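Putting it together on the CPU side, a minimal sketch of the upload and an instanced draw (the buffer, VAO, and count names are illustrative):
// upload one vec3 per instance
GLuint buffer_containing_instance_data;
glGenBuffers(1, &buffer_containing_instance_data);
glBindBuffer(GL_ARRAY_BUFFER, buffer_containing_instance_data);
glBufferData(GL_ARRAY_BUFFER, instanceCount * 3 * sizeof(GLfloat), positionsData, GL_STATIC_DRAW);

// ... the attribute setup shown above, recorded while the VAO is bound ...

// one call draws every instance; the divisor advances `position` once per instance
glBindVertexArray(vao);
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);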
Buffer Textures
Buffer textures are linear, one-dimensional textures that get their data from a buffer object's storage.
Your shader logic would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform samplerBuffer positions;
void main() {
vec3 t = texelFetch(positions, gl_InstanceID).xyz;
/*etc*/
}
Buffer textures can only be accessed via the direct texel fetching functions like texelFetch.
Buffer textures in GL 4.x can use a few 3-channel formats, but earlier GL versions don't give you that option (not without an extension). So you may want to expand your data to 4-channel values rather than 3-channel ones.
Another problem is that buffer textures do have a maximum size limitation, and the required minimum for that limit (GL_MAX_TEXTURE_BUFFER_SIZE) is only 64K texels. So the instanced array method will probably be more reliable, since it has no such size restriction. That said, all non-Intel OpenGL implementations report a huge size limit for buffer textures.
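For completeness, a minimal buffer-texture setup sketch, assuming the data has been expanded to four floats per position as suggested (the names are illustrative):
GLuint tbo, tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, instanceCount * 4 * sizeof(GLfloat), positionsData, GL_STATIC_DRAW);

glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE1); // unit 1, leaving unit 0 for the 2D sampler
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // texel i corresponds to positions[i]

glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "positions"), 1); // samplerBuffer reads unit 1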
SSBOs
Shader storage buffer objects are like UBOs, except that you can both read and write to them. The writing ability isn't important for you here. The main advantage is that their minimum required size limit is 16MB (and implementations generally return a limit on the order of available video memory), so size is not a problem.
Your shader code would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
buffer PositionSSBO
{
vec4 positions[];
};
void main() {
vec3 t = positions[gl_InstanceID].xyz;
/*etc*/
}
Note that we explicitly use a vec4 here. That's because you should never use vec3 in a buffer-backed interface block (i.e., a UBO or SSBO).
In code, SSBOs work much like UBOs. You bind them for use with glBindBufferRange.
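A minimal sketch of that setup (the binding index is arbitrary, and glBindBufferBase is the whole-buffer shorthand for glBindBufferRange):
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, instanceCount * 4 * sizeof(GLfloat), positionsData, GL_STATIC_DRAW);

// attach the buffer to binding point 0
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

// tell the program that PositionSSBO lives at binding point 0
GLuint blockIndex = glGetProgramResourceIndex(program, GL_SHADER_STORAGE_BLOCK, "PositionSSBO");
glShaderStorageBlockBinding(program, blockIndex, 0);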

Related

Error loading compute shader

I'm implementing simple ray tracing with a compute shader.
But I'm stuck in linking the program object of the compute shader.
#version 440
struct triangle {
vec3 points[3];
};
struct sphere {
vec3 pos;
float r;
};
struct hitinfo {
vec2 lambda;
int idx;
};
layout(binding = 0, rgba32f) uniform image2D framebuffer;
// written to by the compute shader
layout (local_size_x = 1, local_size_y = 1) in;
uniform triangle triangles[2500];
uniform sphere spheres[2500];
uniform int num_triangles;
uniform int num_spheres;
uniform vec3 eye;
uniform vec3 ray00;
uniform vec3 ray10;
uniform vec3 ray01;
uniform vec3 ray11;
Here is my compute shader code, and I get an "Out of resource" error.
I know the reason for this error is the size of the triangles array, but I need that size.
Is there any way to pass a large number of triangles into the shader?
There is only a very limited number of uniforms a shader can have. If you need more data than fits in your uniforms, you can use either Uniform Buffer Objects or Shader Storage Buffer Objects to back them with buffer storage.
In either case, you will have to define a GLSL interface block and bind the buffer to it. This means you only need one block in order to store a large number of similar elements.
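A sketch of what that block could look like for the triangle data, using an SSBO (vec4 is used instead of vec3 to avoid the alignment pitfalls of buffer-backed blocks; the block name and binding index are illustrative):
struct triangle {
    vec4 points[3]; // xyz used, w is padding
};
layout(std430, binding = 1) buffer TriangleBuffer {
    triangle triangles[]; // unsized: holds as many triangles as the buffer provides
};
On the CPU side you would upload the array with glBufferData on the GL_SHADER_STORAGE_BUFFER target and attach it with glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, buffer).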

OpenGL - Explicit Uniform Location in different Shader Stages

How do I assign an explicit uniform location when I want to use the uniform in different shader stages of the same program?
When automatic assignment is used, uniforms in different stages are assigned to the same location when the identifiers match. But how can I define the location in the shader using the
layout (location = ...)
syntax?
The following quote is from:
https://www.opengl.org/wiki/Uniform_(GLSL)/Explicit_Uniform_Location
It is illegal to assign the same uniform location to two uniforms in the same shader or the same program. Even if those two uniforms have the same name and type, and are defined in different shader stages, it is not legal to explicitly assign them the same uniform location; a linker error will occur.
The following quote is from the GLSL spec:
No two default-block uniform variables in the program can have the same location,
even if they are unused, otherwise a compile-time or link-time error will be generated.
I'm using OpenGL 4.3.
After a great deal of reading the code, I figured out that the uniform is unused.
That leads to the following situation: on a GTX 780, the following code runs without problems (although it seems it shouldn't). On an Intel HD 5500 onboard graphics chip, the code produces a SHADER_ID_LINK error at link time, according to the GL_ARB_DEBUG_OUTPUT extension. It states that the uniform location overlaps another uniform.
Vertex Shader:
#version 430 core
layout(location = 0) in vec4 vPosition;
layout(location = 2) in vec4 vTexCoord;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
out vec4 fPosition;
out vec4 fTexCoord;
void main() { ... }
Fragment Shader:
#version 430 core
in vec4 fPosition;
in vec4 fTexCoord;
layout(location = 0) out vec4 Albedo;
layout(location = 1) out vec4 Normal;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
layout(location = 1) uniform mat4 InverseViewProjectionMatrix;
layout(location = 2) uniform samplerCube Cubemap;
void main() { ... }
However, when the uniform is used, no problems occur. Assuming I interpret the GLSL spec correctly, this does not seem to be how it is supposed to work, although it is exactly how I would like it to function.
Still, there is the problem of overlapping uniforms when the uniform is not used.
See the complete GL+VAO/VBO+GLSL+shaders example in C++.
Extracted from that example:
On the GPU side:
#version 400 core
layout(location = 0) in vec3 pos;
You need to specify a GLSL version to use this syntax. I am not sure in which version layout locations were added, but for 400+ it will work for sure. Here the VBO pos is set to location 0.
On the CPU side:
// globals
GLuint vbo[4]={0,0,0,0}; // 0 = not created yet ({-1,...} would be a narrowing error for GLuint)
GLuint vao[4]={0,0,0,0};
const GLfloat vao_pos[]=
{
// x y z //ix
-1.0,-1.0,-1.0, //0
+1.0,-1.0,-1.0, //1
+1.0,+1.0,-1.0, //2
-1.0,+1.0,-1.0, //3
-1.0,-1.0,+1.0, //4
+1.0,-1.0,+1.0, //5
+1.0,+1.0,+1.0, //6
-1.0,+1.0,+1.0, //7
};
// init
GLuint i;
glGenVertexArrays(4,vao);
glGenBuffers(4,vbo);
glBindVertexArray(vao[0]);
i=0; // VBO location
glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vao_pos),vao_pos,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
When you attach data to a location, you need to set the same layout location for it in all shaders that use it within a single stage. You cannot assign the same location to more than one VBO at once (that is what the spec text you quoted is about).
By a single stage I mean a single glDrawArrays/glDrawElements call, or a set of them, issued without changing the shader setup. If you have more shader program stages (more than one of fragment/vertex/geometry...), then the location can be set differently for each stage, but within each stage all of its shader programs must have the same location setup.
You can assume that each stage starts with a glUseProgram(prog_id); call and ends with glUseProgram(0); or with the start of another stage, as sketched below.
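In other words, a stage in this sense is bracketed like this (a sketch using the names from the example above):
glUseProgram(prog_id); // stage starts: attribute locations must match this program
glBindVertexArray(vao[0]);
glDrawArrays(GL_POINTS, 0, 8); // one or more draws with the same shader setup
glBindVertexArray(0);
glUseProgram(0); // stage ends (or another glUseProgram starts the next one)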
[edit2] Here are the uniforms for non-nVidia drivers.
vertex shader:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
{
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
fragment shader:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
void main()
{
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
col=vec4(c,1.0);
}
These are the shaders from the linked example, rewritten with layout locations used for the uniforms. You have to add
#extension GL_ARB_explicit_uniform_location : enable
for the 400 profile to make it work.
On the CPU side, use glGetUniformLocation as usual:
id=glGetUniformLocation(prog_id,"lt_pnt_pos"); glUniform3fv(id,1,lt_pnt_pos);
id=glGetUniformLocation(prog_id,"lt_pnt_col"); glUniform3fv(id,1,lt_pnt_col);
id=glGetUniformLocation(prog_id,"lt_amb_col"); glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_model" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=glGetUniformLocation(prog_id,"m_normal" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=glGetUniformLocation(prog_id,"m_view" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_proj" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
Or use the defined locations directly:
id=64; glUniform3fv(id,1,lt_pnt_pos);
id=67; glUniform3fv(id,1,lt_pnt_col);
id=70; glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id= 0; glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=16; glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=32; glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=48; glUniformMatrix4fv(id,1,GL_FALSE,m);
It looks like the nVidia compiler handles the locations differently. In case it does not work properly, try the workaround for buggy drivers and assign locations with a different step per data type:
1 location: float, int, bool
2 locations: double
3 locations: vec3
4 locations: vec4
6 locations: dvec3
8 locations: dvec4
9 locations: mat3
16 locations: mat4
etc.

OpenGL pass packed texture coordinates to the shader

I need to do some heavy optimization in my game, and since we can pass a vec4 color to the shader packed into only one float, I was wondering whether there is a way to pass the vec2 texture coordinates the same way. For example, in the following code, can I pass the two texture coordinates (each is always 1 or 0) as only one element, in the same way as happens with the color element?
public void point(float x,float y,float z,float size,float color){
float hsize=size/2;
if(index>vertices.length-7)return;
vertices[index++]=x-hsize;
vertices[index++]=y-hsize;
vertices[index++]=color;
vertices[index++]=0;
vertices[index++]=0;
vertices[index++]=x+hsize;
vertices[index++]=y-hsize;
vertices[index++]=color;
vertices[index++]=1;
vertices[index++]=0;
vertices[index++]=x+hsize;
vertices[index++]=y+hsize;
vertices[index++]=color;
vertices[index++]=1;
vertices[index++]=1;
vertices[index++]=x-hsize;
vertices[index++]=y+hsize;
vertices[index++]=color;
vertices[index++]=0;
vertices[index++]=1;
num++;
}
{
mesh=new Mesh(false,COUNT*20, indices.length,
new VertexAttribute(Usage.Position, 2,"a_position"),
new VertexAttribute(Usage.ColorPacked, 4,"a_color"),
new VertexAttribute(Usage.TextureCoordinates, 2,"a_texCoord0")
);
}
frag shader
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main() {
vec4 color=v_color * texture2D(u_texture, v_texCoords);
gl_FragColor = color;
}
vert shader
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0;
gl_Position = u_projTrans * a_position;
}
I know this will not make a big difference in performance, but I am willing to spend some extra hours here and there on these small optimizations to make my game run faster.
I haven't tried this, but I think it will work. Simply use Usage.ColorPacked for your texture coordinates. I don't think you can send anything smaller than 4 bytes, so you may as well use the already defined packed color; that still saves you one float (four bytes) per vertex compared to two texture-coordinate floats. You can put your coordinates into the first two elements and ignore the second two.
mesh = new Mesh(false,COUNT*20, indices.length,
new VertexAttribute(Usage.Position, 2,"a_position"),
new VertexAttribute(Usage.ColorPacked, 4,"a_color"),
new VertexAttribute(Usage.ColorPacked, 4,"a_texCoord0")
);
I don't think the usage parameter of VertexAttribute is actually used for anything in Libgdx. If you look at the source, it only checks whether the usage is ColorPacked or not, and from that decides whether to use a single byte per component or four.
And in your vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec4 a_texCoord0; //note using vec4
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0.xy; //using the first two elements
gl_Position = u_projTrans * a_position;
}
To translate the texture coordinates correctly:
final Color texCoordColor = new Color(0, 0, 0, 0);
//....
texCoordColor.r = texCoord.x;
texCoordColor.g = texCoord.y;
float packedTexCoord = texCoordColor.toFloatBits(); // renamed: "texCoord" is already the coordinate vector above

DirectX FVF-like GLSL Shaders

Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs, and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you are having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture onto the pixels and light them with one directional light (according to the material properties and the light direction).
Vertex shader:
#version 330
// Vertex input layout (GLSL 330 core uses "in" rather than "attribute")
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output
struct PSIn
{
vec3 position; // world-space position; the fragment shader needs it for lighting
vec3 normal;
vec4 vertexColor;
vec2 texcoord;
vec3 tangent;
vec3 bitangent;
};
out PSIn psin;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
void main()
{
// transform position
vec4 pos = vec4(inPosition, 1.0f);
pos = mtxWorld * pos;
psin.position = pos.xyz; // capture the world-space position for lighting
pos = mtxView * pos;
pos = mtxProj * pos;
gl_Position = pos;
// just pass-through other stuff
psin.normal = inNormal;
psin.tangent = inTangent;
psin.bitangent = inBitangent;
psin.texcoord = inTexcoord;
psin.vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input (must match the vertex shader's output)
struct PSIn
{
vec3 position;
vec3 normal;
vec4 vertexColor;
vec2 texcoord;
vec3 tangent;
vec3 bitangent;
};
in PSIn psin;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka; // ambient quotient
float Kd; // diffuse quotient
float Ks; // specular quotient
float A; // shininess
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
vec3 V = normalize( eye - pos );
vec3 R = reflect( lit, nor);
float Ia = material.Ka;
float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
float Is = material.Ks * pow( clamp(dot(R,V), 0.0f, 1.0f), material.A );
return Ia + Id + Is;
}
void main()
{
vec3 nnormal = normalize(psin.normal);
vec3 ntangent = normalize(psin.tangent);
vec3 nbitangent = normalize(psin.bitangent);
vec4 outColor = texture(sampler0, psin.texcoord); // texture mapping
outColor *= Lit( material, psin.position, nnormal, lightDirection, cameraPosition ); // lighting
outColor.w = 1.0f;
fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to psin.vertexColor instead.
If you don't need lighting, just comment out the Lit() call.
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also keep inVertexCol, if you need vertices to have color), along with all of the code related to the unneeded attributes
inPosition can be a vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions. You can do it either in C/C++ code or in GLSL/HLSL.
Hope it helps somehow.
Intro
You've not specified the OpenGL/GLSL version that you are targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to introduce constraints such as a fixed vertex format. For what?.. (You will find the modern approach in the paragraph "Another way" of my post.)
But if you really want to emulate fixed-function...
I think you'll need to have a vertex shader for each vertex format you have, or somehow generate vertex shaders on the fly. Or even do so for all of the shader stages.
For example, for an x, y, color, tu, tv input you will have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
...
}
As OpenGL 3 has no fixed-function transforms, lighting, or materials, you must implement them yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material shader you must have the materials as input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka;
float Kd;
float Ks;
float A;
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
You can probably combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching, as sketched below.
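A crude sketch of that branching idea (the uniform and the defines are made up for illustration):
#version 330
#define FORMAT_UNLIT 0
#define FORMAT_LIT 1
in vec3 inPosition;
uniform mat4 mtxWorldViewProj;
uniform int vertexFormat; // selected per draw call from the CPU side
void main()
{
    gl_Position = mtxWorldViewProj * vec4(inPosition, 1.0);
    if (vertexFormat == FORMAT_LIT)
    {
        // per-vertex lighting path would go here
    }
    // FORMAT_UNLIT: nothing extra to do
}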
Another way
You can stick to the modern approach and just allow the user to declare the vertex format they want (the format they used in their shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract the vertex layout in an API-independent way.
The main ideas are:
the user passes your function an array of structures that describes the format in an API-independent way (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info, as in the sketch below
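A minimal sketch of that idea in OpenGL terms (the struct and function are made up for illustration):
struct VertexElement // API-independent description, akin to D3DVERTEXELEMENT9
{
    GLuint location;  // attribute location the shader uses
    GLint components; // 1..4
    GLenum type;      // e.g. GL_FLOAT
    GLsizei offset;   // byte offset within one vertex
};

GLuint createInputLayout(const VertexElement* elems, int count, GLsizei stride, GLuint vbo)
{
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (int i = 0; i < count; ++i)
    {
        glEnableVertexAttribArray(elems[i].location);
        glVertexAttribPointer(elems[i].location, elems[i].components, elems[i].type,
                              GL_FALSE, stride, (const void*)(intptr_t)elems[i].offset);
    }
    glBindVertexArray(0);
    return vao; // the "internal info": bind this VAO whenever the format is needed
}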
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms here), you'd be better off picking up a book or online tutorials than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!

Strange error in GLSL while assigning attribute to a varying in vertex shader

I am writing a GLSL program for texture mapping, and I am having a weird problem. From my vertex shader I am trying to pass the texture coordinate vec2 as a varying to the fragment shader. In a different shader in the same program I did the same thing and it worked, but for this shader it is not working. If I comment out that line, then everything works. I have no clue why this is happening.
This is the vertex shader:
attribute vec4 position;
attribute vec4 color1;
attribute vec4 normal;
attribute vec2 texCoord;
uniform mat4 model; //passed to shader
uniform mat4 projection; //passed to shader
uniform mat4 view; // passed to shader
uniform mat4 normalMatrix; //passed to shader
uniform mat4 worldNormalMatrix;
uniform vec3 eyePos;
varying vec4 pcolor;
varying vec3 fNormal;
varying vec3 v;
varying mat4 modelMat;
varying mat4 viewMat;
varying mat4 projectionMat;
varying vec2 texCoordinate;
varying vec3 reflector;
void main()
{
//texCoordinate = texCoord; // If I uncomment this, I get wrong output, but the same thing works in a different shader!!
mat4 projectionModelView;
vec4 N;
vec3 WorldCameraPosition = vec3(model*vec4(eyePos,1.0));
vec3 worldPos = vec3(model*position);
vec3 worldNorm = normalize(vec3(worldNormalMatrix*normal));
vec3 worldView = normalize(vec3(WorldCameraPosition-worldPos));
reflector = reflect(-worldView, worldNorm);
projectionMat = projection;
modelMat=model;
viewMat=view;
N=normalMatrix*normal;
fNormal = vec3(N); //need to multiply this with normal matrix
projectionModelView=projection*view*model;
v=vec3(view*model*position); // v is the position at eye space for each vertex passed to frag shader
gl_Position = projectionModelView * position; // calculate clip space position
}
varying vec4 pcolor;
varying vec3 fNormal;
varying vec3 v;
varying mat4 modelMat;
varying mat4 viewMat;
varying mat4 projectionMat;
varying vec2 texCoordinate;
varying vec3 reflector;
That is 17 varying vectors in total (each mat4 counts as four). That's too many for most 2.1-class hardware, which generally supports only 16. That's probably why it "works" when you comment that line out: the compiler eliminates any varying that you don't actually write to, which puts you back under the limit.
You should not be passing those matrices as varyings at all. They're uniforms; by definition, they don't change from vertex to vertex. Fragment shaders are perfectly capable of accessing those same uniforms, so just declare them in your fragment shader if you need them there.
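For example, the fragment shader can simply declare the matrices it needs directly (a sketch; only the small vectors remain varyings):
// fragment shader
uniform mat4 model;      // same names and types as in the vertex shader:
uniform mat4 view;       // they resolve to the same program uniforms
uniform mat4 projection;

varying vec2 texCoordinate; // now comfortably under the 16-vector limit
varying vec3 reflector;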