index expression must be constant using array - glsl

I would like to apply multiple lights in my shader. To do that, I've chosen to put the lights' data in multiple arrays instead of one light = one struct, to avoid calling "setUniforms" many times.
I've got something like this:
// Light's data.
struct LightData
{
    vec3 ambient[4];
    vec3 diffuse[4];
    vec3 position[4];
    vec3 specular[4];
};
uniform LightData lights;
vec3 applyLight( int index, vec3 cameraPos )
{
    vec3 position = lights.position[index]; //<<< Error here.
    …
    return result;
}
void main()
{
    vec3 color = vec3(0.0); // start from black and accumulate
    for( int i = 0; i < 4; i++)
        color += applyLight(i, camera.position);
}
The problem is in the GLSL code: "index expression must be constant []". What can I do to make this code work? I could pass position, diffuse, ambient, and specular to the function individually, but that would be dirty.
What about performance? Is it better to have something like what I've done, or like the code below?
struct Light
{
    vec3 ambient;
    vec3 diffuse;
    vec3 position;
    vec3 specular;
};
uniform Light lights[4];
Have a nice day!

Unroll your loop:
color += applyLight(0, camera.position);
color += applyLight(1, camera.position);
color += applyLight(2, camera.position);
color += applyLight(3, camera.position);
(not sure if this exact code will work, but you get the idea)
lights.position[index]; // no
lights.position[0]; // yes, etc
The second option from the question (an array of Light structs) might also be a better uniform layout; it may have better locality.
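If you want to keep a single applyLight(index, ...) function, you can also funnel every array access through constant indices instead of unrolling each call site. A minimal sketch of that idea, against the struct-of-arrays layout from the question (lightPosition is a hypothetical helper):

// Hypothetical helper: every array access uses a constant index,
// which old GLSL / GLSL ES compilers accept.
vec3 lightPosition( int index )
{
    if (index == 0) return lights.position[0];
    if (index == 1) return lights.position[1];
    if (index == 2) return lights.position[2];
    return lights.position[3];
}

Note that newer desktop GLSL versions allow indexing a uniform array with a dynamically uniform expression such as a loop counter, so depending on the version and driver you target, simply raising the #version may be enough by itself.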

Related

How to use a directional light in a blinn shader instead of point light?

So I am using a Blinn shader program on some of my models, and I want it to use a DIRECTIONAL light as opposed to a point light. I started out with this original code that uses a point light:
VERTEX:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
void main()
{
    // Vertex location in modelview coordinates
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    Light = vec3(gl_LightSource[0].position) - P;
    Normal = gl_NormalMatrix * gl_Normal;
    Half = gl_LightSource[0].halfVector.xyz;
    Ambient = gl_FrontMaterial.emission + gl_FrontLightProduct[0].ambient + gl_LightModel.ambient*gl_FrontMaterial.ambient;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
FRAGMENT:
varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
varying vec4 Ambient;
uniform sampler2D tex;
vec4 blinn()
{
    vec3 N = normalize(Normal);
    vec3 L = normalize(Light);
    vec4 color = Ambient;
    float Id = dot(L,N);
    if (Id>0.0)
    {
        color += Id*gl_FrontLightProduct[0].diffuse;
        vec3 H = normalize(Half);
        float Is = dot(H,L); // Specular is cosine of reflected and view vectors
        if (Is>0.0) color += pow(Is,gl_FrontMaterial.shininess)*gl_FrontLightProduct[0].specular;
    }
    return color;
}
void main()
{
    gl_FragColor = blinn() * texture2D(tex,gl_TexCoord[0].xy);
}
However, as stated above, instead of a point light I want a directional light, such that no matter where in the scene the model is, the direction of the light is the same. So I make the following changes:
Instead of:
varying vec3 Light;
and
vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
Light = vec3(gl_LightSource[0].position) - P;
I get rid of the above lines of code and instead in my fragment shader have:
uniform vec4 lightDir;
and
vec3 L = normalize(lightDir.xyz);
I pass the direction of the light as a uniform from outside my shader program, and this works well: the model is lit from a single direction no matter its location in the world! HOWEVER, the lighting now changes dramatically and unrealistically depending on the user's view, which makes sense, since I got rid of the "- P" in the light calculation from the original code. I've already tried adding that back (by moving lightDir to the vertex shader and passing it along again in a varying), and it just doesn't fix the problem. I'm afraid I just don't understand what is going on well enough to figure this out. I understand that the "- P" in the Light vec3 is necessary to make specular/reflection work, but I don't know how to make it work for a directional light. How do I take the original code above and make it treat the light as a directional light as opposed to a point light?
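For what it's worth, one way this is usually done with fixed-function state (a sketch, not guaranteed to be the exact fix for this code): set the light with glLightfv and w == 0 while only the camera transform is on the modelview stack, so gl_LightSource[0].position already holds an eye-space direction, and rebuild the half vector from that direction and the view vector:

varying vec3 Half;
varying vec3 Light;
varying vec3 Normal;
void main()
{
    // P is still needed: the view vector for specular comes from it
    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex);
    // with w == 0 this is an eye-space DIRECTION, so no "- P" here
    Light = vec3(gl_LightSource[0].position);
    Normal = gl_NormalMatrix * gl_Normal;
    // Blinn half vector: light direction plus direction to the eye (the origin)
    Half = normalize(Light + normalize(-P));
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

(The Ambient and texcoord lines are unchanged and omitted here.) The point is that the direction must live in the same space as P and Normal, i.e. eye space, which is why passing a raw world-space lightDir made the lighting swim with the camera.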

DirectX FVF-like GLSL Shaders

Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you're having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture onto the pixels and light them with one directional light (according to the material properties and light direction).
Vertex shader:
#version 330
// Vertex input layout
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output (names must match the fragment shader's inputs)
out vec3 position;
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
void main()
{
    // transform position
    vec4 worldPos = mtxWorld * vec4(inPosition, 1.0);
    gl_Position = mtxProj * (mtxView * worldPos);
    // world-space position and normal for the lighting in the fragment shader
    // (mat3(mtxWorld) assumes no non-uniform scaling)
    position = worldPos.xyz;
    normal = mat3(mtxWorld) * inNormal;
    // just pass-through other stuff
    tangent = inTangent;
    bitangent = inBitangent;
    texcoord = inTexcoord;
    vertexColor = inVertexCol;
}
And the fragment shader:
#version 330
// Input
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka; // ambient quotient
    float Kd; // diffuse quotient
    float Ks; // specular quotient
    float A;  // shininess
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
    vec3 V = normalize( eye - pos );
    vec3 R = reflect( lit, nor );
    float Ia = material.Ka;
    float Id = material.Kd * clamp( dot(nor, -lit), 0.0, 1.0 );
    float Is = material.Ks * pow( clamp( dot(R, V), 0.0, 1.0 ), material.A );
    return Ia + Id + Is;
}
void main()
{
    vec3 nnormal = normalize(normal);
    vec3 ntangent = normalize(tangent);
    vec3 nbitangent = normalize(bitangent);
    vec4 outColor = texture(sampler0, texcoord); // texture mapping
    outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
    outColor.w = 1.0;
    fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to vertexColor instead.
If you don't need lighting, just leave out the Lit() call.
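For instance, the untextured, unlit variant of main() collapses to something like this sketch:

void main()
{
    // no texture sampling: start from the interpolated vertex color
    vec4 outColor = vertexColor;
    // no Lit() call: the color is used as-is
    fragColor = vec4(outColor.rgb, 1.0);
}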
Edit:
For 2D objects you can still use the same program, but much of its functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), and all of the code related to the unneeded attributes
inPosition can be a vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions. You can do it either in C/C++ code or in GLSL/HLSL. A minimal sketch of such a shader follows.
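Putting those points together, a stripped 2D vertex shader might look like this sketch (mtxOrthoProj is an assumed uniform name for the orthographic projection mentioned above):

#version 330
in vec2 inPosition;   // vec2 is enough for 2D
in vec2 inTexcoord;
out vec2 texcoord;
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxOrthoProj; // orthographic projection instead of view/projection
};
void main()
{
    gl_Position = mtxOrthoProj * vec4(inPosition, 0.0, 1.0);
    texcoord = inTexcoord;
}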
Hope it helps somehow.
Intro
You've not specified the OpenGL/GLSL version that you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to introduce such constraints as a fixed vertex format. For what?.. (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need to have a vertex shader for each vertex format you have, or somehow generate vertex shaders on the fly. Or even all of the shader stages.
For example, for x, y, color, tu, tv input you will have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
    ...
}
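For the transformed (pre-lit) variant, the elided body would do little more than forward the data. A sketch, assuming inPosition is already in clip space (screen-pixel input would first need the remapping discussed below):

varying vec4 color;
varying vec2 texcoord;
void main()
{
    color = inCol;
    texcoord = inTexcoord;
    // already transformed: no matrices involved
    gl_Position = vec4(inPosition, 0.0, 1.0);
}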
As you don't have the transform, lighting, and material fixed functionality in OpenGL 3, you must implement it yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material shader you must have the material as input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka;
    float Kd;
    float Ks;
    float A;
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
Probably, you can somehow combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
Another way
You can stick to the modern approach and just allow the user to declare the vertex format he wants (the format he used in his shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract out the vertex layout in an API-independent way.
The main ideas are:
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
P.S. If you need ideas on how to properly implement lights and materials in GLSL (I mean the algorithms here), you'd better pick up a book or some online tutorials than ask here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side and then transform the interpolated vertex in the vertex shader with the model-view matrix.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
    vec4 gl_Position;
};
void main()
{
    uvsOut = uvs;
    diffuseOut = DIFFUSE_COLOR;
    Normal = normal;
    Position = vec3(MODEL_VIEW_MATRIX * position);
    gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
    vec4 Lp; /// light position
    vec3 Li; /// light intensity
    vec3 Lc; /// light color
    int Lt;  /// light type
};
const int MAX_LIGHTS = 5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
//// ADS lighting method:
vec3 pointlightType( int lightIndex, vec3 position, vec3 normal ) {
    vec3 n = normalize(normal);
    vec4 lMVPos = lights[0].Lp; // light position (transformed to camera space on the client)
    vec3 s = normalize(vec3(lMVPos.xyz) - position); // surface-to-light vector
    vec3 v = normalize(vec3(-position)); // view vector (camera is at the origin in camera space)
    vec3 r = normalize(-reflect(s, n));
    vec3 h = normalize(v + s);
    float sDotN = max(0.0, dot(s, n));
    vec3 diff = KD * lights[0].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);
    vec3 spec = vec3(0, 0, 0);
    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }
    return lights[0].Li * (spec + diff);
}
I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex position into. In my case the view matrix is created with
glm::lookAt()
which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how it is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author also seems to use camera space. But it doesn't work right, unless that is the way it should work...
I just moved the calculations into world space. Now the point light stays in place. But how do I achieve the same using camera space?
I nailed down the bug, and it was a pretty stupid one. But it may be helpful to others who are too much "math friendly". The light position in my shaders is defined with a vec3. On the client side it is represented with a vec4, and I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Doing so, I believe, the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
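In shader terms the difference boils down to this (a sketch; toEyeSpace is a hypothetical helper, VIEW_MATRIX is the uniform from the vertex shader above):

uniform mat4 VIEW_MATRIX;
// w must be 1.0 so the view matrix's translation is applied;
// with w == 0.0 the vector is treated as a direction and the
// translation is silently dropped, which is exactly the bug.
vec3 toEyeSpace( vec3 lightPosWorld )
{
    return vec3(VIEW_MATRIX * vec4(lightPosWorld, 1.0));
}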

GLSL calculating color vector from multiple lights

I'm using my own light (not the OpenGL built-in one). This is my fragment shader program:
#version 330
in vec4 vertexPosition;
in vec3 surfaceNormal;
in vec2 textureCoordinate;
in vec3 eyeVecNormal;
out vec4 outputColor;
uniform sampler2D texture_diffuse;
uniform bool specular;
uniform float shininess;
struct Light {
    vec4 position;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
};
uniform Light lights[8];
void main()
{
    outputColor = texture2D(texture_diffuse, textureCoordinate);
    for (int l = 0; l < 8; l++) {
        vec3 lightDirection = normalize(lights[l].position.xyz - vertexPosition.xyz);
        float diffuseLightIntensity = max(0.0, dot(surfaceNormal, lightDirection));
        outputColor.rgb += lights[l].ambientColor.rgb * lights[l].ambientColor.a;
        outputColor.rgb += lights[l].diffuseColor.rgb * lights[l].diffuseColor.a * diffuseLightIntensity;
        if (specular) {
            vec3 reflectionDirection = normalize(reflect(lightDirection, surfaceNormal));
            float specularFactor = max(0.0, dot(eyeVecNormal, reflectionDirection));
            if (diffuseLightIntensity != 0.0) {
                vec3 specularColorOut = pow(specularFactor, shininess) * lights[l].specularColor.rgb;
                outputColor.rgb += specularColorOut * lights[l].specularColor.a;
            }
        }
    }
}
Now the problem is that when I have 2 light sources, each with, say, ambient color vec4(0.2f, 0.2f, 0.2f, 1.0f), the ambient color on the model will be vec4(0.4f, 0.4f, 0.4f, 1.0f), because I simply add them to the outputColor variable. How can I calculate single ambient and diffuse color variables for multiple lights, so I get a realistic result?
Here's a fun fact: lights in the real world do not have ambient, diffuse, or specular colors. They emit one color. Period. (OK, if you want to be technical, lights emit lots of colors; but individual photons don't have "ambient" properties, they just have different wavelengths.) All you're doing is copying someone who copied OpenGL's nonsense about ambient and diffuse light colors.
Stop copying someone else's code and do it right.
Each light has a color. You use that color to compute the diffuse and specular contributions of that light to the object. That's it.
Ambient is a property of the scene, not of any particular light. It is intended to represent indirect, global illumination: the light reflected from other objects in the scene, taken as a general aggregate. You don't have ambient "lights"; there is only one ambient term, and it should be applied once.
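Applied to the shader from the question, that restructuring might look like this sketch (sceneAmbient is an assumed new uniform; the light's single color reuses the existing diffuseColor field):

uniform vec4 sceneAmbient; // one ambient term for the whole scene
void main()
{
    vec4 texColor = texture2D(texture_diffuse, textureCoordinate);
    // ambient applied exactly once, not per light
    outputColor = vec4(texColor.rgb * sceneAmbient.rgb, texColor.a);
    for (int l = 0; l < 8; l++) {
        vec3 lightDirection = normalize(lights[l].position.xyz - vertexPosition.xyz);
        float diffuse = max(0.0, dot(surfaceNormal, lightDirection));
        // one color per light drives the diffuse (and specular) contribution
        outputColor.rgb += texColor.rgb * lights[l].diffuseColor.rgb * diffuse;
    }
}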

My GLSL shader program links fine, but errors when I try to use it -- how do I debug this?

If I had a syntax error in my GLSL, it would fail at link time, wouldn't it?
Anyway, I tried running my program in glslDevil (I've never used this program before) and it just keeps repeating wglGetCurrentContext(). I'm not sure if it's supposed to do that, or if it's getting confused because it's a CLR app.
Is there an easier way to check for syntax errors in my GLSL code?
It appears to be the fragment shader that's causing the problem. Here's the code if needed:
#version 330
const int MAX_POINT_LIGHTS = 2;
in vec2 TexCoord0;
in vec3 Normal0;
in vec3 WorldPos0;
out vec4 FragColor;
struct BaseLight
{
    vec3 Color;
    float AmbientIntensity;
    float DiffuseIntensity;
};
struct DirectionalLight
{
    struct BaseLight Base;
    vec3 Direction;
};
struct Attenuation
{
    float Constant;
    float Linear;
    float Exp;
};
struct PointLight
{
    struct BaseLight Base;
    vec3 Position;
    Attenuation Atten;
};
uniform int gNumPointLights;
uniform DirectionalLight gDirectionalLight;
uniform PointLight gPointLights[MAX_POINT_LIGHTS];
uniform sampler2D gSampler;
uniform vec3 gEyeWorldPos;
uniform float gMatSpecularIntensity;
uniform float gSpecularPower;
vec4 CalcLightInternal(struct BaseLight Light, vec3 LightDirection, vec3 Normal)
{
    vec4 AmbientColor = vec4(Light.Color, 1.0f) * Light.AmbientIntensity;
    float DiffuseFactor = dot(Normal, -LightDirection);
    vec4 DiffuseColor = vec4(0, 0, 0, 0);
    vec4 SpecularColor = vec4(0, 0, 0, 0);
    if (DiffuseFactor > 0) {
        DiffuseColor = vec4(Light.Color, 1.0f) * Light.DiffuseIntensity * DiffuseFactor;
        vec3 VertexToEye = normalize(gEyeWorldPos - WorldPos0);
        vec3 LightReflect = normalize(reflect(LightDirection, Normal));
        float SpecularFactor = dot(VertexToEye, LightReflect);
        SpecularFactor = pow(SpecularFactor, gSpecularPower);
        if (SpecularFactor > 0) {
            SpecularColor = vec4(Light.Color, 1.0f) * gMatSpecularIntensity * SpecularFactor;
        }
    }
    return (AmbientColor + DiffuseColor + SpecularColor);
}
vec4 CalcDirectionalLight(vec3 Normal)
{
    return CalcLightInternal(gDirectionalLight.Base, gDirectionalLight.Direction, Normal);
}
vec4 CalcPointLight(int Index, vec3 Normal)
{
    vec3 LightDirection = WorldPos0 - gPointLights[Index].Position;
    float Distance = length(LightDirection);
    LightDirection = normalize(LightDirection);
    vec4 Color = CalcLightInternal(gPointLights[Index].Base, LightDirection, Normal);
    float Attenuation = gPointLights[Index].Atten.Constant +
                        gPointLights[Index].Atten.Linear * Distance +
                        gPointLights[Index].Atten.Exp * Distance * Distance;
    return Color / Attenuation;
}
void main()
{
    vec3 Normal = normalize(Normal0);
    vec4 TotalLight = CalcDirectionalLight(Normal);
    for (int i = 0 ; i < gNumPointLights ; i++) {
        TotalLight += CalcPointLight(i, Normal);
    }
    FragColor = texture2D(gSampler, TexCoord0.xy) * TotalLight;
}
Edit: Specifically, I get GL_INVALID_OPERATION when I call glUseProgram. The docs say:
GL_INVALID_OPERATION is generated if program is not a program object.
GL_INVALID_OPERATION is generated if program could not be made part of current state.
GL_INVALID_OPERATION is generated if glUseProgram is executed between the execution of glBegin and the corresponding execution of glEnd.
But I don't think it can be the first or last case, because the program ran fine until I tweaked the shader. "Could not be made part of the current state" doesn't give me much to go on.
I think one possible explanation could be the for loop inside your code.
In my humble experience, the for statement should always be avoided in shaders.
Consider that OpenGL ES supports only a fixed number of iterations and, in your case, since you are basing the loop count on a uniform, I would investigate that.
Moreover, consider that many drivers unroll the for statement in order to optimize the generated code.
In other words, I would focus on a simpler version of your shader, something like the following:
if (gNumPointLights == 1) {
    TotalLight += CalcPointLight(0, Normal);
} else if (gNumPointLights == 2) {
    TotalLight += CalcPointLight(0, Normal);
    TotalLight += CalcPointLight(1, Normal);
} else if (gNumPointLights == 3) {
    TotalLight += CalcPointLight(0, Normal);
    TotalLight += CalcPointLight(1, Normal);
    TotalLight += CalcPointLight(2, Normal);
}
Obviously the code can be optimized in style and efficiency, but it is a quick starting point.
Anyway, realistically, you won't have thousands of lights at the same time :)
Cheers
The problem was this:
struct PointLight
{
    struct BaseLight Base;
    vec3 Position;
    Attenuation Atten;
};
It should be just BaseLight Base, with no struct keyword: unlike C, GLSL does not repeat struct when declaring a variable of a struct type.
And this:
vec4 CalcLightInternal(struct BaseLight Light, vec3 LightDirection, vec3 Normal)
Same problem.
Thought it looked funny, so I changed it, and it started working.
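For reference, the corrected declaration:

struct PointLight
{
    BaseLight Base; // no 'struct' keyword in GLSL
    vec3 Position;
    Attenuation Atten;
};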
When a program object has been successfully linked, it can be made part of the current state by calling glUseProgram. So you should link the program successfully before calling glUseProgram.
In my case, I had an extra tick mark (a stray backtick) at the end of a line. Fat-thumb typing.
gl_FragColor = vec4(0.6, 0.6, 0.6, 1.0);`
Removing the tick mark fixed the problem.
It's weird that it gets all the way through compiling and linking, and only throws the error when you try to use the program.