I am writing a 2D lighting system using LibGDX, but I have run into difficulties with shaders.
Previously I had written a lighting system in Slick2D, which worked very well; however, the way coordinates work in shaders in that library seems to be very different from LibGDX.
I had to write this Java function to change coordinates from 'world coordinates' to whatever works in shaders. I did this by analysing the output for different coordinates.
public static Vector2 screenToNDC(Vector2 screenCoords, Vector2 textureSize, Vector2 texturePosition)
{
// Vector2.scl() and sub() mutate in place, so work on a copy to avoid
// modifying the caller's vectors: result = (screen - position) / size
return new Vector2(screenCoords).sub(texturePosition).scl(1f / textureSize.x, 1f / textureSize.y);
}
This function works fine for translating the light position.
This is my shader code:
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform mat4 u_projTrans;
uniform vec2 lightPos;
uniform vec3 lightCol;
//uniform float lightIntensity;
float dist(vec2 a, vec2 b) { // same as the built-in distance(a, b)
return sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
}
void main() {
gl_FragColor = vec4(lightCol, 1.0 / dist(v_texCoords, lightPos));
}
Currently, this draws only the light colour at full alpha. Upon investigation, I found this must be because texture coordinates span only the 0 to 1 range, so the dist function returns very small values, such that 1.0 / dist ends up at or above 1.0 (full alpha). If I multiply dist(v_texCoords, lightPos) by a value like 10, the light falloff works as intended.
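For illustration, here is the same main() with the magic multiplier pulled out into a uniform (lightFalloff is a hypothetical name, not part of the original code) and the alpha clamped:
uniform float lightFalloff; // hypothetical scale factor, e.g. 10.0, set from the Java side
void main() {
    float falloff = dist(v_texCoords, lightPos) * lightFalloff;
    gl_FragColor = vec4(lightCol, clamp(1.0 / falloff, 0.0, 1.0));
}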
Is there any way to simply make coordinates work in LibGDX shaders in the same way that they do in Java or Slick2D shaders?
I am trying to implement a blur effect in my game on mobile devices using a GLSL shader. I don't have any prior experience with writing shaders, and I don't understand whether my shader is good enough. Actually, I copied the GLSL code from a tutorial, and I don't know if that tutorial is just a vivid demo or can also be used in practice. Here is the code of the two-pass blur shader that uses Gaussian weights (http://www.cocos2d-x.org/wiki/User_Tutorial-RenderTexture_Plus_Blur):
#ifdef GL_ES
precision mediump float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform vec2 pixelSize;
uniform vec2 direction;
uniform int radius;
uniform float weights[64];
void main()
{
gl_FragColor = texture2D(CC_Texture0, v_texCoord)*weights[0];
for (int i = 1; i < radius; i++) {
vec2 offset = vec2(float(i)*pixelSize.x*direction.x, float(i)*pixelSize.y*direction.y);
gl_FragColor += texture2D(CC_Texture0, v_texCoord + offset)*weights[i];
gl_FragColor += texture2D(CC_Texture0, v_texCoord - offset)*weights[i];
}
}
I run this shader on each frame update (60 times a second), and my game's framerate drops to 22 FPS on an iPhone 5S (not a bad device) with only one pass. I think this is very strange; it doesn't seem to have that many instructions. Why is this so heavy?
P.S. Blur radius is 50, step is 1.
Main reasons why your shader is heavy:
1: These two calculations: v_texCoord + offset and v_texCoord - offset. Because the UV coordinates are computed in the fragment shader, these are dependent texture reads: the texture data has to be loaded from memory on the spot, causing cache misses. (See: What is a dependent texture read?)
2: radius is way too large.
How to make it faster/better:
1: Calculate as much as possible in the vertex shader. Ideally, if you calculate all the UVs in the vertex shader, the GPU can move the texture memory into cache before calling the fragment shaders, drastically improving performance.
2: Reduce radius to accommodate, let's say, 8-16 texture2D calls. This will probably not give you the result you are expecting; to solve that you can use 2 textures, blurring texture A into B, then blurring B back into A, and so on, as much as you need. This gives very good results; I remember Crysis 1 used it for motion blur, but I can't find the paper.
3: Eliminate those 64 uniforms and hardcode the data in the shader (see the sketch after this list). I know that this is not that nice, but you will gain some extra performance.
4: If you carefully calculate the UV coordinates you can take great advantage of texture interpolation. Basically, never sample a pixel at its center; always sample in between pixels, and the hardware will do an average of the 4 nearest pixels:
https://en.wikipedia.org/wiki/Bilinear_filtering
5: This line: precision mediump float; does everything have to be mediump? I would suggest removing it and doing some testing with lowp on as much as you can.
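For point 3, a sketch of what hardcoding could look like. The five weights below are the commonly quoted 5-tap Gaussian values, not taken from the original post, so substitute values that fit your own kernel:
// 5-tap Gaussian weights (assumed values; together with the mirrored taps they sum to ~1.0)
const float w0 = 0.227027;
const float w1 = 0.1945946;
const float w2 = 0.1216216;
const float w3 = 0.054054;
const float w4 = 0.016216;

void main()
{
    vec2 off = pixelSize * direction; // one-texel step along the blur direction
    vec4 c = texture2D(CC_Texture0, v_texCoord) * w0;
    c += (texture2D(CC_Texture0, v_texCoord + off)       + texture2D(CC_Texture0, v_texCoord - off))       * w1;
    c += (texture2D(CC_Texture0, v_texCoord + 2.0 * off) + texture2D(CC_Texture0, v_texCoord - 2.0 * off)) * w2;
    c += (texture2D(CC_Texture0, v_texCoord + 3.0 * off) + texture2D(CC_Texture0, v_texCoord - 3.0 * off)) * w3;
    c += (texture2D(CC_Texture0, v_texCoord + 4.0 * off) + texture2D(CC_Texture0, v_texCoord - 4.0 * off)) * w4;
    gl_FragColor = c;
}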
Edit:
For your shader, here is a simplified version of what you need to do:
Vertex shader:
attribute highp vec4 Position;
attribute mediump vec2 texture0UV;
varying mediump vec2 v_texCoord0;
varying mediump vec2 v_texCoord1;
varying mediump vec2 v_texCoord2;
varying mediump vec2 v_texCoord3;
varying mediump vec2 v_texCoord4;
uniform mediump vec2 texture_size;
void main()
{
gl_Position = Position;
vec2 pixel_size = vec2(1.0) / texture_size;
vec2 offset;
v_texCoord0 = texture0UV;
v_texCoord1 = texture0UV + vec2(-1.0,0.0) / texture_size + pixel_size * 0.5;
v_texCoord2 = texture0UV + vec2(0.0,-1.0) / texture_size + pixel_size * 0.5;
v_texCoord3 = texture0UV + vec2(1.0,0.0) / texture_size - pixel_size * 0.5;
v_texCoord4 = texture0UV + vec2(0.0,1.0) / texture_size - pixel_size * 0.5;
}
The last operation, pixel_size * 0.5, is required to take maximum advantage of linear interpolation. In this example the positions you pick for sampling are trivial, but there is an entire discussion on how you should pick your sampling positions that is way out of the scope of this question.
Fragment shader:
varying mediump vec2 v_texCoord0;
varying mediump vec2 v_texCoord1;
varying mediump vec2 v_texCoord2;
varying mediump vec2 v_texCoord3;
varying mediump vec2 v_texCoord4;
uniform lowp sampler2D CC_Texture0;
void main()
{
mediump vec4 final_color = vec4(0.0);
final_color += texture2D(CC_Texture0,v_texCoord0);
final_color += texture2D(CC_Texture0,v_texCoord1);
final_color += texture2D(CC_Texture0,v_texCoord2);
final_color += texture2D(CC_Texture0,v_texCoord3);
final_color += texture2D(CC_Texture0,v_texCoord4);
gl_FragColor = final_color / 5.0;//weights have to go, use fixed values instead, in this case it's 1/5 for each sample
}
For this to look good you need to blur the texture multiple times; even if you blur it only 2 times, you should see a notable difference.
To speed up you can:
Make radius a const to allow the shader compiler to unroll the loop (see the sketch below)
Precompute pixelSize * direction
Decrease radius; I think 50 is too big for a mobile device
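A sketch of those three changes together (dirStep is a hypothetical uniform holding the precomputed pixelSize * direction; the CPU-side weights array has to shrink to match):
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform vec2 dirStep;     // hypothetical: pixelSize * direction, precomputed on the CPU
uniform float weights[8]; // radius reduced from 50 to 8
const int RADIUS = 8;     // compile-time constant, so the compiler can unroll the loop

void main()
{
    vec4 c = texture2D(CC_Texture0, v_texCoord) * weights[0];
    for (int i = 1; i < RADIUS; i++) {
        vec2 offset = float(i) * dirStep;
        c += texture2D(CC_Texture0, v_texCoord + offset) * weights[i];
        c += texture2D(CC_Texture0, v_texCoord - offset) * weights[i];
    }
    gl_FragColor = c;
}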
I tried to implement normal mapping in my OpenGL application, but I can't get it to work.
This is the diffuse map (which I add a brown color to) and this is the normal map.
In order to get the tangent and bitangent (in other places called binormals?) vectors, I run this function for every triangle in my mesh:
void getTangent(const glm::vec3 &v0, const glm::vec3 &v1, const glm::vec3 &v2,
const glm::vec2 &uv0, const glm::vec2 &uv1, const glm::vec2 &uv2,
std::vector<glm::vec3> &vTangents, std::vector<glm::vec3> &vBitangents)
{
// Edges of the triangle: position delta
glm::vec3 deltaPos1 = v1-v0;
glm::vec3 deltaPos2 = v2-v0;
// UV delta
glm::vec2 deltaUV1 = uv1-uv0;
glm::vec2 deltaUV2 = uv2-uv0;
float r = 1.0f / (deltaUV1.x * deltaUV2.y - deltaUV1.y * deltaUV2.x);
glm::vec3 tangent = (deltaPos1 * deltaUV2.y - deltaPos2 * deltaUV1.y)*r;
glm::vec3 bitangent = (deltaPos2 * deltaUV1.x - deltaPos1 * deltaUV2.x)*r;
for(int i = 0; i < 3; i++) {
vTangents.push_back(tangent);
vBitangents.push_back(bitangent);
}
}
After that, I call glBufferData to upload the vertices, normals, uvs, tangents and bitangents to the GPU.
The vertex shader:
#version 430
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
in vec3 tangent;
in vec3 bitangent;
out vec2 fsCoords;
out vec3 fsVertex;
out mat3 TBNMatrix;
void main()
{
gl_Position = ProjectionMatrix * CameraMatrix * ModelMatrix * vec4(vertex, 1.0);
fsCoords = uv;
fsVertex = vertex;
TBNMatrix = mat3(tangent, bitangent, normal);
}
Fragment shader:
#version 430
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform mat4 ModelMatrix;
uniform vec3 CameraPosition;
uniform struct Light {
float ambient;
vec3 position;
} light;
uniform float shininess;
in vec2 fsCoords;
in vec3 fsVertex;
in mat3 TBNMatrix;
out vec4 color;
void main()
{
//base color
const vec3 brownColor = vec3(153.0 / 255.0, 102.0 / 255.0, 51.0 / 255.0);
color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0);//add a fixed base color (0.25), because its dark as hell
//general vars
vec3 normal = texture(normalMap, fsCoords).rgb * 2.0 - 1.0;
vec3 surfacePos = vec3(ModelMatrix * vec4(fsVertex, 1.0));
vec3 surfaceToLight = normalize(TBNMatrix * (light.position - surfacePos)); //unit vector
vec3 eyePos = TBNMatrix * CameraPosition;
//diffuse
float diffuse = max(0.0, dot(normal, surfaceToLight));
//specular
float specular = 0.0; // must be initialized: it is read below even when diffuse == 0.0
vec3 incidentVector = -surfaceToLight; //unit
vec3 reflectionVector = reflect(incidentVector, normal); //unit vector
vec3 surfaceToCamera = normalize(eyePos - surfacePos); //unit vector
float cosAngle = max(0.0, dot(surfaceToCamera, reflectionVector));
if(diffuse > 0.0)
specular = pow(cosAngle, shininess);
//add lighting to the fragment color (no attenuation for now)
color.rgb *= light.ambient;
color.rgb += diffuse + specular;
}
The image I get is completely incorrect. (light positioned on camera)
What am I doing wrong here?
My bet is on the color setting/mixing in the fragment shader...
you are setting the output color more than once
If I remember correctly, on some gfx drivers that causes big problems; for example, everything after the line
color = vec4(brownColor * (texture(diffuseMap, fsCoords).rgb + 0.25), 1.0);//add a fixed base color (0.25), because its dark as hell
could be deleted by the driver...
you are adding color and intensities instead of color * intensity
but I could have overlooked something.
try just normal/bump shading at first
Ignore ambient, reflect, specular... and then, if it works, add the rest one by one. Always check the shaders' compilation logs.
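For example, a minimal diffuse-only version of your fragment shader could look like the sketch below (same uniforms and varyings as your original; a debugging starting point, not the final shader):
#version 430
uniform sampler2D diffuseMap;
uniform sampler2D normalMap;
uniform mat4 ModelMatrix;
uniform struct Light {
    float ambient;
    vec3 position;
} light;
in vec2 fsCoords;
in vec3 fsVertex;
in mat3 TBNMatrix;
out vec4 color;

void main()
{
    // sample the tangent-space normal and the base color
    vec3 normal = normalize(texture(normalMap, fsCoords).rgb * 2.0 - 1.0);
    vec3 albedo = texture(diffuseMap, fsCoords).rgb;
    // light direction, brought into tangent space as in the original shader
    vec3 surfacePos = vec3(ModelMatrix * vec4(fsVertex, 1.0));
    vec3 surfaceToLight = normalize(TBNMatrix * (light.position - surfacePos));
    float diffuse = max(0.0, dot(normal, surfaceToLight));
    // write the output exactly once, and multiply intensity into the color
    color = vec4(albedo * diffuse, 1.0);
}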
Too lazy to further analyze your code, so here is how I do it:
Left side is a spaceship object (similar to ZXS Elite's Viper) rendered with the fixed-function pipeline. Right side is the same (a slightly different rotation of the object) with GLSL shaders in place and this normal/bump map:
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
// texture units:
// 0 - texture0 map 2D rgba
// 1 - texture1 map 2D rgba
// 2 - normal map 2D xyz
// 3 - specular map 2D i
// 4 - light map 2D rgb rgb
// 5 - enviroment/skybox cube map 3D rgb
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_l2g_dir;
uniform mat4x4 tm_g2s;
uniform mat4x4 tm_l2s_per;
uniform mat4x4 tm_per;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
layout(location=2) in vec2 txr;
layout(location=3) in vec3 tan;
layout(location=4) in vec3 bin;
layout(location=5) in vec3 nor;
out smooth vec3 pixel_pos;
out smooth vec4 pixel_col;
out smooth vec2 pixel_txr;
//out flat mat3 pixel_TBN;
out smooth mat3 pixel_TBN;
//------------------------------------------------------------------
void main(void)
{
vec4 p;
p.xyz=pos;
p.w=1.0;
p=tm_l2g*p;
pixel_pos=p.xyz;
p=tm_g2s*p;
gl_Position=p;
pixel_col=col;
pixel_txr=txr;
p.xyz=tan.xyz; p.w=1.0; pixel_TBN[0]=normalize((tm_l2g_dir*p).xyz);
p.xyz=bin.xyz; p.w=1.0; pixel_TBN[1]=normalize((tm_l2g_dir*p).xyz);
p.xyz=nor.xyz; p.w=1.0; pixel_TBN[2]=normalize((tm_l2g_dir*p).xyz);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec3 pixel_pos;
in smooth vec4 pixel_col;
in smooth vec2 pixel_txr;
//in flat mat3 pixel_TBN;
in smooth mat3 pixel_TBN;
uniform sampler2D txr_texture0;
uniform sampler2D txr_texture1;
uniform sampler2D txr_normal;
uniform sampler2D txr_specular;
uniform sampler2D txr_light;
uniform samplerCube txr_skybox;
const int _lights=3;
uniform vec3 light_col0=vec3(0.1,0.1,0.1);
uniform vec3 light_dir[_lights]= // direction to local star in ellipsoid space
{
vec3(0.0,0.0,+1.0),
vec3(0.0,0.0,+1.0),
vec3(0.0,0.0,+1.0),
};
uniform vec3 light_col[_lights]= // local star color * visual intensity
{
vec3(1.0,0.0,0.0),
vec3(0.0,1.0,0.0),
vec3(0.0,0.0,1.0),
};
out layout(location=0) vec4 frag_col;
const vec4 v05=vec4(0.5,0.5,0.5,0.5);
const bool _blend=false;
const bool _reflect=true;
//------------------------------------------------------------------
void main(void)
{
float a=0.0,b,li;
vec4 col,blend0,blend1,specul,skybox;
vec3 normal;
col=(texture2D(txr_normal,pixel_txr.st)-v05)*2.0; // normal/bump maping
// normal=pixel_TBN*col.xyz;
normal=pixel_TBN[0];
blend0=texture(txr_texture0,pixel_txr.st);
blend1=texture(txr_texture1,pixel_txr.st);
specul=texture(txr_specular,pixel_txr.st);
skybox=texture(txr_skybox,normal);
if (_blend)
{
a=blend1.a;
blend0*=1.0-a;
blend1*=a;
blend0+=blend1;
blend0.a=a;
}
col.xyz=light_col0; col.a=0.0; li=0.0; // normal shading (also with bump mapping)
for (int i=0;i<_lights;i++)
{
b=dot(light_dir[i],normal.xyz);
if (b<0.0) b=0.0;
// b*=specul.r;
li+=b;
col.xyz+=light_col[i]*b;
}
col*=blend0;
if (li<=0.1)
{
blend0=texture2D(txr_light,pixel_txr.st);
blend0*=1.0-a;
blend0.a=a;
col+=blend0;
}
if (_reflect) col+=skybox*specul.r;
col*=pixel_col;
if (col.r<0.0) col.r=0.0;
if (col.g<0.0) col.g=0.0;
if (col.b<0.0) col.b=0.0;
a=0.0;
if (a<col.r) a=col.r;
if (a<col.g) a=col.g;
if (a<col.b) a=col.b;
if (a>1.0)
{
a=1.0/a;
col.r*=a;
col.g*=a;
col.b*=a;
}
frag_col=col;
}
//------------------------------------------------------------------
These source codes are a bit old and a mix of different things for a specific application,
so extract only what you need from them. If you are confused by the variable names, then ask me in a comment...
tm_ stands for transform matrix
l2g stands for local coordinate system to global coordinate system transform
dir means that the transformation changes just the direction (offset is 0,0,0)
g2s stands for global to screen ...
per is the perspective transform ...
The GLSL compilation log
You have to obtain its content programmatically after compilation of your shaders (not of the application!!!). I do it by calling the function glGetShaderInfoLog for every shader and program I use ...
[Notes]
Some drivers optimize away "unused" variables. As you can see in the image, txr_texture1 is not found even though the fragment shader has it in its code, but blending is not used in this app, so the driver deleted it on its own...
Shader logs can show you a lot (syntax errors, warnings...)
There are a few GLSL IDEs that make writing shaders easy, but I prefer my own because I can use the target app's code in it directly. Mine looks like this:
Each text window is a shader source (vertex, fragment, ...), the bottom right is the clipboard, the top left is the shader log after the last compilation, and the bottom left is the preview. I made it work like a Borland-style IDE (with the same keys and syntax highlighting); the other IDEs I saw look similar (different colors, of course :)). Anyway, if you want to play with shaders, download such an app or write one yourself; it will help a lot...
There could also be a problem with the TBN creation.
You should visually check whether the TBN vectors (tangent, binormal, normal) correspond to the object surface by drawing colored lines at each vertex position, just to be sure... something like this:
I will try to make your code work. Have you tried it with a moving camera?
I cannot see anywhere that you have transformed the TBNMatrix with the view and model matrices. Did you try it with the original normals, vec3 normal = TBNMatrix[2];, in the fragment shader?
The following might help. In the Vertex shader you have:
uniform mat4 ProjectionMatrix;
uniform mat4 CameraMatrix;
uniform mat4 ModelMatrix;
However here, only these 3 matrices should be used:
uniform mat4 PCM;
uniform mat4 MIT; //could be mat3
uniform mat4 ModelMatrix; //could be mat3
It is more efficient to calculate the product of those matrices on the CPU (and it yields the same result, because matrix multiplication is associative). Then this product, the PCM, can be used to calculate the new position with one multiplication per vertex:
gl_Position = PCM * vec4(vertex, 1.0);
The MIT is the inverse transpose of the ModelMatrix; you have to calculate it on the CPU. It can be used to transform the normals:
vec4 tang = ModelMatrix * vec4(tangent, 0);
vec4 bita = ModelMatrix * vec4(bitangent, 0);
vec4 norm = MIT * vec4(normal, 0);
TBNMatrix = mat3(normalize(tang.xyz), normalize(bita.xyz), normalize(norm.xyz));
I am not sure what happens to the tangent and bitangent, but this way the normal will stay perpendicular to them. It is easy to prove. Here I use a ° b as the scalar product of the vectors a and b. Let n be some normal, let a be some vector on the surface (e.g. a {bi}tangent, or an edge of a triangle), and let A be any transformation. Then:
0 = a ° n = (A^(-1) A a) ° n = (A a) ° (A^(-T) n)
where I used the equality (A x) ° y = x ° (A^T y). Therefore if a is perpendicular to n, then A a is perpendicular to A^(-T) n, so we have to transform the normal with the matrix's inverse transpose.
However, the normal should have a length of 1, so after the transformations, it should be normalized.
You can get also get perpendicular normal by doing this:
vec3 normal = normalize(cross(tangent, bitangent));
Here cross(a, b) is the function that calculates the cross product of a and b, which is always perpendicular to both a and b.
Sorry for my English :)
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a positioning glitch: I'm manually moving the light, altering its position along the X axis. The light source, however (judging by the light it casts upon the square plane below it), doesn't seem to move along the X axis alone, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d style ones))
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
// Surface normal in eye coords
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
// Diffuse light
// Vector to light source (do NOT normalize this!)
vVaryingLightDir = tLightPos - vPosition3;
if(useTexture) {
vVaryingTexCoords = vTexCoord;
}
lightPos_ec = vec4(tLightPos, 1.0f);
vertPos_ec = vec4(vPosition3, 1.0f);
// Transform the light direction (for spotlights)
vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
spotDirection_ec = normalMatrix * spotDirection;
// Projected vertex
gl_Position = mvpMatrix * vVertex;
// Fog factor
if(fogEnabled) {
float len = length(gl_Position);
fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
fogFactor = clamp(fogFactor, 0, 1);
}
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
if(den == 0.0f) {
return 1.0f;
}
return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
float intensity = max(0.0f, dot(nNormal, nLightDir));
float cos_outer_cone = lightTheta;
float cos_inner_cone = lightPhi;
float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
// If this is a spotlight (theta > 0)
if(lightTheta > 0.0f) {
float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
// d3d style smooth edge
float spotEffect = clamp((cos_cur - cos_outer_cone) /
cos_inner_minus_outer, 0.0, 1.0);
spotEffect = pow(spotEffect, lightExponent);
intensity *= spotEffect;
}
float attenuation = att( length(lightPos_ec - vertPos_ec) );
intensity *= attenuation;
return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
vec3 ct, cf;
vec4 texel;
float at, af;
if(useTexture) {
texel = texture2D(colorMap, vVaryingTexCoords);
} else {
texel = vec4(1.0f);
}
ct = texel.rgb;
at = texel.a;
vec3 nNormal = normalize(vVaryingNormal);
vec3 nLightDir = normalize(vVaryingLightDir);
float intensity = computeIntensity(nNormal, nLightDir);
cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
if(intensity > 0.0f) {
// Specular light
// - added *after* the texture color is multiplied so that
// we get a truly shiny result
vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
float spec = max(0.0, dot(nNormal, vReflection));
float fSpec = pow(spec, shininess) * lightSpecular.a;
cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
}
// Color modulation
vFragColor = vec4(ct * cf, at * af);
// Add the fog to the mix
if(fogEnabled) {
vFragColor = mix(vFragColor, fogColor, fogFactor);
}
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is computed correctly. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed correctly per fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now, and I hope it helps any future user reading this, since as of today I have yet to see any GLSL tutorial that implements lights with absolutely no fixed functionality and no secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat incorrectly when using large polygons. The problem was normalizing the light direction vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
You are simply dropping the homogeneous coordinate here without dividing by it first. This can cause some problems.
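A corrected version, doing the divide first (this matches what the updated vertex shader in the question now does):
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;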
Is it possible to somehow modify this fragment shader so that it doesn't use the OES_texture_float extension? I ask because I get an error on the machine that is supposed to run a WebGL animation.
I set up my scene using the three.js WebGLRenderer and a cube with a ShaderMaterial applied to it. On my MacBook Pro everything works fine, but on some Windows machines I get the error "float textures not supported" (I've searched and found that this probably has to do with the OES_texture_float extension).
So I'm guessing I need to change my fragment shader? Or am I missing the point completely?
<script type="x-shader/x-vertex" id="vertexshader">
// switch on high precision floats
#ifdef GL_ES
precision highp float;
#endif
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
#ifdef GL_ES
precision mediump float;
#endif
#define PI 3.14159265
uniform float time;
uniform vec2 resolution;
float f(float x) {
return (sin(x * 1.50 * PI ) + 19.0);
}
float q(vec2 p) {
float s = (f(p.x + 0.85)) / 2.0;
float c = smoothstep(0.9, 1.20, 1.0 - abs(p.y - s));
return c;
}
vec3 aurora(vec2 p, float time) {
vec3 c1 = q( vec2(p.x, p.y / 0.051) + vec2(time / 3.0, -0.3)) * vec3(2.90, 0.50, 0.10);
vec3 c2 = q( vec2(p.x, p.y / 0.051) + vec2(time, -0.2)) * vec3(1.3, .6, 0.3);
vec3 c3 = q( vec2(p.x, p.y / 0.051) + vec2(time / 5.0, -0.5)) * vec3(1.7, 0.4, 0.20);
return c1+c2+c3;
}
void main( void ) {
vec2 p = ( gl_FragCoord.xy / resolution.xy );
vec3 c = aurora(p, time);
gl_FragColor = vec4(1.0-c, c);
}
</script>
EDIT: this has nothing to do with the floating point texture, but rather with something in my fragment shader. Three.js gives me the error: "Can't initialise shader, VALIDATE_STATUS"
"Or am I missing the point completely?" - Indeed you are. The shaders don't care about the underlying texture format (you don't even use any textures in those shaders you posted!), so they don't have anything to do with your problem.
It's the application code that uses a float texture somewhere and needs to be changed accordingly. But from the fact that your shader doesn't use any textures at all (and I guess you haven't explicitly created a float texture elsewhere), it's probably three.js' internals that need a float texture somewhere, maybe as render target. So you need to search for ways to disable this requirement if possible.
Unless it's a three.js-ism, you haven't defined projectionMatrix, modelViewMatrix, and position in your vertex shader.
Try adding
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec4 position;
to the top of the first shader.
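Putting that together, the first shader would look something like the sketch below. One caveat: current three.js versions prepend these declarations to ShaderMaterial sources automatically, in which case redeclaring them is itself an error. With position declared as vec4, the vec4(position, 1.0) wrapper is dropped (WebGL fills in w = 1.0 when the buffer supplies 3-component data):
<script type="x-shader/x-vertex" id="vertexshader">
#ifdef GL_ES
precision highp float;
#endif
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec4 position; // w defaults to 1.0 for vec3 buffer data

void main() {
    gl_Position = projectionMatrix * modelViewMatrix * position;
}
</script>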
Let's say we are texturing a quad (two triangles). I think this question is similar to texture splatting, as in the next example:
precision lowp float;
uniform sampler2D Terrain;
uniform sampler2D Grass;
uniform sampler2D Stone;
uniform sampler2D Rock;
varying vec2 tex_coord;
void main(void)
{
vec4 terrain = texture2D(Terrain, tex_coord);
vec4 tex0 = texture2D(Grass, tex_coord * 4.0); // Tile
vec4 tex1 = texture2D(Rock, tex_coord * 4.0); // Tile
vec4 tex2 = texture2D(Stone, tex_coord * 4.0); // Tile
tex0 *= terrain.r; // Red channel - puts grass
tex1 = mix( tex0, tex1, terrain.g ); // Green channel - puts rock and mix with grass
vec4 outColor = mix( tex1, tex2, terrain.b ); // Blue channel - puts stone and mix with others
gl_FragColor = outColor; //final color
}
But I want to place just one decal on the base quad texture in a desired place.
The algorithm is just the same, but I think we don't need an extra texture with one filled layer to hold the positions of the decal (e.g. where the red layer != 0); somehow we must generate our own "terrain.r"-style variable (is this a float?) and mix the base texture and decal texture with it.
precision lowp float;
uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; //where we want place decal (e.g. 0.5, 0.5 is center of quad)
varying vec2 base_tex_coord;
varying vec2 decal_tex_coord;
void main(void)
{
vec4 v_base = texture2D(base, base_tex_coord);
vec4 v_decal = texture2D(decal, decal_tex_coord);
float decal_layer = /*somehow get our decal_layer based on decal_position*/
gl_FragColor = mix(v_base, v_decal, decal_layer);
}
How can I achieve such a thing?
Or should I just generate a splat texture on the OpenGL side and pass it to the first shader? That would give me up to 4 different decals per quad, but it would be slow for frequent updates (e.g. machine-gun hits on a wall).
float decal_layer = /*somehow get our decal_layer based on decal_position*/
Well, it's up to you how you interpret decal_position. I think a simple distance metric would suffice, but this also requires the size of the quad. Let's assume you provide this through an additional uniform decal_radius. Then we can use:
decal_layer = clamp(length(decal_position - vec2(0.5, 0.5)) / decal_radius, 0., 1.);
Yes, decal_layer is a float, as you've described; its range is 0 to 1. But you don't have quite enough info: here you've specified decal_location but no size for the decal. You also don't know where this fragment falls within the quad; you would need a varying vec2 quad_coord; or similar input from the vertex shader if you want to know where this fragment is relative to the quad being rendered.
But let's try a different approach. Edit the top of your 2nd example to include these uniforms:
uniform vec2 decal_location; // Location of decal relative to base_tex_coord
uniform float decal_size; // Size of decal relative to base_tex_coord
Now, in main(), you should be able to compute decal_layer with something like this:
float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size, max(abs(decal_location.x - base_tex_coord.x), abs(decal_location.y - base_tex_coord.y)));
Basically you're trying to get decal_layer to be 1.0 within the decal, and 0.0 outside the decal. I've added a 0.01 fuzzy edge at the boundary that you can play with. Good luck!
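Putting that second approach together, a full fragment shader sketch under the same assumptions (computing decal_tex_coord from base_tex_coord in the shader is my addition, replacing the varying from the question; the decal texture should use clamp-to-edge wrapping so samples outside it don't tile):
precision lowp float;

uniform sampler2D base;
uniform sampler2D decal;
uniform vec2 decal_location; // decal center, in base texture coordinates
uniform float decal_size;    // decal half-size, in base texture coordinates
varying vec2 base_tex_coord;

void main(void)
{
    vec4 v_base = texture2D(base, base_tex_coord);
    // remap the square around decal_location into the decal's own 0..1 space
    vec2 decal_tex_coord = (base_tex_coord - decal_location) / (2.0 * decal_size) + 0.5;
    vec4 v_decal = texture2D(decal, decal_tex_coord);
    // 1.0 inside the decal square, 0.0 outside, with a 0.01 fuzzy edge
    float decal_layer = 1.0 - smoothstep(decal_size - 0.01, decal_size,
        max(abs(decal_location.x - base_tex_coord.x), abs(decal_location.y - base_tex_coord.y)));
    gl_FragColor = mix(v_base, v_decal, decal_layer);
}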