I need help with a simple spotlight shader.
All vertices inside the cone should be colored yellow; all vertices outside the cone should be colored black.
I just can't get it to work. I assume it has something to do with the transformation from world into eye coordinates.
Vertex Shader:
uniform vec4 lightPositionOC; // in object coordinates
uniform vec3 spotDirectionOC; // in object coordinates
uniform float spotCutoff; // in degrees
void main(void)
{
    vec3 lightPosition;
    vec3 spotDirection;
    vec3 lightDirection;
    float angle;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // Transforms light position and direction into eye coordinates
    lightPosition = (lightPositionOC * gl_ModelViewMatrix).xyz;
    spotDirection = normalize(spotDirectionOC * gl_NormalMatrix);

    // Calculates the light vector (vector from light position to vertex)
    vec4 vertex = gl_ModelViewMatrix * gl_Vertex;
    lightDirection = normalize(vertex.xyz - lightPosition.xyz);

    // Calculates the angle between the spot light direction vector and the light vector
    angle = dot(normalize(spotDirection), -normalize(lightDirection));
    angle = max(angle, 0);

    // Test whether vertex is located in the cone
    if(angle > radians(spotCutoff))
        gl_FrontColor = vec4(1, 1, 0, 1); // lit (yellow)
    else
        gl_FrontColor = vec4(0, 0, 0, 1); // unlit (black)
}
Fragment Shader:
void main(void)
{
gl_FragColor = gl_Color;
}
Edit:
Tim is right. This
if(angle > radians(spotCutoff))
should be:
if(acos(angle) < radians(spotCutoff))
New question:
The light does not seem to stay at a fixed position in the scene; instead it appears to move relative to my camera, since the cone gets smaller or bigger when I move forward or backward.
(Let spotDirection be vector A, and lightDirection be vector B)
You are assigning:
angle = dot(A,B)
Shouldn't the formula be:
cos(angle) = dot(A,B)
or
angle = arccos(dot(A,B))
http://en.wikipedia.org/wiki/Dot_product#Geometric_interpretation
In my old shader I used this code:
float spotEffect = dot(normalize(gl_LightSource[0].spotDirection.xyz),
normalize(-light));
if (spotEffect < gl_LightSource[0].spotCosCutoff)
{
spotEffect = smoothstep(gl_LightSource[0].spotCosCutoff-0.002,
gl_LightSource[0].spotCosCutoff, spotEffect);
}
else spotEffect = 1.0;
Instead of sending angles to the shader, it is better to send the cosines of those angles.
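A minimal sketch of that idea applied to the corrected test from the Edit above (spotCosCutoff is an assumed uniform that would hold cos(radians(spotCutoff)), computed once on the CPU; since cos is monotonically decreasing, the acos disappears and only the comparison changes):
// assumed new uniform, replacing spotCutoff in degrees
uniform float spotCosCutoff; // = cos(radians(spotCutoff)), precomputed on the CPU
// inside main(), after lightDirection has been computed as in the question:
float spotDot = dot(normalize(spotDirection), -normalize(lightDirection));
if (spotDot > spotCosCutoff)                  // same test as acos(spotDot) < radians(spotCutoff)
    gl_FrontColor = vec4(1.0, 1.0, 0.0, 1.0); // lit (yellow)
else
    gl_FrontColor = vec4(0.0, 0.0, 0.0, 1.0); // unlit (black)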
To answer your new question, here is a link to a similar question: GLSL point light shader moving with camera.
The solution is to remove gl_NormalMatrix and gl_ModelViewMatrix.
spotDirection = normalize(spotDirectionOC * gl_NormalMatrix);
Would become:
spotDirection = normalize(spotDirectionOC);
https://code.google.com/p/jpcsp/source/browse/trunk/src/jpcsp/graphics/shader.vert?r=1639
if (spotEffect >= cos(radians(uLightOuterCone[index])))
and
//vec3 NSpotDir = (uViewMatrix * vec4(uLightDirection[index],0)).xyz; // must be done outside the shader, or the light becomes a flashlight that follows the camera
vec3 NSpotDir = normalize(uLightDirection[index]);
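One way to read that advice is to keep the whole cone test in a single coordinate space, so the camera transform never touches the light. A minimal sketch done entirely in object coordinates, reusing the names and the corrected test from the question (this is an illustration of the idea, not the only possible fix):
uniform vec4 lightPositionOC;  // in object coordinates
uniform vec3 spotDirectionOC;  // in object coordinates
uniform float spotCutoff;      // in degrees
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    // everything below stays in object coordinates: no gl_ModelViewMatrix, no gl_NormalMatrix
    vec3 lightPosition  = lightPositionOC.xyz;
    vec3 spotDirection  = normalize(spotDirectionOC);
    vec3 lightDirection = normalize(gl_Vertex.xyz - lightPosition);

    float cosAngle = dot(spotDirection, -lightDirection);
    if (acos(cosAngle) < radians(spotCutoff))
        gl_FrontColor = vec4(1.0, 1.0, 0.0, 1.0); // lit (yellow)
    else
        gl_FrontColor = vec4(0.0, 0.0, 0.0, 1.0); // unlit (black)
}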
Related
I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere facing away from the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color; //color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix; //just the camera matrix
out vec4 col; // color of vertex
out vec4 posi; // position of vertex
void main() {
vec4 p = view_matrix * vec4(position.xyz, 1.0);
gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
gl_Position = p;
col = color;
posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col; // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
vec3 oc = ro - sph.xyz;
float b = dot( oc, rd );
float c = dot( oc, oc ) - sph.w*sph.w;
float h = b*b - c;
if( h<0.0 ) return -1.0;
return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
return normalize(pos-sph.xyz);
}
void main() {
vec4 c = clamp(col, 0.0, 1.0);
vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
vec3 ro = vec3(0.0, 0.0, -960.0 );
vec3 rd = normalize(vec3(p.x, p.y,960.0));
vec3 lig = normalize(vec3(0.6,0.3,0.1));
vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
float t = sphIntersect(ro, rd, k);
vec3 ps = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
if(t < 0.0) c = vec4(1.0);
else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
f_color = c;
gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (uniform) so you can create the ray direction and start position for each fragment. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create BBOX in Geometry shader and convert your POINT to QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
Raytrace the sphere in the fragment shader
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). In case of no intersection you have to discard the fragment!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. Also, you do not discard fragments with no intersection, so you overwrite already rendered content around the last rendered spheres with the background color, leaving only a single sphere per QUAD no matter how many spheres are really there...
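As a concrete illustration of the discard point, the miss branch in the question's fragment shader could look roughly like this (a sketch; only the handling of t changes):
float t = sphIntersect(ro, rd, k);
if (t < 0.0)
    discard;                        // no ray/sphere hit: keep whatever is already in the framebuffer
vec3 ps  = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
c.xyz   *= clamp(dot(nor, lig), 0.0, 1.0);
f_color  = c;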
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniform structs can be used.
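A sketch of how that formula slots into a fragment shader (screenSize is an assumed uniform; proj_matrix is the same perspective projection used when drawing the points, and the ray origin is the camera at the view-space origin):
uniform vec2 screenSize;   // e.g. vec2(1920.0, 1080.0)
uniform mat4 proj_matrix;  // perspective projection used when drawing the points
// inside main():
float aspectRatio = screenSize.x / screenSize.y;
vec3 ro = vec3(0.0);       // camera sits at the origin in view space
vec3 rd = normalize(vec3((2.0 / screenSize.x) * gl_FragCoord.xy - vec2(aspectRatio, 1.0),
                         -proj_matrix[1][1]));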
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
// bring the sphere centre into camera space; keep the radius in w
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);  // squared distance from the camera to the sphere centre
float r2 = p.w * p.w;          // squared sphere radius
float k = 1.0 - (r2/l2);       // the silhouette plane lies at k * |p| from the camera
float radius = p.w * sqrt(k);  // radius of the silhouette circle
if(l2 < r2) {                  // special case: the camera is inside the sphere
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
// two basis vectors perpendicular to each other and to the view direction toward the sphere
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;                    // move the billboard centre onto the silhouette plane
Then use hx and hy as basis vectors for any 2D shape that you want the billboard to be shaped like for the vertices. Don't forget later to multiply each vertex by a perspective matrix to get the final position of each vertex. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
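For example, a geometry-shader sketch that expands each point into a camera-facing quad from those basis vectors (the interface names vpos, vhx, vhy and the uniform proj_matrix are assumptions; p, hx and hy are the values computed above, passed on from the vertex stage):
#version 460
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 proj_matrix;  // perspective projection
in vec4 vpos[]; // p  from the vertex stage (xyz = billboard centre, w = sphere radius)
in vec3 vhx[];  // hx from the vertex stage
in vec3 vhy[];  // hy from the vertex stage

void emitCorner(vec3 corner)
{
    gl_Position = proj_matrix * vec4(corner, 1.0);
    EmitVertex();
}

void main()
{
    vec3 c = vpos[0].xyz;
    emitCorner(c - vhx[0] - vhy[0]);
    emitCorner(c + vhx[0] - vhy[0]);
    emitCorner(c - vhx[0] + vhy[0]);
    emitCorner(c + vhx[0] + vhy[0]);
    EndPrimitive();
}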
I am trying to implement simple artificial 2D lighting. I am not using an algorithm like Phong's. However, I am having some difficulty ensuring that my lighting does not stretch/squeeze whenever the window resizes. Any tips and suggestions will be appreciated. I have tried converting my radius into a vec2 so that I can scale it based on the aspect ratio, but it doesn't work properly. Also, I am aware that my code is not the most efficient; any feedback is also appreciated as I am still learning! :D
I have an orthographic projection matrix transforming the light position so that it will be at the correct spot in the viewport, this fixed the position but not the radius (as I am calculating per fragment). How would I go about transforming the radius based on the aspect ratio?
void LightSystem::Update(const OrthographicCamera& camera)
{
std::vector<LightComponent> lights;
for (auto& entity : m_Entities)
{
auto& light = g_ECSManager.GetComponent<LightComponent>(entity);
auto& trans = g_ECSManager.GetComponent<TransformComponent>(entity);
if (light.lightEnabled)
{
light.pos = trans.Position;
glm::mat4 viewProjMat = camera.GetViewProjectionMatrix();
light.pos = viewProjMat * glm::vec4(light.pos, 1.f);
// Need to store all the light atrributes in an array
lights.emplace_back(light);
}
// Create a function in Render2D.cpp, pass all the arrays as a uniform variable to the shader, call this function here
glm::vec2 res{ camera.GetWidth(), camera.GetHeight() };
Renderer2D::DrawLight(lights, camera, res);
}
}
Here is my shader:
#type fragment
#version 330 core
layout (location = 0) out vec4 color;
#define MAX_LIGHTS 10
uniform struct Light
{
vec4 colour;
vec3 position;
float radius;
float intensity;
} allLights[MAX_LIGHTS];
in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;
in vec4 fragmentPosition;
uniform sampler2D u_Textures[32];
uniform vec4 u_ambientColour;
uniform int numLights;
uniform vec2 resolution;
vec4 calculateLight(Light light)
{
float lightDistance = length(distance(fragmentPosition.xy, light.position.xy));
//float ar = resolution.x / resolution.y;
if (lightDistance >= light.radius)
{
return vec4(0, 0, 0, 1); //outside of radius make it black
}
return light.intensity * (1 - lightDistance / light.radius) * light.colour;
}
void main()
{
vec4 texColor = v_Color;
vec4 netLightColour = vec4(0, 0, 0, 1);
if (numLights == 0)
color = texColor;
else
{
for(int i = 0; i < numLights; ++i) //Loop through lights
netLightColour += calculateLight(allLights[i]) + u_ambientColour;
color = texColor * netLightColour;
}
}
You must use an orthographic projection matrix in the vertex shader. Modify the clip space position through the projection matrix.
Alternatively, consider the aspect ratio when calculating the distance to the light:
float aspectRatio = resolution.x/resolution.y;
vec2 pos = fragmentPosition.xy * vec2(aspectRatio, 1.0);
float lightDistance = length(distance(pos.xy, light.position.xy));
I'm going to compile all the answers for my question, as I had done a bad job in asking and everything turned out to be a mess.
As the other answers suggest, first I had to use an orthographic projection matrix to ensure that the light source position was displayed at the correct position in the viewport.
Next, from the way I did my lighting, the projection matrix from earlier would not fix the stretch effect, as my light wasn't an actual circle object made with actual vertices. I had to turn radius into a vec2 type, representing the radius vectors along the x and y axes. This is so that I can then modify the vectors based on the aspect ratio:
if (aspectRatio > 1.0)
    light.radius.x /= aspectRatio;
else
    light.radius.y *= aspectRatio; // compensate along y when the viewport is taller than wide
I had posted another question here, to modify my lighting algorithm to support an ellipse shape. This allowed me to then perform the scalings needed to counter the stretching along x/y axis whenever my aspect ratio changed. Thank you all for the answers.
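For reference, a sketch of what calculateLight looks like once radius is a vec2 (an ellipse test; the rest of the shader stays the same, and the exact falloff shown here is an assumption):
vec4 calculateLight(Light light) // light.radius is now a vec2
{
    // scale the offset by the per-axis radii: a length of 1.0 marks the edge of the ellipse
    vec2 scaled = (fragmentPosition.xy - light.position.xy) / light.radius;
    float lightDistance = length(scaled);
    if (lightDistance >= 1.0)
        return vec4(0, 0, 0, 1); // outside of the ellipse make it black
    return light.intensity * (1.0 - lightDistance) * light.colour;
}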
I have a 2D mode which displays moving sprites over the world. each sprite has rotation.
When I try to implement the same in a 3D world, over a sphere, I run into a problem calculating the sprite rotation so that it looks like it is moving in its direction of travel. I'm aware that the sprite is a billboard, so the rotation will be 2D only and will not be 100% toward the direction, but I at least want it to look reasonable to the eye.
I've tried to take the vector to the north (of the world) into account in my rotation, but there are still a lot of cases, when moving the camera around the sphere, where the sprite arrow does not point in the direction of movement.
Can anyone point me toward a solution?
-------- ADDITION -----------
More explanation: I have a 2D world (x, y). In this world I have a point that moves in some direction (an angle is saved in the object). The rotations are calculated in the fragment shader, of course.
In the 3D world, I'm converting this (x, y) to an (x, y, z) using a simple sphere formula.
My sphere (world) origin is (0,0,0) with radius 1.
The angle (saved in the point for the direction of movement) is used in 2D for rotating the texture as well (as shown above in the first image). The problem is the rotation of the texture in 3D. The rotation should take into account the point's direction angle and the camera.
-------- ADDITION -----------
My fragment shader for 2D, in case it helps, along with a few more pictures and the result I am hoping for.
varying vec2 TextureCoord;
varying vec2 TextureSize;
uniform sampler2D sampler;
varying float angle;
uniform vec4 uColor;
void main()
{
vec2 calcedCoord = gl_PointCoord;
float c = cos(angle);
float s = sin(angle);
vec2 trans = vec2(-0.5, -0.5);
mat2 rot = mat2(c, s, -s, c);
calcedCoord = calcedCoord + trans;
calcedCoord = rot * calcedCoord;
calcedCoord = calcedCoord - trans;
vec2 realTexCoord = TextureCoord + (calcedCoord * TextureSize);
vec4 fragColor = texture2D(sampler, realTexCoord);
gl_FragColor = fragColor * uColor;
}
After struggling a lot with this issue, I came to this solution.
Instead of attaching the direction angle to each sprite as an attribute, I send the next sprite location, and calculate the 2D angle in the vertex shader as follows:
varying float angle;
attribute vec3 nextPointAtt;
void main()
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
vec4 nextPnt = gl_ModelViewProjectionMatrix * vec4(nextPointAtt, gl_Vertex.w);
vec2 ver = gl_Position.xy / gl_Position.w;
vec2 nextVer = nextPnt.xy / nextPnt.w;
vec2 d = nextVer - ver;
angle = atan(d.y, d.x);
}
The angle will be used in the fragment shader (Look at my question for the fragment shader code).
I'm trying to implement custom lights in world space (I know, it is suggested to do it in view space, but I have my reasons) using LWJGL and GLSL in a little game I'm programming. To do so I need the vertex positions in world space in the shader. I know that I can calculate them as modelMatrix * gl_Vertex in the vertex shader. Since the modelMatrix is not accessible in the shader, I calculate it (as is done here) by multiplying the ModelViewMatrix with the inverse of the viewMatrix.
Vertex shader code:
uniform mat4 inverseViewMatrix;
varying vec3 normal;
varying vec3 worldCoord;
normal = normalize(vec3(gl_ModelViewMatrix *inverseViewMatrix * vec4(gl_Normal,0.0)));
worldCoord = vec3(gl_ModelViewMatrix *inverseViewMatrix *gl_Vertex); //coordinates in world space?
The light is then calculated per pixel in the fragment shader:
uniform vec4 lights[30]; //lightPositions...
uniform vec4 lightCol[30]; //...& lightColors passed from the Java-Code
uniform int noOfLights;
varying vec3 normal;
varying vec3 worldCoord;
for(int i = 0; i < noOfLights; i++){
vec3 lightPos = (lights[i]).xyz; // lights[i].w does contain data if parallel or point light
if(lights[i].w == 1.0){ //parallel lights (working fine)
vec3 lightDir = normalize(lightPos); //light direction
float NdotL = max(dot(normal, lightDir), 0.0);
brightness += (NdotL*lightCol[i].xyz); //lightCol[i].xyz does contain the color of the light
}
else{ //point lights (not working)
vec3 lightPosRelative = lightPos - worldCoord; //Vector from Vertex to point light
float dist = length(lightPosRelative);
if(dist < lightCol[i].w){ //lightCol[i].w is the radius of the point light
vec3 lightDir = normalize(lightPosRelative); //light direction to point light
float NdotL = max(dot(normal, lightDir), 0.0);
float distVal = pow(1.0-(dist/lightCol[i].w), 2.0); //light attenuation with distance
distVal = clamp(distVal, 0.0, 1.0);
brightness += (NdotL*lightCol[i].xyz)*distVal; //lightCol[i].xyz does contain the color of the light
}
}
}
[... brightness is then multiplied with the vertexColor...]
The inverse of the view matrix is passed to the shader from the Java side of the code. It is calculated after the camera has been positioned using glRotatef(...) and glTranslatef(...).
Matrix4f viewMatrix = new Matrix4f();
Matrix4f inverseViewMatrix = new Matrix4f();
viewBuffer = BufferUtils.createFloatBuffer(16);
viewBuffer.rewind();
glGetFloat(GL_MODELVIEW_MATRIX, viewBuffer);
viewMatrix = (Matrix4f) viewMatrix.load(modelBuffer);
Matrix4f.invert(viewMatrix, inverseViewMatrix);
The result looks fine for the parallel lights. The calculation of the normals therefore seems to be correct.
The point lights, however, only affect the terrain (which is neither rotated nor translated) and not other objects (which are translated and rotated).
Something seems to be wrong with the calculation of worldCoord for the objects (I can see that by displaying worldCoord instead of the brightness).
Any suggestions what is wrong in my code?
I'm trying to implement phong shading in GLSL but am having some issues with the specular component.
The green light is the specular component. The light (a point light) travels in a circle above the plane. The specular highlight always points inward toward the Y axis about which the light rotates, and fans out toward the diffuse reflection, as seen in the image. It doesn't appear to be affected at all by the position of the camera, and I'm not sure where I'm going wrong.
Vertex shader code:
#version 330 core
/*
* Phong Shading with with Point Light (Quadratic Attenutation)
*/
//Input vertex data
layout(location = 0) in vec3 vertexPosition_modelSpace;
layout(location = 1) in vec2 vertexUVs;
layout(location = 2) in vec3 vertexNormal_modelSpace;
//Output Data; will be interpolated for each fragment
out vec2 uvCoords;
out vec3 vertexPosition_cameraSpace;
out vec3 vertexNormal_cameraSpace;
//Uniforms
uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalTransformMatrix;
void main()
{
vec3 normal = normalize(vertexNormal_modelSpace);
//Set vertices in clip space
gl_Position = mvpMatrix * vec4(vertexPosition_modelSpace, 1);
//Set output for UVs
uvCoords = vertexUVs;
//Convert vertex and normal into eye space
vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace;
vertexNormal_cameraSpace = normalize(normalTransformMatrix * normal);
}
Fragment Shader Code:
#version 330 core
in vec2 uvCoords;
in vec3 vertexPosition_cameraSpace;
in vec3 vertexNormal_cameraSpace;
//out
out vec4 fragColor;
//uniforms
uniform sampler2D diffuseTex;
uniform vec3 lightPosition_cameraSpace;
void main()
{
const float materialAmbient = 0.025; //a touch of ambient
const float materialDiffuse = 0.65;
const float materialSpec = 0.35;
const float lightPower = 2.0;
const float specExponent = 2;
//--------------Set Colors and determine vectors needed for shading-----------------
//reflection colors- NOTE- diffuse and ambient reflections will use the texture color
const vec3 colorSpec = vec3(0,1,0); //Green spec color
vec3 diffuseColor = texture2D(diffuseTex, uvCoords).rgb; //Get color from the texture at fragment
const vec3 lightColor = vec3(1,1,1); //White light
//Re-normalize normal vectors: after interpolation they may not be unit length any longer
vec3 normVertexNormal_cameraSpace = normalize(vertexNormal_cameraSpace);
//Set camera vec
vec3 viewVec_cameraSpace = normalize(-vertexPosition_cameraSpace); //Since its view space, camera at origin
//Set light vec
vec3 lightVec_cameraSpace = normalize(lightPosition_cameraSpace - vertexPosition_cameraSpace);
//Set reflect vect
vec3 reflectVec_cameraSpace = normalize(reflect(-lightVec_cameraSpace, normVertexNormal_cameraSpace)); //reflect function requires incident vec; from light to vertex
//----------------Find intensity of each component---------------------
//Determine Light Intensity
float distance = abs(length(lightPosition_cameraSpace - vertexPosition_cameraSpace));
float lightAttenuation = 1.0/( (distance > 0) ? (distance * distance) : 1 ); //Quadratic
vec3 lightIntensity = lightPower * lightAttenuation * lightColor;
//Determine Ambient Component
vec3 ambientComp = materialAmbient * diffuseColor * lightIntensity;
//Determine Diffuse Component
float lightDotNormal = max( dot(lightVec_cameraSpace, normVertexNormal_cameraSpace), 0.0 );
vec3 diffuseComp = materialDiffuse * diffuseColor * lightDotNormal * lightIntensity;
vec3 specComp = vec3(0,0,0);
//Determine Spec Component
if(lightDotNormal > 0.0)
{
float reflectDotView = max( dot(reflectVec_cameraSpace, viewVec_cameraSpace), 0.0 );
specComp = materialSpec * colorSpec * pow(reflectDotView, specExponent) * lightIntensity;
}
//Add Ambient + Diffuse + Spec
vec3 phongFragRGB = ambientComp +
diffuseComp +
specComp;
//----------------------Putting it together-----------------------
//Out Frag color
fragColor = vec4( phongFragRGB, 1);
}
Just noting that the normalTransformMatrix seen in the Vertex shader is the inverse-transpose of the model-view matrix.
I am computing a vector from the vertex position to the light, a vector to the camera, and the reflection vector, all in camera space. For the diffuse calculation I take the dot product of the light vector and the normal vector, and for the specular component I take the dot product of the reflection vector and the view vector. Perhaps there is some fundamental misunderstanding I have of the algorithm?
I thought at first that the problem could be that I wasn't normalizing the normals entering the fragment shader after interpolation, but adding a line to normalize didn't affect the image. I'm not sure where to look.
I know that there a lot of phong shading questions on the site, but everyone seems to have a problem that is a bit different. If anyone can see where I am going wrong, please let me know. Any help is appreciated.
EDIT: Okay, it's working now! Just as jozxyqk suggested below, I needed to do a mat4*vec4 operation for my vertex position or lose the translation information. When I first made the change I was getting strange results, until I realized that I was making the same mistake in my OpenGL code for the lightPosition_cameraSpace before I passed it to the shader (the mistake being that I was casting down the view matrix to a mat3 for the calculation instead of setting the light position vector as a vec4). Once I edited those lines the shader appears to be working properly! Thanks for the help, jozxyqk!
I can see two parts which don't look right.
"vertexPosition_cameraSpace = mat3(mvMatrix) * vertexPosition_modelSpace" should be a mat4/vec4(x,y,z,1) multiply, otherwise it ignores the translation part of the modelview matrix.
2. distance uses the light position relative to the camera and not the vertex. Use lightVec_cameraSpace instead. (edit: missed the duplicated calculation)
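For reference, a sketch of the fix for the first point in the question's vertex shader (only this assignment changes):
// use the full mat4 multiply so the translation part of the modelview matrix is kept
vertexPosition_cameraSpace = (mvMatrix * vec4(vertexPosition_modelSpace, 1.0)).xyz;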