I am trying to implement simple artificial 2D lighting. I am not using an algorithm like Phong's. However, I am having some difficulty ensuring that my lighting does not stretch/squeeze whenever the window resizes. Any tips and suggestions would be appreciated. I have tried converting my radius into a vec2 so that I can scale it based on the aspect ratio, but it doesn't work properly. I am also aware that my code is not the most efficient; any feedback on that is appreciated too, as I am still learning! :D
I have an orthographic projection matrix transforming the light position so that it ends up at the correct spot in the viewport. This fixed the position but not the radius (since I calculate the falloff per fragment). How would I go about transforming the radius based on the aspect ratio?
void LightSystem::Update(const OrthographicCamera& camera)
{
    std::vector<LightComponent> lights;
    for (auto& entity : m_Entities)
    {
        auto& light = g_ECSManager.GetComponent<LightComponent>(entity);
        auto& trans = g_ECSManager.GetComponent<TransformComponent>(entity);
        if (light.lightEnabled)
        {
            light.pos = trans.Position;
            glm::mat4 viewProjMat = camera.GetViewProjectionMatrix();
            light.pos = viewProjMat * glm::vec4(light.pos, 1.f);
            // Need to store all the light attributes in an array
            lights.emplace_back(light);
        }
        // Create a function in Render2D.cpp, pass all the arrays as a uniform variable to the shader, call this function here
        glm::vec2 res{ camera.GetWidth(), camera.GetHeight() };
        Renderer2D::DrawLight(lights, camera, res);
    }
}
Here is my shader:
#type fragment
#version 330 core
layout (location = 0) out vec4 color;
#define MAX_LIGHTS 10
uniform struct Light
{
vec4 colour;
vec3 position;
float radius;
float intensity;
} allLights[MAX_LIGHTS];
in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;
in vec4 fragmentPosition;
uniform sampler2D u_Textures[32];
uniform vec4 u_ambientColour;
uniform int numLights;
uniform vec2 resolution;
vec4 calculateLight(Light light)
{
    float lightDistance = length(distance(fragmentPosition.xy, light.position.xy));
    //float ar = resolution.x / resolution.y;
    if (lightDistance >= light.radius)
    {
        return vec4(0, 0, 0, 1); // outside of radius, make it black
    }
    return light.intensity * (1 - lightDistance / light.radius) * light.colour;
}
void main()
{
    vec4 texColor = v_Color;
    vec4 netLightColour = vec4(0, 0, 0, 1);
    if (numLights == 0)
        color = texColor;
    else
    {
        for (int i = 0; i < numLights; ++i) // Loop through lights
            netLightColour += calculateLight(allLights[i]) + u_ambientColour;
        color = texColor * netLightColour;
    }
}
You must use an orthographic projection matrix in the vertex shader. Modify the clip space position through the projection matrix.
Alternatively, consider the aspect ratio when calculating the distance to the light:
float aspectRatio = resolution.x/resolution.y;
vec2 pos = fragmentPosition.xy * vec2(aspectRatio, 1.0);
float lightDistance = length(distance(pos.xy, light.position.xy));
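For context, here is roughly how that correction could be folded into the question's calculateLight (a sketch; scaling the difference vector is equivalent to scaling both positions, and it assumes the light position and the fragment position are in the same clip-space units):

vec4 calculateLight(Light light)
{
    float aspectRatio = resolution.x / resolution.y;
    // Scale the x component of the offset so one unit covers the same
    // on-screen distance on both axes.
    vec2 d = fragmentPosition.xy - light.position.xy;
    d.x *= aspectRatio;
    float lightDistance = length(d); // (the original length(distance(...)) is redundant; distance() already returns a float)

    if (lightDistance >= light.radius)
        return vec4(0.0, 0.0, 0.0, 1.0); // outside of radius, no contribution
    return light.intensity * (1.0 - lightDistance / light.radius) * light.colour;
}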
I'm going to compile all the answers for my question, as I had done a bad job in asking and everything turned out to be a mess.
As the other answers suggest, first I had to use an orthographic projection matrix to ensure that the light source position was displayed at the correct position in the viewport.
Next, because of the way I did my lighting, the projection matrix from earlier would not fix the stretch effect, as my light wasn't an actual circle object made of real vertices. I had to turn radius into a vec2 type, representing the radius vectors along the x and y axes. This is so that I could then modify those vectors based on the aspect ratio:
if (aspectRatio > 1.0)
    light.radius.x /= aspectRatio;   // window wider than tall: shrink the x radius
else
    light.radius.y *= aspectRatio;   // window taller than wide: shrink the y radius
I had posted another question here, to modify my lighting algorithm to support an ellipse shape. This allowed me to then perform the scaling needed to counter the stretching along the x/y axes whenever my aspect ratio changed. Thank you all for the answers.
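For completeness, an elliptical falloff along those lines might look roughly like this (a sketch that assumes radius has become a vec2 as described above; it is not necessarily the exact shader from the linked question):

vec4 calculateLight(Light light)
{
    // Dividing each axis of the offset by its own radius maps the ellipse to a unit circle.
    vec2 d = (fragmentPosition.xy - light.position.xy) / light.radius;
    float lightDistance = length(d); // equals 1.0 exactly on the ellipse boundary
    if (lightDistance >= 1.0)
        return vec4(0.0, 0.0, 0.0, 1.0);
    return light.intensity * (1.0 - lightDistance) * light.colour;
}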
Related
I’m trying to achieve a circular vignette with GLSL, but the result is elliptical when the texture is rectangular. What is the correct way to keep it circular regardless of the texture size? The input texture size (resolution) can be either rectangular or square.
I tried a solution using the discard method, but this doesn't suit what I require, as I need to use smoothstep to get a gradient edge.
Current result:
GLSL shader:
varying vec2 v_texcoord;
uniform sampler2D u_texture;
uniform vec2 u_resolution;
vec4 applyVignette(vec4 color)
{
vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
float dist = length(position);
float radius = 0.5;
float softness = 0.02;
float vignette = smoothstep(radius, radius - softness, dist);
color.rgb = color.rgb - (1.0 - vignette);
return color;
}
void main()
{
vec4 color = texture2D(u_texture, v_texcoord);
color = applyVignette(color);
gl_FragColor = color;
}
You have to respect the aspect ratio when you calculate the distance to the center point of the circular view:
float dist = length(position * vec2(u_resolution.x/u_resolution.y, 1.0));
Note, if you have a rectangular viewport where the width is greater than the height, then a perfect circle is squeezed at its left and right into an ellipse when the coordinates are transformed from view space to normalized device space.
You must counteract this squeezing by scaling up the x axis of the distance vector.
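Putting it together, the corrected applyVignette would look like this (only the distance line changes from the code in the question):

vec4 applyVignette(vec4 color)
{
    vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
    // Scale x by the aspect ratio so the falloff is circular on screen.
    float dist = length(position * vec2(u_resolution.x / u_resolution.y, 1.0));
    float radius = 0.5;
    float softness = 0.02;
    float vignette = smoothstep(radius, radius - softness, dist);
    color.rgb = color.rgb - (1.0 - vignette);
    return color;
}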
Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several million), grouped together, actually represent planets, in case you're wondering. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame do I do the lighting calculations? Which is the "best" way to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think with several million particles this would decrease performance drastically, so it's better to do it with sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
lightDir = projection * view * vec4(lightPos - position, 1.0f);
viewDir = projection * view * vec4(viewPos - position, 1.0f);
viewPosition = projection * view * vec4(lightPos, 1.0f);
posClip = projection * view * vec4(position, 1.0f);
gl_Position = posClip;
gl_PointSize = radius;
vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4s which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader would be wrong?
And here is the fragment shader, where the normals and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
// AMBIENT LIGHT
float ambientStrength = 0.0;
vec3 ambient = ambientStrength * lightColor;
// Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coord to NDC( -1 to 1))
vec3 normal;
normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
float mag = dot(normal.xy, normal.xy); // squared length; no sqrt needed since we only compare against 1.0
if (mag > 1.0) // discard fragments outside sphere
discard;
normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
// DIFFUSE LIGHT
float diff = max(0.0, dot(vec3(lightDir), normal));
vec3 diffuse = diff * lightColor;
// SPECULAR LIGHT
float specularStrength = 0.1;
vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
vec3 reflectDir = reflect(-vec3(lightDir), normal);
float shininess = 64;
float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
vec3 specular = specularStrength * spec * lightColor;
vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but still there are diffuse and specular effects going on. This side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 (GL_INVALID_OPERATION) from the fragment shader and I don't know where it comes from, since the shader program actually compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
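As a rough illustration, lighting inputs expressed in view space could be set up like this (a sketch with illustrative uniform names such as viewMatrix and normalMatrix, not code from the question):

// View-space position, normal and light direction; names are placeholders.
vec3 positionView = (viewMatrix * vec4(positionWorld, 1.0)).xyz;
vec3 normalView   = normalize(normalMatrix * normalWorld);
vec3 lightDirView = (viewMatrix * vec4(lightPosWorld, 1.0)).xyz - positionView;
// Dot products and distances computed from these are meaningful,
// which is no longer the case once the projection matrix has been applied.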
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
vec4 toPixel; //fragment coordinate in particle coordinates
vec4 cam; //camera position in particle coordinates
vec4 color; //sphere color
float radius; //sphere radius
} frag;
uniform mat4 p; //projection matrix
void main(void)
{
vec3 v = frag.toPixel.xyz - frag.cam.xyz;
vec3 e = frag.cam.xyz;
float ev = dot(e, v);
float vv = dot(v, v);
float ee = dot(e, e);
float rr = frag.radius * frag.radius;
float radicand = ev * ev - vv * (ee - rr);
if(radicand < 0)
discard;
float rt = sqrt(radicand);
float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
float lambda2 = (-ev + rt) / vv; //second intersection on the ray
if(lambda2 < lambda) //if the first intersection is behind the camera
discard;
vec3 hit = lambda * v; //intersection point
vec3 normal = (frag.cam.xyz + hit) / frag.radius;
vec4 proj = p * vec4(hit, 1); //intersection point in clip space
gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
vec3 vNormalized = -normalize(v);
float nDotL = dot(vNormalized, normal);
vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
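If you do need the NDC position of an interpolated clip-space varying in the fragment shader, you have to divide by w yourself, e.g.:

vec3 ndc = posClip.xyz / posClip.w; // manual perspective divide of the interpolated clip-space position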
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
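In shader terms that usually amounts to something like the following (a sketch with generic variable names):

// Ambient and diffuse are tinted by the material; specular is added on top in the light's colour.
vec3 shaded = materialColor * (ambient + diffuse) + specular;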
I have been working on projecting decals onto anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start with trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
This all is while in the perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and with it, calculate the position in world space using the inverse of the projection matrix.
Next, I transform this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1,1,1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the size and rotation of the screen point would be handled correctly, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility
out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;
uniform mat4 decal_matrix;
void main(void)
{
gl_Position = decal_matrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
positionWS = (decal_matrix * gl_Vertex);
positionSS = gl_Position;
matPrjInv = inverse(gl_ModelViewProjectionMatrix);
invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;
uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;
in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat; // inverse decal matrix
in mat4 matPrjInv; // inverse projection matrix
void clip(vec3 v){
if (v.x > tr.x || v.x < bl.x ) { discard; }
if (v.y > tr.y || v.y < bl.y ) { discard; }
if (v.z > tr.z || v.z < bl.z ) { discard; }
}
vec2 postProjToScreen(vec4 position)
{
vec2 screenPos = position.xy / position.w;
return 0.5 * (vec2(screenPos.x, screenPos.y) + 1);
}
void main(){
// Calculate UVs
vec2 UV = postProjToScreen(positionSS);
// sample the Depth from the Depthsampler
float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;
// Calculate Worldposition by recreating it out of the coordinates and depth-sample
vec4 ScreenPosition;
ScreenPosition.xy = UV * 2.0 - 1.0;
ScreenPosition.z = (Depth);
ScreenPosition.w = 1.0f;
// Transform position from screen space to world space
vec4 WorldPosition = matPrjInv * ScreenPosition ;
WorldPosition.xyz /= WorldPosition.w;
WorldPosition.w = 1.0f;
// transform to decal original position and size.
// 1 x 1 x 1
WorldPosition = invd_mat * WorldPosition;
clip (WorldPosition.xyz);
// Get UV for textures;
WorldPosition.xy += 0.5;
WorldPosition.y *= -1.0;
vec4 bump = texture2D(normalMap, WorldPosition.xy);
gColor = texture2D(colorMap, WorldPosition.xy);
//Going to have to do decals in 2 passes..
//Blend doesn't work with GBUFFER.
//Lots more to sort out.
gNormal.xyz = bump.xyz; // bump is a vec4, so take only its xyz
gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals, much larger than what my shader is creating!
I have tried creating a new matrix from the decal matrix and the projection matrix to construct a sort of "lookat" matrix and translate the screen position into the decal's post-transformed state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that transforming by the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
I updated the code for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip sub so it works correctly. As it is, this code now works. I will update this more if needed so others don't have to go through the pain I did to get it working.
Some issues to resolve are decals overlapping each other. The one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them into the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE);
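In context, the draw sequence looks roughly like this (drawDecals is a placeholder for whatever issues your decal draw calls):

glDepthMask(GL_FALSE);   // keep depth testing, but stop writing to the depth buffer
drawDecals();            // placeholder: render all decal boxes here
glDepthMask(GL_TRUE);    // restore depth writes for the rest of the frame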
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending to the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v){
if (v.x > tr.x || v.x < bl.x ) { discard; }
if (v.y > tr.y || v.y < bl.y ) { discard; }
if (v.z > tr.z || v.z < bl.z ) { discard; }
}
Since built-in uniforms such as gl_LightSource are now marked as deprecated in the latest versions of the OpenGL specification, I am currently implementing a basic lighting system (point lights right now) which receives all the light and material information through custom uniform variables.
I have implemented the light attenuation and specular highlights for a point light, and it seems to be working well, apart from a position glitch: I'm manually moving the light, altering its position along the X axis. The light source, however (judging by the light it casts upon the square plane below it), doesn't seem to move along the X axis, but rather diagonally, on both the X and Z axes (possibly Y too, though it's not entirely a positioning bug).
Here's a screenshot of what the distortion looks like (the light is at -35, 5, 0; Suzanne is at 0, 2, 0):
It looks OK when the light is at 0, 5, 0:
According to the OpenGL specification, all the default light computations take place in eye coordinates, which is what I'm trying to emulate here (hence the multiplication of the light position with the vMatrix). I am using just the view matrix, since applying the model transformation of the vertex batch being rendered to the light doesn't really make sense.
If it matters, all the plane's normals are pointing straight up - 0, 1, 0.
(Note: I fixed the issue now, thanks to msell and myAces! The following snippets are the corrected versions. There's also an option to add spotlight parameters to the light now (d3d style ones))
Here's the code I'm using in the vertex shader:
#version 330
uniform mat4 mvpMatrix;
uniform mat4 mvMatrix;
uniform mat4 vMatrix;
uniform mat3 normalMatrix;
uniform vec3 vLightPosition;
uniform vec3 spotDirection;
uniform bool useTexture;
uniform bool fogEnabled;
uniform float minFogDistance;
uniform float maxFogDistance;
in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexCoord;
smooth out vec3 vVaryingNormal;
smooth out vec3 vVaryingLightDir;
smooth out vec2 vVaryingTexCoords;
smooth out float fogFactor;
smooth out vec4 vertPos_ec;
smooth out vec4 lightPos_ec;
smooth out vec3 spotDirection_ec;
void main() {
// Surface normal in eye coords
vVaryingNormal = normalMatrix * vNormal;
vec4 vPosition4 = mvMatrix * vVertex;
vec3 vPosition3 = vPosition4.xyz / vPosition4.w;
vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos = tLightPos4.xyz / tLightPos4.w;
// Diffuse light
// Vector to light source (do NOT normalize this!)
vVaryingLightDir = tLightPos - vPosition3;
if(useTexture) {
vVaryingTexCoords = vTexCoord;
}
lightPos_ec = vec4(tLightPos, 1.0f);
vertPos_ec = vec4(vPosition3, 1.0f);
// Transform the light direction (for spotlights)
vec4 spotDirection_ec4 = vec4(spotDirection, 1.0f);
spotDirection_ec = spotDirection_ec4.xyz / spotDirection_ec4.w;
spotDirection_ec = normalMatrix * spotDirection;
// Projected vertex
gl_Position = mvpMatrix * vVertex;
// Fog factor
if(fogEnabled) {
float len = length(gl_Position);
fogFactor = (len - minFogDistance) / (maxFogDistance - minFogDistance);
fogFactor = clamp(fogFactor, 0, 1);
}
}
And this is the code I'm using in the fragment shader:
#version 330
uniform vec4 globalAmbient;
// ADS shading model
uniform vec4 lightDiffuse;
uniform vec4 lightSpecular;
uniform float lightTheta;
uniform float lightPhi;
uniform float lightExponent;
uniform int shininess;
uniform vec4 matAmbient;
uniform vec4 matDiffuse;
uniform vec4 matSpecular;
// Cubic attenuation parameters
uniform float constantAt;
uniform float linearAt;
uniform float quadraticAt;
uniform float cubicAt;
// Texture stuff
uniform bool useTexture;
uniform sampler2D colorMap;
// Fog
uniform bool fogEnabled;
uniform vec4 fogColor;
smooth in vec3 vVaryingNormal;
smooth in vec3 vVaryingLightDir;
smooth in vec2 vVaryingTexCoords;
smooth in float fogFactor;
smooth in vec4 vertPos_ec;
smooth in vec4 lightPos_ec;
smooth in vec3 spotDirection_ec;
out vec4 vFragColor;
// Cubic attenuation function
float att(float d) {
float den = constantAt + d * linearAt + d * d * quadraticAt + d * d * d * cubicAt;
if(den == 0.0f) {
return 1.0f;
}
return min(1.0f, 1.0f / den);
}
float computeIntensity(in vec3 nNormal, in vec3 nLightDir) {
float intensity = max(0.0f, dot(nNormal, nLightDir));
float cos_outer_cone = lightTheta;
float cos_inner_cone = lightPhi;
float cos_inner_minus_outer = cos_inner_cone - cos_outer_cone;
// If we are a point light
if(lightTheta > 0.0f) {
float cos_cur = dot(normalize(spotDirection_ec), -nLightDir);
// d3d style smooth edge
float spotEffect = clamp((cos_cur - cos_outer_cone) /
cos_inner_minus_outer, 0.0, 1.0);
spotEffect = pow(spotEffect, lightExponent);
intensity *= spotEffect;
}
float attenuation = att( length(lightPos_ec - vertPos_ec) );
intensity *= attenuation;
return intensity;
}
/**
* Phong per-pixel lighting shading model.
* Implements basic texture mapping and fog.
*/
void main() {
vec3 ct, cf;
vec4 texel;
float at, af;
if(useTexture) {
texel = texture2D(colorMap, vVaryingTexCoords);
} else {
texel = vec4(1.0f);
}
ct = texel.rgb;
at = texel.a;
vec3 nNormal = normalize(vVaryingNormal);
vec3 nLightDir = normalize(vVaryingLightDir);
float intensity = computeIntensity(nNormal, nLightDir);
cf = matAmbient.rgb * globalAmbient.rgb + intensity * lightDiffuse.rgb * matDiffuse.rgb;
af = matAmbient.a * globalAmbient.a + lightDiffuse.a * matDiffuse.a;
if(intensity > 0.0f) {
// Specular light
// - added *after* the texture color is multiplied so that
// we get a truly shiny result
vec3 vReflection = normalize(reflect(-nLightDir, nNormal));
float spec = max(0.0, dot(nNormal, vReflection));
float fSpec = pow(spec, shininess) * lightSpecular.a;
cf += intensity * vec3(fSpec) * lightSpecular.rgb * matSpecular.rgb;
}
// Color modulation
vFragColor = vec4(ct * cf, at * af);
// Add the fog to the mix
if(fogEnabled) {
vFragColor = mix(vFragColor, fogColor, fogFactor);
}
}
What math bug could be causing this distortion?
Edit 1:
I've updated the shader code. The attenuation is now being computed in the fragment shader, as it should have been all along. It's currently disabled, though - the bug doesn't have anything to do with the attenuation. When rendering only the attenuation factor of the light (see the last few lines of the fragment shader), the attenuation is being computed right. This means that the light position is being correctly transformed into eye coordinates, so it can't be the source of the bug.
The last few lines of the fragment shader can be used for some (slightly hackish but nevertheless insightful) debugging - it seems the intensity of the light is not being computed right per-fragment, though I have no idea why.
What's interesting is that this bug is only noticeable on (very) large quads like the floor in the images. It's not noticeable on small models.
Edit 2:
I've updated the shader code to a working version. It's all good now and I hope it helps any future user reading this, since as of today, I have yet to see any glsl tutorial that implements lights with absolutely no fixed functionality and secret implicit transforms (such as gl_LightSource[i].* and the implicit transformations to eye space).
My code is licensed under the BSD 2-clause license and can be found on GitHub!
I recently had a similar problem, where lighting worked somewhat incorrectly when using large polygons. The problem was normalizing the eye vector in the vertex shader, as interpolating normalized values produces incorrect results.
Change
vVaryingLightDir = normalize( tLightPos - vPosition3 );
to
vVaryingLightDir = tLightPos - vPosition3;
in your vertex shader. You can keep the normalization in the fragment shader.
Just because I noticed:
vec3 tLightPos = (vMatrix * vec4(vLightPosition, 1.0)).xyz;
you are simply eliminating the homogeneous coordinate here, without dividing by it first. This will cause some problems.
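The corrected vertex shader above already handles this; for clarity, the divide looks like:

vec4 tLightPos4 = vMatrix * vec4(vLightPosition, 1.0);
vec3 tLightPos  = tLightPos4.xyz / tLightPos4.w; // divide by w before dropping it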
I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.
From what I can tell, previous versions of OGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.
My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.
The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't. It gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)
Can anyone direct me to where I'm going wrong, and explain why?
Code:
C++
// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);
// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);
Vertex Shader
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;
flat out int spriteNum;
uniform mat4 MVP;
const float constAtten = 0.9;
const float linearAtten = 0.6;
const float quadAtten = 0.001;
void main() {
spriteNum = spriteNum_;
gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1); // Note: I have fiddled the MVP so that z is height rather than depth, since this is how I learned my vectors.
float dist = distance(gl_Position, vec4(0,0,0,1));
float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
gl_PointSize = 768.0 * attn;
}
Fragment Shader
#version 330 core
flat in int spriteNum;
out vec4 color;
uniform sampler2DArray Sprites;
void main() {
color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
if (color.a < 0.2)
discard;
}
First of all, I don't really understand why you use pos.x + 1.
Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance.
uniform mat4 MV; //modelview matrix
vec3 eyePos = (MV * vec4(pos.x, pos.y, 0.5, 1)).xyz; // mat4 * vec4 yields a vec4, so take .xyz
Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:
float dist = length(eyePos); //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
EDIT: But in general I think this attenuation model is not a good way to go, because often you just want the sprite to keep its object-space size, and you have to fiddle quite a bit with the attenuation factors to achieve that.
A better way is to input its object space size and just compute the screen space size in pixels (which is what gl_PointSize actually is) based on that using the current view and projection setup:
uniform mat4 MV; //modelview matrix
uniform mat4 P; //projection matrix
uniform float spriteWidth; //object space width of sprite (maybe a per-vertex in)
uniform float screenWidth; //screen width in pixels
vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);
vec4 projCorner = P * vec4(0.5*spriteWidth, 0.5*spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;
This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
EDIT: Of course you should also keep in mind the limitations of point sprites. A point sprite is clipped based on its center position. This means that when its center moves out of the screen, the whole sprite disappears. With large sprites (like in your case, I think) this might really be a problem.
Therefore I would rather suggest you use simple textured quads. This way you circumvent the whole attenuation problem, as the quads are just transformed like every other 3D object. You only need to implement the rotation toward the viewer, which can be done either on the CPU or in the vertex shader.
Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in view space, which seems to solve all my problems:
Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)
Vertex Shader
#version 330 core
layout (location = 0) in vec4 gridPos;
layout (location = 1) in int spriteNum_in;
flat out int spriteNum;
// simple pass-thru to the geometry generator
void main() {
gl_Position = gridPos;
spriteNum = spriteNum_in;
}
Geometry Shader
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
flat in int spriteNum[];
smooth out vec3 stp;
uniform mat4 Projection;
uniform mat4 View;
void main() {
// Put us into view (eye) space.
vec4 pos = View * gl_in[0].gl_Position;
int snum = spriteNum[0];
// Bottom left corner
gl_Position = pos;
gl_Position.x += 0.5;
gl_Position = Projection * gl_Position;
stp = vec3(0, 0, snum);
EmitVertex();
// Top left corner
gl_Position = pos;
gl_Position.x += 0.5;
gl_Position.y += 1;
gl_Position = Projection * gl_Position;
stp = vec3(0, 1, snum);
EmitVertex();
// Bottom right corner
gl_Position = pos;
gl_Position.x -= 0.5;
gl_Position = Projection * gl_Position;
stp = vec3(1, 0, snum);
EmitVertex();
// Top right corner
gl_Position = pos;
gl_Position.x -= 0.5;
gl_Position.y += 1;
gl_Position = Projection * gl_Position;
stp = vec3(1, 1, snum);
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330 core
smooth in vec3 stp;
out vec4 colour;
uniform sampler2DArray Sprites;
void main() {
colour = texture(Sprites, stp);
if (colour.a < 0.2)
discard;
}
I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.
Think about it this way -- in projected space, as an object gets closer to you, its distance in the horizontal and vertical directions becomes exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions is going to make the distance get larger when you get really close, which is why you're seeing the object shrink.
At least in OpenGL ES 2.0, there is a maximum size limitation on gl_PointSize imposed by the OpenGL implementation. You can query the size with ALIASED_POINT_SIZE_RANGE.
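A minimal query might look like this (sketch):

GLfloat pointSizeRange[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, pointSizeRange);
// pointSizeRange[0] and pointSizeRange[1] are the minimum and maximum sizes gl_PointSize is clamped to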