Resizing point sprites based on distance from the camera - c++

I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.
From what I can tell, previous versions of OGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.
My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.
The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't. It gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)
Can anyone direct me to where I'm going wrong, and explain why?
Code:
C++
// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);
// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);
Vertex Shader
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;
flat out int spriteNum;
uniform mat4 MVP;
const float constAtten = 0.9;
const float linearAtten = 0.6;
const float quadAtten = 0.001;
void main() {
spriteNum = spriteNum_;
gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1); // Note: I have fiddled the MVP so that z is height rather than depth, since this is how I learned my vectors.
float dist = distance(gl_Position, vec4(0,0,0,1));
float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
gl_PointSize = 768.0 * attn;
}
Fragment Shader
#version 330 core
flat in int spriteNum;
out vec4 color;
uniform sampler2DArray Sprites;
void main() {
color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
if (color.a < 0.2)
discard;
}

First of all, I don't really understand why you use pos.x + 1.
Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance.
uniform mat4 MV; //modelview matrix
vec3 eyePos = vec3(MV * vec4(pos.x, pos.y, 0.5, 1));
Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:
float dist = length(eyePos); //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
EDIT: In general, though, I don't think this attenuation model is a good approach, because often you just want the sprite to keep its object-space size, and you would have to fiddle quite a bit with the attenuation factors to achieve that.
A better way is to pass in its object-space size and just compute the screen-space size in pixels (which is what gl_PointSize actually is) from that, using the current view and projection setup:
uniform mat4 MV; //modelview matrix
uniform mat4 P; //projection matrix
uniform float spriteWidth; //object space width of sprite (maybe an per-vertex in)
uniform float screenWidth; //screen width in pixels
vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);
vec4 projCorner = P * vec4(0.5*spriteWidth, 0.5*spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;
This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
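Put together with the question's inputs, the whole vertex shader might look roughly like this (a sketch assuming the same pos/spriteNum_ attributes as the question and the MV, P, spriteWidth and screenWidth uniforms introduced above; the pos.x + 1 offset is left out, as questioned at the start of this answer):
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;
flat out int spriteNum;
uniform mat4 MV;            // modelview matrix (no projection)
uniform mat4 P;             // projection matrix
uniform float spriteWidth;  // object-space width of the sprite
uniform float screenWidth;  // viewport width in pixels
void main() {
    spriteNum = spriteNum_;
    // eye-space position first, then project the half-width to get a size in pixels
    vec4 eyePos = MV * vec4(pos, 0.5, 1.0);
    vec4 projCorner = P * vec4(0.5 * spriteWidth, 0.5 * spriteWidth, eyePos.z, eyePos.w);
    gl_PointSize = screenWidth * projCorner.x / projCorner.w;
    gl_Position = P * eyePos;
}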
EDIT: Of course you should also keep in mind the limitations of point sprites. A point sprite is clipped based on its center position. This means that when its center moves out of the screen, the whole sprite disappears. With large sprites (as in your case, I think) this might really be a problem.
Therefore I would rather suggest you use simple textured quads. This way you circumvent the whole attenuation problem, as the quads are just transformed like every other 3D object. You only need to implement the rotation toward the viewer, which can be done either on the CPU or in the vertex shader (see the sketch below).
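For the vertex-shader variant of that billboard rotation, a minimal sketch could look like the following (corner, spriteWidth and spriteHeight are hypothetical names: corner would be a per-vertex offset in [-0.5, 0.5] x [0, 1] selecting one of the four quad corners, and the width/height are the object-space sprite dimensions):
// billboarding in the vertex shader: expand the quad corner in eye space,
// so the quad always faces the viewer
vec4 center = MV * vec4(pos, 0.5, 1.0);
vec4 eyePos = center + vec4(corner.x * spriteWidth, corner.y * spriteHeight, 0.0, 0.0);
gl_Position = P * eyePos;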

Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in view space, which seems to solve all my problems:
Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)
Vertex Shader
#version 330 core
layout (location = 0) in vec4 gridPos;
layout (location = 1) in int spriteNum_in;
flat out int spriteNum;
// simple pass-thru to the geometry generator
void main() {
gl_Position = gridPos;
spriteNum = spriteNum_in;
}
Geometry Shader
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
flat in int spriteNum[];
smooth out vec3 stp;
uniform mat4 Projection;
uniform mat4 View;
void main() {
// Transform the point into view space.
vec4 pos = View * gl_in[0].gl_Position;
int snum = spriteNum[0];
// Bottom left corner
gl_Position = pos;
gl_Position.x += 0.5;
gl_Position = Projection * gl_Position;
stp = vec3(0, 0, snum);
EmitVertex();
// Top left corner
gl_Position = pos;
gl_Position.x += 0.5;
gl_Position.y += 1;
gl_Position = Projection * gl_Position;
stp = vec3(0, 1, snum);
EmitVertex();
// Bottom right corner
gl_Position = pos;
gl_Position.x -= 0.5;
gl_Position = Projection * gl_Position;
stp = vec3(1, 0, snum);
EmitVertex();
// Top right corner
gl_Position = pos;
gl_Position.x -= 0.5;
gl_Position.y += 1;
gl_Position = Projection * gl_Position;
stp = vec3(1, 1, snum);
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330 core
smooth in vec3 stp;
out vec4 colour;
uniform sampler2DArray Sprites;
void main() {
colour = texture(Sprites, stp);
if (colour.a < 0.2)
discard;
}

I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.
Think about it this way -- in projected space, as an object gets closer to you, its distance in the horizontal and vertical directions becomes exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions is going to make the distance get larger when you get really close, which is why you're seeing the object shrink.
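As a minimal sketch of that change to the question's vertex shader (assuming a separate ModelView uniform is available in addition to MVP):
uniform mat4 ModelView;            // modelview only, no projection (assumed uniform)
vec4 eyePos = ModelView * vec4(pos.x + 1, pos.y, 0.5, 1);
float dist = length(eyePos.xyz);   // the camera sits at the eye-space origin
float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
gl_PointSize = 768.0 * attn;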

At least in OpenGL ES 2.0, there is a maximum size limitation on gl_PointSize imposed by the OpenGL implementation. You can query the supported range with GL_ALIASED_POINT_SIZE_RANGE.
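For example, on the C++ side the limit can be queried like this (GL_ALIASED_POINT_SIZE_RANGE is, as far as I know, also accepted by desktop OpenGL's glGet):
GLfloat sizeRange[2]; // {smallest, largest} supported point size in pixels
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, sizeRange);
// whatever gl_PointSize the shader writes gets clamped to sizeRange[1]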

Related

Resizing the window causes my 2D lighting to stretch

I am trying to implement simple artificial 2D lighting. I am not using an algorithm like Phong's. However, I am having some difficulty ensuring that my lighting does not stretch/squeeze whenever the window resizes. Any tips and suggestions will be appreciated. I have tried converting my radius into a vec2 so that I can scale it according to the aspect ratio, but it doesn't work properly. Also, I am aware that my code is not the most efficient; any feedback is also appreciated as I am still learning! :D
I have an orthographic projection matrix transforming the light position so that it will be at the correct spot in the viewport. This fixed the position but not the radius (as I am calculating it per fragment). How would I go about transforming the radius based on the aspect ratio?
void LightSystem::Update(const OrthographicCamera& camera)
{
std::vector<LightComponent> lights;
for (auto& entity : m_Entities)
{
auto& light = g_ECSManager.GetComponent<LightComponent>(entity);
auto& trans = g_ECSManager.GetComponent<TransformComponent>(entity);
if (light.lightEnabled)
{
light.pos = trans.Position;
glm::mat4 viewProjMat = camera.GetViewProjectionMatrix();
light.pos = viewProjMat * glm::vec4(light.pos, 1.f);
// Need to store all the light attributes in an array
lights.emplace_back(light);
}
// Create a function in Render2D.cpp, pass all the arrays as a uniform variable to the shader, call this function here
glm::vec2 res{ camera.GetWidth(), camera.GetHeight() };
Renderer2D::DrawLight(lights, camera, res);
}
}
Here is my shader:
#type fragment
#version 330 core
layout (location = 0) out vec4 color;
#define MAX_LIGHTS 10
uniform struct Light
{
vec4 colour;
vec3 position;
float radius;
float intensity;
} allLights[MAX_LIGHTS];
in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;
in vec4 fragmentPosition;
uniform sampler2D u_Textures[32];
uniform vec4 u_ambientColour;
uniform int numLights;
uniform vec2 resolution;
vec4 calculateLight(Light light)
{
float lightDistance = length(distance(fragmentPosition.xy, light.position.xy));
//float ar = resolution.x / resolution.y;
if (lightDistance >= light.radius)
{
return vec4(0, 0, 0, 1); //outside of radius make it black
}
return light.intensity * (1 - lightDistance / light.radius) * light.colour;
}
void main()
{
vec4 texColor = v_Color;
vec4 netLightColour = vec4(0, 0, 0, 1);
if (numLights == 0)
color = texColor;
else
{
for(int i = 0; i < numLights; ++i) //Loop through lights
netLightColour += calculateLight(allLights[i]) + u_ambientColour;
color = texColor * netLightColour;
}
}
You must use an orthographic projection matrix in the vertex shader. Modify the clip space position through the projection matrix.
Alternatively, consider the aspect ratio when calculating the distance to the light:
float aspectRatio = resolution.x/resolution.y;
vec2 pos = fragmentPosition.xy * vec2(aspectRatio, 1.0);
float lightDistance = length(distance(pos.xy, light.position.xy));
I'm going to compile all the answers for my question, as I had done a bad job in asking and everything turned out to be a mess.
As the other answers suggest, first I had to use an orthographic projection matrix to ensure that the light source position was displayed at the correct position in the viewport.
Next, because of the way I did my lighting, the projection matrix alone would not fix the stretch effect, as my light wasn't an actual circle object made of actual vertices. I had to turn radius into a vec2, representing the radius vectors along the x and y axes, so that I can then modify these vectors based on the aspect ratio:
if (aspectRatio > 1.0)
    light.radius.x /= aspectRatio;
else
    light.radius.y *= aspectRatio; // presumably the tall-window case adjusts the y radius instead
I had posted another question here, to modify my lighting algorithm to support an ellipse shape (see the sketch below). This allowed me to perform the scaling needed to counter the stretching along the x/y axes whenever my aspect ratio changed. Thank you all for the answers.
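For completeness, the ellipse falloff with radius as a vec2 can look roughly like this (a sketch of the general technique rather than the exact code from the linked question; it assumes the Light struct's radius member is now declared as vec2):
vec4 calculateLight(Light light)
{
    // divide by the per-axis radii: d has length 1.0 exactly on the ellipse boundary
    vec2 d = (fragmentPosition.xy - light.position.xy) / light.radius;
    float ellipseDist = length(d);
    if (ellipseDist >= 1.0)
        return vec4(0, 0, 0, 1); // outside of the ellipse, make it black
    return light.intensity * (1.0 - ellipseDist) * light.colour;
}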

OpenGL flickering issue in rendering

I have an OpenGL rendering issue and I think that is due to a problem with the Z-Buffer.
I have code that renders a set of points whose size depends on the distance from the camera, so bigger points are closer to the camera. Moreover, in the following snapshots the color reflects the z-buffer value of the fragment.
As you can see, there is a big point near the camera.
However, some frames later the same point is rendered behind more distant points.
These are the functions that I call before render the points:
glClearDepth(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
this is the vertex shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec4 color;
// uniform variable
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform float pointSize;
out vec4 fragColor;
void main() {
gl_Position = projection * view * model * vec4(position, 1.0f);
vec3 posEye = vec3(view * model * vec4(position, 1.0f));
float Z = length(posEye);
gl_PointSize = pointSize / Z;
fragColor = color;
}
and this is the fragment shader
#version 330 core
in vec4 fragColor;
out vec4 outColor;
void main() {
vec2 cxy = 2.0 * gl_PointCoord - 1.0;
float r = dot(cxy, cxy);
if(r > 1.0) discard;
// calculate lighting
vec3 pos = vec3(cxy.x,cxy.y,sqrt(1.0-r));
vec3 lightDir = vec3(0.577, 0.577, 0.577);
float diffuse = max(0.0, dot(lightDir, pos));
float alpha = 1.0;
float delta = fwidth(r);
alpha = 1.0 - smoothstep(1.0 - delta, 1.0 + delta, r);
outColor = fragColor * alpha * diffuse;
}
UPDATE
It looks like the problem was due to the definition of the near and far planes.
There is something I do not understand about which values are best to use.
This is the function that I use to create the projection matrix:
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
where width=1600, height=1200, fov=45.
When things didn't work, zNear was set to zero and zFar was set to double the distance of the farthest point from the center of gravity of the point cloud, i.e. 1.844 in my case.
If I move the near clipping plane from zero to 0.1 the flicker seems resolved. However, the distant objects, which I saw before, disappear. So I also changed the far plane to 10 and everything seems to work. Unfortunately, I don't understand why the values I used before were not good.
As already noted in the update, the issue is called Z-fighting, and it comes from choosing the wrong near and far planes in the projection matrix. If they are too far away from your objects, only a very small number of discrete values for z is left. The drawing is then determined by the call order and the processing order in the shaders, since there is no difference in z. If you have a hint of what needs to come first, draw it in that order. Once the render call has been issued, there is no proper way to determine which processor gets which object first, and that is what you see as flickering.
So please update the way you establish your projection matrix. Best practice: look at the objects that need to be rendered and determine a rough bounding box, turn it into a bounding sphere; then center - radius gives a point on the near plane and center + radius a point on the far plane. Done! (See the sketch below.)
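A sketch of that recipe on the C++/glm side (makeProjection is a hypothetical helper; cameraPos, fov, width and height are assumed to be the values already used for the existing view/projection setup, and the bounding sphere comes from the point cloud):
#include <algorithm>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a projection whose near/far planes hug the scene's bounding sphere.
glm::mat4 makeProjection(const glm::vec3& cameraPos, float fov, int width, int height,
                         const glm::vec3& sceneCenter, float sceneRadius)
{
    float camDist = glm::length(cameraPos - sceneCenter);  // camera to sphere center
    float zNear   = std::max(camDist - sceneRadius, 0.1f); // never let the near plane reach 0
    float zFar    = camDist + sceneRadius;
    return glm::perspective(glm::radians(fov), width / (float)height, zNear, zFar);
}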
Update
Looking into
glm::perspective(glm::radians(fov), width/(float)height, zNear, zFar);
gives the specific term for the influence on z (before normalization):
Result[3][2] = - (static_cast<T>(2) * zFar * zNear) / (zFar - zNear);
meaning that, besides requiring zFar != zNear, setting zNear to zero should also be avoided: with zNear = 0 this term vanishes, there is no difference in z, and the image has to flicker. I would also assume that you did not apply any extra transform to your projection matrix; better don't. If all your objects live in a space around the coordinate origin, which is also the center of your projection, move them in front of the projection as a last step. In other words, apply the translation not to the projection/view matrix but to your object (model) matrix, to avoid having such an ill-formed projection space. E.g. set near to 1 and move all objects by that amount through your scene.

How to handle lighting (ambient, diffuse, specular) for point spheres in OpenGL

Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several millions), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: In which coordinate frame do I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip-space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I would use meshes for each sphere but I think with several million particles this would decrease performance drastically, so better do it with sphere intersection, would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
lightDir = projection * view * vec4(lightPos - position, 1.0f);
viewDir = projection * view * vec4(viewPos - position, 1.0f);
viewPosition = projection * view * vec4(lightPos, 1.0f);
posClip = projection * view * vec4(position, 1.0f);
gl_Position = posClip;
gl_PointSize = radius;
vertexColor = density;
}
I know that perspective division happens for the gl_Position variable; does that actually happen to ALL vec4's which are passed from the vertex to the fragment shader? If not, maybe the calculations in the fragment shader would be wrong?
And the fragment shader, where the normal and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
// AMBIENT LIGHT
float ambientStrength = 0.0;
vec3 ambient = ambientStrength * lightColor;
// Normal calculation done in clip space (first from texture (gl_PointCoord 0 to 1) coord to NDC( -1 to 1))
vec3 normal;
normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
float mag = dot(normal.xy, normal.xy); // squared length; comparing against 1 works without the sqrt (sqrt(1) = 1)
if (mag > 1.0) // discard fragments outside sphere
discard;
normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
// DIFFUSE LIGHT
float diff = max(0.0, dot(vec3(lightDir), normal));
vec3 diffuse = diff * lightColor;
// SPECULAR LIGHT
float specularStrength = 0.1;
vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
vec3 reflectDir = reflect(-vec3(lightDir), normal);
float shininess = 64;
float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
vec3 specular = specularStrength * spec * lightColor;
vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but still there are diffuse and specular effects going on. This side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 in the fragment shader and I don't know where it comes from, since the shader program actually compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
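As a rough sketch of what that could look like for the question's vertex shader if the lighting vectors are built in view space (lightDirView and viewDirView are hypothetical output names, and the projection is deliberately not applied to them):
out vec3 lightDirView; // renormalize these in the fragment shader after interpolation
out vec3 viewDirView;
void main()
{
    // apply the view matrix (and model, if there were one), but NOT the projection
    vec3 posView   = vec3(view * vec4(position, 1.0));
    vec3 lightView = vec3(view * vec4(lightPos, 1.0));
    lightDirView = normalize(lightView - posView);
    viewDirView  = normalize(-posView); // the camera sits at the view-space origin
    gl_Position  = projection * vec4(posView, 1.0);
    gl_PointSize = radius;
}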
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
vec4 toPixel; //fragment coordinate in particle coordinates
vec4 cam; //camera position in particle coordinates
vec4 color; //sphere color
float radius; //sphere radius
} frag;
uniform mat4 p; //projection matrix
void main(void)
{
vec3 v = frag.toPixel.xyz - frag.cam.xyz;
vec3 e = frag.cam.xyz;
float ev = dot(e, v);
float vv = dot(v, v);
float ee = dot(e, e);
float rr = frag.radius * frag.radius;
float radicand = ev * ev - vv * (ee - rr);
if(radicand < 0)
discard;
float rt = sqrt(radicand);
float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
float lambda2 = (-ev + rt) / vv; //second intersection on the ray
if(lambda2 < lambda) //if the first intersection is behind the camera
discard;
vec3 hit = lambda * v; //intersection point
vec3 normal = (frag.cam.xyz + hit) / frag.radius;
vec4 proj = p * vec4(hit, 1); //intersection point in clip space
gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
vec3 vNormalized = -normalize(v);
float nDotL = dot(vNormalized, normal);
vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
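If you actually want the post-divide (NDC) value of such an interpolated clip-space attribute, the division has to be done manually in the fragment shader, e.g.:
vec3 ndcPos = posClip.xyz / posClip.w; // manual perspective division of the interpolated clip-space position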
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
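In shader terms that usually means combining the terms along these lines (a sketch, not a drop-in fix for the code above):
vec3 surfaceColor = colormap(vertexColor / 8).rgb;
vFragColor = vec4(surfaceColor * (ambient + diffuse) + specular, 1.0);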

GBUFFER Decal Projection and scaling

I have been working on projecting decals onto anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start with trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
All of this is done in perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model view projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and, with it, calculate the position in world space using the inverse of the projection matrix.
Next, I translate this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1,1,1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the correct size and rotation of the screen point would be handled correctly, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility
out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;
uniform mat4 decal_matrix;
void main(void)
{
gl_Position = decal_matrix * gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
positionWS = (decal_matrix * gl_Vertex);
positionSS = gl_Position;
matPrjInv = inverse(gl_ModelViewProjectionMatrix);
invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;
uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;
in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat; // inverse decal matrix
in mat4 matPrjInv; // inverse projection matrix
void clip(vec3 v){
if (v.x > tr.x || v.x < bl.x ) { discard; }
if (v.y > tr.y || v.y < bl.y ) { discard; }
if (v.z > tr.z || v.z < bl.z ) { discard; }
}
vec2 postProjToScreen(vec4 position)
{
vec2 screenPos = position.xy / position.w;
return 0.5 * (vec2(screenPos.x, screenPos.y) + 1);
}
void main(){
// Calculate UVs
vec2 UV = postProjToScreen(positionSS);
// sample the Depth from the Depthsampler
float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;
// Calculate Worldposition by recreating it out of the coordinates and depth-sample
vec4 ScreenPosition;
ScreenPosition.xy = UV * 2.0 - 1.0;
ScreenPosition.z = (Depth);
ScreenPosition.w = 1.0f;
// Transform position from screen space to world space
vec4 WorldPosition = matPrjInv * ScreenPosition ;
WorldPosition.xyz /= WorldPosition.w;
WorldPosition.w = 1.0f;
// transform to decal original position and size.
// 1 x 1 x 1
WorldPosition = invd_mat * WorldPosition;
clip (WorldPosition.xyz);
// Get UV for textures;
WorldPosition.xy += 0.5;
WorldPosition.y *= -1.0;
vec4 bump = texture2D(normalMap, WorldPosition.xy);
gColor = texture2D(colorMap, WorldPosition.xy);
//Going to have to do decals in 2 passes..
//Blend doesn't work with GBUFFER.
//Lots more to sort out.
gNormal.xyz = bump.xyz;
gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals. Much larger than what my shader is creating!
I have tried creating a new matrix using the decal's and the projection matrices to construct a sort of "lookat" matrix and translate the screen position into the decal's post-transformed state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that translating using the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
Updated the code for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip sub so it works correctly. As it is, this code now works. I will update this more if needed so others don't have to go through the pain I did to get it working.
Some issues remain to be resolved, such as decals lying over each other: the one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them into the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE); // before drawing the decals
glDepthMask(GL_TRUE);  // after drawing the decals
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v){
if (v.x > tr.x || v.x < bl.x ) { discard; }
if (v.y > tr.y || v.y < bl.y ) { discard; }
if (v.z > tr.z || v.z < bl.z ) { discard; }
}

Issues when simulating directional light in OpenGL

I'm working on an OpenGL application using the Qt5 GUI framework. However, I'm not an expert in OpenGL and I'm facing a couple of issues when trying to simulate directional light. I'm using 'almost' the same algorithm I used in a WebGL application, which works just fine.
The application is used to render multiple adjacent cells of a large gridblock (each of which is represented by 8 independent vertices), meaning that some vertices of the whole gridblock are duplicated in the VBO. The normals are calculated per face in the geometry shader, as shown in the code below.
QOpenGLWidget paintGL() body.
void OpenGLWidget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
m_camera = camera.toMatrix();
m_world.setToIdentity();
m_program->bind();
m_program->setUniformValue(m_projMatrixLoc, m_proj);
m_program->setUniformValue(m_mvMatrixLoc, m_camera * m_world);
QMatrix3x3 normalMatrix = (m_camera * m_world).normalMatrix();
m_program->setUniformValue(m_normalMatrixLoc, normalMatrix);
QVector3D lightDirection = QVector3D(1,1,1);
lightDirection.normalize();
QVector3D directionalColor = QVector3D(1,1,1);
QVector3D ambientLight = QVector3D(0.2,0.2,0.2);
m_program->setUniformValue(m_lightDirectionLoc, lightDirection);
m_program->setUniformValue(m_directionalColorLoc, directionalColor);
m_program->setUniformValue(m_ambientColorLoc, ambientLight);
geometries->drawGeometry(m_program);
m_program->release();
}
Vertex Shader
#version 330
layout(location = 0) in vec4 vertex;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vec3 B = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalMatrix * normalize(cross(A,B));
EmitVertex();
EndPrimitive();
}
Fragment Shader
#version 330
in vec3 transformedNormal;
out vec4 fColor;
uniform vec3 lightDirection;
uniform vec3 ambientColor;
uniform vec3 directionalColor;
void main()
{
highp float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
vec3 vLightWeighting = ambientColor + directionalColor * directionalLightWeighting;
highp vec3 color = vec3(1, 1, 0.0);
fColor = vec4(color*vLightWeighting, 1.0);
}
The 1st issue is that lighting on the faces seems to change whenever the camera angle changes (camera location doesn't affect it, only the angle). You can see this behavior in the following snapshot. My guess is that I'm doing something wrong when calculating the normal matrix, but I can't figure out what it is.
The 2nd issue (the one causing me headaches) is that whenever the camera is moved, the edges of the cells show blocky, jagged lines that flicker when the camera moves around. This effect gets really nasty when there are too many cells clustered together.
The model used in the snapshot is just a sample slab of 10 cells to better illustrate the faulty effects. The actual models (gridblock) contain up to 200K cells stacked together.
EDIT: 2nd issue solution.
I was using znear/zfar values of 0.01f and 50000.0f respectively. When I changed the znear to 1.0f, this effect disappeared. According to the OpenGL Wiki, this is caused by a zNear clipping plane value that's too close to 0.0: as the zNear clipping plane is set increasingly closer to 0.0, the effective precision of the depth buffer decreases dramatically.
EDIT2: I tried debug drawing the normals as suggested in the comments, and I quickly realized that I probably shouldn't calculate them based on gl_Position (after the MVP matrix multiplication in the VS); instead I should use the original vertex locations, so I modified the shaders as follows:
Vertex Shader (UPDATED)
#version 330
layout(location = 0) in vec4 vertex;
out vec3 vert;
uniform mat4 projMatrix;
uniform mat4 mvMatrix;
void main()
{
vert = vertex.xyz;
gl_Position = projMatrix * mvMatrix * vertex;
}
Geometry Shader (UPDATED)
#version 330
layout ( triangles ) in;
layout ( triangle_strip, max_vertices = 3 ) out;
in vec3 vert [];
out vec3 transformedNormal;
uniform mat3 normalMatrix;
void main()
{
vec3 A = vert[2].xyz - vert[0].xyz;
vec3 B = vert[1].xyz - vert[0].xyz;
gl_Position = gl_in[0].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[1].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
gl_Position = gl_in[2].gl_Position;
transformedNormal = normalize(normalMatrix * normalize(cross(A,B)));
EmitVertex();
EndPrimitive();
}
But even after this modification the normals of the surface still change with the camera angle, as shown below in the screenshot. I don't know if the normal calculation is wrong, or the normal matrix calculation, or maybe both...
EDIT3: 1st issue solution: changing the normal calculation in the GS from transformedNormal = normalize(normalMatrix * normalize(cross(A,B))); to transformedNormal = normalize(cross(A,B)); seems to solve the problem. Omitting the normalMatrix from the calculation fixed the issue, and the normals don't change with the viewing angle.
If I missed any important/relevant information, please notify me in a comment.
Depth buffer precision
The depth buffer is usually stored as a 16 or 24 bit buffer. It is a HW implementation of a float normalized to a specific range, so there are very few bits for the mantissa/exponent in comparison to a standard float.
If I oversimplify things and assume integer values instead of float, then for a 16 bit buffer you get 2^16 values. With znear=0.1 and zfar=50000.0 you get only 65535 values over the full range. Now, as the depth values are nonlinear, you get higher accuracy near the znear plane and much, much lower accuracy near the zfar plane, so the depth values jump with a larger and larger step, causing accuracy problems wherever two polygons are close together.
I empirically got this for setting the planes in my views:
(zfar-znear)/desired_accuracy_step > 0.3*(2^n)
where n is the depth buffer bit-width and desired_accuracy_step is the resolution I want along the Z axis. Sometimes I have seen it exchanged with the znear value.