Model stretched when window is resized, OpenGL 3.2 / C++

I've set up a window (in MFC) that contains an OpenGL 3.2 rendering context. Since it's OpenGL 3.2 I want to use shaders etc., so I'm building my projection and view matrices by hand. I used this tutorial as a reference to build them and pass them to my shader.
The problem now is that (even in the example of the tutorial) when I resize my window, the model is stretched.
This is the code I use to build my matrices (I rebuild them and send them to my shader every time the window is refreshed).
View Matrix:
float zAxis[3], xAxis[3], yAxis[3];
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - m_position[0];
zAxis[1] = lookAt[1] - m_position[1];
zAxis[2] = lookAt[2] - m_position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * m_position[0]) + (xAxis[1] * m_position[1]) + (xAxis[2] * m_position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * m_position[0]) + (yAxis[1] * m_position[1]) + (yAxis[2] * m_position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * m_position[0]) + (zAxis[1] * m_position[1]) + (zAxis[2] * m_position[2])) * -1.0f;
viewMatrix[0] = xAxis[0];
viewMatrix[1] = yAxis[0];
viewMatrix[2] = zAxis[0];
viewMatrix[3] = 0.0f;
viewMatrix[4] = xAxis[1];
viewMatrix[5] = yAxis[1];
viewMatrix[6] = zAxis[1];
viewMatrix[7] = 0.0f;
viewMatrix[8] = xAxis[2];
viewMatrix[9] = yAxis[2];
viewMatrix[10] = zAxis[2];
viewMatrix[11] = 0.0f;
viewMatrix[12] = result1;
viewMatrix[13] = result2;
viewMatrix[14] = result3;
viewMatrix[15] = 1.0f;
Projection Matrix:
float screenAspect = (float)rcClient.Width() / (float)rcClient.Height();
float fov = 3.14159265358979323846f / 4.0f;
float zfar = 1000.0f;
float znear = 0.1f;
projectionMatrix[0] = 1.0f / (screenAspect * tan( fov * 0.5f));
projectionMatrix[1] = 0.0f;
projectionMatrix[2] = 0.0f;
projectionMatrix[3] = 0.0f;
projectionMatrix[4] = 0.0f;
projectionMatrix[5] = 1.0f / tan( fov * 0.5f);
projectionMatrix[6] = 0.0f;
projectionMatrix[7] = 0.0f;
projectionMatrix[8] = 0.0f;
projectionMatrix[9] = 0.0f;
projectionMatrix[10] = zfar / (zfar - znear);
projectionMatrix[11] = 1.0f;
projectionMatrix[12] = 0.0f;
projectionMatrix[13] = 0.0f;
projectionMatrix[14] = (-znear * zfar) / (zfar - znear);
projectionMatrix[15] = 0.0f;
And in my shader I use them this way:
#version 150
in vec3 inputPosition;
in vec3 inputColor;
out vec3 color;
uniform mat4 worldMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
void main(void)
{
gl_Position = projectionMatrix * viewMatrix * worldMatrix * vec4(inputPosition, 1.0f);
color = inputColor;
}
Does anyone have an idea of what is happening here?

Note that you can't skip glViewport just because you use shaders: the viewport transform is applied after the vertex shader, so you still need to call glViewport with the new size whenever the window is resized, and rebuild the projection matrix with the new aspect ratio.
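A minimal sketch of that resize path in an MFC view (the OnSize wiring is standard MFC; BuildProjectionMatrix and m_projectionMatrix are placeholder names, not from the original post):

void CMyGLView::OnSize(UINT nType, int cx, int cy)
{
    CView::OnSize(nType, cx, cy);
    if (cx <= 0 || cy <= 0)
        return; // window minimized

    // Map NDC to the new client area.
    glViewport(0, 0, cx, cy);

    // Rebuild the projection with the new aspect ratio (same math as above).
    float screenAspect = (float)cx / (float)cy;
    BuildProjectionMatrix(m_projectionMatrix, screenAspect);
    // The matrix is re-sent to the shader on the next redraw.
}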
Can you change your projection matrix to:
float projectionMatrix[4][4];
projectionMatrix[0][0] = 1.0f / (screenAspect * tan( fov * 0.5f));
projectionMatrix[0][1] = 0.0f;
projectionMatrix[0][2] = 0.0f;
projectionMatrix[0][3] = 0.0f;
projectionMatrix[1][0] = 0.0f;
projectionMatrix[1][1] = 1.0f / tan( fov * 0.5f);
projectionMatrix[1][2] = 0.0f;
projectionMatrix[1][3] = 0.0f;
projectionMatrix[2][0] = 0.0f;
projectionMatrix[2][1] = 0.0f;
projectionMatrix[2][2] = (-znear - zfar) / (znear - zfar);
projectionMatrix[2][3] = 2.0f * zfar * znear / (znear - zfar);
projectionMatrix[3][0] = 0.0f;
projectionMatrix[3][1] = 0.0f;
projectionMatrix[3][2] = 1.0f;
projectionMatrix[3][3] = 0.0f;
I am not sure about your UVN matrix, but it may not be the problem, since you got it working after using glViewport.
Anyway, instead of generating your matrices by hand you can use GLM. A code example on building the projection, UVN, translation matrices, etc. is given here: glm code example
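For example, with GLM the equivalent matrices look like this (a sketch; note that glm::perspective and glm::lookAt use the right-handed OpenGL convention with depth in [-1, 1], unlike the left-handed matrices above, and projLoc is a placeholder uniform location):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// fov of pi/4 radians = 45 degrees, matching the original code.
glm::mat4 projection = glm::perspective(glm::radians(45.0f), screenAspect, 0.1f, 1000.0f);
glm::mat4 view = glm::lookAt(glm::vec3(m_position[0], m_position[1], m_position[2]),
                             glm::vec3(lookAt[0], lookAt[1], lookAt[2]),
                             glm::vec3(up[0], up[1], up[2]));

// value_ptr yields the column-major layout glUniformMatrix4fv expects.
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));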
I think these tutorials explain transformations well enough: http://ogldev.atspace.co.uk/

Related

(DX12) Trying to implement volumetric scattering for multiple spot lights, but it's not going well

(This image is what I want to implement.)
I am attempting post-processing using a compute shader to implement light shafts for multiple spot lights in my DX12 framework.
The first thing I tried was the method at the following link: https://gitlab.com/tomasoh/100_procent_more_volume/-/blob/master/shaders/volumetric.frag
It's a very complicated and hard-to-understand shader, but it's built on the premise of using multiple lights, so it serves as an example for my purpose.
However, since my game has a limit of 32 light sources, and computing visibility by rendering a shadow map for every light would cause an excessive frame drop, I implemented visibility as a constant 1.0, and did not get the desired result. (Of course, that result was to be expected.)
Here is how I did it:
#include "lighting.hlsl"
Texture2D<float4> inputTexture : register(t0);
Texture2D<float> depthTexture : register(t1);
RWTexture2D<float4> outputTexture : register(u0);
#define PI 3.141592653589793238f
cbuffer VolumetricCB : register(b1)
{
float absorptionTau : packoffset(c0);
float3 absorptionColor : packoffset(c0.y);
int scatteringSamples : packoffset(c1.x);
float scatteringTau : packoffset(c1.y);
float scatteringZFar : packoffset(c1.z);
float3 scatteringColor : packoffset(c2);
matrix gInvProj : packoffset(c3);
matrix gInvView : packoffset(c7);
float3 gCameraPos : packoffset(c11);
Light gLights[NUM_LIGHTS] : packoffset(c12);
}
float random(float2 co)
{
return frac(sin(dot(co.xy, float2(12.9898, 78.233))) * 43758.5453123);
}
float3 PixelWorldPos(float depthValue, int2 pixel)
{
uint width, height;
inputTexture.GetDimensions(width, height);
float2 fPixel = float2(pixel.x, pixel.y);
float x = (fPixel.x / width * 2) - 1;
float y = (fPixel.y / height * (-2)) + 1;
float z = depthValue;
float4 ndcCoords = float4(x, y, z, 1.0f);
float4 p = mul(ndcCoords, gInvProj);
p /= p.w;
float4 worldCoords = mul(p, gInvView);
return worldCoords.xyz;
}
float3 absorptionTransmittance(float dist)
{
return absorptionColor * exp(-dist * (absorptionTau + scatteringTau));
}
float phaseFunction(float3 inDir, float3 outDir)
{
float cosAngle = dot(inDir, outDir) / (length(inDir) * length(outDir));
float x = (1.0 + cosAngle) / 2.0;
float x2 = x * x;
float x4 = x2 * x2;
float x8 = x4 * x4;
float x16 = x8 * x8;
float x32 = x16 * x16;
float nom = 0.5 + 16.5 * x32;
float factor = 1.0 / (4.0 * PI);
return nom * factor;
}
float3 volumetricScattering(float3 worldPosition, Light light)
{
float3 result = float3(0.0, 0.0, 0.0);
float3 camToFrag = worldPosition - gCameraPos;
if (length(camToFrag) > scatteringZFar)
{
camToFrag = normalize(camToFrag) * scatteringZFar;
}
float3 deltaStep = camToFrag / (scatteringSamples + 1);
float3 fragToCamNorm = normalize(gCameraPos - worldPosition);
float3 x = gCameraPos;
float rand = random(worldPosition.xy + worldPosition.z);
x += (deltaStep * rand);
for (int i = 0; i < scatteringSamples; ++i)
{
float visibility = 1.0;
float3 lightToX = x - light.Position;
float lightDist = length(lightToX);
float omega = 4 * PI * lightDist * lightDist;
float3 Lin = absorptionTransmittance(lightDist) * visibility * light.Diffuse * light.SpotPower / omega;
float3 Li = Lin * scatteringTau * scatteringColor * phaseFunction(normalize(lightToX), fragToCamNorm);
result += Li * absorptionTransmittance(distance(x, gCameraPos)) * length(deltaStep);
x += deltaStep;
}
return result;
}
[numthreads(32, 32, 1)]
void CS(uint3 dispatchID : SV_DispatchThreadID)
{
int2 pixel = int2(dispatchID.x, dispatchID.y);
float4 volumetricColor = float4(0.0, 0.0, 0.0, 1.0);
float depthValue = depthTexture[pixel].r;
float3 worldPosition = PixelWorldPos(depthValue, pixel);
float fragCamDist = distance(worldPosition, gCameraPos);
for (int i = 0; i < NUM_LIGHTS; ++i)
{
if (gLights[i].Type == SPOT_LIGHT && gLights[i].FalloffEnd > length(gLights[i].Position - worldPosition))
volumetricColor += float4(volumetricScattering(worldPosition, gLights[i]), 0.0);
}
outputTexture[pixel] = volumetricColor + inputTexture[pixel];
}
(AbsorptionTau = -0.061f, ScatteringTau = 0.059f)
All that code for that tiny spot...
The second method is the one shown in Chapter 13 of GPU Gems 3.
It draws only the light sources to a separate render target, processes that render target with a post-processing shader to create the light scattering, and then merges the result with the back buffer. (At least that's how I understand it.)
However, this method was designed for a single very strong light, and the modification I made to adapt it, shown below, didn't work well.
[numthreads(32, 32, 1)]
void CS(uint3 dispatchID : SV_DispatchThreadID)
{
uint2 pixel = dispatchID.xy;
uint width, height;
inputTexture.GetDimensions(width, height);
float4 result = inputTexture[pixel];
for (int i = 0; i < NUM_LIGHTS; ++i)
{
if(gLights[i].Type == SPOT_LIGHT)
{
float2 texCoord = float2(pixel) / float2(width, height); // cast needed: uint division would truncate to 0
float2 deltaTexCoord = (texCoord - mul(mul(float4(gLights[i].Position, 1.0f), gView), gProj).xy);
deltaTexCoord *= 1.0f / NUM_SAMPLES * Density;
float3 color = inputTexture[pixel].rgb;
float illuminationDecay = 1.0f;
for (int j = 0; j < NUM_SAMPLES; j++)
{
texCoord -= deltaTexCoord;
uint2 modifiedPixel = uint2(texCoord.x * width, texCoord.y * height);
float3 sample = inputTexture[modifiedPixel].rgb;
sample *= illuminationDecay * Weight;
color += sample;
illuminationDecay *= Decay;
}
result += float4(color * Exposure, 1);
}
}
outputTexture[pixel] = result;
}
This just blurs the light-source map, and it's surely not what I wanted.
Is there an example similar to what I'm trying to implement, or a simpler way to do this? I've spent a week on this issue but haven't achieved much.
Edit:
I did it! But there's an error in the direction of the light volumes.
[numthreads(32, 32, 1)]
void CS(uint3 dispatchID : SV_DispatchThreadID)
{
float4 result = { 0.0f, 0.0f, 0.0f, 0.0f };
uint2 pixel = dispatchID.xy;
uint width, height;
inputTexture.GetDimensions(width, height);
float2 texCoord = (float2(pixel) + 0.5f) / float2(width, height);
float depth = depthTexture[pixel].r;
float3 screenPos = GetPositionVS(texCoord, depth);
float3 rayEnd = float3(0.0f, 0.0f, 0.0f);
const uint sampleCount = 16;
const float stepSize = length(screenPos - rayEnd) / sampleCount;
// Perform ray marching to integrate light volume along view ray:
[loop]
for (uint i = 0; i < NUM_LIGHTS; ++i)
{
[branch]
if (gLights[i].Type == SPOT_LIGHT)
{
float3 V = float3(0.0f, 0.0f, 0.0f) - screenPos;
float cameraDistance = length(V);
V /= cameraDistance;
float marchedDistance = 0;
float accumulation = 0;
float3 P = screenPos + V * stepSize * dither(pixel.xy);
for (uint j = 0; j < sampleCount; ++j)
{
float3 L = mul(float4(gLights[i].Position, 1.0f), gView).xyz - P;
const float dist2 = dot(L, L);
const float dist = sqrt(dist2);
L /= dist;
//float3 viewDir = mul(float4(gLights[i].Direction, 1.0f), gView).xyz;
float3 viewDir = gLights[i].Direction;
float SpotFactor = dot(L, normalize(-viewDir));
float spotCutOff = gLights[i].outerCosine;
[branch]
if (SpotFactor > spotCutOff)
{
float attenuation = DoAttenuation(dist, gLights[i].Range);
float conAtt = saturate((SpotFactor - gLights[i].outerCosine) / (gLights[i].innerCosine - gLights[i].outerCosine));
conAtt *= conAtt;
attenuation *= conAtt;
attenuation *= ExponentialFog(cameraDistance - marchedDistance);
accumulation += attenuation;
}
marchedDistance += stepSize;
P = P + V * stepSize;
}
accumulation /= sampleCount;
result += max(0, float4(accumulation * gLights[i].Color * gLights[i].VolumetricStrength, 1));
}
}
outputTexture[pixel] = inputTexture[pixel] + result;
}
This is my compute shader, but when I don't multiply the direction by the view matrix, it goes wrong like this:
As you can see, the street lamp's volume direction is good, but the vehicle's headlight volume doesn't match its spot light's direction.
And when I do multiply the direction by the view matrix:
the headlights go wrong AND the street lamp goes wrong too.
I'm still looking for what's wrong in my CPU code, but I haven't found anything.
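One thing worth checking, as a guess: a spot light direction is a vector, not a point, so bringing it into view space must use w = 0; transforming it with w = 1 (as in the commented-out line above) adds the view matrix's translation to the direction, which skews each light differently. A minimal CPU-side sketch with DirectXMath, assuming the view matrix is stored as an XMFLOAT4X4 named gView and the Light fields match the shader above:

#include <DirectXMath.h>
using namespace DirectX;

XMMATRIX view = XMLoadFloat4x4(&gView);
// Positions are points: full affine transform (rotation + translation).
XMVECTOR posVS = XMVector3TransformCoord(XMLoadFloat3(&light.Position), view);
// Directions are vectors: rotation only, no translation.
XMVECTOR dirVS = XMVector3TransformNormal(XMLoadFloat3(&light.Direction), view);
dirVS = XMVector3Normalize(dirVS);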
This might be helpful: here's my shader code for spot lighting.
float CalcAttenuation(float d, float falloffStart, float falloffEnd)
{
return saturate((falloffEnd - d) / (falloffEnd - falloffStart));
}
float3 BlinnPhongModelLighting(float3 lightDiff, float3 lightVec, float3 normal, float3 view, Material mat)
{
const float m = mat.Exponent;
const float f = ((mat.IOR - 1) * (mat.IOR - 1)) / ((mat.IOR + 1) * (mat.IOR + 1));
const float3 fresnel0 = float3(f, f, f);
float3 halfVec = normalize(view + lightVec);
float roughness = (m + 8.0f) * pow(saturate(dot(halfVec, normal)), m) / 8.0f;
float3 fresnel = CalcReflectPercent(fresnel0, halfVec, lightVec);
float3 specular = fresnel * roughness;
specular = specular / (specular + 1.0f);
return (mat.Diffuse.rgb + specular * mat.Specular) * lightDiff;
}
float3 ComputeSpotLight(Light light, Material mat, float3 pos, float3 normal, float3 view)
{
float3 result = float3(0.0f, 0.0f, 0.0f);
bool bCompute = true;
float3 lightVec = light.Position - pos;
float d = length(lightVec);
if (d > light.FalloffEnd)
bCompute = false;
if (bCompute)
{
lightVec /= d;
float ndotl = max(dot(lightVec, normal), 0.0f);
float3 lightDiffuse = light.Diffuse * ndotl;
float att = CalcAttenuation(d, light.FalloffStart, light.FalloffEnd);
lightDiffuse *= att;
float spotFactor = pow(max(dot(-lightVec, light.Direction), 0.0f), light.SpotPower);
lightDiffuse *= spotFactor;
result = BlinnPhongModelLighting(lightDiffuse, lightVec, normal, view, mat);
}
return result;
}

Shadow map works with ortho projection, but not with perspective projection

I have implemented shadow mapping, and it works great as long as I use an orthographic projection (e.g. for shadows from the 'sun').
However, as soon as I switched to a perspective projection for a spotlight, the shadow map no longer works, even though I use the exact same matrix code that creates my (working) perspective camera projection.
Is there some pitfall with perspective shadow maps that I am not aware of?
This projection matrix (orthographic) works:
const float projectionSize = 50.0f;
const float left = -projectionSize;
const float right = projectionSize;
const float bottom = -projectionSize;
const float top = projectionSize;
const float r_l = right - left;
const float t_b = top - bottom;
const float f_n = zFar - zNear;
const float tx = - (right + left) / (right - left);
const float ty = - (top + bottom) / (top - bottom);
const float tz = - (zFar + zNear) / (zFar - zNear);
float* mout = sl_proj.data;
mout[0] = 2.0f / r_l;
mout[1] = 0.0f;
mout[2] = 0.0f;
mout[3] = 0.0f;
mout[4] = 0.0f;
mout[5] = 2.0f / t_b;
mout[6] = 0.0f;
mout[7] = 0.0f;
mout[8] = 0.0f;
mout[9] = 0.0f;
mout[10] = -2.0f / f_n;
mout[11] = 0.0f;
mout[12] = tx;
mout[13] = ty;
mout[14] = tz;
mout[15] = 1.0f;
This projection matrix (perspective) works as a camera, but fails to work with the shadow map:
const float f = 1.0f / tanf(fov/2.0f);
const float aspect = 1.0f;
float* mout = sl_proj.data;
mout[0] = f / aspect;
mout[1] = 0.0f;
mout[2] = 0.0f;
mout[3] = 0.0f;
mout[4] = 0.0f;
mout[5] = f;
mout[6] = 0.0f;
mout[7] = 0.0f;
mout[8] = 0.0f;
mout[9] = 0.0f;
mout[10] = (zFar+zNear) / (zNear-zFar);
mout[11] = -1.0f;
mout[12] = 0.0f;
mout[13] = 0.0f;
mout[14] = 2 * zFar * zNear / (zNear-zFar);
mout[15] = 0.0f;
The ortho projection creates light and shadows as expected (see below), taking into account that it is of course not realistic.
To create the light-view-projection matrix, I simply multiply the projection matrix by the inverse of the light transformation.
All of this works fine for the perspective camera, but not for the perspective (spot) light.
And by 'not working' I mean: zero light shows up, because somehow not a single fragment falls inside the light's view. (It is not a texture issue; it is a transformation issue.)
This was a case of a missing division by the homogeneous W coordinate in the GLSL code.
With the orthographic projection, not dividing by W was fine, because an orthographic projection leaves W at 1.
With the perspective projection, the light-space coordinate needs dividing by W.
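For clarity, the same fix expressed as host-side math (a sketch with GLM; the function name is illustrative):

#include <glm/glm.hpp>

// Project a world-space position into shadow-map coordinates.
// For an orthographic projection w stays 1, so skipping the divide
// happens to work; a perspective projection requires it.
glm::vec3 shadowCoord(const glm::mat4& lightViewProj, const glm::vec3& worldPos)
{
    glm::vec4 p = lightViewProj * glm::vec4(worldPos, 1.0f);
    glm::vec3 ndc = glm::vec3(p) / p.w; // perspective divide
    return ndc * 0.5f + 0.5f;           // [-1, 1] -> [0, 1] for the texture lookup
}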

Half of normal mapped object is inverted

I have been trying to implement normal mapping in Vulkan for 3 days now. I pre-calculate the tangents and bitangents in the simple OBJ loader I made and create the TBN matrix in the shader. For now I'm just trying to get it to work; optimizations are saved for later. I'm very confused about why this problem occurs. I have googled around a lot, which led me to check the handedness of the texture coordinates, but that hasn't solved my problem. I'm not sure what the problem could be or why this doesn't look correct. I worked with OpenGL before, and this problem occurred there too. The normal mapping works on another model, but not on this one.
Example of the problem
This is how I pre-calculate the tangents and bitangents:
for (unsigned int i = 0; i < vertices.size(); i += 3)
{
Vector3f deltapos1 = vertices[i + 1].position - vertices[i].position;
Vector3f deltapos2 = vertices[i + 2].position - vertices[i].position;
Vector2f deltauv1 = vertices[i + 1].uv - vertices[i].uv;
Vector2f deltauv2 = vertices[i + 2].uv - vertices[i].uv;
float f;
Vector3f tangent(1, 0, 0);
Vector3f bitangent(0, 1, 0);
float det = deltauv1.x * deltauv2.y - deltauv1.y * deltauv2.x;
if (det != 0)
{
f = 1.0f / det;
tangent = (deltapos1 * deltauv2.y - deltapos2 * deltauv1.y) * f;
bitangent = (deltapos2 * deltauv1.x - deltapos1 * deltauv2.x) * f;
}
vertices[i].tangent = tangent;
vertices[i + 1].tangent = tangent;
vertices[i + 2].tangent = tangent;
vertices[i].bitangent = bitangent;
vertices[i + 1].bitangent = bitangent;
vertices[i + 2].bitangent = bitangent;
}
for (unsigned int i = 0; i < vertices.size(); i++)
{
vertices[i].tangent = Math::normalize(vertices[i].tangent - vertices[i].normal * Math::dot(vertices[i].tangent, vertices[i].normal));
if (Math::dot(Math::cross(vertices[i].bitangent, vertices[i].tangent), vertices[i].normal) < 0.0f)
vertices[i].bitangent = Math::normalize(Math::cross(vertices[i].normal, vertices[i].tangent)) * -1;
else
vertices[i].bitangent = Math::normalize(Math::cross(vertices[i].normal, vertices[i].tangent));
}
Vertex shader
void main()
{
UV = vec2(inUv.x, 1.0f - inUv.y);
mat3 nmat = mat3(ubo.model);
gl_Position = ubo.mvp * vec4(inPosition, 1.0f);
Normal = normalize(nmat * inNormal);
vec3 T = normalize(nmat * inTangent);
vec3 B = normalize(nmat * inBitangent);
vec3 N = Normal;
//T = normalize(T - N * dot(T, N));
//B = normalize(cross(T, N));
mat3 TBN = mat3(T, B, N);
toModelSpace = TBN;
fragPos = vec3(ubo.model * vec4(inPosition, 1.0f));
viewPos = ubo.cameraPos;
}
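One detail worth double-checking in the vertex shader above: mat3(ubo.model) is only a correct normal matrix when the model transform has no non-uniform scale; otherwise the inverse transpose is needed. A one-line sketch of computing it on the CPU with GLM (model here stands for the assumed model matrix):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>

// Correct normal matrix even under non-uniform scale.
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(model));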
This is how I convert the normal map sample to model space:
vec3 normal = normalize(texture(imageBump, UV).rgb * 2.0f - 1.0f);
normal = normalize(toModelSpace * normal);

Point light Dual-paraboloid VSM in deferred rendering

I've been following this tutorial to implement variance shadow mapping for a point light in deferred rendering.
I'm using GLSL 3.3 and a left-handed coordinate system. Here is what I've been doing:
I render the scene to dual-paraboloid maps, storing depth and depth * depth.
Result:
The above image contains the front and back maps. The point light is at the center of the scene; you can see where it glows yellow the most.
Then I set up a full-screen shader pass.
I did this by translating the tutorial code from FX to GLSL.
Author's .fx code:
float4 TexturePS(float3 normalW : TEXCOORD0, float2 tex0 : TEXCOORD1, float3 pos : TEXCOORD2) : COLOR
{
float4 texColor = tex2D(TexS, tex0 * TexScale);
pos = mul(float4(pos, 1.0f), LightView);
float L = length(pos);
float3 P0 = pos / L;
float alpha = .5f + pos.z / LightAttenuation;
P0.z = P0.z + 1;
P0.x = P0.x / P0.z;
P0.y = P0.y / P0.z;
P0.z = L / LightAttenuation;
P0.x = .5f * P0.x + .5f;
P0.y = -.5f * P0.y + .5f;
float3 P1 = pos / L;
P1.z = 1 - P1.z;
P1.x = P1.x / P1.z;
P1.y = P1.y / P1.z;
P1.z = L / LightAttenuation;
P1.x = .5f * P1.x + .5f;
P1.y = -.5f * P1.y + .5f;
float depth;
float mydepth;
float2 moments;
if(alpha >= 0.5f)
{
moments = tex2D(ShadowFrontS, P0.xy).xy;
depth = moments.x;
mydepth = P0.z;
}
else
{
moments = tex2D(ShadowBackS, P1.xy).xy;
depth = moments.x;
mydepth = P1.z;
}
float lit_factor = (mydepth <= moments[0]);
float E_x2 = moments.y;
float Ex_2 = moments.x * moments.x;
float variance = min(max(E_x2 - Ex_2, 0.0) + SHADOW_EPSILON, 1.0);
float m_d = (moments.x - mydepth);
float p = variance / (variance + m_d * m_d); //Chebychev's inequality
texColor.xyz *= max(lit_factor, p + .2f);
return texColor;
}
My translated GLSL code:
void main() {
vec3 in_vertex = texture(scenePosTexture, texCoord).xyz; // get 3D vertex from 2D screen coordinate
vec4 vert = lightViewMat * vec4(in_vertex, 1); // project vertex to point light space (view from light position, look target is -Z)
float L = length(vert.xyz);
float distance = length(lightPos - in_vertex);
float denom = distance / lightRad + 1;
float attenuation = 1.0 / (denom * denom);
// to determine which paraboloid map to use
float alpha = vert.z / attenuation + 0.5f;
vec3 P0 = vert.xyz / L;
P0.z = P0.z + 1;
P0.x = P0.x / P0.z;
P0.y = P0.y / P0.z;
P0.z = L / attenuation;
P0.x = .5f * P0.x + .5f;
P0.y = -.5f * P0.y + .5f;
vec3 P1 = vert.xyz / L;
P1.z = 1 - P1.z;
P1.x = P1.x / P1.z;
P1.y = P1.y / P1.z;
P1.z = L / attenuation;
P1.x = .5f * P1.x + .5f;
P1.y = -.5f * P1.y + .5f;
// Variance shadow mapping
float depth;
float mydepth;
vec2 moments;
if(alpha >= 0.5f)
{
moments = texture(shadowMapFrontTexture, P0.xy).xy;
depth = moments.x;
mydepth = P0.z;
}
else
{
moments = texture(shadowMapBackTexture, P1.xy).xy;
depth = moments.x;
mydepth = P1.z;
}
// Original .fx code is: float lit_factor = (mydepth <= moments[0]);
// I'm not sure my translated code below is correct
float lit_factor = 0;
if (mydepth <= moments.x)
lit_factor = mydepth;
else
lit_factor = moments.x;
float E_x2 = moments.y;
float Ex_2 = moments.x * moments.x;
float variance = min(max(E_x2 - Ex_2, 0.0) + SHADOW_EPSILON, 1.0);
float m_d = (moments.x - mydepth);
float p = variance / (variance + m_d * m_d); //Chebychev's inequality
fragColor = texture(sceneTexture, texCoord).rgb; // sample original color
fragColor.rgb *= max(lit_factor, p + .2f);
}
Render result
Right now I'm clueless about what to change to render the shadow correctly. Could someone point it out for me?
A friend of mine pointed out that the Y component is flipped; that's why the shadow looked upside down. After negating P0's and P1's Y, it starts to show a quite reasonable shadow:
But another problem remains: the location of the shadow is wrong.
Why do you duplicate the paraboloid projection computation?
You compute it on 'vert', then on 'P0' and 'P1'; shouldn't you do it only on 'P0' and 'P1'? (The original code doesn't do this on 'pos'.)
EDIT:
Your lit_factor is wrong, it should be either 0.0 or 1.0.
You can use the step() GLSL intrinsic, like this:
float lit_factor = step(mydepth, moments[0]);

Procedural Texturing Checkerboard OpenGL

I'm encountering some difficulties implementing a procedural checkerboard texture. Here is what I need to get:
Here is what I get:
It's close, but my texture is rotated with respect to what I need to get.
Here is the code of my shader:
#version 330
in vec2 uv;
out vec3 color;
uniform sampler1D colormap;
void main() {
float sx = sin(10*3.14*uv.x)/2 + 0.5;
float sy = sin(10*3.14*uv.y)/2 + 0.5;
float s = (sx + sy)/2;
color = texture(colormap, s).rgb;
}
colormap is a mapping from 0 to 1, where 0 corresponds to red and 1 to green.
I think the problem comes from the formula I use, (sx + sy)/2. I need the squares not rotated, but aligned with the borders of the big square.
Does someone have an idea for the right formula?
Thanks.
You can add a rotation operation to "uv":
mat2 R(float degrees){
mat2 m = mat2(1);
float alpha = radians(degrees);
m[0][0] = cos(alpha);
m[0][1] = sin(alpha);
m[1][0] = -sin(alpha);
m[1][1] = cos(alpha);
return m;
}
void main(){
vec2 new_uv = R(45) * uv;
float sx = sin(10*3.14*new_uv.x)/2 + 0.5;
float sy = sin(10*3.14*new_uv.y)/2 + 0.5;
float s = (sx + sy)/2;
color = texture(colormap,s).rgb;
}
Maybe something like this:
float sx = sin(10.0 * M_PI * uv.x);
float sy = sin(10.0 * M_PI * uv.y);
float s = sx * sy / 2.0 + 0.5;
Example (without texture):
https://www.shadertoy.com/view/4sdSzn
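For reference, why the last formula is axis-aligned: sin(10*PI*u) * sin(10*PI*v) changes sign exactly on the grid lines u = k/10 and v = k/10, so thresholding it yields squares parallel to the quad's edges, whereas (sx + sy)/2 varies along the diagonals. A tiny C++ check of the formula (values only, no GL required):

#include <cmath>
#include <cstdio>

// Evaluate the axis-aligned checker formula at each cell center.
float checker(float u, float v) {
    const float PI = 3.14159265358979f;
    return std::sin(10.0f * PI * u) * std::sin(10.0f * PI * v) / 2.0f + 0.5f;
}

int main() {
    // Prints a 10x10 checkerboard of '#' and '.'.
    for (int j = 0; j < 10; ++j) {
        for (int i = 0; i < 10; ++i)
            std::putchar(checker((i + 0.5f) / 10.0f, (j + 0.5f) / 10.0f) > 0.5f ? '#' : '.');
        std::putchar('\n');
    }
}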