Drawing circles on a sphere - GLSL

I'm trying to draw lots of circles on a sphere using shaders. The basic algorithm is like this (a rough sketch of the single-circle version follows the list):
1. calculate the distance from the fragment (using its texture coordinates) to the location of the circle's center (the circle's center is also specified in texture coordinates)
2. calculate the angle from the fragment to the center of the circle
3. based on the angle, access a texture (which has 360 pixels in it and whose red channel specifies a radius distance) and retrieve the radius for the given angle
4. if the distance from the fragment to the circle's center is less than the retrieved radius then the fragment's color is red, otherwise blue
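For a single circle, a rough GLSL sketch of those four steps might look like this (circleCenter, angles and texCoord here are just placeholder names for whatever uniforms and varyings you actually use):
uniform sampler2D angles;      // 360 x 1 texture, red channel = radius for each degree
uniform vec2 circleCenter;     // circle center in texture coordinates
varying vec2 texCoord;         // the fragment's texture coordinates
void main()
{
    vec2 d = texCoord - circleCenter;
    float dist = length(d);                                         // step 1: distance
    float angle = degrees(atan(d.y, d.x)) + 180.0;                  // step 2: angle in [0, 360]
    float radius = texture2D(angles, vec2(angle / 360.0, 0.5)).r;   // step 3: radius lookup
    gl_FragColor = (dist < radius) ? vec4(1.0, 0.0, 0.0, 1.0)       // step 4: red inside,
                                   : vec4(0.0, 0.0, 1.0, 1.0);      //         blue outside
}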
I would like to draw ... say 60 red circles on a blue sphere. I got my shader to work for one circle, but how do I do 60? Here's what I've tried so far...
I passed in a data texture that specifies the radius for a given angle, but I notice artifacts creep in. I believe this is due to linear interpolation when I try to retrieve information from the data texture using:
float returnV = texture2D(angles, vec2(x, y)).r;
where angles is the data texture (Sampler2D) that contains the radius for a given angle, and x = angle / 360.0 (angle is 0 to 360) and y = 0 to 60 (y is the circle number)
I tried passing in a uniform float radii[360], but I cannot access radii with dynamic indexing. I even tried this mess ...
float getArrayValue(int index) {
if (index == 0) {
return radii[0];
}
else if (index == 1) {
return radii[1];
}
and so on ...
If I create a texture, place all of the circles on that texture and then multi-texture the blue sphere with the one containing the circles, it works, but as you would expect I get really bad aliasing. I like the idea of procedurally generating the circles based on the position of the fragment and that of the circle's center, because there is virtually no aliasing. However, how do I do more than one?
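For what it's worth, one way to extend the single-circle sketch above to many circles is to loop over them and give each circle its own row in the data texture, sampling at texel centers ((index + 0.5) / size) so that linear filtering does not blend neighbouring angles or neighbouring circle rows. This is only a hedged sketch, assuming a 360 x 60 data texture, a circleCenters uniform array, and hardware that allows indexing a uniform array with the loop counter:
uniform sampler2D angles;          // 360 x 60 texture: row i holds the radii for circle i
uniform vec2 circleCenters[60];    // one center per circle, in texture coordinates
varying vec2 texCoord;
void main()
{
    vec4 color = vec4(0.0, 0.0, 1.0, 1.0);                     // blue by default
    for (int i = 0; i < 60; ++i)
    {
        vec2 d = texCoord - circleCenters[i];
        float angle = mod(degrees(atan(d.y, d.x)) + 180.0, 360.0);
        // sample at texel centers so interpolation does not mix rows or columns
        vec2 uv = vec2((floor(angle) + 0.5) / 360.0, (float(i) + 0.5) / 60.0);
        if (length(d) < texture2D(angles, uv).r)
            color = vec4(1.0, 0.0, 0.0, 1.0);                  // red inside any circle
    }
    gl_FragColor = color;
}
Setting the data texture's filtering to nearest would achieve the same thing without the floor/texel-center arithmetic.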
Thx!!!
~Bolt

I have a shader that draws a circle on the terrain. It moves as the mouse moves.
Maybe it will give you some inspiration?
This is a fragment program; it is not the whole program, but you can add it to yours.
Try this...
For now you can hardcode some of the uniform parameters.
uniform float showCircle;
uniform float radius;
uniform vec4 mousePosition;
varying vec3 vertexCoord;
void calculateTerrainCircle(inout vec4 pixelColor)
{
if (showCircle == 1.0)
{
float xDist = vertexCoord.x - mousePosition.x;
float yDist = vertexCoord.y - mousePosition.y;
float dist = xDist * xDist + yDist * yDist;
float radius2 = radius * radius;
// only touch fragments inside the ring between 0.8 * radius and 1.2 * radius
if (dist < radius2 * 1.44 && dist > radius2 * 0.64)
{
float diff;
if (dist < radius2)
diff = (radius2 - dist) / (0.36 * radius2);
else
diff = (dist - radius2) / (0.44 * radius2);
// blend the red ring color with the underlying terrain color
pixelColor = mix(vec4(1.0, 0.0, 0.0, 1.0), pixelColor, diff);
}
}
}
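In case it is unclear how to hook this in, here is a minimal example of a main() that calls it (terrainTexture and texCoord are assumed placeholders for however you already compute the base color):
uniform sampler2D terrainTexture;   // assumed: your existing terrain texture
varying vec2 texCoord;              // assumed: your existing texture coordinate
void main()
{
    vec4 pixelColor = texture2D(terrainTexture, texCoord);   // base terrain color
    calculateTerrainCircle(pixelColor);                      // the function above
    gl_FragColor = pixelColor;
}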
and in the vertex shader you add:
varying vec3 vertexCoord;
void main()
{
gl_Position = ftransform();
vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
}

ufukgun, if you multiply a matrix by its inverse you get the identity.
Your:
vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
is therefore equivalent to
vertexCoord = vec3(gl_Vertex);

How to generate camera rays for ray casting

I am trying to make a simple voxel engine with OpenGL and C++. My first step is to send out rays from the camera and detect if a ray intersects with something (for testing purposes it's just two planes). I have got it working without the camera rotating by creating a full-screen quad and programming the fragment shader to send out a ray for every fragment (for now I'm just assuming a fragment is a pixel) in the direction (texCoord.x, texCoord.y, -1). Now I am trying to implement camera rotation.
I have tried to generate a rotation matrix on the CPU and send that to the shader, which multiplies it with every ray. However, when I rotate the camera, the planes start to stretch in a way which I can only describe with this video:
https://www.youtube.com/watch?v=6NScMwnPe8c
Here is the code that creates the matrix and is run every frame:
float pi = 3.141592;
// camRotX and Y are defined elsewhere and can be controlled from the keyboard during runtime.
glm::vec3 camEulerAngles = glm::vec3(camRotX, camRotY, 0);
std::cout << "X: " << camEulerAngles.x << " Y: " << camEulerAngles.y << "\n";
// Convert to radians
camEulerAngles.x = camEulerAngles.x * pi / 180;
camEulerAngles.y = camEulerAngles.y * pi / 180;
camEulerAngles.z = camEulerAngles.z * pi / 180;
// Generate quaternion
glm::quat camRotation;
camRotation = glm::quat(camEulerAngles);
// Generate rotation matrix from quaternion
glm::mat4 camToWorldMatrix = glm::toMat4(camRotation);
// No transformation matrix is created because the rays should be relative to 0,0,0
// Send the rotation matrix to the shader
int camTransformMatrixID = glGetUniformLocation(shader, "cameraTransformationMatrix");
glUniformMatrix4fv(camTransformMatrixID, 1, GL_FALSE, glm::value_ptr(camToWorldMatrix));
And the fragment shader:
#version 330 core
in vec4 texCoord;
layout(location = 0) out vec4 color;
uniform vec3 cameraPosition;
uniform vec3 cameraTR;
uniform vec3 cameraTL;
uniform vec3 cameraBR;
uniform vec3 cameraBL;
uniform mat4 cameraTransformationMatrix;
uniform float fov;
uniform float aspectRatio;
float pi = 3.141592;
int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
if(direction.z != 0)
{
float multiplicationFactorFront = cellPosition.z - origin.z;
if(multiplicationFactorFront > 0){
vec2 interceptFront = vec2(direction.x * multiplicationFactorFront + origin.x,
direction.y * multiplicationFactorFront + origin.y);
if(interceptFront.x > cellPosition.x && interceptFront.x < cellPosition.x + cellSize &&
interceptFront.y > cellPosition.y && interceptFront.y < cellPosition.y + cellSize)
{
return 1;
}
}
float multiplicationFactorBack = cellPosition.z + cellSize - origin.z;
if(multiplicationFactorBack > 0){
vec2 interceptBack = vec2(direction.x * multiplicationFactorBack + origin.x,
direction.y * multiplicationFactorBack + origin.y);
if(interceptBack.x > cellPosition.x && interceptBack.x < cellPosition.x + cellSize &&
interceptBack.y > cellPosition.y && interceptBack.y < cellPosition.y + cellSize)
{
return 2;
}
}
}
return 0;
}
void main()
{
// For now I'm not accounting for FOV and aspect ratio because I want to get the rotation working first
vec4 beforeRotateRayDirection = vec4(texCoord.x,texCoord.y,-1,0);
// Apply the rotation matrix that was generated on the cpu
vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
int t = RayHitCell(cameraPosition, rayDirection, vec3(0,0,5), 1);
if(t == 1)
{
// Hit front plane
color = vec4(0, 0, 1, 0);
}else if(t == 2)
{
// Hit back plane
color = vec4(0, 0, 0.5, 0);
}else{
// background color
color = vec4(0, 1, 0, 0);
}
}
Okay. It's really hard to know what is wrong, but I will try nonetheless.
Here are a few tips and notes:
1) You can debug directions by mapping them to RGB colors. Keep in mind you should normalize the vectors and map them from (-1, 1) to (0, 1). Just do the dir * 0.5 + 0.5 type of thing. Example:
color = vec4(normalize(rayDirection) * 0.5 + 0.5, 1.0);
2) You can get the rotation matrix in a more straightforward manner. The quaternion starts out as the identity (looking forward); it is then rotated first around the Y axis (horizontal look) and only then around the X axis (vertical look). Keep in mind that the rotation order is implementation dependent if you initialize from Euler angles. Use mat4_cast to avoid the experimental glm extension (gtx) whenever possible. Example:
// Start from the identity rotation, then apply yaw (Y) and pitch (X)
glm::quat camRotation = glm::quat(glm::vec3(0, 0, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotY), glm::vec3(0, 1, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotX), glm::vec3(1, 0, 0));
glm::mat4 camToWorldMatrix = glm::mat4_cast(camRotation);
3) Your beforeRotateRayDirection is a vector that (probably) ranges from (-1, -1, -1) all the way to (1, 1, -1), which is not normalized; the length of (1, 1, 1) is √3 ≈ 1.732. Be sure you have taken that into account in your collision math, or just normalize the vector.
My partial answer so far...
Your collision test is a bit weird. It appears you want to cast the ray onto the Z planes of the given cell (but twice, once for the front face and once for the back face). I have reviewed your code logic and it makes some sense, but without the vertex program, and thus without knowing what range the texCoord values cover, it is not possible to be sure. You might want to rethink your logic along these lines:
int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
//Get triangle side vectors
vec3 tu = vec3(cellSize,0,0); //Triangle U component
vec3 tv = vec3(0,cellSize,0); //Triangle V component
//Determinant for inverse matrix
vec3 q = cross(direction, tv);
float det = dot(tu, q);
//if(abs(det) < 0.0000001) //If too close to zero
// return;
float invdet = 1.0/det;
//Solve component parameters
vec3 s = origin - cellPosition;
float u = dot(s, q) * invdet;
if(u < 0.0 || u > 1.0)
return 0;
vec3 r = cross(s, tu);
float v = dot(direction, r) * invdet;
if(v < 0.0 || v > 1.0)
return 0;
float t = dot(tv, r) * invdet;
if(t <= 0.0)
return 0;
return 1;
}
void main()
{
// For now I'm not accounting for FOV and aspect ratio because I want to get the
// rotation working first
vec4 beforeRotateRayDirection = vec4(texCoord.x, texCoord.y, -1, 0);
// Apply the rotation matrix that was generated on the cpu
vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
int t = RayHitCell(cameraPosition, normalize(rayDirection), vec3(0,0,5), 1);
if (t == 1)
{
// Hit front plane
color = vec4(0, 0, 1, 0);
}
else
{
// background color
color = vec4(0, 1, 0, 0);
}
}
This should give you a plane; let me know if it works. A cube will be very easy to do afterwards.
PS.: u and v can be used for texture mapping (see the sketch below).
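For example, the intersection test above can be extended with an out parameter so that u and v are returned to the caller and fed straight into a texture lookup. This is only a sketch; cellTexture is an assumed uniform that is not part of the original code:
uniform sampler2D cellTexture;   // assumed: the texture you want on the cell face
int RayHitCellUV(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize, out vec2 uv)
{
    vec3 tu = vec3(cellSize, 0.0, 0.0);   // triangle U component
    vec3 tv = vec3(0.0, cellSize, 0.0);   // triangle V component
    vec3 q = cross(direction, tv);
    float det = dot(tu, q);               // determinant for the inverse matrix
    float invdet = 1.0 / det;
    vec3 s = origin - cellPosition;
    float u = dot(s, q) * invdet;
    if (u < 0.0 || u > 1.0) return 0;
    vec3 r = cross(s, tu);
    float v = dot(direction, r) * invdet;
    if (v < 0.0 || v > 1.0) return 0;
    float t = dot(tv, r) * invdet;
    if (t <= 0.0) return 0;
    uv = vec2(u, v);                      // face coordinates in [0, 1] x [0, 1]
    return 1;
}
In main you would then call int t = RayHitCellUV(cameraPosition, normalize(rayDirection), vec3(0, 0, 5), 1.0, uv); and, on a hit, sample with color = texture(cellTexture, uv);.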

I need help converting this 2D sky shader to 3D

I found this shader function on GitHub and managed to get it working in GameMaker Studio 2, my current programming suite of choice. However this is a 2D effect that doesn't take into account the camera up vector, nor fov. Is there any way that can be added to this? I'm only intermediate skill level when it comes to shaders, so I'm not sure exactly what route to take, whether it would even be considered worth it at this point, or if I should start with a different example.
uniform vec3 u_sunPosition;
varying vec2 v_vTexcoord;
varying vec4 v_vColour;
varying vec3 v_vPosition;
#define PI 3.141592
#define iSteps 16
#define jSteps 8
vec2 rsi(vec3 r0, vec3 rd, float sr) {
// ray-sphere intersection that assumes
// the sphere is centered at the origin.
// No intersection when result.x > result.y
float a = dot(rd, rd);
float b = 2.0 * dot(rd, r0);
float c = dot(r0, r0) - (sr * sr);
float d = (b*b) - 4.0*a*c;
if (d < 0.0) return vec2(1e5,-1e5);
return vec2(
(-b - sqrt(d))/(2.0*a),
(-b + sqrt(d))/(2.0*a)
);
}
vec3 atmosphere(vec3 r, vec3 r0, vec3 pSun, float iSun, float rPlanet, float rAtmos, vec3 kRlh, float kMie, float shRlh, float shMie, float g) {
// Normalize the sun and view directions.
pSun = normalize(pSun);
r = normalize(r);
// Calculate the step size of the primary ray.
vec2 p = rsi(r0, r, rAtmos);
if (p.x > p.y) return vec3(0,0,0);
p.y = min(p.y, rsi(r0, r, rPlanet).x);
float iStepSize = (p.y - p.x) / float(iSteps);
// Initialize the primary ray time.
float iTime = 0.0;
// Initialize accumulators for Rayleigh and Mie scattering.
vec3 totalRlh = vec3(0,0,0);
vec3 totalMie = vec3(0,0,0);
// Initialize optical depth accumulators for the primary ray.
float iOdRlh = 0.0;
float iOdMie = 0.0;
// Calculate the Rayleigh and Mie phases.
float mu = dot(r, pSun);
float mumu = mu * mu;
float gg = g * g;
float pRlh = 3.0 / (16.0 * PI) * (1.0 + mumu);
float pp = 1.0 + gg - 2.0 * mu * g;
float pMie = 3.0 / (8.0 * PI) * ((1.0 - gg) * (mumu + 1.0)) / (sign(pp)*pow(abs(pp), 1.5) * (2.0 + gg));
// Sample the primary ray.
for (int i = 0; i < iSteps; i++) {
// Calculate the primary ray sample position.
vec3 iPos = r0 + r * (iTime + iStepSize * 0.5);
// Calculate the height of the sample.
float iHeight = length(iPos) - rPlanet;
// Calculate the optical depth of the Rayleigh and Mie scattering for this step.
float odStepRlh = exp(-iHeight / shRlh) * iStepSize;
float odStepMie = exp(-iHeight / shMie) * iStepSize;
// Accumulate optical depth.
iOdRlh += odStepRlh;
iOdMie += odStepMie;
// Calculate the step size of the secondary ray.
float jStepSize = rsi(iPos, pSun, rAtmos).y / float(jSteps);
// Initialize the secondary ray time.
float jTime = 0.0;
// Initialize optical depth accumulators for the secondary ray.
float jOdRlh = 0.0;
float jOdMie = 0.0;
// Sample the secondary ray.
for (int j = 0; j < jSteps; j++) {
// Calculate the secondary ray sample position.
vec3 jPos = iPos + pSun * (jTime + jStepSize * 0.5);
// Calculate the height of the sample.
float jHeight = length(jPos) - rPlanet;
// Accumulate the optical depth.
jOdRlh += exp(-jHeight / shRlh) * jStepSize;
jOdMie += exp(-jHeight / shMie) * jStepSize;
// Increment the secondary ray time.
jTime += jStepSize;
}
// Calculate attenuation.
vec3 attn = exp(-(kMie * (iOdMie + jOdMie) + kRlh * (iOdRlh + jOdRlh)));
// Accumulate scattering.
totalRlh += odStepRlh * attn;
totalMie += odStepMie * attn;
// Increment the primary ray time.
iTime += iStepSize;
}
// Calculate and return the final color.
return iSun * (pRlh * kRlh * totalRlh + pMie * kMie * totalMie);
}
vec3 ACESFilm( vec3 x )
{
float tA = 2.51;
float tB = 0.03;
float tC = 2.43;
float tD = 0.59;
float tE = 0.14;
return clamp((x*(tA*x+tB))/(x*(tC*x+tD)+tE),0.0,1.0);
}
void main() {
vec3 color = atmosphere(
normalize( v_vPosition ), // normalized ray direction
vec3(0,6372e3,0), // ray origin
u_sunPosition, // position of the sun
22.0, // intensity of the sun
6371e3, // radius of the planet in meters
6471e3, // radius of the atmosphere in meters
vec3(5.5e-6, 13.0e-6, 22.4e-6), // Rayleigh scattering coefficient
21e-6, // Mie scattering coefficient
8e3, // Rayleigh scale height
1.2e3, // Mie scale height
0.758 // Mie preferred scattering direction
);
// Apply exposure.
color = ACESFilm( color );
gl_FragColor = vec4(color, 1.0);
}
However this is a 2D effect that doesn't take into account the camera up vector, nor fov.
If you want to draw a sky in 3D, then you have to draw it on the back (far) plane of normalized device space. Normalized device space is a cube whose left, bottom, near corner is at (-1, -1, -1) and whose right, top, far corner is at (1, 1, 1).
The back plane is the quad with:
bottom left: (-1, -1, 1)
bottom right: (1, -1, 1)
top right: (1, 1, 1)
top left: (-1, 1, 1)
Render this quad. Note, the vertex coordinates do not have to be transformed by any matrix, because they already are normalized device space coordinates. But you do have to transform the ray which is used for the sky (the direction which is passed to atmosphere).
This ray has to be a direction in world space, from the camera position towards the sky. From the vertex coordinate of the quad you get a ray in normalized device space; you have to transform this ray to world space. The inverse projection matrix (MATRIX_PROJECTION) transforms from normalized device space to view space, and the inverse view matrix (MATRIX_VIEW) transforms from view space to world space. Use these matrices in the vertex shader:
attribute vec3 in_Position;
varying vec3 v_world_ray;
void main()
{
gl_Position = vec4(in_Position, 1.0);
vec3 proj_ray = vec3(inverse(gm_Matrices[MATRIX_PROJECTION]) * vec4(in_Position.xyz, 1.0));
v_world_ray = vec3(inverse(gm_Matrices[MATRIX_VIEW]) * vec4(proj_ray.xyz, 0.0));
}
In the fragment shader you have to rotate the ray by 90° around the x axis, but that is just caused by the way the ray is interpreted by function atmosphere:
varying vec3 v_world_ray;
// [...]
void main() {
vec3 world_ray = vec3(v_world_ray.x, v_world_ray.z, -v_world_ray.y);
vec3 color = atmosphere(
normalize( world_ray.xyz ), // normalized ray direction
vec3(0,6372e3,0), // ray origin
u_sunPosition, // position of the sun
22.0, // intensity of the sun
6371e3, // radius of the planet in meters
6471e3, // radius of the atmosphere in meters
vec3(5.5e-6, 13.0e-6, 22.4e-6), // Rayleigh scattering coefficient
21e-6, // Mie scattering coefficient
8e3, // Rayleigh scale height
1.2e3, // Mie scale height
0.758 // Mie preferred scattering direction
);
// Apply exposure.
color = ACESFilm( color );
gl_FragColor = vec4(color.rgb, 1.0);
}

GLSL Rounded Rectangle Corners Are Stretched

I'm programming a GUI library in OpenGL and decided to add rounded corners because I feel like it gives a much more professional look to the units.
I've implemented the common
length(max(abs(p) - b, 0.0)) - radius
method and it almost works perfectly, except for the fact that the corners seem as though they are stretched:
My fragment shader:
in vec2 passTexCoords;
out vec4 fragment;
uniform vec4 color;
uniform int width;
uniform int height;
uniform int radius;
void main() {
fragment = color;
vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width, height);
float alpha = 1.0 - clamp(length(max(pos - (vec2(width, height) - radius), 0.0)) - radius, 0.0, 1.0);
fragment.a = alpha;
}
The stretching does make sense to me, but when I replace it with
vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width, height) * vec2(scaleX, scaleY);
and
float alpha = 1.0 - clamp(length(max(pos - (vec2(width, height) * vec2(scaleX, scaleY) - radius), 0.0)) - radius, 0.0, 1.0);
(where scaleX and scaleY are scalars between 0.0 and 1.0 that represent the width and height of the rectangle relative to the screen) the rectangle almost completely disappears:
The problem is that the distances are not scaled into screen space, and are therefore stretched along the longer window axis. You can fix this by multiplying the normalized position by the aspect ratio of the screen, along with the other parameters for the box. I wrote an example on Shadertoy that does this:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
// Input info
vec2 boxPos; // The position of the center of the box (in normalized coordinates)
vec2 boxBnd; // The half-bounds (radii) of the box (in normalized coordinates)
float radius;// Radius
boxPos = vec2(0.5, 0.5); // center of the screen
boxBnd = vec2(0.25, 0.25); // half of the area
radius = 0.1;
// Normalize the pixel coordinates (this is "passTexCoords" in your case)
vec2 uv = fragCoord/iResolution.xy;
// (Note: iResolution.xy holds the x and y dimensions of the window in pixels)
vec2 aspectRatio = vec2(iResolution.x/iResolution.y, 1.0);
// In order to make sure visual distances are preserved, we multiply everything by aspectRatio
uv *= aspectRatio;
boxPos *= aspectRatio;
boxBnd *= aspectRatio;
// Time varying pixel color
vec3 col = 0.5 + 0.5*cos(iTime+uv.xyx+vec3(0,2,4));
// Output to screen
float alpha = length(max(abs(uv - boxPos) - boxBnd, 0.0)) - radius;
// Shadertoy doesn't have an alpha in this case
if(alpha <= 0.0){
fragColor = vec4(col,1.0);
}else{
fragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
}
There may be a less computationally expensive way to do this, but this was a simple solution I cooked up.
I assume that passTexCoords is a texture coordinate in the range [0, 1], that width and height are the size of the screen, and that scaleX and scaleY are the ratio of the green area to the size of the screen.
Calculate the absolute position (pos) of the current fragment relative to the center of the green area, in pixel units:
vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * vec2(width*scaleX, height*scaleY);
Calculate the vector from the center point of the corner arc to the current fragment:
vec2 arc_cpt_vec = max(pos - vec2(width*scaleX, height*scaleY) + radius, 0.0);
If the length of the vector is greater than the radius, then the fragment has to be skipped:
float alpha = length(arc_cpt_vec) > radius ? 0.0 : 1.0;
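Putting those three pieces together, the whole fragment shader could look something like this sketch (keeping the uniform names from the question and assuming scaleX and scaleY are passed in as float uniforms):
in vec2 passTexCoords;
out vec4 fragment;
uniform vec4 color;
uniform int width;      // screen width in pixels
uniform int height;     // screen height in pixels
uniform int radius;     // corner radius in pixels
uniform float scaleX;   // rectangle width relative to the screen
uniform float scaleY;   // rectangle height relative to the screen
void main() {
    vec2 size = vec2(width, height) * vec2(scaleX, scaleY);
    // absolute position of the fragment relative to the rectangle's center, in pixels
    vec2 pos = (abs(passTexCoords - 0.5) + 0.5) * size;
    // vector from the corner arc's center point to the fragment
    vec2 arc_cpt_vec = max(pos - size + float(radius), 0.0);
    fragment = color;
    fragment.a = length(arc_cpt_vec) > float(radius) ? 0.0 : 1.0;
}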

Why won't my falloff work for this dot product based spotlight?

I made a spotlight that
Projects 3D models onto a render target from each light's POV to simulate shadows
Cuts a circle out of the square of light that has been projected onto the render target as a result of the light frustum, and then only lights up the pixels inside that circle (except the shadowed parts, of course), so you don't see the square edges of the projected frustum.
After doing an if check to see whether the dot product of the light direction and the light-to-vertex vector is greater than 0.95 to get my initial cutoff, I then multiply the light intensity value inside the resulting circle by that same dot product value, which should range between 0.95 and 1.0.
This should give the light inside that circle a falloff from 100% lit down to 0% lit toward the edge of the circle. However, there is no falloff; it's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help, thank you so much.
float CalculateSpotLightIntensity(
float3 LightPos_VertexSpace,
float3 LightDirection_WS,
float3 SurfaceNormal_WS)
{
//float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
float3 lightToVertex_WS = -LightPos_VertexSpace;
float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));
// METALLIC EFFECT (deactivate for now)
float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
if(dotProduct > .95 /*&& metalEffect > .55*/)
{
return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
//return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
//return dotProduct;
}
else
{
return 0;
}
}
float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
float2 projectTexCoord;
float depthValue;
float lightDepthValue;
float4 textureColor;
// Set the bias value for fixing the floating point precision issues.
float bias = 0.001f;
// Set the default output color to the ambient light value for all pixels.
float4 lightColor = cb_ambientColor;
/////////////////// NORMAL MAPPING //////////////////
float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);
// Expand the range of the normal value from (0, +1) to (-1, +1).
bumpMap = (bumpMap * 2.0f) - 1.0f;
// Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));
//////////////// LIGHT LOOP ////////////////
for(int i = 0; i < NUM_LIGHTS; ++i)
{
// Calculate the projected texture coordinates.
projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
if((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
{
// Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;
// Calculate the depth of the light.
lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;
// Subtract the bias from the lightDepthValue.
lightDepthValue = lightDepthValue - bias;
float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue );
// Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
// If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
if(lightDepthValue < depthValue)
{
// Calculate the amount of light on this pixel.
float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));
if(lightIntensity > 0.0f)
{
// Determine the final diffuse color based on the diffuse color and the amount of light intensity.
float spotLightIntensity = CalculateSpotLightIntensity(
input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
cb_lights[i].lightDirection,
bumpNormal/*input.normal*/);
lightColor += cb_lights[i].diffuseColor*spotLightIntensity* .18f; // spotlight
//lightColor += cb_lights[i].diffuseColor*lightIntensity* .2f; // square light
}
}
}
}
// Saturate the final light color.
lightColor = saturate(lightColor);
// lightColor = saturate( CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));
// TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
input.tex.x += textureTranslation;
// BLENDING
float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));
// Combine the light and texture color.
float4 finalColor = lightColor * textureColor;
/////// TRANSPARENCY /////////
//finalColor.a = 0.2f;
return finalColor;
}
Oops! It's because the range of 0.95 to 1.0 was too small to make a difference! So I had to expand the range to 0 to 1 by doing
float expandedRange = (dotProduct - .95)/.05f;
return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))*expandedRange*expandedRange);
Now it has a soft edge. A little too soft for me, honestly. Now I'm just doing a quadratic falloff by squaring the expanded range, as you can see. Any tips on making it look nicer? Let me know, thanks.
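One common alternative for finer control over the softness (not from the original code, just a suggestion) is to replace the hard 0.95 cutoff and the manual range expansion with a smoothstep window over the dot product, e.g.:
float CalculateSpotLightIntensity(
    float3 LightPos_VertexSpace,
    float3 LightDirection_WS,
    float3 SurfaceNormal_WS)
{
    float3 lightToVertex_WS = -LightPos_VertexSpace;
    float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));
    // 0 at the outer edge (0.95), 1 at the inner edge (0.99), smooth Hermite curve in between;
    // move the two edges closer together for a harder rim, further apart for a softer one
    float cone = smoothstep(0.95f, 0.99f, dotProduct);
    return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * cone;
}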

Texture Warping Shader: Polar to Rectangular Coordinates

I am writing a 2D game using OpenGL and I have planned a shadow casting algorithm which needs a transformation of a texture from Polar Coordinates to Rectangular Coordinates. The desired effect is the following:
From this:
To this:
I know the formulas for converting coordinates between both Polar and Rectangular systems but I am having problems on writing the shader to achieve the desired effect.
My shader receives a texture as an input and should draw the warped texture to the screen. I planned the following (knowing that the fragment shader acts upon one fragment at a time):
1. Find the coordinates of the current fragment using gl_FragCoord.xy
2. Determine r and theta that correspond to the point (x, y)
3. Transform r and theta into texture_x and texture_y (which will be used to sample the texture)
4. Transfer the sampled pixel to the current fragment
My final result is the same input texture rotated 90 degrees clockwise. I think I'm missing something in step 3; I might just be getting back the same x and y of the current fragment, because I'm simply applying both the transform and the inverse transform formulas.
How should I proceed to get the expected result?
Here is my shader:
#version 120
uniform sampler2D tex;
void main() {
vec2 fragCoords = gl_FragCoord.xy - vec2(128, 128); //shift the coordinates so that 0, 0 is in the center of the screen (the final texture is 256 * 256)
fragCoords /= vec2(256, 256);
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
if (fragCoords.y/fragCoords.x <= 0.5 && fragCoords.y/fragCoords.x >= -0.5) {
r *= 1/(256*sin(theta));
} else {
r *= 1/(0.5*256*cos(theta));
}
vec2 texCoords = vec2(r, theta);
vec4 texFrag = texture2D(tex, texCoords);
gl_FragColor = texFrag * vec4(1.0, 0.0, 0.0, 1.0);
}
In your shader you're first translating into polar coordinates
float r = sqrt(pow(fragCoords.x, 2) + pow(fragCoords.y, 2));
float theta = atan(fragCoords.y, fragCoords.x);
and then you're translating them back into Cartesian coordinates
float tX = r * sin(theta);
float tY = r * cos(theta);
You want to stay in polar coordinates, so just plug r and theta into the texture coordinates
vec2 texCoords = vec2(r , theta);
vec4 texFrag = texture2D(tex, texCoords);
However, by the looks of the images you pasted, there's some renormalization step involved, so that (r, theta) will cover a rectangular area. If I'm not entirely mistaken, r is scaled by the distance it takes a ray from the center-bottom to intersect with the rectangular area. If we assume theta = 0 to be straight up, then for the range [-atan(0.5) … atan(0.5)] it's scaled by 1/(height*sin(theta)), and outside that range by 1/(0.5*width*cos(theta)).