Volume rendering from inside volume - glsl

We've been doing lots of work trying to volume render 3D cloud fields in WebGL. The approach we've taken so far is outlined here - the start position of each ray is the current position on the front face of the volume cube, and the end position is calculated from a previous pass, which encodes the xyz values into a back-face texture.
How can we extend this to work when the camera is inside the volume? Do we need to create smaller volume cubes on the fly? Can we just change the shader to start marching from the camera instead of the front face, and project onto the back of the cube?
We're not really sure where to start with this!
Thanks in advance

Render only a single pass.
In that pass you render the back faces only. The camera position needs to be translated from world coordinates into a coordinate system built from the three axes (with their sizes) of the volume box you render. Your goal is to create a 4x4 matrix in which all column vectors are a vec4(...,0) and the x,y,z of these vectors are the x-, y- and z-axis directions of the volume box, scaled by the box's length along each axis. If the box is parallel to the x axis, that vector is (1,0,0); if it is stretched to (2,0,0), then that is its own x axis and that becomes the column vector for column 0 of the matrix. Do the same with the y and z axes and their lengths. The last column vector of the matrix is the position of the box as vec4(tx,ty,tz,1). This matrix then defines a coordinate system, and you use it to transform the camera position into the unit (0,0,0)-(1,1,1) box of the volume.
Create the inverse of that volume box matrix and multiply the camera position as vec4(camPos, 1) from the right onto invVolMatrix. Send the resulting vec3 as a uniform to the shader.
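A minimal sketch of that setup, written in GLSL notation for illustration (the names xAxis, yAxis, zAxis, boxOrigin and camPosWorld are assumptions; inverse() requires GLSL ES 3.00 / WebGL2, so with WebGL1 you would build and invert this matrix on the CPU and upload only the resulting local camera position):
mat4 volMatrix = mat4( vec4( xAxis, 0.0 ),     // column 0: x axis scaled by the box width
vec4( yAxis, 0.0 ),                            // column 1: y axis scaled by the box height
vec4( zAxis, 0.0 ),                            // column 2: z axis scaled by the box depth
vec4( boxOrigin, 1.0 ) );                      // column 3: box position
// transform the world-space camera into the unit (0,0,0)-(1,1,1) box
vec3 LOCAL_CAM_POS = ( inverse( volMatrix ) * vec4( camPosWorld, 1.0 ) ).xyz;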
Render only back faces with (0,0,0) to (1,1,1) coordinates on their respective volume box corners - as you already did. In your shader you now have:
the uniform camera position (localised into the box's coordinate system)
the interpolated back-face volume texture coordinate
the knowledge that your volume box is a unit cube in a local coordinate system with a diagonal from (0,0,0) to (1,1,1)
In the shader do:
varying vec3 vLocalUnitTexCoord; // backface interpolated coordinate
uniform vec3 LOCAL_CAM_POS; // localised camPos
struct AABB {
vec3 min; // (0,0,0)
vec3 max; // (1,1,1)
};
struct Ray {
vec3 origin; vec3 dir;
};
float getUnitAABBEntry( in Ray r ) {
AABB b;
b.min = vec3( 0 );
b.max = vec3( 1 );
// compute clipping for box.min and box.max corner
vec3 rInvDir = vec3( 1.0 ) / r.dir;
vec3 tMinima = ( b.min - r.origin ) * rInvDir;
vec3 tMaxima = ( b.max - r.origin ) * rInvDir;
// sort for nearest corner
vec3 tEntries = min( tMinima, tMaxima );
// find first real entry value of 3 t-distance values in vec3 container
vec2 tMaxEntryCandidates = max( vec2( tEntries.st ), vec2( tEntries.pp ) );
float tMaxEntry = max( tMaxEntryCandidates.s, tMaxEntryCandidates.t );
return tMaxEntry; // negative if the camera is already inside the box
}
vec3 getCloserPos( in vec3 camera, in vec3 frontFaceIntersection, in float t ) {
float useFrontCoord = 0.5 + 0.5 * sign( t );
vec3 startPos = mix( camera, frontFaceIntersection, useFrontCoord );
return startPos;
}
void main(void)
{
Ray r;
r.origin = LOCAL_CAM_POS;
r.dir = normalize( vLocalUnitTexCoord - LOCAL_CAM_POS );
float t = getUnitAABBEntry( r );
vec3 frontFaceLocalUnitTexCoord = r.origin + r.dir * t;
vec3 startPos = getCloserPos( LOCAL_CAM_POS, frontFaceLocalUnitTexCoord, t );
// loop for integration follows here
vec3 start = startPos;
vec3 end = vLocalUnitTexCoord;
...for loop..etc...
}
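A minimal sketch of the elided integration loop, assuming a hypothetical helper float sampleDensity(vec3 p) that reads the cloud field (a sampler3D lookup with WebGL2, or a tiled 2D texture lookup with WebGL1) and a fixed step count:
const int NUM_STEPS = 64;                    // assumption: fixed march length
vec3 stepVec = ( end - start ) / float( NUM_STEPS );
vec3 p = start;
vec4 accum = vec4( 0.0 );
for ( int i = 0; i < NUM_STEPS; ++i ) {
float density = sampleDensity( p );          // read the cloud field at the sample point
vec4 src = vec4( vec3( 1.0 ), density );     // map density to colour and alpha
src.rgb *= src.a;                            // pre-multiplied alpha
accum += ( 1.0 - accum.a ) * src;            // front-to-back compositing
if ( accum.a > 0.99 ) break;                 // early out once nearly opaque
p += stepVec;
}
gl_FragColor = accum;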
Happy coding!

Related

SSAO implementation in Babylon JS and GLSL, using view ray for depth comparison

I'm trying to create my own SSAO shader in forward rendering (not in post processing) with GLSL. I'm encountering some issues, but I really can't figure out what's wrong with my code.
It is created with the Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculation without projecting and unprojecting in screen space, I'd rather use the view ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an incidence on my code.
Here are my classic steps:
First, I calculate the positions of my four camera far-plane corners in my JS code. They should be constant every frame, as they are calculated in view space.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy/2) * ratio);
var xFarPlane = far * tan(fovx/2);
var yFarPlane = far * tan(fovy/2);
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);
var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];
for (var i = 0; i < 4; i++) {
var vecTemp = farCornersVec[i];
farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader - that is why the vector coordinates are serialized into the farCorners[] array.
In my vertex shader, position.x and position.y signs let the shader know which corner to use at each pass.
These corners are then interpolated in my fragment shader for calculating a view ray, i.e. a vector from the camera to the far plane (its .z component is, therefore, equal to the far plane distance to camera).
The fragment shader follows the instructions of John Chapman's tutorial (see commented code below).
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far, with:
gl_FragCoord.z: the non-linear depth
gl_FragCoord.w = 1/Wc, where Wc is the clip-space vertex position (i.e. gl_Position.w in the vertex shader)
far: the positive distance from camera to the far plane.
The kernel samples are arranged in a hemisphere with random floats in [0,1], most being distributed close to origin with a linear interpolation.
As I don't have a normal texture, I calculate them from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
vec2 offsetX = vec2(texelSize.x, 0.0);
vec2 offsetY = vec2(0.0, texelSize.y);
// texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)
float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour
vec3 pX = vec3(offsetX, depthOffsetX - depth);
vec3 pY = vec3(offsetY, depthOffsetY - depth);
vec3 normal = cross(pY, pX);
normal.z = -normal.z; // We want normal.z positive
return normalize(normal); // [-1,1]
}
Finally, my getDepth() function allows me to get the depth value at current UV in 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
return unpack(texture2D(tex, texcoord));
// unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
Here are my vertex and fragment shader codes (without function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;
uniform float fov;
uniform float far;
uniform vec3 farCorners[4];
attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv; // UV of each vertex (4) of the quad
varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main(void) {
vPosition = position;
vUV = uv;
// Map current vertex with associated frustum corner position in view space:
// 0: top left, 1: top right, 2: bottom right, 3: bottom left
// This frustum corner position will be interpolated so that the pixel shader always has a ray from camera->far-clip plane.
vCornerPositionVS = vec3(0.0);
if (positionVS.x > 0.0) {
if (positionVS.y <= 0.0) { // top left
vCornerPositionVS = farCorners[0];
}
else if (positionVS.y > 0.0) { // top right
vCornerPositionVS = farCorners[1];
}
}
else if (positionVS.x <= 0.0) {
if (positionVS.y > 0.0) { // bottom right
vCornerPositionVS = farCorners[2];
}
else if (positionVS.y <= 0.0) { // bottom left
vCornerPositionVS = farCorners[3];
}
}
gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;
uniform mat4 projection; // Projection matrix
uniform float radius; // Scaling factor for sample position, by default = 1.7
uniform float depthBias; // 1e-5
uniform vec2 noiseScale; // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4
varying vec3 vCornerPositionVS; // vCornerPositionVS is the interpolated position calculated from the 4 far corners
void main() {
// Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
float fragDepth = getDepth(depthBufferTexture, vUV);
float occlusion = 0.0;
if (fragDepth < 1.0) {
// Retrieve fragment's view space normal
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
// Random rotation: rvec.xyz are the components of the generated random vector
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
rvec.z = 0.0; // Random rotation around Z axis
// Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
vec3 viewRayVS = vCornerPositionVS / far;
// Current fragment's view space position
vec3 fragPositionVS = viewRayVS * fragDepth;
// Creation of TBN matrix
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
vec3 bitangent = cross(normal, tangent);
mat3 tbn = mat3(tangent, bitangent, normal);
for (int i = 0; i < NB_SAMPLES; i++) {
// Get sample kernel position, from tangent space to view space
vec3 samplePosition = tbn * kernelSamples[i];
// Add VS kernel offset sample to fragment's VS position
samplePosition = samplePosition * radius + fragPositionVS;
// Project sample position from view space to screen space:
vec4 offset = vec4(samplePosition, 1.0);
offset = projection * offset; // To clip space
offset.xy /= offset.w; // Perspective division
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth:
float sampleDepth = getDepth(depthTexture, offset.xy);
float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
// Reminder: fragDepth == fragPosition.z
// Range check and accumulate if fragment contributes to occlusion:
occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
}
}
// Inversion
float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
A horizontal and vertical Gaussian blur shader then removes the noise generated by the random texture.
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close shadows are not very strong... Would anyone see something wrong or weird in my code?
Thanks a lot!
fragPositionVS is a position in view space coordinates and radius is a length in view space. You use them to calculate the samplePosition:
samplePosition = samplePosition * radius + fragPositionVS;
But in the line rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;, you compare the difference of fragDepth and sampleDepth with radius. That makes no sense, since fragDepth and sampleDepth are values from the depth buffer in the range [0, 1], while radius is a length in view space.
In the line occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;, you calculate the difference of samplePosition.z and sampleDepth. While samplePosition.z is a view space coordinate between -near and -far, sampleDepth is a depth in the range [0, 1]. Calculating a difference between these two values doesn't make any sense either.
I suggest always using Z coordinates when you want to calculate or compare distances.
If you have a depth value, the Z-coordinate in view space can be calculated by converting the depth value to normalized device coordinate and converting the normalized device coordinate to a view coordinate:
float DepthToZ( in float depth )
{
float near = .... ; // distance to near plane (absolute value)
float far = .... ; // distance to far plane (absolute value)
float z_ndc = 2.0 * depth - 1.0;
float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
return -z_eye;
}
The depth is a value in the range [0, 1] that maps the range between the distance to the near plane and the distance to the far plane (in view space), but not linearly (for a perspective projection).
For this reason, the code line vec3 fragPositionVS = (vCornerPositionVS / far) * fragDepth; will not calculate a correct fragment position, but you can do it like this:
vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
Note, in view space the z axis comes out of the viewport. If the corner positions are set up in view space, then the Z coordinate has to be the negative distance to the far plane:
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, -far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, -far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, -far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, -far);
In the vertex shader the assignment of the corner positions is mixed up. The lower left position of the viewport is (-1,-1) and the top right position is (1,1) (in normalized device coordinates). Adapt the code like this:
JavaScript:
var farCornersVec = [bottomLeft, bottomRight, topLeft, topRight];
Vertex shader:
// bottomLeft=0*2+0*1, bottomRight=0*2+1*1, topLeft=1*2+0*1, topRight=1*2+1*1;
int i = (positionVS.y > 0.0 ? 2 : 0) + (positionVS.x > 0.0 ? 1 : 0);
vCornerPositionVS = farCorners[i];
Note, if you could add an additional vertex attribute for the corner position, then it would be simplified.
The calculation of the fragment position can be simplified, if the aspect ratio, the field of view angle and the normalized device coordinates of the fragment (fragment position in range [-1,1]) are known:
vec2 ndc_xy = vUV * 2.0 - 1.0;
float tanFov_2 = tan( radians( fov / 2.0 ) );
float aspect = vp_size_x / vp_size_y;
float fragZ = DepthToZ( fragDepth );
vec3 fragPos = vec3( ndc_xy.x * aspect * tanFov_2, ndc_xy.y * tanFov_2, -1.0 ) * abs( fragZ );
If the perspective projection matrix is known, this can be calculated easily:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec4 viewH = inverse( projection ) * vec4( ndc_xy, fragDepth * 2.0 - 1.0, 1.0 );
vec3 fragPosition = viewH.xyz / viewH.w;
If the perspective projection is symmetric (the field of view is not displaced and the Z axis of view space is in the center of the viewport), this can be simplified:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPosition = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs(DepthToZ(fragDepth));
See also:
How to recover view space position given view space depth value and ndc xy
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I suggest writing the fragment shader somewhat like this:
float fragDepth = getDepth(depthBufferTexture, vUV);
float ambientOcclusion = 1.0;
if (fragDepth > 0.0)
{
vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0;
rvec.z = 0.0;
vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
mat3 tbn = mat3(tangent, cross(normal, tangent), normal);
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPositionVS = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs( DepthToZ(fragDepth) );
// vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
float occlusion = 0.0;
for (int i = 0; i < NB_SAMPLES; i++)
{
vec3 samplePosition = fragPositionVS + radius * tbn * kernelSamples[i];
// Project sample position from view space to screen space:
vec4 offset = projection * vec4(samplePosition, 1.0);
offset.xy /= offset.w; // Perspective division -> [-1,1]
offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]
// Get current sample depth
float sampleZ = DepthToZ( getDepth(depthTexture, offset.xy) );
// Range check and accumulate if fragment contributes to occlusion:
float rangeCheck = step( abs(fragPositionVS.z - sampleZ), radius );
occlusion += step( samplePosition.z - sampleZ, -depthBias ) * rangeCheck;
}
// Inversion
ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
ambientOcclusion = pow(ambientOcclusion, power);
}
gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
See the WebGL example, which demonstrates the full algorithm (unfortunately the full code would exceed the 30000 character limit that an answer is restricted to):
JSFiddle or GitHub
Extension to the answer
The depth as it would be stored in the depth buffer is calculated like this:
(see OpenGL ES write depth data to color)
float ndc_depth = vPosPrj.z / vPosPrj.w;
float depth = ndc_depth * 0.5 + 0.5;
This value is already calculated in the fragment shader and is contained in gl_FragCoord.z. See the Khronos Group reference page for gl_FragCoord which says:
The z component is the depth value that would be used for the fragment's depth if no shader contained any writes to gl_FragDepth.
If the depth has to be stored in a RGBA8 buffer, the depth has to be encoded to the 4 bytes of the buffer to avoid a loss of accuracy, and has to be decoded when read from the buffer:
encode
vec3 PackDepth( in float depth )
{
float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
vec4 encode = fract( depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
decode
float UnpackDepth( in vec3 pack )
{
float depth = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
return depth * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
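For illustration, a minimal sketch of how the two halves fit together, assuming the depth pre-pass renders into an RGBA8 target and that the question's getDepth()/unpack() pair is implemented with the functions above (the exact placement is an assumption):
// in the depth pre-pass fragment shader:
gl_FragColor = vec4( PackDepth( gl_FragCoord.z ), 1.0 );
// in the SSAO pass, getDepth() then reads it back:
float getDepth( sampler2D tex, vec2 texcoord ) {
return UnpackDepth( texture2D( tex, texcoord ).rgb );
}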
See also the answers to the following questions:
How do I convert between float and vec4,vec3,vec2?
OpenGL ES write depth data to color
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?

OpenGL Computing Normals and TBN Matrix from Depth Buffer (SSAO implementation)

I'm implementing SSAO in OpenGL, following this tutorial: John Chapman SSAO
Basically the technique described uses a hemispheric kernel which is oriented along the fragment's normal. The view space z position of the sample is then compared to its screen space depth buffer value.
If the value in the depth buffer is higher, it means the sample ended up inside geometry, so this fragment should be occluded.
The goal of this technique is to get rid of the classic implementation artifact where objects' flat faces are greyed out.
I have the same implementation with two small differences:
I'm not using a Noise texture to rotate my kernel, so I have banding artifacts, that's fine for now
I don't have access to a buffer with Per-pixel normals, so I have to compute my normal and TBN matrix only using the depth buffer.
The algorithm seems to be working fine, I can see the fragments being occluded, BUT I still have my faces greyed out...
IMO it's coming from the way I'm calculating my TBN matrix. The normals look OK but something must be wrong as my kernel doesn't seem to be properly aligned causing samples to end up in the faces.
Screenshots are with a kernel of 8 samples and a radius of 0.1. The first is only the result of the SSAO pass and the second one is the debug render of the generated normals.
Here is the code for the function that computes the Normal and TBN Matrix
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 x = vec3(uv.x, 0., ld);
vec3 y = vec3(0., uv.y, ld);
x = dFdx(x);
y = dFdy(y);
x = normalize(x);
y = normalize(y);
vec3 normal = normalize(cross(x, y));
return mat3(x, y, normal);
}
And the SSAO shader
#include "helper.glsl"
in vec2 vertTexcoord;
uniform sampler2D depthTex;
const int MAX_KERNEL_SIZE = 8;
uniform vec4 gKernel[MAX_KERNEL_SIZE];
// Kernel Radius in view space (meters)
const float KERNEL_RADIUS = .1;
uniform mat4 cameraProjectionMatrix;
uniform mat4 cameraProjectionMatrixInverse;
out vec4 FragColor;
void main()
{
// Get the current depth of the current pixel from the depth buffer (stored in the red channel)
float originDepth = texture(depthTex, vertTexcoord).r;
// Debug linear depth. Depth buffer is in the range [0.0, 1.0];
float oLinearDepth = getLinearDepth(depthTex, vertTexcoord);
// Compute the view space position of this point from its depth value
vec4 viewport = vec4(0,0,1,1);
vec3 originPosition = getViewSpaceFromWindow(cameraProjectionMatrix, cameraProjectionMatrixInverse, viewport, vertTexcoord, originDepth);
mat3 lookAt = computeTBNMatrixFromDepth(depthTex, vertTexcoord);
vec3 normal = lookAt[2];
float occlusion = 0.;
for (int i=0; i<MAX_KERNEL_SIZE; i++)
{
// We align the Kernel Hemisphere on the fragment normal by multiplying all samples by the TBN
vec3 samplePosition = lookAt * gKernel[i].xyz;
// We want the sample position in View Space and we scale it with the kernel radius
samplePosition = originPosition + samplePosition * KERNEL_RADIUS;
// Now we need to get sample position in screen space
vec4 sampleOffset = vec4(samplePosition.xyz, 1.0);
sampleOffset = cameraProjectionMatrix * sampleOffset;
sampleOffset.xyz /= sampleOffset.w;
// Now to get the depth buffer value at the projected sample position
sampleOffset.xyz = sampleOffset.xyz * 0.5 + 0.5;
// Now can get the linear depth of the sample
float sampleOffsetLinearDepth = -getLinearDepth(depthTex, sampleOffset.xy);
// Now we need to do a range check to make sure that object
// outside of the kernel radius are not taken into account
float rangeCheck = abs(originPosition.z - sampleOffsetLinearDepth) < KERNEL_RADIUS ? 1.0 : 0.0;
// If the fragment depth is in front so it's occluding
occlusion += (sampleOffsetLinearDepth >= samplePosition.z ? 1.0 : 0.0) * rangeCheck;
}
occlusion = 1.0 - (occlusion / MAX_KERNEL_SIZE);
FragColor = vec4(vec3(occlusion), 1.0);
}
Update 1
This variation of the TBN calculation function gives the same results
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv)
{
// Compute the normal and TBN matrix
float ld = -getLinearDepth(depthTex, uv);
vec3 a = vec3(uv, ld);
vec3 x = vec3(uv.x + dFdx(uv.x), uv.y, ld + dFdx(ld));
vec3 y = vec3(uv.x, uv.y + dFdy(uv.y), ld + dFdy(ld));
//x = dFdx(x);
//y = dFdy(y);
//x = normalize(x);
//y = normalize(y);
vec3 normal = normalize(cross(x - a, y - a));
vec3 first_axis = cross(normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, normal);
return mat3(normalize(first_axis), normalize(second_axis), normal);
}
I think the problem is probably that you are mixing coordinate systems. You are using texture coordinates in combination with the linear depth. You can imagine two vertical surfaces facing slightly to the left of the screen. Both have the same angle from the vertical plane and should thus have the same normal, right?
But let's then imagine that one of these surfaces is much further from the camera. Since the dFdx/dFdy functions basically tell you the difference from the neighbouring pixel, the surface far away from the camera will have a greater linear depth difference over one pixel than the surface close to the camera. But the uv.x / uv.y derivatives will have the same value. That means that you will get different normals depending on the distance from the camera.
The solution is to calculate the view coordinate and use the derivative of that to calculate the normal.
vec3 viewFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
float ld = -getLinearDepth(depthTex, uv);
/// I assume ld is negative for fragments in front of the camera
/// not sure how getLinearDepth is implemented
vec3 z_scaled_view = (view / view.z) * ld;
return z_scaled_view;
}
mat3 computeTBNMatrixFromDepth(in sampler2D depthTex, in vec2 uv, in vec3 view)
{
vec3 viewPos = viewFromDepth(depthTex, uv, view); // pass the interpolated view ray through
vec3 view_normal = normalize(cross(dFdx(viewPos), dFdy(viewPos)));
vec3 first_axis = cross(view_normal, vec3(1.0f, 0.0f, 0.0f));
vec3 second_axis = cross(first_axis, view_normal);
return mat3(view_normal, normalize(first_axis), normalize(second_axis));
}
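For illustration, a hedged sketch of how the view argument might be supplied when calling the function above, reconstructing the per-fragment view ray from the UV and the inverse projection matrix (the uniform name matches the question's shader; the helper itself is an assumption):
vec3 viewRayFromUV( in vec2 uv )
{
// unproject the fragment's NDC xy at the near plane; seen from the view space
// origin, the result points along the per-fragment view ray
vec4 h = cameraProjectionMatrixInverse * vec4( uv * 2.0 - 1.0, -1.0, 1.0 );
return h.xyz / h.w;
}
// usage: mat3 lookAt = computeTBNMatrixFromDepth( depthTex, vertTexcoord, viewRayFromUV( vertTexcoord ) );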

Creating a rectangular light source in OpenGL?

I am trying to create a rectangular, sharp-edged light source in OpenGL for one application. My idea is to create a spot light and somehow mask the shape of the shade into a rectangle; the mask of course has to be invisible to the camera. When I tried to implement this idea, it turned out that OpenGL simply skips rendering objects outside the camera view, although a light source outside the view still affects the scene. This has prevented me from creating the effect I wanted, and I am wondering if any of you have come across similar problems before.
To make my question more specific, consider the following case of my question:
spot light at 0,0,5
target object at 0,0,0
mask object (a simple quad parallel to x-axis) at 0,0,3.
When the camera is at 0,0,4, light passes through the mask object and leaves a rectangular shape on the target object (which is what I wanted), but I can also see the mask object! (I need the mask object to be invisible.)
When I move the camera closer to the target object, say to 0,0,2, the mask object is behind the camera and therefore invisible. However, since it's invisible, OpenGL stops rendering it, so the mask object no longer has any effect on the target object and the light shade is still round!
My guess would be to start from a spot light, but separating the angle calculation:
* Project the L vector on the YZ plane to calculate the angle on the X axis
* Project the L vector on the XZ plane to calculate the angle on the Y axis
A very naive implementation of this could be (GLSL):
varying vec3 v_V; // World-space position
varying vec3 v_N; // World-space normal
uniform float time; // global time in seconds since shaderprogram link
uniform vec2 uSpotSize; // Spot size, on X and Y axes
vec3 lp = vec3(0.0, 0.0, 7.0 + cos(time) * 5.0); // Light world-space position
vec3 lz = vec3(0.0, 0.0, -1.0); // Light direction (Z vector)
// Light radius (for attenuation calculation)
float lr = 3.0;
void main()
{
// Calculate L, the vector from model surface to light
vec3 L = lp - v_V;
// Project L on the YZ / XZ plane
vec3 LX = normalize(vec3(L.x, 0.0, L.z));
vec3 LY = normalize(vec3(0.0, L.y, L.z));
// Calculate the angle on X and Y axis using projected vectors just above
float ax = dot(LX, -lz);
float ay = dot(LY, -lz);
// Light attenuation
float d = distance(lp, v_V);
float attenuation = 1.0 / (1.0 + (2.0/lr)*d + (1.0/(lr*lr))*d*d);
float shaded = max(0.0, dot(v_N, L)) * attenuation;
if(ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
gl_FragColor = vec4(shaded); // Inside the light influence zone, light it up !
else
gl_FragColor = vec4(0.1); // Outside the light influence zone.
}
Again, this is very naive. For instance, the X/Y projection is done in world-space. If you want to be able to rotate the light rectangle, you might have to introduce a vector pointing to the right of the light.
Thus, you'll be able to get the fragment coordinate in the light's coordinate frame, and with this, you can decide whether to shade the fragment or not.
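As a rough sketch of that idea, continuing the snippet above (uLightRight is a hypothetical new uniform holding a unit vector that points to the light's right; lz, lp and uSpotSize come from the code above):
uniform vec3 uLightRight; // hypothetical: unit vector pointing to the light's right
// true when the fragment at world-space position fragPos lies inside the
// rectangular cone spanned by the per-axis half-angles in uSpotSize
bool insideRectSpot( vec3 fragPos )
{
vec3 fwd = normalize( lz );    // light forward axis, from the snippet above
vec3 right = normalize( uLightRight );
vec3 up = cross( fwd, right );
vec3 d = fragPos - lp;         // light -> fragment, world space
vec3 c = vec3( dot( d, right ), dot( d, up ), dot( d, fwd ) ); // fragment in the light's frame
return c.z > 0.0
&& abs( atan( c.x, c.z ) ) < uSpotSize.x
&& abs( atan( c.y, c.z ) ) < uSpotSize.y;
}
// in main(): replace the ax/ay test with insideRectSpot( v_V )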
One solution might be adapting the calculations used for projective texture lookups to simulate a rectangular light source. You did not specify which OpenGL version you're using, but projective texture lookups can even be achieved with the fixed function pipeline, although they're arguably easier to do in a shader.
Of course, this would not simulate a rectangular area light source, just a point light source that is constrained to a rectangular region.
Using this approach, you'd have to specify view & projection matrices for the light source; where the view matrix is essentially generated by a 'look at' with the light position & it's direction; the projection matrix encodes a perspective projection with your desired horizontal & vertical 'field of view'.
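For the light's view matrix, a minimal 'look at' construction could look like this (a hypothetical helper, written in GLSL for illustration; in practice this matrix is usually built on the CPU and uploaded as viewTf_lightCam, and projectiveTf_lightCam is an ordinary perspective matrix built from the desired horizontal and vertical field of view):
mat4 lightLookAt( vec3 lightPos, vec3 lightTarget, vec3 worldUp )
{
vec3 zAxis = normalize( lightPos - lightTarget ); // the light looks down its -Z
vec3 xAxis = normalize( cross( worldUp, zAxis ) );
vec3 yAxis = cross( zAxis, xAxis );
// the rows hold the basis vectors, the last column holds -R * eye
return mat4( vec4( xAxis.x, yAxis.x, zAxis.x, 0.0 ),
vec4( xAxis.y, yAxis.y, zAxis.y, 0.0 ),
vec4( xAxis.z, yAxis.z, zAxis.z, 0.0 ),
vec4( -dot( xAxis, lightPos ), -dot( yAxis, lightPos ), -dot( zAxis, lightPos ), 1.0 ) );
}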
If you just want a rectangular area, you don't even need a texture; a simple vertex/fragment shader pair could look like this:
(the vertex shader basically transforms the position to the light's clip space; the fragment shader performs the clipping and computes a Lambert shading if the fragment is inside the light frustum)
#version 330 core
layout ( location = 0 ) in vec3 vertexPosition;
layout ( location = 1 ) in vec3 vertexNormal;
layout ( location = 3 ) in vec3 vertexDiffuse;
uniform mat4 modelTf;
uniform mat3 normalTf;
uniform mat4 viewTf; // view matrix for render camera
uniform mat4 projectiveTf; // projection matrix for render camera
uniform mat4 viewTf_lightCam; // view matrix of light source
uniform mat4 projectiveTf_lightCam; // projective matrix of light source
uniform vec4 lightPosition_worldSpace;
out vec3 diffuseColor;
out vec3 normal_worldSpace;
out vec3 toLight_worldSpace;
out vec4 position_lightClipSpace;
void main()
{
diffuseColor = vertexDiffuse;
vec4 vertexPosition_worldSpace = modelTf * vec4( vertexPosition, 1.0 );
normal_worldSpace = normalTf * vertexNormal;
toLight_worldSpace = normalize( lightPosition_worldSpace - vertexPosition_worldSpace ).xyz;
position_lightClipSpace = projectiveTf_lightCam * viewTf_lightCam * vertexPosition_worldSpace;
gl_Position = projectiveTf * viewTf * vertexPosition_worldSpace;
}
#version 330 core
layout ( location=0 ) out vec4 fragColor;
in vec3 diffuseColor;
in vec3 normal_worldSpace;
in vec3 toLight_worldSpace;
in vec4 position_lightClipSpace;
uniform vec3 ambientLight;
void main()
{
// clipping against the light frustum
bool isInsideX = ( position_lightClipSpace.x <= position_lightClipSpace.w && position_lightClipSpace.x >= -position_lightClipSpace.w );
bool isInsideY = ( position_lightClipSpace.y <= position_lightClipSpace.w && position_lightClipSpace.y >= -position_lightClipSpace.w );
bool isInsideZ = ( position_lightClipSpace.z <= position_lightClipSpace.w && position_lightClipSpace.z >= -position_lightClipSpace.w );
bool isInside = isInsideX && isInsideY && isInsideZ;
vec3 N = normalize( normal_worldSpace );
vec3 L = normalize( toLight_worldSpace );
vec3 lightColor = isInside ? max( dot( N, L ), 0.0 ) * vec3( 0.99, 0.66, 0.33 ) : vec3( 0.0 );
fragColor = vec4( clamp( ( ambientLight + lightColor ) * diffuseColor, vec3( 0.0 ), vec3( 1.0 ) ), 1.0 );
}
There are a lot of good papers on this; Brian Karis wrote about it in 2013 (with regard to UE4) here:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
And more recently Michal Drobot wrote an article about area lights in GPU Pro 5.
If you are using a metalness workflow you can also crank up the roughness as an approximation to area lighting, a technique introduced by Tri-Ace:
http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

How to make a billboard spherical

Following this tutorial here
I have managed to create a cylindrical billboard (it utilizes a geometry shader which takes points and produces quads). The problem is that when I move the camera so that it's higher than the billboard (using gluLookAt), the billboard does not rotate to truly face the camera (it behaves as a cylindrical billboard, as expected).
How do I make it into spherical?
If anyone is interested, here is the slightly modified geometry shader code:
#version 330
//based on a great tutorial at http://ogldev.atspace.co.uk/www/tutorial27/tutorial27.html
layout (points) in;
layout (triangle_strip) out;
layout (max_vertices = 4) out;
uniform mat4 mvp;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 right = normalize(cross(up, toCamera)); //right-handed coordinate system
//vec3 right = cross(toCamera, up); //left-handed coordinate system
pos -= (right*0.5);
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(0,0);
EmitVertex();
pos.y += 1.0;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(0,1);
EmitVertex();
pos.y -= 1.0;
pos += right;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(1,0);
EmitVertex();
pos.y += 1.0;
gl_Position = mvp*vec4(pos,1.0);
texCoord = vec2(1,1);
EmitVertex();
}
EDIT:
As I said before, I have tried the approach of setting the 3×3 submatrix to identity. I might have explained the behaviour wrong, but this gif should do it better:
In the picture above, the camera is rotated with the billboard (red) using the identity submatrix approach.
The billboard, however, should not move through the surface (white); it should maintain its position correctly and always be on one side of the surface, which does not happen.
An alternative way to create billboards is to throw the geometry shader away and do it manually, like this:
Vector3 DiffCamera = Billboard.position - Camera.position;
Vector3 UpVector = new Vector3(0.0f, 1.0f, 0.0f);
Vector3 CrossA = DiffCamera.cross(UpVector).normalize(); // (Step A)
Vector3 CrossB = DiffCamera.cross(CrossA).normalize(); // (Step B)
// now you can use CrossA and CrossB and the billboard position to calculate the positions of the edges of the billboard-rectangle
// like this
Vector3 Pos1 = Billboard.position + CrossA + CrossB;
Vector3 Pos2 = Billboard.position - CrossA + CrossB;
Vector3 Pos3 = Billboard.position + CrossA - CrossB;
Vector3 Pos4 = Billboard.position - CrossA - CrossB;
In step A we calculate the cross product because we want the horizontally aligned direction of the billboard.
In step B we do it for the vertical direction.
Do this for every billboard in the scene.
Or better, as a geometry shader (just a try):
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 CrossA = normalize(cross(up, toCamera));
vec3 CrossB = normalize(cross(CrossA, toCamera));
// set coordinates of the 4 points
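A hedged completion of that last step, reusing the mvp / texCoord pattern from the question's shader (a half-size of 0.5 is assumed, which centres the quad on the input point rather than anchoring it at the bottom like the original):
vec3 rightOff = CrossA * 0.5;
vec3 upOff = CrossB * 0.5;
gl_Position = mvp * vec4( pos - rightOff - upOff, 1.0 ); texCoord = vec2( 0, 0 ); EmitVertex();
gl_Position = mvp * vec4( pos - rightOff + upOff, 1.0 ); texCoord = vec2( 0, 1 ); EmitVertex();
gl_Position = mvp * vec4( pos + rightOff - upOff, 1.0 ); texCoord = vec2( 1, 0 ); EmitVertex();
gl_Position = mvp * vec4( pos + rightOff + upOff, 1.0 ); texCoord = vec2( 1, 1 ); EmitVertex();
EndPrimitive();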
Just reset the top left 3×3 subpart of the modelview matrix to identity, leaving the 4th column and row as they are, i.e.:
1 0 0 …
0 1 0 …
0 0 1 …
… … … …
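A minimal sketch of the same reset done in the vertex shader ('modelView', 'projection' and 'vertexPosition' are assumed names; the reset can equally be done on the CPU before uploading the matrix):
mat4 bb = modelView;
bb[0].xyz = vec3( 1.0, 0.0, 0.0 ); // column 0 of the upper left 3×3
bb[1].xyz = vec3( 0.0, 1.0, 0.0 ); // column 1
bb[2].xyz = vec3( 0.0, 0.0, 1.0 ); // column 2
gl_Position = projection * bb * vec4( vertexPosition, 1.0 );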
UPDATE: World space axis following billboards
The key insight into efficiently implementing aligned billboards is to realize how they work in view space. By definition the normal vector of a billboard in view space is Z = (0, 0, 1). This leaves only one free parameter, namely the rotation of the billboard around this axis. In a view aligned billboard the billboard right and up axes are merely forced to be view X and Y. This is what setting the upper left 3×3 of the modelview matrix does.
Now when we want the billboard to be aligned to a certain axis within the scene yet still face the viewer, the only parameter we can vary is the billboard's rotation. For this we do the following:
In world space we choose an axis that should be the up axis of the billboard. Note that if the viewing axis is parallel to the billboard up axis, the following steps become singular, i.e. the rotation of the billboard is undefined. You have to deal with this in some way, which I leave undefined here.
This chosen axis we bring into view space. Now an axis is the same kind of thing as a normal, i.e. a direction, so we transform it the same way as we do with normals. We transform it by the inverse transpose of the modelview matrix; note that since we defined the axis in world space, we actually need to use the inverse transpose of the world-to-view transformation matrix.
The transformed major axis of the billboard is now in view space. The next step is to orthogonalize it to the viewing direction. For this you use the Gram-Schmidt method. Now we have the Z and the Y columns of the billboard transform. What remains is the X column, which we get by taking the cross product of the Z and Y columns.
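A hedged sketch of these steps ('normalMat' and 'worldAxis' are assumed uniforms; normalMat would be transpose(inverse(mat3(worldToView))) built on the CPU):
uniform mat3 normalMat; // inverse transpose of the world-to-view matrix
uniform vec3 worldAxis; // chosen billboard up axis, in world space
mat3 axisAlignedBillboardBasis()
{
vec3 Z = vec3( 0.0, 0.0, 1.0 );                      // billboard normal in view space
vec3 axisVS = normalize( normalMat * worldAxis );    // bring the world axis into view space
vec3 Y = normalize( axisVS - Z * dot( axisVS, Z ) ); // Gram-Schmidt against the view direction
vec3 X = cross( Y, Z );                              // completes a right-handed basis
return mat3( X, Y, Z );                              // columns: billboard right, up, normal (view space)
}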
In case anyone wonders how I solved this.
I based my solution on Quonux's answer; the only problem with it was that the billboard would rotate very fast when the camera is right above it (when the up vector is almost parallel to the camera look vector). This strange behaviour is a result of using a cross product to find the right vector: when the camera hovers over the top of the billboard, the cross product changes its sign, and so does the right vector's direction. That explains the rotation that happens.
So all I needed was to find a right vector some other way.
As I knew the camera's rotation angles (both horizontal and vertical), I decided to use them to find a right vector:
rotatedRight = Vector4.Transform(unRotatedRight, Matrix4.CreateRotationY((-alpha)));
and the geometry shader:
...
uniform vec3 rotRight;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 CrossA = rotRight;
... (Continues as Quonux's code)

OpenGL, target spot-light "following me around the room"!

I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
// Compute vertex normal in eye space.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position in eye space.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute spot-light cone direction vector.
attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);
// Compute vector from eye to vertex.
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
// Return position.
gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at of course). Both position and lookAt are already in eye space. I computed eye space CPU-side by subtracting the camera position from them both.
In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform the vector as well and if so by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
// Compute N dot L.
vec3 N = normalize(attrib_Fragment_Normal);
vec3 L = normalize(attrib_Fragment_Light);
vec3 E = normalize(attrib_Fragment_Eye);
vec3 H = normalize(L + E);
float NdotL = clamp(dot(L,N), 0.0, 1.0);
float NdotH = clamp(dot(N,H), 0.0, 1.0);
// Compute ambient term.
vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;
// Diffuse.
vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
// Specular.
float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;
// Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
float d = length(-attrib_Fragment_Light);
float attenuation = smoothstep( Light_Attenuation_Max,
Light_Attenuation_Min,
d);
// Adjust attenuation based on light cone.
vec3 S = normalize(attrib_Fragment_Light_Direction);
float LdotS = dot(-L, S);
float CosI = Light_Cone_Min - Light_Cone_Max;
attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);
// Final colour.
Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye-space CPU-side. So no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, light is in eye space, vertex is transformed into eye space, lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks here like you're subtracting Light_Position, which I assume you want to be a world space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable with the Model matrix, not the ModelView matrix, and using this vector as the basis for your light computation.
You can't compute the eye space position by just subtracting the camera position; you have to multiply by the modelview matrix.
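To tie the two points above together, a hedged sketch of the fully eye-space-consistent variant of the question's vertex shader (the *_EyeSpace uniform names are assumptions; they would be computed on the CPU as View * vec4(lightPositionWorld, 1) and normalize(mat3(View) * (lightLookAtWorld - lightPositionWorld)), so that every quantity below lives in the same space):
uniform vec3 Light_Position_EyeSpace;
uniform vec3 Light_Direction_EyeSpace;
void main()
{
vec4 position = Model_WorldView * vec4( attrib_Position, 1.0 );  // vertex in eye space
attrib_Fragment_Normal = ( Model_WorldViewInverseTranspose * vec4( attrib_Normal, 0.0 ) ).xyz;
attrib_Fragment_Light = Light_Position_EyeSpace - position.xyz;  // eye space minus eye space
attrib_Fragment_Light_Direction = Light_Direction_EyeSpace;      // already an eye-space direction
attrib_Fragment_Eye = -position.xyz;
attrib_Fragment_Texture = attrib_Texture;
gl_Position = Camera_Projection * position;
}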