I'm using vispy to render 3D parametric surfaces with OpenGL. It seems to be working well for the most part but it looks weird
in some parts around the edges of the geometry, almost as if the normals were pointing inwards (example).
I suspect the problem lies either with my fragment shader or with the surface normals, especially in areas where they are close to perpendicular to the camera direction. Here's how the vertices are created:
# --- Python imports ---
import numpy as np

# --- Internal imports ---
from glmesh import MeshApp3D

# -----------------------------------------------------------
# Torus parameters
radius1 = 1.0
radius2 = 0.2

# Define torus
torus = ( lambda u, v: np.cos(u) * (radius1 + radius2*np.cos(v)),
          lambda u, v: np.sin(u) * (radius1 + radius2*np.cos(v)),
          lambda u, v: radius2 * np.sin(v) )

# Generate grid of points on the torus
nSamples = 100
U = np.linspace(0.0, 2*np.pi, nSamples)
V = np.linspace(0.0, -2*np.pi, nSamples)
grid = np.array(
    [
        [ [torus[componentIndex](u, v) for u in U] for v in V ]
        for componentIndex in (0, 1, 2)
    ],
    dtype=np.float32 )

# -----------------------------------------------------------
# Rearrange grid to a list of points
vertices = np.reshape(
    np.transpose( grid, axes=(1, 2, 0) ),
    ( nSamples*nSamples, 3 ) )

# Generate triangle indices in a positive permutation
# For example:
#   7---8---9
#   | / | / |
#   4---5---6
#   | / | / |
#   1---2---3
#
# (1,5,4) (1,2,5) (2,6,5) (2,3,6) (4,8,7) (4,5,8) (5,9,8) (5,6,9)
faces = np.zeros( ( 2*(nSamples-1)*(nSamples-1), 3 ), dtype=np.uint32 )
k = 0
for j in range(nSamples-1):
    for i in range(nSamples-1):
        jni = j*nSamples + i
        faces[k]   = [ jni,
                       jni + nSamples + 1,
                       jni + nSamples ]
        faces[k+1] = [ jni,
                       jni + 1,
                       jni + 1 + nSamples ]
        k += 2

# -----------------------------------------------------------
meshApp = MeshApp3D( {'vertices': vertices, 'faces': faces},
                     colors=(1.0, 1.0, 1.0) )
meshApp.run()
The fragment shader:
varying vec3 position;
varying vec3 normal;
varying vec4 color;
void main() {
    // Diffuse
    vec3 lightDir = normalize( position - vec3($lightPos) );
    float diffuse = dot( normal, lightDir );
    //diffuse = min( max(diffuse,0.0), 1.0 );

    // Specular
    vec3 halfWayVector = normalize( ( lightDir + vec3($cameraDir) ) / 2.0 );
    float specular = min( max( dot(halfWayVector, normal), 0.0 ), 1.0 );
    specular = specular * specular * specular * specular;
    specular = specular * specular * specular * specular;
    specular = specular * specular * specular * specular;

    // Additive
    vec3 fragColor = vec3($lightColor) * color.xyz;
    fragColor = fragColor * ( $diffuseMaterialConstant * diffuse +
                              $specularMaterialConstant * specular )
                + $ambientMaterialConstant * vec3($ambientLight) * color.xyz;

    gl_FragColor = vec4(fragColor, 1.0);
}
The rest of the code is broken down into modules, but in a nutshell, here's what happens:
a MeshData is constructed from the vertices and faces
the normals are generated internally by MeshData, based on the vertex positions and the order in which they define their corresponding triangles
a canvas and an OpenGL window are created and initialized
the mesh is drawn as triangle strips
A problematic part might be the definition of the mesh triangles. Example:
7---8---9
| / | / |
4---5---6
| / | / |
1---2---3
v
^
|
---> u
(1,5,4) (1,2,5) (2,6,5) (2,3,6) (4,8,7) (4,5,8) (5,9,8) (5,6,9)
It was originally meant to generate triangles for 2D functions z=f(x,y) and worked well for them, but I ran into problems when applying it to parametric surfaces r(u,v) = [ x(u,v), y(u,v), z(u,v) ]. For example, this is the reason why the parameter v goes from 0 to -2pi instead of +2pi (otherwise the normals would be pointing inwards).
Any ideas on what might be causing the glitches seen in the image?
(or any advice on better triangle generation?)
Related
I've been following along with the OpenGL 4 Shading Language Cookbook and have gotten a teapot rendering with Bézier surfaces. The next step I'm attempting is to draw a wireframe over the surfaces using a geometry shader. The directions can be found here, on pages 228-230. Following the code that is given, I've gotten the wireframe to display; however, I also have multiple fragments that flicker between different shades of my material color.
An image of this can be seen here.
I have narrowed down the possible issues and discovered that, for some reason, my triangle height calculations produce varying side lengths: if I hard-code the edge-distance values for each vertex of the triangle within the geometry shader, the teapot no longer flickers, but no wireframe is displayed either (variables ha, hb, hc in the geometry shader below).
I was wondering if anyone has run into this issue before or are aware of a workaround.
Below are some sections of my code:
Geometry Shader:
/*
* Geometry Shader
*
* CSCI 499, Computer Graphics, Colorado School of Mines
*/
#version 410 core
layout( triangles ) in;
layout( triangle_strip, max_vertices = 3 ) out;
out vec3 GNormal;
out vec3 GPosition;
out vec3 ghalfwayVec;
out vec3 GLight;
noperspective out vec3 GEdgeDistance;
in vec4 TENormal[];
in vec4 TEPosition[];
in vec3 halfwayVec[];
in vec3 TELight[];
uniform mat4 ViewportMatrix;
void main() {
    // Transform each vertex into viewport space
    vec3 p0 = vec3(ViewportMatrix * (gl_in[0].gl_Position / gl_in[0].gl_Position.w));
    vec3 p1 = vec3(ViewportMatrix * (gl_in[1].gl_Position / gl_in[1].gl_Position.w));
    vec3 p2 = vec3(ViewportMatrix * (gl_in[2].gl_Position / gl_in[2].gl_Position.w));

    // Find the altitudes (ha, hb and hc)
    float a = length(p1 - p2);
    float b = length(p2 - p0);
    float c = length(p1 - p0);
    float alpha = acos( (b*b + c*c - a*a) / (2.0*b*c) );
    float beta  = acos( (a*a + c*c - b*b) / (2.0*a*c) );
    float ha = abs( c * sin( beta ) );
    float hb = abs( c * sin( alpha ) );
    float hc = abs( b * sin( alpha ) );

    // Send the triangle along with the edge distances
    GEdgeDistance = vec3( ha, 0, 0 );
    GNormal = vec3(TENormal[0]);
    GPosition = vec3(TEPosition[0]);
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, hb, 0 );
    GNormal = vec3(TENormal[1]);
    GPosition = vec3(TEPosition[1]);
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, 0, hc );
    GNormal = vec3(TENormal[2]);
    GPosition = vec3(TEPosition[2]);
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();

    ghalfwayVec = halfwayVec[0];
    GLight = TELight[0];
}
Fragment Shader:
/*
* Fragment Shader
*
* CSCI 441, Computer Graphics, Colorado School of Mines
*/
#version 410 core
in vec3 ghalfwayVec;
in vec3 GLight;
in vec3 GNormal;
in vec3 GPosition;
noperspective in vec3 GEdgeDistance;
layout( location = 0 ) out vec4 FragColor;
uniform vec3 mDiff, mAmb, mSpec;
uniform float shininess;
uniform light {
    vec3 lAmb, lDiff, lSpec, lPos;
};

// The mesh line settings
uniform struct LineInfo {
    float Width;
    vec4 Color;
} Line;

vec3 phongModel( vec3 pos, vec3 norm ) {
    vec3 lightVec2 = normalize(GLight);
    vec3 normalVec2 = -normalize(GNormal);
    vec3 halfwayVec2 = normalize(ghalfwayVec);

    float sDotN = max( dot(lightVec2, normalVec2), 0.0 );
    vec4 diffuse = vec4(lDiff * mDiff * sDotN, 1);

    vec4 specular = vec4(0.0);
    if( sDotN > 0.0 ) {
        specular = vec4(lSpec * mSpec * pow( max( 0.0, dot( halfwayVec2, normalVec2 ) ), shininess ), 1);
    }

    vec4 ambient = vec4(lAmb * mAmb, 1);

    vec3 fragColorOut = vec3(diffuse + specular + ambient);
    // vec4 fragColorOut = vec4(0.0,0.0,0.0,0.0);
    return fragColorOut;
}

void main() {
    // /*****************************************/
    // /******* Final Color Calculations ********/
    // /*****************************************/

    // The shaded surface color.
    vec4 color = vec4(phongModel(GPosition, GNormal), 1.0);

    // Find the smallest distance
    float d = min( GEdgeDistance.x, GEdgeDistance.y );
    d = min( d, GEdgeDistance.z );

    // Determine the mix factor with the line color
    float mixVal = smoothstep( Line.Width - 1, Line.Width + 1, d );
    // float mixVal = 1;

    // Mix the surface color with the line color
    FragColor = vec4(mix( Line.Color, color, mixVal ));
    FragColor.a = 1;
}
I ended up stumbling across the solution to my issue. In the geometry shader, I was assigning the halfway vector and the light vector after ending the primitive, so the values of these vectors were never correctly sent to the fragment shader. Since no data was given to the fragment shader, garbage values were used, and the Phong shading model computed the fragment color from effectively random values. Moving the two lines after EndPrimitive() to the top of the main function in the geometry shader resolved the issue.
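For clarity, the corrected geometry shader main() looks like this:
void main() {
    // Assign the per-primitive outputs first, so every emitted vertex carries them
    ghalfwayVec = halfwayVec[0];
    GLight = TELight[0];

    // Transform each vertex into viewport space
    vec3 p0 = vec3(ViewportMatrix * (gl_in[0].gl_Position / gl_in[0].gl_Position.w));
    vec3 p1 = vec3(ViewportMatrix * (gl_in[1].gl_Position / gl_in[1].gl_Position.w));
    vec3 p2 = vec3(ViewportMatrix * (gl_in[2].gl_Position / gl_in[2].gl_Position.w));

    // Find the altitudes (ha, hb and hc)
    float a = length(p1 - p2);
    float b = length(p2 - p0);
    float c = length(p1 - p0);
    float alpha = acos( (b*b + c*c - a*a) / (2.0*b*c) );
    float beta  = acos( (a*a + c*c - b*b) / (2.0*a*c) );
    float ha = abs( c * sin( beta ) );
    float hb = abs( c * sin( alpha ) );
    float hc = abs( b * sin( alpha ) );

    // Send the triangle along with the edge distances
    GEdgeDistance = vec3( ha, 0, 0 );
    GNormal = vec3(TENormal[0]);
    GPosition = vec3(TEPosition[0]);
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, hb, 0 );
    GNormal = vec3(TENormal[1]);
    GPosition = vec3(TEPosition[1]);
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    GEdgeDistance = vec3( 0, 0, hc );
    GNormal = vec3(TENormal[2]);
    GPosition = vec3(TEPosition[2]);
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}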
I'm trying to create my own SSAO shader in forward rendering (not in post processing) with GLSL. I'm encountering some issues, but I really can't figure out what's wrong with my code.
It is created with Babylon JS engine as a BABYLON.ShaderMaterial and set in a BABYLON.RenderTargetTexture, and it is mainly inspired by this renowned SSAO tutorial: http://john-chapman-graphics.blogspot.fr/2013/01/ssao-tutorial.html
For performance reasons, I have to do all the calculations without projecting and unprojecting in screen space; I'd rather use the view ray method described in the tutorial above.
Before explaining the whole thing, please note that Babylon JS uses a left-handed coordinate system, which may have quite an impact on my code.
Here are my classic steps:
First, I calculate the positions of the four corners of the camera's far plane in my JS code. They could be constants on every frame, since they are calculated in view space.
// Calculating 4 corners manually in view space
var tan = Math.tan;
var atan = Math.atan;
var ratio = SSAOSize.x / SSAOSize.y;
var far = scene.activeCamera.maxZ;
var fovy = scene.activeCamera.fov;
var fovx = 2 * atan(tan(fovy/2) * ratio);
var xFarPlane = far * tan(fovx/2);
var yFarPlane = far * tan(fovy/2);
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, far);
var farCornersVec = [topLeft, topRight, bottomRight, bottomLeft];
var farCorners = [];
for (var i = 0; i < 4; i++) {
    var vecTemp = farCornersVec[i];
    farCorners.push(vecTemp.x, vecTemp.y, vecTemp.z);
}
These corner positions are sent to the vertex shader; that is why the vector coordinates are serialized in the farCorners[] array.
In my vertex shader, the signs of position.x and position.y let the shader know which corner to use for each vertex.
These corners are then interpolated in my fragment shader for calculating a view ray, i.e. a vector from the camera to the far plane (its .z component is, therefore, equal to the far plane distance to camera).
The fragment shader follows the instructions of John Chapman's tutorial (see commented code below).
I get my depth buffer as a BABYLON.RenderTargetTexture with the DepthRenderer.getDepthMap() method. A depth texture lookup actually returns (according to Babylon JS's depth shaders):
(gl_FragCoord.z / gl_FragCoord.w) / far, with:
gl_FragCoord.z: the non-linear depth
gl_FragCoord.w = 1/Wc, where Wc is the clip-space vertex position (i.e. gl_Position.w in the vertex shader)
far: the positive distance from camera to the far plane.
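In shader terms, that corresponds to a depth pass roughly like this (my sketch of the formula above, not Babylon's actual code; pack() is assumed to be the counterpart of the unpack() used in getDepth() below):
// Depth-pass fragment shader matching the formula above (sketch only)
uniform float far;        // positive distance from camera to the far plane

vec4 pack(float value);   // assumed helper: encodes a float into the RGBA channels

void main() {
    float depth = (gl_FragCoord.z / gl_FragCoord.w) / far;
    gl_FragColor = pack(depth);
}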
The kernel samples are arranged in a hemisphere with random floats in [0,1], most being distributed close to origin with a linear interpolation.
As I don't have a normal texture, I calculate them from the current depth buffer value with getNormalFromDepthValue():
vec3 getNormalFromDepthValue(float depth) {
    vec2 offsetX = vec2(texelSize.x, 0.0);
    vec2 offsetY = vec2(0.0, texelSize.y);
    // texelSize = size of a texel = (1/SSAOSize.x, 1/SSAOSize.y)

    float depthOffsetX = getDepth(depthTexture, vUV + offsetX); // Horizontal neighbour
    float depthOffsetY = getDepth(depthTexture, vUV + offsetY); // Vertical neighbour

    vec3 pX = vec3(offsetX, depthOffsetX - depth);
    vec3 pY = vec3(offsetY, depthOffsetY - depth);
    vec3 normal = cross(pY, pX);
    normal.z = -normal.z; // We want normal.z positive

    return normalize(normal); // [-1,1]
}
Finally, my getDepth() function allows me to get the depth value at current UV in 32-bit float:
float getDepth(sampler2D tex, vec2 texcoord) {
    return unpack(texture2D(tex, texcoord));
    // unpack() retrieves the depth value from the 4 components of the vector given by texture2D()
}
Here are my vertex and fragment shader codes (without function declarations):
// ---------------------------- Vertex Shader ----------------------------
precision highp float;
uniform float fov;
uniform float far;
uniform vec3 farCorners[4];
attribute vec3 position; // 3D position of each vertex (4) of the quad in object space
attribute vec2 uv; // UV of each vertex (4) of the quad
varying vec3 vPosition;
varying vec2 vUV;
varying vec3 vCornerPositionVS;
void main(void) {
    vPosition = position;
    vUV = uv;

    // Map current vertex with associated frustum corner position in view space:
    // 0: top left, 1: top right, 2: bottom right, 3: bottom left
    // This frustum corner position will be interpolated so that the pixel shader
    // always has a ray from camera->far-clip plane.
    vCornerPositionVS = vec3(0.0);
    if (positionVS.x > 0.0) {
        if (positionVS.y <= 0.0) { // top left
            vCornerPositionVS = farCorners[0];
        }
        else if (positionVS.y > 0.0) { // top right
            vCornerPositionVS = farCorners[1];
        }
    }
    else if (positionVS.x <= 0.0) {
        if (positionVS.y > 0.0) { // bottom right
            vCornerPositionVS = farCorners[2];
        }
        else if (positionVS.y <= 0.0) { // bottom left
            vCornerPositionVS = farCorners[3];
        }
    }

    gl_Position = vec4(position * 2.0, 1.0); // 2D position of each vertex
}
// ---------------------------- Fragment Shader ----------------------------
precision highp float;
uniform mat4 projection; // Projection matrix
uniform float radius; // Scaling factor for sample position, by default = 1.7
uniform float depthBias; // 1e-5
uniform vec2 noiseScale; // (SSAOSize.x / noiseSize, SSAOSize.y / noiseSize), with noiseSize = 4
varying vec3 vCornerPositionVS; // vCornerPositionVS is the interpolated position calculated from the 4 far corners
void main() {
    // Get linear depth in [0,1] with texture2D(depthBufferTexture, vUV)
    float fragDepth = getDepth(depthBufferTexture, vUV);

    float occlusion = 0.0;
    if (fragDepth < 1.0) {
        // Retrieve fragment's view space normal
        vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]

        // Random rotation: rvec.xyz are the components of the generated random vector
        vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0; // [-1,1]
        rvec.z = 0.0; // Random rotation around Z axis

        // Get view ray, from camera to far plane, scaled by 1/far so that viewRayVS.z == 1.0
        vec3 viewRayVS = vCornerPositionVS / far;

        // Current fragment's view space position
        vec3 fragPositionVS = viewRayVS * fragDepth;

        // Creation of TBN matrix
        vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
        vec3 bitangent = cross(normal, tangent);
        mat3 tbn = mat3(tangent, bitangent, normal);

        for (int i = 0; i < NB_SAMPLES; i++) {
            // Get sample kernel position, from tangent space to view space
            vec3 samplePosition = tbn * kernelSamples[i];

            // Add VS kernel offset sample to fragment's VS position
            samplePosition = samplePosition * radius + fragPositionVS;

            // Project sample position from view space to screen space:
            vec4 offset = vec4(samplePosition, 1.0);
            offset = projection * offset;       // To clip space
            offset.xy /= offset.w;              // Perspective division
            offset.xy = offset.xy * 0.5 + 0.5;  // [-1,1] -> [0,1]

            // Get current sample depth:
            float sampleDepth = getDepth(depthTexture, offset.xy);

            float rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;
            // Reminder: fragDepth == fragPositionVS.z

            // Range check and accumulate if fragment contributes to occlusion:
            occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;
        }
    }

    // Inversion
    float ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
    gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
}
Afterwards, a horizontal and a vertical Gaussian blur shader clear the noise introduced by the random texture.
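Each blur pass is roughly of this form (a sketch with assumed names ssaoTexture and texelSize; the vertical pass just swaps the offset to the Y axis):
precision highp float;

uniform sampler2D ssaoTexture;   // assumed: the raw SSAO result from the pass above
uniform vec2 texelSize;          // (1/SSAOSize.x, 1/SSAOSize.y)
varying vec2 vUV;

void main() {
    // 9-tap separable Gaussian, horizontal pass
    float weights[5];
    weights[0] = 0.227027; weights[1] = 0.194595; weights[2] = 0.121622;
    weights[3] = 0.054054; weights[4] = 0.016216;

    float result = texture2D(ssaoTexture, vUV).r * weights[0];
    for (int i = 1; i < 5; i++) {
        vec2 offset = vec2(float(i) * texelSize.x, 0.0);   // use .y for the vertical pass
        result += texture2D(ssaoTexture, vUV + offset).r * weights[i];
        result += texture2D(ssaoTexture, vUV - offset).r * weights[i];
    }
    gl_FragColor = vec4(vec3(result), 1.0);
}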
My parameters are:
NB_SAMPLES = 16
radius = 1.7
depthBias = 1e-5
power = 1.0
Here is the result:
The result has artifacts on its edges, and the close shadows are not very strong... Would anyone see something wrong or weird in my code?
Thanks a lot!
fragPositionVS is a position in view space coordinates, and radius is a length in view space. You use them to calculate the samplePosition:
samplePosition = samplePosition * radius + fragPositionVS;
But in the line rangeCheck = abs(fragDepth - sampleDepth) < radius ? 1.0 : 0.0;, you compare the difference of fragDepth and sampleDepth with radius. That makes no sense, since fragDepth and sampleDepth are values from the depth buffer in the range [0, 1], while radius is a length in view space.
In the line occlusion += (samplePosition.z - sampleDepth >= depthBias ? 1.0 : 0.0) * rangeCheck;, you calculate the difference of samplePosition.z and sampleDepth. While samplePosition.z is a view space coordinate between -near and -far, sampleDepth is a depth in the range [0, 1]. Calculating a difference between these two values doesn't make any sense either.
I suggest always using view-space Z coordinates if you want to calculate or compare distances.
If you have a depth value, the Z-coordinate in view space can be calculated by converting the depth value to normalized device coordinate and converting the normalized device coordinate to a view coordinate:
float DepthToZ( in float depth )
{
    float near = .... ;   // distance to near plane (absolute value)
    float far  = .... ;   // distance to far plane (absolute value)

    float z_ndc = 2.0 * depth - 1.0;
    float z_eye = 2.0 * near * far / (far + near - z_ndc * (far - near));
    return -z_eye;
}
The depth is a value in the range [0, 1] that maps the range from the distance to the near plane to the distance to the far plane (in view space), but not linearly (for a perspective projection).
For this reason, the code line vec3 fragPositionVS = (vCornerPositionVS / far) * fragDepth; will not calculate a correct fragment position, but you can do it like this:
vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );
Note that in view space the z axis points out of the viewport. If the corner positions are set up in view space, then the Z coordinate has to be the negative distance to the far plane:
var topLeft = new BABYLON.Vector3(-xFarPlane, yFarPlane, -far);
var topRight = new BABYLON.Vector3( xFarPlane, yFarPlane, -far);
var bottomRight = new BABYLON.Vector3( xFarPlane, -yFarPlane, -far);
var bottomLeft = new BABYLON.Vector3(-xFarPlane, -yFarPlane, -far);
In the vertex shader, the assignment of the corner positions is mixed up. The lower left position of the viewport is (-1,-1) and the top right position is (1,1) (in normalized device coordinates). Adapt the code like this:
JavaScript:
var farCornersVec = [bottomLeft, bottomRight, topLeft, topRight];
Vertex shader:
// bottomLeft=0*2+0*1, bottomRight=0*2+1*1, topLeft=1*2+0*1, topRight=1*2+1*1;
int i = (positionVS.y > 0.0 ? 2 : 0) + (positionVS.x > 0.0 ? 1 : 0);
vCornerPositionVS = farCorners[i];
Note that if you could add an additional vertex attribute for the corner position, this would be simplified.
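For example, a sketch of such a variant, where cornerPosition is a hypothetical extra attribute filled from the JavaScript side with the far corner belonging to each of the four quad vertices:
precision highp float;

attribute vec3 position;        // quad vertex position
attribute vec2 uv;              // quad vertex UV
attribute vec3 cornerPosition;  // hypothetical extra attribute: matching far corner

varying vec2 vUV;
varying vec3 vCornerPositionVS;

void main(void) {
    vUV = uv;
    vCornerPositionVS = cornerPosition;   // no branching needed
    gl_Position = vec4(position * 2.0, 1.0);
}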
The calculation of the fragment position can be simplified, if the aspect ratio, the field of view angle and the normalized device coordinates of the fragment (fragment position in range [-1,1]) are known:
ndc_xy = vUV * 2.0 - 1.0;
tanFov_2 = tan( radians( fov / 2 ) )
aspect = vp_size_x / vp_size_y
fragZ = DepthToZ( fragDepth );
fragPos = vec3( ndc_xy.x * aspect * tanFov_2, ndc_xy.y * tanFov_2, -1.0 ) * abs( fragZ );
If the perspective projection matrix is known, this can be calculated easily:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec4 viewH = inverse( projection ) * vec4( ndc_xy, fragDepth * 2.0 - 1.0, 1.0 );
vec3 fragPosition = viewH.xyz / viewH.w;
If the perspective projection is symmetric (the field of view is not displaced and the Z axis of the view space is in the center of the viewport), this can be simplified:
vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
vec3 fragPosition = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs(DepthToZ(fragDepth));
See also:
How to recover view space position given view space depth value and ndc xy
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
I suggest writing the fragment shader somewhat like this:
float fragDepth = getDepth(depthBufferTexture, vUV);

float ambientOcclusion = 1.0;
if (fragDepth > 0.0)
{
    vec3 normal = getNormalFromDepthValue(fragDepth); // in [-1,1]
    vec3 rvec = texture2D(randomSampler, vUV * noiseScale).rgb * 2.0 - 1.0;
    rvec.z = 0.0;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    mat3 tbn = mat3(tangent, cross(normal, tangent), normal);

    vec2 ndc_xy = vUV.xy * 2.0 - 1.0;
    vec3 fragPositionVS = vec3( ndc_xy.x / projection[0][0], ndc_xy.y / projection[1][1], -1.0 ) * abs( DepthToZ(fragDepth) );
    // vec3 fragPositionVS = vCornerPositionVS * abs( DepthToZ(fragDepth) / far );

    float occlusion = 0.0;
    for (int i = 0; i < NB_SAMPLES; i++)
    {
        vec3 samplePosition = fragPositionVS + radius * tbn * kernelSamples[i];

        // Project sample position from view space to screen space:
        vec4 offset = projection * vec4(samplePosition, 1.0);
        offset.xy /= offset.w;             // Perspective division -> [-1,1]
        offset.xy = offset.xy * 0.5 + 0.5; // [-1,1] -> [0,1]

        // Get current sample depth
        float sampleZ = DepthToZ( getDepth(depthTexture, offset.xy) );

        // Range check and accumulate if fragment contributes to occlusion:
        float rangeCheck = step( abs(fragPositionVS.z - sampleZ), radius );
        occlusion += step( samplePosition.z - sampleZ, -depthBias ) * rangeCheck;
    }

    // Inversion
    ambientOcclusion = 1.0 - (occlusion / float(NB_SAMPLES));
    ambientOcclusion = pow(ambientOcclusion, power);
}

gl_FragColor = vec4(vec3(ambientOcclusion), 1.0);
See the WebGL example, which demonstrates the full algorithm (unfortunately the full code would exceed the 30,000-character limit of an answer):
JSFiddle or GitHub
Extension to the answer
The depth as it would be stored in the depth buffer is calculated like this:
(see OpenGL ES write depth data to color)
float ndc_depth = vPosPrj.z / vPosPrj.w;
float depth = ndc_depth * 0.5 + 0.5;
This value is already calculated in the fragment shader and is contained in gl_FragCoord.z. See the Khronos Group reference page for gl_FragCoord which says:
The z component is the depth value that would be used for the fragment's depth if no shader contained any writes to gl_FragDepth.
If the depth has to be stored in a RGBA8 buffer, the depth has to be encoded to the 4 bytes of the buffer to avoid a loss of accuracy, and has to be decoded when read from the buffer:
encode
vec3 PackDepth( in float depth )
{
    float depthVal = depth * (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( depthVal * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return encode.xyz - encode.yzw / 256.0 + 1.0/512.0;
}
decode
float UnpackDepth( in vec3 pack )
{
    float depth = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    return depth * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
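Used together, the write and the read side then look roughly like this (a sketch; depthTexture stands for the sampler holding the encoded depth):
// In the depth pass fragment shader: encode the depth into the color target
gl_FragColor = vec4( PackDepth( gl_FragCoord.z ), 1.0 );

// When sampling that texture later (e.g. inside a getDepth() helper): decode it again
float fragDepth = UnpackDepth( texture2D( depthTexture, texcoord ).rgb );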
See also the answers to the following questions:
How do I convert between float and vec4,vec3,vec2?
OpenGL ES write depth data to color
How do you pack one 32bit int Into 4, 8bit ints in glsl / webgl?
I'm trying to add a fog effect to my scene in OpenGL 3.3. I tried following this tutorial. However, I can't seem to get the same effect on my screen. All that seems to happen is that my objects get darker, but there's no gray foggy mist on the screen. What could be the problem?
Here's my result.
And here's what it should look like:
Here's my Fragment Shader with multiple light sources. It works fine without any fog. All GLSL variables are set and working correctly.
for (int i = 0; i < NUM_LIGHTS; i++)
{
    float distance = length(lightVector[i]);
    vec3 l;

    // point light
    attenuation = 1.0 / (gLight[i].attenuation.x +
                         gLight[i].attenuation.y * distance +
                         gLight[i].attenuation.z * distance * distance);
    l = normalize( vec3(lightVector[i]) );

    float cosTheta = clamp( dot( n, l ), 0, 1 );

    vec3 E = normalize(eyeVector);
    vec3 R = reflect( -l, n );
    float cosAlpha = clamp( dot( E, R ), 0, 1 );

    vec3 MaterialDiffuseColor = v_color * materialCoefficients.diffuse;
    vec3 MaterialAmbientColor = v_color * materialCoefficients.ambient;

    lighting += vec3(
        MaterialAmbientColor
        + ( MaterialDiffuseColor * gLight[i].color * cosTheta * attenuation )
        + ( materialCoefficients.specular * gLight[i].color * pow(cosAlpha, materialCoefficients.shininess) )
    );
}

float fDiffuseIntensity = max(0.0, dot(normalize(normal), -gLight[0].position.xyz));
color = vec4(lighting, 1.0f) * vec4(gLight[0].color * (materialCoefficients.ambient + fDiffuseIntensity), 1.0f);

float fFogCoord = abs(eyeVector.z / 1.0f);
color = mix(color, fogParams.vFogColor, getFogFactor(fogParams, fFogCoord));
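For reference, getFogFactor and fogParams follow the tutorial; roughly, they look like this (sketched from memory, so the exact struct layout may differ):
// Sketch of a typical getFogFactor with a FogParameters struct; linear fog uses
// a start/end distance, the exponential variants use a density instead.
struct FogParameters
{
    vec4 vFogColor;   // fog color
    float fStart;     // linear fog start distance
    float fEnd;       // linear fog end distance
    float fDensity;   // density for exp / exp2 fog
    int iEquation;    // 0 = linear, 1 = exp, 2 = exp2
};
uniform FogParameters fogParams;

float getFogFactor(FogParameters params, float fFogCoord)
{
    float fResult = 0.0;
    if (params.iEquation == 0)
        fResult = (params.fEnd - fFogCoord) / (params.fEnd - params.fStart);
    else if (params.iEquation == 1)
        fResult = exp(-params.fDensity * fFogCoord);
    else
        fResult = exp(-pow(params.fDensity * fFogCoord, 2.0));
    // 0.0 = no fog, 1.0 = fully fogged
    return 1.0 - clamp(fResult, 0.0, 1.0);
}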
Two things.
First, you should verify that your fogParams.vFogColor value is getting set correctly. The simplest way to do this is to short-circuit the shader: set color to fogParams.vFogColor and return immediately. If the scene renders black, then you know your fog color isn't being sent to the shader correctly.
Second, you need to eliminate your skybox. You can simply set glClearColor() to the fog color and not use a skybox at all, since everywhere the skybox would be visible you should be seeing fog instead, right? A more advanced approach could modify the skybox shader to blend from the fog color to the skybox texture depending on the angle of the view vector off the horizontal, so that the sky is (somewhat) visible when looking up while looking horizontally shows only fog, with a smooth transition between the two.
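A sketch of that skybox blend could look like this (all names here are placeholders, not taken from your code):
#version 330 core

in vec3 vViewDirection;            // direction from the camera through this fragment
out vec4 outColor;

uniform samplerCube skyboxSampler; // the skybox cubemap
uniform vec4 vFogColor;            // the same color as the scene fog

void main()
{
    vec3 dir = normalize(vViewDirection);
    vec4 skyColor = texture(skyboxSampler, dir);
    // dir.y is 0 at the horizon and 1 straight up; the 0.0..0.25 band controls
    // how quickly the fog gives way to the sky.
    float skyAmount = smoothstep(0.0, 0.25, max(dir.y, 0.0));
    outColor = mix(vFogColor, skyColor, skyAmount);
}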
I was under the assumption that normal mapping should eliminate the visibility of triangles on a mesh, as lighting will be calculated based on unique normals per fragment instead of per vertex. As you can see in the image below, the normal map is definitely working but triangles are still visible. Is this an error?
I compute tangents as follows:
vec3 vert1( vertices[a+1] - vertices[a] );
vec3 vert2( vertices[a+2] - vertices[a] );
vec2 uv1( uvs[a+1] - uvs[a] );
vec2 uv2( uvs[a+2] - uvs[a] );
float r = (uv1.x * uv2.y) - (uv1.y * uv2.x);
vec3 tangent(vert1 * uv2.y - vert2 * uv1.y)*r;
Vertex Shader :
mat3 TBN_MATRIX;
TBN_MATRIX[0] = (MODEL_MATRIX * vec4( tangent,0 )).xyz;
TBN_MATRIX[2] = (MODEL_MATRIX * vec4( normal,0 )).xyz;
TBN_MATRIX[1] = cross( TBN_MATRIX[2], TBN_MATRIX[0] );
Fragment Shader :
fragment_normal = normalize( TBN_MATRIX * vec3(( 2 * texture( normal_map, uv_coordinates ).rgb ) - 1.0 ) );
My first thought is that a cross product is somehow not enough for the bitangent?
I am trying to create a rectangular, sharp-edged light source in OpenGL for an application. My idea is to create a spot light and somehow mask the shape of the shade into a rectangle; the mask of course has to be invisible to the camera. When I was trying to implement this idea, it turned out that OpenGL will just skip rendering objects outside the camera, although a light source outside the camera is still valid. This has prevented me from creating the effect I wanted, and I am wondering if any of you have come across similar problems before.
To make my question more specific, consider the following case of my question:
spot light at 0,0,5
target object at 0,0,0
mask object (a simple quad parallel to x-axis) at 0,0,3.
When the camera is at 0,0,4, light passes through the mask object and leaves a rectangular shape on the target object (which is what I wanted), but I can also see the mask object! (while I need the mask object to be invisible)
When I move the camera closer to the target object, say 0,0,2, the mask object is behind the camera and therefore invisible. However, since it's invisible, OpenGL stops rendering it, the mask object no longer has any effect on the target object, and the light's shape is still round!
My guess would be to start from a spot light, but separating the angle calculation:
* Project the L vector on the YZ plane to calculate the angle on the X axis
* Project the L vector on the XZ plane to calculate the angle on the Y axis
A very naive implementation of this could be (GLSL):
varying vec3 v_V; // World-space position
varying vec3 v_N; // World-space normal
uniform float time; // global time in seconds since shaderprogram link
uniform vec2 uSpotSize; // Spot size, on X and Y axes
vec3 lp = vec3(0.0, 0.0, 7.0 + cos(time) * 5.0); // Light world-space position
vec3 lz = vec3(0.0, 0.0, -1.0); // Light direction (Z vector)
// Light radius (for attenuation calculation)
float lr = 3.0;
void main()
{
    // Calculate L, the vector from model surface to light
    vec3 L = lp - v_V;

    // Project L on the YZ / XZ plane
    vec3 LX = normalize(vec3(L.x, 0.0, L.z));
    vec3 LY = normalize(vec3(0.0, L.y, L.z));

    // Calculate the angle on X and Y axis using projected vectors just above
    float ax = dot(LX, -lz);
    float ay = dot(LY, -lz);

    // Light attenuation
    float d = distance(lp, v_V);
    float attenuation = 1.0 / (1.0 + (2.0/lr)*d + (1.0/(lr*lr))*d*d);

    float shaded = max(0.0, dot(v_N, L)) * attenuation;

    if(ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
        gl_FragColor = vec4(shaded); // Inside the light influence zone, light it up!
    else
        gl_FragColor = vec4(0.1);    // Outside the light influence zone.
}
Again, this is very naive. For instance, the X/Y projection is done in world-space. If you want to be able to rotate the light rectangle, you might have to introduce a vector pointing to the right of the light.
Thus, you'll be able to get the fragment coordinate in the light's coordinate frame, and with this, you can decide whether to shade the fragment or not.
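For instance, a sketch of the same shader with the angle test done in the light's own frame (lightRight is an assumed extra uniform giving the light's right axis; the time animation and attenuation are left out for brevity):
varying vec3 v_V;                // World-space position
varying vec3 v_N;                // World-space normal
uniform vec2 uSpotSize;          // Spot size, on the light's X and Y axes
uniform vec3 lightRight;         // Assumed: unit vector pointing to the light's right

vec3 lp = vec3(0.0, 0.0, 7.0);   // Light world-space position
vec3 lz = vec3(0.0, 0.0, -1.0);  // Light direction (Z vector)

void main()
{
    vec3 lx = normalize(lightRight);
    vec3 ly = normalize(cross(lz, lx));   // Light up axis
    vec3 L  = lp - v_V;                   // Surface -> light, in world space

    // Express L in the light's coordinate frame:
    vec3 Ll = vec3(dot(L, lx), dot(L, ly), dot(L, lz));

    // Angles measured against the light's own X and Y axes (compare with ax/ay above):
    float ax = dot(normalize(vec3(Ll.x, 0.0, Ll.z)), vec3(0.0, 0.0, -1.0));
    float ay = dot(normalize(vec3(0.0, Ll.y, Ll.z)), vec3(0.0, 0.0, -1.0));

    float shaded = max(0.0, dot(normalize(v_N), normalize(L)));  // Simple diffuse term

    if (ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
        gl_FragColor = vec4(shaded);      // Inside the rectangular influence zone
    else
        gl_FragColor = vec4(0.1);         // Outside
}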
One solution might be adapting the calculations used for projective texture lookups to simulate a rectangular light source. You did not specify which OpenGL version you're using, but projective texture lookups can even be achieved with the fixed-function pipeline, although they're arguably easier to do in a shader.
Of course, this would not simulate a rectangular area light source, just a point light source that is constrained to a rectangular region.
Using this approach, you'd have to specify view & projection matrices for the light source, where the view matrix is essentially generated by a 'look at' from the light position along its direction, and the projection matrix encodes a perspective projection with your desired horizontal & vertical 'field of view'.
If you just want a rectangular area, you don't even need a texture; a simple vertex/fragment shader pair could look like this:
(The vertex shader basically transforms the position to the light's clip space; the fragment shader performs the clipping & computes a Lambert shading if the fragment is inside the light frustum.)
#version 330 core
layout ( location = 0 ) in vec3 vertexPosition;
layout ( location = 1 ) in vec3 vertexNormal;
layout ( location = 3 ) in vec3 vertexDiffuse;
uniform mat4 modelTf;
uniform mat3 normalTf;
uniform mat4 viewTf; // view matrix for render camera
uniform mat4 projectiveTf; // projection matrix for render camera
uniform mat4 viewTf_lightCam; // view matrix of light source
uniform mat4 projectiveTf_lightCam; // projective matrix of light source
uniform vec4 lightPosition_worldSpace;
out vec3 diffuseColor;
out vec3 normal_worldSpace;
out vec3 toLight_worldSpace;
out vec4 position_lightClipSpace;
void main()
{
    diffuseColor = vertexDiffuse;
    vec4 vertexPosition_worldSpace = modelTf * vec4( vertexPosition, 1.0 );
    normal_worldSpace = normalTf * vertexNormal;
    toLight_worldSpace = normalize( lightPosition_worldSpace - vertexPosition_worldSpace ).xyz;
    position_lightClipSpace = projectiveTf_lightCam * viewTf_lightCam * vertexPosition_worldSpace;
    gl_Position = projectiveTf * viewTf * vertexPosition_worldSpace;
}
#version 330 core
layout ( location=0 ) out vec4 fragColor;
in vec3 diffuseColor;
in vec3 normal_worldSpace;
in vec3 toLight_worldSpace;
in vec4 position_lightClipSpace;
uniform vec3 ambientLight;
void main()
{
    // clipping against the light frustum
    bool isInsideX = ( position_lightClipSpace.x <= position_lightClipSpace.w && position_lightClipSpace.x >= -position_lightClipSpace.w );
    bool isInsideY = ( position_lightClipSpace.y <= position_lightClipSpace.w && position_lightClipSpace.y >= -position_lightClipSpace.w );
    bool isInsideZ = ( position_lightClipSpace.z <= position_lightClipSpace.w && position_lightClipSpace.z >= -position_lightClipSpace.w );
    bool isInside = isInsideX && isInsideY && isInsideZ;

    vec3 N = normalize( normal_worldSpace );
    vec3 L = normalize( toLight_worldSpace );
    vec3 lightColor = isInside ? max( dot( N, L ), 0.0 ) * vec3( 0.99, 0.66, 0.33 ) : vec3( 0.0 );
    fragColor = vec4( clamp( ( ambientLight + lightColor ) * diffuseColor, vec3( 0.0 ), vec3( 1.0 ) ), 1.0 );
}
There are a lot of good papers on this. Brian Karis wrote about it in 2013 (with regard to UE4) here:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
And more recently Michal Drobot wrote an article about area lights in GPU Pro 5.
If you are using a metalness workflow you can also crank up the roughness as an approximation to area lighting, a technique introduced by Tri-Ace:
http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/