I'm attempting to implement normal mapping in my GLSL shaders for the first time. I've written an ObjLoader that calculates the tangents and bitangents; I then pass the relevant information to my shaders (I'll show the code in a bit). However, when I run the program, my models end up looking like this:
Looks great, I know, but not quite what I am trying to achieve!
I understand that I should be simply calculating direction vectors and not moving the vertices - but it seems somewhere down the line I end up making that mistake.
I am unsure if I am making the mistake when reading my .obj file and calculating the tangent/bitangent vectors, or if the mistake is happening within my Vertex/Fragment Shader.
Now for my code:
In my ObjLoader - when I come across a face, I calculate the deltaPositions and deltaUv vectors for all three vertices of the face - and then calculate the tangent and bitangent vectors:
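In outline, that calculation looks like this (a minimal sketch in GLSL-style vector math with illustrative names; the actual C++ loader code isn't shown here):
// Sketch: tangent/bitangent for one face with positions p0..p2 and UVs uv0..uv2
vec3 deltaPos1 = p1 - p0;
vec3 deltaPos2 = p2 - p0;
vec2 deltaUv1  = uv1 - uv0;
vec2 deltaUv2  = uv2 - uv0;
float r = 1.0 / (deltaUv1.x * deltaUv2.y - deltaUv1.y * deltaUv2.x);
vec3 tangent   = (deltaPos1 * deltaUv2.y - deltaPos2 * deltaUv1.y) * r;
vec3 bitangent = (deltaPos2 * deltaUv1.x - deltaPos1 * deltaUv2.x) * r;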
I then organize the vertex data collected to construct my list of indices - and in that process I restructure the tangent and bitangent vectors to respect the newly constructed index list.
Lastly - I perform orthogonalization and calculate the final bitangent vector.
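That step is the usual Gram-Schmidt orthogonalization (again a minimal sketch in GLSL-style vector math, with illustrative names rather than my actual loader code):
// Sketch: make the tangent orthogonal to the normal, then rebuild the bitangent
vec3 t = normalize(tangent - normal * dot(normal, tangent));
float handedness = (dot(cross(normal, t), bitangent) < 0.0) ? -1.0 : 1.0;
vec3 b = cross(normal, t) * handedness;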
After binding the VAO, VBO, IBO, and passing all the information respectively - my shader calculations are as follows:
Vertex Shader:
void main()
{
// Output position of the vertex, in clip space
gl_Position = MVP * vec4(pos, 1.0);
// Position of the vertex, in world space
v_Position = (M * vec4(pos, 0.0)).xyz;
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
// Vector that goes from the vertex to the camera, in camera space
vec3 vPos_cameraspace = (V * M * vec4(pos, 1.0)).xyz;
camdir_cameraspace = normalize(-vPos_cameraspace);
// Vector that goes from the vertex to the light, in camera space
vec3 lighPos_cameraspace = (V * vec4(lightPos_worldspace, 0.0)).xyz;
lightdir_cameraspace = normalize((lighPos_cameraspace - vPos_cameraspace));
v_TexCoord = texcoord;
lightdir_tangentspace = TBN * lightdir_cameraspace;
camdir_tangentspace = TBN * camdir_cameraspace;
}
Fragment Shader:
void main()
{
// Light Emission Properties
vec3 LightColor = (CalcDirectionalLight()).xyz;
float LightPower = 20.0;
// Cutting out texture 'black' areas of texture
vec4 tempcolor = texture(AlbedoTexture, v_TexCoord);
if (tempcolor.a < 0.5)
discard;
// Material Properties
vec3 MaterialDiffuseColor = tempcolor.rgb;
vec3 MaterialAmbientColor = material.ambient * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3(0, 0, 0);
// Local normal, in tangent space
vec3 TextureNormal_tangentspace = normalize(texture(NormalTexture, v_TexCoord)).rgb;
TextureNormal_tangentspace = (TextureNormal_tangentspace * 2.0) - 1.0;
// Distance to the light
float distance = length(lightPos_worldspace - v_Position);
// Normal of computed fragment, in camera space
vec3 n = TextureNormal_tangentspace;
// Direction of light (from the fragment)
vec3 l = normalize(TextureNormal_tangentspace);
// Find angle between normal and light
float cosTheta = clamp(dot(n, l), 0, 1);
// Eye Vector (towards the camera)
vec3 E = normalize(camdir_tangentspace);
// Direction in which the triangle reflects the light
vec3 R = reflect(-l, n);
// Find angle between eye vector and reflect vector
float cosAlpha = clamp(dot(E, R), 0, 1);
color =
MaterialAmbientColor +
MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance * distance) +
MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha, 5) / (distance * distance);
}
I can spot one obvious mistake in your code. TBN is generated from the bitangent, tangent and normal. While the bitangent and tangent are transformed from model space to view space, the normal is not transformed. That does not make sense: all three vectors have to be related to the same coordinate system:
vec4 bitan = V * M * vec4(bitangent, 0.0);
vec4 tang = V * M * vec4(tangent, 0.0);
vec4 norm = V * M * vec4(normal, 0.0);
mat3 TBN = transpose(mat3(tang.xyz, bitan.xyz, norm.xyz));
Related
I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as usual, with a size depending on depth. In the fragment shader I then wanted to draw a very simple ray-traced sphere for each one, with just the shadow that is on the side of the sphere opposite to the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color; //color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix; //just the camera matrix
out vec4 col; // color of vertex
out vec4 posi; // position of vertex
void main() {
vec4 p = view_matrix * vec4(position.xyz, 1.0);
gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
gl_Position = p;
col = color;
posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col; // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
vec3 oc = ro - sph.xyz;
float b = dot( oc, rd );
float c = dot( oc, oc ) - sph.w*sph.w;
float h = b*b - c;
if( h<0.0 ) return -1.0;
return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
return normalize(pos-sph.xyz);
}
void main() {
vec4 c = clamp(col, 0.0, 1.0);
vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
vec3 ro = vec3(0.0, 0.0, -960.0 );
vec3 rd = normalize(vec3(p.x, p.y,960.0));
vec3 lig = normalize(vec3(0.6,0.3,0.1));
vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
float t = sphIntersect(ro, rd, k);
vec3 ps = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
if(t < 0.0) c = vec4(1.0);
else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
f_color = c;
gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (a uniform) so you can create a ray direction and start position for each fragment. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create BBOX in Geometry shader and convert your POINT to QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders see:
rendering cubics in GLSL
where I emit a sequence of OBBs from the input lines...
In the fragment shader, raytrace the sphere
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). If there is no intersection you have to discard the fragment (discard;)!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments that have no intersection, so you overwrite already-rendered content around the last rendered spheres with the background color, which leaves only a single sphere visible per QUAD regardless of how many spheres are really there...
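A minimal sketch of that raytrace-and-discard structure in the fragment shader, reusing the sphIntersect, sphNormal, ro, rd, col and lig from the question (sphere here stands for the vec4 centre-plus-radius, the k in your code, assumed to already be in the same space as ro and rd):
float t = sphIntersect(ro, rd, sphere);
if (t < 0.0)
    discard;                             // no hit: keep whatever was rendered behind this quad
vec3 hitPos = ro + t * rd;               // closer intersection point
vec3 n = sphNormal(hitPos, sphere);      // surface normal for lighting
f_color = vec4(col.rgb * clamp(dot(n, lig), 0.0, 1.0), 1.0);
// gl_FragDepth should also be derived from hitPos rather than a constant scale of t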
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniform structs can be used.
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2/l2);
float radius = p.w * sqrt(k);
if(l2 < r2) {
p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
radius = p.w;
k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for any 2D shape that you want the billboard to be shaped like for the vertices. Don't forget later to multiply each vertex by a perspective matrix to get the final position of each vertex. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
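As a rough illustration of that last step, the four corners of a camera-facing quad can be built from hx and hy like this (a sketch only; proj_matrix is an assumed uniform holding just the perspective matrix, and the corners would typically be emitted from a geometry shader):
// Sketch: quad corners around the adjusted centre p.xyz
vec3 c0 = p.xyz - hx - hy;
vec3 c1 = p.xyz + hx - hy;
vec3 c2 = p.xyz - hx + hy;
vec3 c3 = p.xyz + hx + hy;
gl_Position = proj_matrix * vec4(c0, 1.0);   // then EmitVertex(), and likewise for c1..c3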
I'm currently learning C++ and OpenGL and was wondering if anyone could walk me through what exactly is happening with the code below. It currently calculates the positioning and resolution of a shadow map within a 3D environment.
The code currently works; I'm just looking to get a grasp on things.
//Vertex Shader Essentials.
Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
Normal = (ViewMatrix * WorldMatrix * vec4 (VertexNormal, 0)).xyz;
EyeSpaceLightPosition = ViewMatrix * LightPosition;
EyeSpacePosition = ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1);
STCoords = VertexST;
//What is this block of code currently doing?
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
ShadowCoord = ShadowCoord / ShadowCoord.w;
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
//Alters the Shadow Map Resolution.
// Please Note - c is a slider that I control in the program execution.
float rounding = (c + 2.1) * 100.0;
ShadowCoord.x = (floor (ShadowCoord.x * rounding)) / rounding;
ShadowCoord.y = (floor (ShadowCoord.y * rounding)) / rounding;
ShadowCoord.z = (floor (ShadowCoord.z * rounding)) / rounding;
gl_Position = Position;
ShadowCoord = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
This calculates the position of this vertex within the eye space of the light. What you're recomputing is what the Position = ProjectionMatrix * ViewMatrix * WorldMatrix * vec4 (VertexPosition, 1); line must have produced back when you were rendering to the shadow buffer.
ShadowCoord = ShadowCoord / ShadowCoord.w;
This applies a perspective projection, figuring out where your shadow coordinate should fall on the light's view plane.
Think about it like this: from the light's point of view the coordinate at (1, 1, 1) should appear on the same spot as the one at (2, 2, 2). For both of those you should sample the same 2d location on the depth buffer. Dividing by w achieves that.
ShadowCoord = (ShadowCoord + vec4 (1.0, 1.0, 1.0, 1.0)) * vec4 (1.0/2.0, 1.0/2.0, 1.0/2.0, 1.0);
This also is about sampling at the right spot. The projection above has the thing in the centre of the light's view — the thing at e.g. (0, 0, 1) — end up at (0, 0). But (0, 0) is the bottom left of the light map, not the centre. This line ensures that the lightmap is taken to cover the area from (-1, -1) across to (1, 1) in the light's projection space.
... so, in total, the code is about mapping from 3d vectors that describe the vector from the light to the point in the light's space, to 2d vectors that describe where the point falls on the light's view plane — the plane that was rendered to produce the depth map.
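That divide-and-remap is often folded into a single "bias" matrix, which can make the intent easier to see. A sketch of the equivalent form, written against the uniforms used above (same mapping of x, y and z from [-1, 1] to [0, 1]):
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0);   // column-major: the last column supplies the +0.5 offset
vec4 shadowClip = ProjectionMatrix * ShadowMatrix * WorldMatrix * vec4 (VertexPosition, 1);
vec3 shadowCoord = (biasMatrix * shadowClip).xyz / shadowClip.w;   // ready for the depth-map lookup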
I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I seem to only get good results when I move along one axis (e.g. left-to-right) while looking at a parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should appear narrower than the left one (which looks correct in the left image) [camera is at the right side of the cube]. However, if I move along a different axis (instead of west to east I now move top to bottom), you can see that this time the steep edges are incorrect [camera is again on the right side of the cube].
I'm using the most simple form of parallax mapping and even that has the same problems. The fragment shader looks like this:
void main()
{
vec2 texCoords = fs_in.TexCoords;
vec3 viewDir = normalize(viewPos - fs_in.FragPos);
vec3 V = normalize(fs_in.TBN * viewDir);
vec3 L = normalize(fs_in.TBN * lightDir);
float height = texture(texture_height, texCoords).r;
float scale = 0.2;
vec2 texCoordsOffset = scale * V.xy * height;
texCoords += texCoordsOffset;
// calculate diffuse lighting
vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
N = normalize(N); // normal already in tangent-space
vec3 ambient = vec3(0.2f);
float diff = clamp(dot(N, L), 0, 1);
vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;
vec3 R = reflect(L, N);
float spec = pow(max(dot(R, V), 0.0), 32);
vec3 specular = vec3(spec);
fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors to tangent space. Without offsetting the TexCoords, the lighting looks solid with the normal-mapped texture, so my guess is that it's not the TBN matrix that's causing the issues. What could be causing this to work in only one direction?
edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);
I am writing a GLSL shader that simulates chromatic aberration for simple objects. I am staying OpenGL 2.0 compatible, so I use the built-in OpenGL matrix stack. This is the simple vertex shader:
uniform vec3 cameraPos;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
normal = gl_NormalMatrix * gl_Normal;
gl_Position = ftransform();
}
The cameraPos uniform is the position of the camera in model space, as one might imagine. Here is the fragment shader:
const float etaR = 1.14;
const float etaG = 1.12;
const float etaB = 1.10;
const float fresnelPower = 2.0;
const float F = ((1.0 - etaG) * (1.0 - etaG)) / ((1.0 + etaG) * (1.0 + etaG));
uniform samplerCube environment;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec3 i = normalize(incident);
vec3 n = normalize(normal);
float ratio = F + (1.0 - F) * pow(1.0 - dot(-i, n), fresnelPower);
vec3 refractR = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaR), 1.0));
vec3 refractG = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaG), 1.0));
vec3 refractB = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaB), 1.0));
vec3 reflectDir = vec3(gl_TextureMatrix[0] * vec4(reflect(i, n), 1.0));
vec4 refractColor;
refractColor.ra = textureCube(environment, refractR).ra;
refractColor.g = textureCube(environment, refractG).g;
refractColor.b = textureCube(environment, refractB).b;
vec4 reflectColor;
reflectColor = textureCube(environment, reflectDir);
vec3 combinedColor = mix(refractColor, reflectColor, ratio);
gl_FragColor = vec4(combinedColor, 1.0);
}
The environment is a cube map that is rendered live from the drawn object's environment.
Under normal circumstances, the shader behaves (I think) as expected, yielding this result:
However, when the camera is rotated 180 degrees around its target, so that it now points at the object from the other side, the refracted/reflected image gets warped like so (this happens gradually for angles between 0 and 180 degrees, of course):
Similar artifacts appear when the camera is lowered/raised; it only seems to behave 100% correctly when the camera is directly over the target object (pointing towards negative Z, in this case).
I am having trouble figuring out which transformation in the shader that is responsible for this warped image, but it should be something obvious related to how cameraPos is handled. What is causing the image to warp itself in this way?
This looks suspect to me:
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
Is your cameraPos defined in world space? You're subtracting a view-space vector (position) from a supposedly world-space cameraPos vector. You either need to do the calculation in world space or in view space, but you can't mix them.
To do this correctly in world space you'll have to upload the model matrix separately to get the world space incident vector.
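A minimal sketch of that world-space variant (modelMatrix is an assumed extra uniform holding just the model transform, and cameraPos is then expected in world space):
uniform mat4 modelMatrix;   // assumed: uploaded separately alongside the fixed-function stack
uniform vec3 cameraPos;     // now a world-space position
varying vec3 incident;
varying vec3 normal;
void main(void) {
    vec4 worldPos = modelMatrix * gl_Vertex;
    incident = worldPos.xyz / worldPos.w - cameraPos;
    // the normal likewise needs a world-space transform instead of gl_NormalMatrix;
    // this is fine as long as the model matrix has no non-uniform scale
    normal = (modelMatrix * vec4(gl_Normal, 0.0)).xyz;
    gl_Position = ftransform();
}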
I know there are a couple of threads on the net about the same problem, but they haven't helped me because my implementation is different.
I'm rendering colors, normals and depth in view space into textures. In the second pass I bind the textures, draw a fullscreen quad and calculate lighting. The directional light seems to work fine, but point lights move with the camera.
I share corresponding shader code:
Lighting step vertex shader
in vec2 inVertex;
in vec2 inTexCoord;
out vec2 texCoord;
void main() {
gl_Position = vec4(inVertex, 0, 1.0);
texCoord = inTexCoord;
}
Lighting step fragment shader
float depth = texture2D(depthBuffer, texCoord).r;
vec3 normal = texture2D(normalBuffer, texCoord).rgb;
vec3 color = texture2D(colorBuffer, texCoord).rgb;
vec3 position;
position.z = -nearPlane / (farPlane - (depth * (farPlane - nearPlane))) * farPlane;
position.x = ((gl_FragCoord.x / width) * 2.0) - 1.0;
position.y = (((gl_FragCoord.y / height) * 2.0) - 1.0) * (height / width);
position.x *= -position.z;
position.y *= -position.z;
normal = normalize(normal);
vec3 lightVector = lightPosition.xyz - position;
float dist = length(lightVector);
lightVector = normalize(lightVector);
float nDotL = max(dot(normal, lightVector), 0.0);
vec3 halfVector = normalize(lightVector - position);
float nDotHV = max(dot(normal, halfVector), 0.0);
vec3 lightColor = lightAmbient;
vec3 diffuse = lightDiffuse * nDotL;
vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
lightColor += diffuse + specular;
float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
gl_FragColor = vec4(vec3(color * lightColor * attenuation), 1.0);
I send the light attributes to the shader as uniforms:
shader->set("lightPosition", (viewMatrix * modelMatrix).inverse().transpose() * vec4(0, 10, 0, 1.0));
viewMatrix is the camera matrix and modelMatrix is just the identity here.
Why are the point lights translating with the camera and not with the models?
Any suggestions are welcome!
In addition to Nobody's comment that all the vectors you compute with have to be normalized, you have to make sure that they all are in the same space. If you use the view space position as view vector, the normal vector has to be in view space, too (has to be transformed by the inverse transpose modelview matrix before getting written into the G-buffer in the first pass). And the light vector has to be in view space, too. Therefore you have to transform the light position by the view matrix (or the modelview matrix, if the light position is not in world space), instead of its inverse transpose.
shader->set("lightPosition", viewMatrix * modelMatrix * vec4(0, 10, 0, 1.0));
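For the normal part of that advice, the first (geometry) pass would write view-space normals into the G-buffer, roughly like this (a sketch only; normalMatrix, inNormal, viewNormal and the gl_FragData index are assumed names, not your actual code):
// geometry-pass vertex shader:
viewNormal = normalMatrix * inNormal;                 // normalMatrix = inverse transpose of the modelview matrix
// geometry-pass fragment shader:
gl_FragData[1] = vec4(normalize(viewNormal), 0.0);    // whichever attachment backs normalBuffer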
EDIT: For the directional light the inverse transpose is actually a good idea if you specify the light direction as the direction to the light (like vec4(0, 1, 0, 0) for a light pointing in the -z direction).