I am following the tutorial http://www.mbsoftworks.sk/tutorials/opengl3/ and trying to compile the 10th example.
Everything works fine except for the place where I send the (projection matrix * modelview matrix) product to the shader.
Here is the place where I send the matrix:
//...
// render.cpp
glm::mat4 projectionMatrix = *(oglControl->getProjectionMatrix());
glm::mat4 cam = glm::translate(mModelView, cCamera.vEye);
auto newM = projectionMatrix * cam;
spDirectionalLight.setUniform("projectionMatrixMulModelViewMatrix",&newM);
//...
//...
// setUniform implementation
void CShaderProgram::setUniform(string sName, glm::mat4* mMatrices, int iCount)
{
int iLoc = glGetUniformLocation(uiProgram, sName.c_str());
glUniformMatrix4fv(iLoc, iCount, GL_FALSE, (GLfloat*)mMatrices);
}
//...
and at the end mMatrices contains the values shown in the debugger dump (not reproduced here).
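As a side note (my own addition, not from the original post): one quick way to confirm on the CPU side whether the uploaded matrix already contains NaNs is to check each column with GLM before calling setUniform. The names below mirror the snippet above.
#include <glm/glm.hpp>
// Sketch: returns true if any element of the matrix is NaN.
bool hasNaN(const glm::mat4& m)
{
    for (int c = 0; c < 4; ++c)
        if (glm::any(glm::isnan(m[c]))) // glm::isnan is applied per component of the column
            return true;
    return false;
}
// Usage before the upload:
// if (hasNaN(newM)) { /* log it and skip the draw */ }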
Shader code
#version 330 core
uniform mat4 projectionMatrixMulModelViewMatrix;
uniform mat4 normalMatrix;
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
out vec2 texCoord;
smooth out vec3 vNormal;
void main()
{
gl_Position = projectionMatrixMulModelViewMatrix*vec4(inPosition, 1.0);
texCoord = inCoord;
vec4 vRes = normalMatrix*vec4(inNormal, 0.0);
vNormal = vRes.xyz;
}
The result is a blank screen. The RenderDoc debugger tells me that gl_Position is completely NaN.
RenderDoc screenshot
When I send glm::mat4(1) instead, I get a valid result.
Why does the shader get a NaN vector after the multiplication?
It seems VS2017 tricked me. I hadn't initialized a variable deep inside the code:
glm::mat4 mModelView = cCamera.look();
glm::mat4 CFlyingCamera::look()
{
glm::mat4 result = glm::mat4(1.0f);
result = glm::lookAt(vEye, vView, vUp);
return result;
}
and here is my mistake:
void CFlyingCamera::update()
{
// Change camera view direction
rotateWithMouse();
// Get view direction
glm::vec3 vMove = vView-vEye;
vMove = glm::normalize(vMove);
vMove *= fSpeed;
// Get normal to view direction vector
glm::vec3 vStrafe = glm::cross(vView-vEye, vUp);
vStrafe = glm::normalize(vStrafe);
vStrafe *= fSpeed;
int iMove = 0;
////
////
///// ERROR HERE, vMoveBy isn't initialized
glm::vec3 vMoveBy;
/////
////
////
// Get vector of move
if(Keys::key(iForw))vMoveBy += vMove*appMain.sof(1.0f);
if(Keys::key(iBack))vMoveBy -= vMove*appMain.sof(1.0f);
if(Keys::key(iLeft))vMoveBy -= vStrafe*appMain.sof(1.0f);
if(Keys::key(iRight))vMoveBy += vStrafe*appMain.sof(1.0f);
vEye += vMoveBy; vView += vMoveBy;
}
But the debugger showed me a valid matrix anyway. Do you know why that happened?
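For completeness, here is a minimal sketch of the fix implied above (my own addition): value-initialize the vector so the accumulated movement starts at zero even when no key is pressed.
glm::vec3 vMoveBy(0.0f, 0.0f, 0.0f); // explicitly zero-initialized (GLM does not zero-initialize by default)
With that change vEye and vView stay finite, and so does the matrix produced by glm::lookAt.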
In a similar way to this related question, I am trying to render complex shapes by means of ray tracing inside a cube: that is, 12 triangles are used to generate a bounding box, and each fragment is used to render the given shape by ray tracing.
For this example, I am using the simplest shape: a sphere.
The problem is that when the cube is rotated, the different triangle angles distort the sphere:
What I have attempted so far:
I tried doing the ray tracing in world space, and also in view space as suggested in the related question.
I checked that the worldCoord of the fragment is correct, by doing a reverse projection from gl_FragCoord, with the same output (a sketch of such a check is shown below, after this list).
I switched to an orthographic projection, where the distortion is reversed:
My conclusion is that, as described in the related question, the interpolation of the coordinates together with the projection is the origin of the problem.
I could project the cube onto a plane perpendicular to the camera direction, but I would like to understand the root of the problem.
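For reference, a reverse projection like the one mentioned above can be sketched on the CPU side with GLM as follows (my own hedged sketch, not the code used in the application; projection, view and viewport are assumed to match the ones used for rendering):
#include <glm/glm.hpp>
// Sketch: reconstruct the world-space position from window coordinates.
// winPos = (gl_FragCoord.x, gl_FragCoord.y, gl_FragCoord.z), with depth in [0, 1].
glm::vec3 unprojectToWorld(const glm::vec3& winPos,
                           const glm::mat4& projection,
                           const glm::mat4& view,
                           const glm::vec4& viewport) // x, y, width, height
{
    glm::vec4 ndc(2.0f * (winPos.x - viewport.x) / viewport.z - 1.0f,
                  2.0f * (winPos.y - viewport.y) / viewport.w - 1.0f,
                  2.0f * winPos.z - 1.0f,
                  1.0f);
    glm::vec4 world = glm::inverse(projection * view) * ndc;
    return glm::vec3(world) / world.w; // perspective divide
}
GLM also provides glm::unProject in <glm/gtc/matrix_transform.hpp>, which does essentially the same computation.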
Related code:
Vertex shader:
#version 420 core
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
in vec3 in_Position;
in vec3 in_Normal;
out Vertex
{
vec4 worldCoord;
vec4 worldNormal;
} v;
void main(void)
{
mat4 mv = view * model;
// Position
v.worldCoord = model * vec4(in_Position, 1.0);
gl_Position = projection * mv * vec4(in_Position, 1.0);
// Normal
v.worldNormal = normalize(vec4(in_Normal, 0.0));
}
Fragment shader:
#version 420 core
uniform mat4 view;
uniform vec3 cameraPosView;
in Vertex
{
vec4 worldCoord;
vec4 worldNormal;
} v;
out vec4 out_Color;
bool sphereIntersection(vec4 rayOrig, vec4 rayDirNorm, vec4 spherePos, float radius, out float t_out)
{
float r2 = radius * radius;
vec4 L = spherePos - rayOrig;
float tca = dot(L, rayDirNorm);
float d2 = dot(L, L) - tca * tca;
if(d2 > r2)
{
return false;
}
float thc = sqrt(r2 - d2);
float t0 = tca - thc;
float t1 = tca + thc;
if (t0 > 0)
{
t_out = t0;
return true;
}
if (t1 > 0)
{
t_out = t1;
return true;
}
return false;
}
void main()
{
vec3 color = vec3(1);
vec4 spherePos = vec4(0.0, 0.0, 0.0, 1.0);
float radius = 1.0;
float t_out=0.0;
vec4 cord = v.worldCoord;
vec4 rayOrig = (inverse(view) * vec4(-cameraPosView, 1.0));
vec4 rayDir = normalize(cord-rayOrig);
if (sphereIntersection(rayOrig, rayDir, spherePos, 0.3, t_out))
{
out_Color = vec4(1.0);
}
else
{
discard;
}
}
I'm having trouble creating skeletal animation; my model has incorrect transformations.
void Animation(glm::mat4 a[])
{
float Factor= fmod(glfwGetTime(),1.0);
for(int b=0;b<BoneList.size();b++)
{
Bone *BoneT = &BoneList[b];
aiMatrix4x4 temp = inverse;
while(BoneT)
{
aiVector3D sc= BoneT->ScaleFrame[0] + (Factor * (BoneT->ScaleFrame[1] - BoneT->ScaleFrame[0]));
aiMatrix4x4 S=aiMatrix4x4();
S[0][0]=sc.x;
S[1][1]=sc.y;
S[2][2]=sc.z;
aiVector3D tr= BoneT->LocFrame[0] + (Factor * (BoneT->LocFrame[1] - BoneT->LocFrame[0]));
aiMatrix4x4 T=aiMatrix4x4();
T[0][3]=tr.x;
T[1][3]=tr.y;
T[2][3]=tr.z;
aiQuaternion R;
aiQuaternion::Interpolate(R, BoneT->RotFrame[0], BoneT->RotFrame[1], Factor);
R = R.Normalize();
temp*=BoneT->NodeTransform*(T* aiMatrix4x4(R.GetMatrix()) * S );
BoneT=BoneT->BoneParent;
}
temp*=BoneList[b].offset;
temp.Transpose();
ai_to_glm(temp,a[b]);
}
}
I'm creating a temporary aiMatrix4x4 to preserve Assimp's matrix multiplication order, then I convert the aiMatrix4x4 to glm::mat4 using this function:
void ai_to_glm(const aiMatrix4x4 &from, glm::mat4 &to)
{
to[0][0] = from[0][0];
to[0][1] = from[0][1];
to[0][2] = from[0][2];
to[0][3] = from[0][3];
to[1][0] = from[1][0];
to[1][1] = from[1][1];
to[1][2] = from[1][2];
to[1][3] = from[1][3];
to[2][0] = from[2][0];
to[2][1] = from[2][1];
to[2][2] = from[2][2];
to[2][3] = from[2][3];
to[3][0] = from[3][0];
to[3][1] = from[3][1];
to[3][2] = from[3][2];
to[3][3] = from[3][3];
}
However, the end frame of the model looks like this:
I noticed that if I remove the translation matrix from the function, the model looks closer to what it is supposed to be.
The skinning is done in the shader:
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 Tpos;
layout (location = 2) in ivec4 Bones;
layout (location = 3) in vec4 Weight;
uniform mat4 view;
uniform mat4 proj;
uniform mat4 LocRot;
uniform mat4 Test[8];
out vec3 vertexColor;
out vec2 TextCoord;
void main()
{
mat4 BoneTransform = Test[Bones.x]* Weight.x;
BoneTransform += Test[Bones.y] * Weight.y;
BoneTransform += Test[Bones.z] * Weight.z;
BoneTransform += Test[Bones.w] * Weight.w;
gl_Position = proj * view *LocRot* BoneTransform *vec4(aPos, 1.0);
vertexColor = vec3(1,1,1);
TextCoord = Tpos;
}
and the uniform is accessed like this:
for(int i=0; i<4;i++)
{
glUniformMatrix4fv(glGetUniformLocation(ActiveShader->ID, "Test[0]")+i, 1, GL_FALSE, glm::value_ptr(AnimMatrix[i]));
}
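As a usage note (my own addition, not from the original post): a uniform array can also be filled with a single call by passing the element count to glUniformMatrix4fv, instead of computing per-element locations; AnimMatrix is assumed here to be a contiguous array of glm::mat4.
// Sketch: upload all four bone matrices in one call.
GLint loc = glGetUniformLocation(ActiveShader->ID, "Test[0]");
glUniformMatrix4fv(loc, 4, GL_FALSE, glm::value_ptr(AnimMatrix[0]));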
What I am aware of:
- The inverse matrix is an identity matrix, which doesn't do anything to this model right now.
- Some weight sums aren't equal to 1.0, but I think that's not the problem (see the sketch after this list).
- Changing the matrix multiplication order doesn't solve it.
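In case you want to rule the weights out, here is a hedged CPU-side sketch that renormalizes the four weights of a vertex before upload (illustrative names, not code from the project):
#include <glm/glm.hpp>
// Sketch: make the four bone weights of one vertex sum to 1.
void normalizeWeights(glm::vec4& weight)
{
    float sum = weight.x + weight.y + weight.z + weight.w;
    if (sum > 0.0f)
        weight /= sum; // all-zero weights are left untouched
}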
The model is created and exported in Blender.
link to model https://send.firefox.com/download/fe0b85d3f4581630/#6S0Vr9EIjgLNN03rerMW0w
My bet is that the ai_to_glm function is at fault here, but I am not sure.
Edit: I noticed that the rotations are flipped as well, as shown in the images; however, multiplying by inverseai (the inverted root bone transformation) does nothing.
Update: I transposed the Assimp matrix before the conversion and it fixed most problems, but the offsets and parent inheritance are still bugged.
Before any suspicion: I had no idea that I already had an account on Stack Overflow, and I am answering this question from my real account.
Fixing this required multiple things:
Iterating over the bones and assigning children/parents used unstable pointers and got corrupted; solving that fixed the major issue.
I used codingadventures's answer from the question Matrix calculations for gpu skinning.
My ai_to_glm was wrong, and I replaced it with:
glm::mat4 ai_to_glm(aiMatrix4x4* from)
{
glm::mat4 to = glm::mat4(1.0f);
to[0][0] = (GLfloat)from->a1; to[0][1] = (GLfloat)from->b1; to[0][2] = (GLfloat)from->c1; to[0][3] = (GLfloat)from->d1;
to[1][0] = (GLfloat)from->a2; to[1][1] = (GLfloat)from->b2; to[1][2] = (GLfloat)from->c2; to[1][3] = (GLfloat)from->d2;
to[2][0] = (GLfloat)from->a3; to[2][1] = (GLfloat)from->b3; to[2][2] = (GLfloat)from->c3; to[2][3] = (GLfloat)from->d3;
to[3][0] = (GLfloat)from->a4; to[3][1] = (GLfloat)from->b4; to[3][2] = (GLfloat)from->c4; to[3][3] = (GLfloat)from->d4;
return to;
};
After doing that, it was fixed.
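As a side note on why the replacement works (my own explanation): aiMatrix4x4 stores its elements row-major, while glm::mat4 is column-major, so the element-wise copy above is effectively a transpose. A shorter, equivalent sketch using GLM's pointer helpers:
#include <assimp/matrix4x4.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// Sketch: copy the 16 contiguous floats and transpose to switch from
// Assimp's row-major layout to GLM's column-major layout.
glm::mat4 ai_to_glm_alt(const aiMatrix4x4& from)
{
    return glm::transpose(glm::make_mat4(&from.a1));
}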
SOLUTION:
Thanks to Rabbid76, I needed to update the Normal vector in the vertex shader with the model matrix. Updated vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
FragPos = vec3(model * vec4(aPos, 1.0));
Normal = vec3(model * vec4(aNormal, 0.0));
gl_Position = projection * view * vec4(FragPos, 1.0);
}
QUESTION
I am trying to correctly load a collada (dae) file in Assimp, but the normals seem to come out wrong. I would like help with figuring this out. I have a feeling it is to do with how I am handling the transformation matrix. As an example, here's a screenshot of the OpenGL application loading an obj file:
In the above screenshot, the light is positioned directly above the models at x=0 and z=0. The normals are displaying correctly. When I load a dae file, I get the following:
The light position seems to be coming from the -z side.
here is the code I currently have to load the models:
Load the model file, and call the processNode() method which includes an aiMatrix4x4()
void Model::loadModel(std::string filename)
{
Assimp::Importer importer;
const aiScene *scene = importer.ReadFile(filename, aiProcess_Triangulate | aiProcess_FlipUVs | aiProcess_CalcTangentSpace | aiProcess_GenBoundingBoxes);
if (!scene || !scene->mRootNode) {
std::cout << "ERROR::ASSIMP Could not load model: " << importer.GetErrorString() << std::endl;
}
else {
this->directory = filename.substr(0, filename.find_last_of('/'));
this->processNode(scene->mRootNode, scene, aiMatrix4x4());
}
}
processNode() is a recursive method which primarily iterates over node->mMeshes; this is also where I multiply the transformation.
void Model::processNode(aiNode* node, const aiScene* scene, aiMatrix4x4 transformation)
{
for (unsigned int i = 0; i < node->mNumMeshes; i++) {
aiMesh* mesh = scene->mMeshes[node->mMeshes[i]];
// only apply transformation on meshs not entities such as lights or camera.
transformation *= node->mTransformation;
this->meshes.push_back(processMesh(mesh, scene, transformation));
}
for (unsigned int i = 0; i < node->mNumChildren; i++)
{
processNode(node->mChildren[i], scene, transformation);
}
}
processMesh() handles collecting all mesh data (vertices, indices etc)
Mesh Model::processMesh(aiMesh* mesh, const aiScene* scene, aiMatrix4x4 transformation)
{
glm::vec3 extents;
glm::vec3 origin;
std::vector<Vertex> vertices = this->vertices(mesh, extents, origin, transformation);
std::vector<unsigned int> indices = this->indices(mesh);
std::vector<Texture> textures = this->textures(mesh, scene);
return Mesh(
vertices,
indices,
textures,
extents,
origin,
mesh->mName
);
}
Next, the vertices() method is called to get all the vertices. It receives the transformation matrix, and here I multiply the vertices by the matrix (transformation * mesh->mVertices[i]). I have a strong feeling that I am not doing something right here and that I am missing something.
std::vector<Vertex> Model::vertices(aiMesh* mesh, glm::vec3& extents, glm::vec3 &origin, aiMatrix4x4 transformation)
{
std::vector<Vertex> vertices;
for (unsigned int i = 0; i < mesh->mNumVertices; i++) {
Vertex vertex;
glm::vec3 vector3;
aiVector3D v = transformation * mesh->mVertices[i];
// Vertices
vector3.x = v.x;
vector3.y = v.y;
vector3.z = v.z;
vertex.position = vector3;
// Normals
if (mesh->mNormals) {
vector3.x = mesh->mNormals[i].x;
vector3.y = mesh->mNormals[i].y;
vector3.z = mesh->mNormals[i].z;
vertex.normal = vector3;
}
// Texture coordinates
if (mesh->mTextureCoords[0]) {
glm::vec2 vector2;
vector2.x = mesh->mTextureCoords[0][i].x;
vector2.y = mesh->mTextureCoords[0][i].y;
vertex.texCoord = vector2;
}
else {
vertex.texCoord = glm::vec2(0, 0);
}
if (mesh->mTangents) {
vector3.x = mesh->mTangents[i].x;
vector3.y = mesh->mTangents[i].y;
vector3.z = mesh->mTangents[i].z;
vertex.tangent = vector3;
}
// Bitangent
if (mesh->mBitangents) {
vector3.x = mesh->mBitangents[i].x;
vector3.y = mesh->mBitangents[i].y;
vector3.z = mesh->mBitangents[i].z;
vertex.bitangent = vector3;
}
vertices.push_back(vertex);
}
glm::vec3 min = glm::vec3(mesh->mAABB.mMin.x, mesh->mAABB.mMin.y, mesh->mAABB.mMin.z);
glm::vec3 max = glm::vec3(mesh->mAABB.mMax.x, mesh->mAABB.mMax.y, mesh->mAABB.mMax.z);
extents = (max - min) * 0.5f;
origin = glm::vec3((min.x + max.x) / 2.0f, (min.y + max.y) / 2.0f, (min.z + max.z) / 2.0f);
printf("%f,%f,%f\n", origin.x, origin.y, origin.z);
return vertices;
}
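As a hedged side note (my own sketch, not part of the original code): since the positions are pre-transformed by the node matrix, the normals would normally need the same treatment, using the rotational part of that matrix, for example:
// Sketch: transform the normal by the upper-left 3x3 of the node transformation.
// (With non-uniform scaling, the inverse transpose of that 3x3 would be needed instead.)
if (mesh->mNormals) {
    aiMatrix3x3 normalMat(transformation); // drops the translation part
    aiVector3D n = normalMat * mesh->mNormals[i];
    vertex.normal = glm::normalize(glm::vec3(n.x, n.y, n.z));
}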
As an added note, in case it is helpful, here is the fragment shader I am using on the model:
#version 330 core
out vec4 FragColor;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos;
uniform vec3 viewPos;
vec3 lightColor = vec3(1,1,1);
vec3 objectColor = vec3(0.6, 0.6, 0.6);
uniform float shininess = 32.0f;
uniform vec3 material_specular = vec3(0.1f, 0.1f, 0.1f);
uniform vec3 light_specular = vec3(0.5f, 0.5f, 0.5f);
void main()
{
// ambient
float ambientStrength = 0.2;
vec3 ambient = ambientStrength * lightColor;
// diffuse
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;
// specular
vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), shininess);
vec3 specular = light_specular * (spec * material_specular);
vec3 result = (ambient + diffuse + specular) * objectColor;
FragColor = vec4(result, 1.0);
}
Here is the vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
out vec3 FragPos;
out vec3 Normal;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform float scale;
void main()
{
FragPos = vec3(model * vec4(aPos, 1.0));
Normal = aNormal;
gl_Position = projection * view * vec4(FragPos, 1.0);
}
FragPos is a position in world space, because it is the vertex position transformed by the model matrix. lightPos and viewPos seem to be positions in world space, too.
So you have to transform the normal vector aNormal from model space to world space, too.
You have to transform the normal vector by the inverse transpose of the upper-left 3*3 of the 4*4 model matrix:
Normal = transpose(inverse(mat3(model))) * aNormal;
Possibly it is sufficient to transform it by the upper-left 3*3 of the 4*4 model matrix:
(See In which cases is the inverse matrix equal to the transpose?)
Normal = mat3(model) * aNormal;
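As a side note (not part of the original answer), the same matrix can be computed once on the CPU and uploaded as an extra uniform, which avoids the per-vertex inverse; normalMatrix is a hypothetical additional mat3 uniform, and program/model are the host-side shader program and model matrix:
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// Sketch: upload transpose(inverse(mat3(model))) as a mat3 uniform.
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(model)));
glUniformMatrix3fv(glGetUniformLocation(program, "normalMatrix"),
                   1, GL_FALSE, glm::value_ptr(normalMatrix));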
See also:
Why is the transposed inverse of the model view matrix used to transform the normal vectors?
Why transforming normals with the transpose of the inverse of the modelview matrix?
The normal mapping looks great when the objects aren't rotated from the origin, and spot lights and directional lights work, but when I spin an object on the spot it darkens and then lightens again, just on the top face.
I'm testing using a cube. I've used a geometry shader to visualise my calculated normals (after multiplying by a TBN matrix), and they appear to be in the correct places. If I take the normal map out of the equation then the lighting is fine.
Here's where the TBN is calculated:
void calculateTBN()
{
//get the normal matrix
mat3 model = mat3(transpose(inverse(mat3(transform))));
vec3 T = normalize(vec3(model * tangent.xyz ));
vec3 N = normalize(vec3(model * normal ));
vec3 B = cross(N, T);
mat3 TBN = mat3( T , B , N);
outputVertex.TBN =TBN;
}
And the normal is sampled and transformed:
vec3 calculateNormal()
{
//Sort the input so that the normal is between 1 and minus 1 instead of 0 and 1
vec3 input = texture2D(normalMap, inputFragment.textureCoord).xyz;
input = 2.0 * input - vec3(1.0, 1.0, 1.0);
vec3 newNormal = normalize(inputFragment.TBN* input);
return newNormal;
}
My lighting is in world space (as far as I understand the term, it takes the transform matrix into account but not the camera or projection matrix).
I did try the technique where I pass down the TBN as its inverse (or transpose) and then multiply every vector apart from the normal by it. That had the same effect. I'd rather work in world space anyway, as apparently this is better for deferred lighting? Or so I've heard.
If you'd like to see any of the lighting code and so on, I'll add it, but I didn't think it was necessary since everything works apart from this.
EDIT:
As requested, here are the vertex shader and part of the fragment shader.
#version 330
uniform mat4 T; // Translation matrix
uniform mat4 S; // Scale matrix
uniform mat4 R; // Rotation matrix
uniform mat4 camera; // camera matrix
uniform vec4 posRelParent; // the position relative to the parent
// Input vertex packet
layout (location = 0) in vec4 position;
layout (location = 2) in vec3 normal;
layout (location = 3) in vec4 tangent;
layout (location = 4) in vec4 bitangent;
layout (location = 8) in vec2 textureCoord;
// Output vertex packet
out packet {
vec2 textureCoord;
vec3 normal;
vec3 vert;
mat3 TBN;
vec3 tangent;
vec3 bitangent;
vec3 normalTBN;
} outputVertex;
mat4 transform;
mat3 TBN;
void calculateTBN()
{
//get the normal matrix: the object's transform with translation removed and non-uniform scaling corrected
mat3 model = mat3(transpose(inverse(transform)));
vec3 T = normalize(model*tangent.xyz);
vec3 N = normalize(model*normal);
//I used to retrieve the bitangents by crossing the normal and tangent but now they are calculated independently
vec3 B = normalize(model*bitangent.xyz);
TBN = mat3( T , B , N);
outputVertex.TBN = TBN;
//Pass though TBN vectors for colour debugging in the fragment shader
outputVertex.tangent = T;
outputVertex.bitangent = B;
outputVertex.normalTBN = N;
}
void main(void) {
outputVertex.textureCoord = textureCoord;
// Setup local variable pos in case we want to modify it (since position is constant)
vec4 pos = vec4(position.x, position.y, position.z, 1.0) + posRelParent;
//Work out the transform matrix
transform = T * R * S;
//Work out the normal for lighting
mat3 normalMat = transpose(inverse(mat3(transform)));
outputVertex.normal = normalize(normalMat* normal);
calculateTBN();
outputVertex.vert =(transform* pos).xyz;
//Work out the final pos of the vertex
gl_Position = camera * transform * pos;
}
And the lighting function of the fragment shader:
vec3 applyLight(Light thisLight, vec3 baseColor, vec3 surfacePos, vec3 surfaceToCamera)
{
float attenuation = 1.0f;
vec3 lightPos = (thisLight.finalLightMatrix*thisLight.position).xyz;
vec3 surfaceToLight;
vec3 coneDir = normalize(thisLight.coneDirection);
if (thisLight.position.w == 0.0f)
{
//Directional Light (all rays same angle, use position as direction)
surfaceToLight = normalize( (thisLight.position).xyz);
attenuation = 1.0f;
}
else
{
//Point light
surfaceToLight = normalize(lightPos - surfacePos);
float distanceToLight = length(lightPos - surfacePos);
attenuation = 1.0 / (1.0f + thisLight.attenuation * pow(distanceToLight, 2));
//Work out the Cone restrictions
float lightToSurfaceAngle = degrees(acos(dot(-surfaceToLight, normalize(coneDir))));
if (lightToSurfaceAngle > thisLight.coneAngle)
{
attenuation = 0.0;
}
}
}
Here's the main of the frag shader too:
void main(void) {
//get the base colour from the texture
vec4 tempFragColor = texture2D(textureImage, inputFragment.textureCoord).rgba;
//Support for objects with and without a normal map
if (useNormalMap == 1)
{
calcedNormal = calculateNormal();
}
else
{
calcedNormal = inputFragment.normal;
}
vec3 surfaceToCamera = normalize((cameraPos_World) - (inputFragment.vert));
vec3 tempColour = vec3(0.0, 0.0, 0.0);
for (int count = 0; count < numLights; count++)
{
tempColour += applyLight(allLights[count], tempFragColor.xyz, inputFragment.vert, surfaceToCamera);
}
vec3 gamma = vec3(1.0 / 2.2);
fragmentColour = vec4(pow(tempColour,gamma), tempFragColor.a);
//fragmentColour = vec4(calcedNormal, 1);
}
Edit 2:
The geometry shader used to visualize the "sampled" normals transformed by the TBN matrix, as shown here:
void GenerateLineAtVertex(int index)
{
vec3 testSampledNormal = vec3(0, 0, 1);
vec3 bitangent = cross(gs_in[index].normal, gs_in[index].tangent);
mat3 TBN = mat3(gs_in[index].tangent, bitangent, gs_in[index].normal);
testSampledNormal = TBN * testSampledNormal;
gl_Position = gl_in[index].gl_Position;
EmitVertex();
gl_Position =
gl_in[index].gl_Position
+ vec4(testSampledNormal, 0.0) * MAGNITUDE;
EmitVertex();
EndPrimitive();
}
And its vertex shader:
void main(void) {
// Setup local variable pos in case we want to modify it (since position is constant)
vec4 pos = vec4(position.x, position.y, position.z, 1.0);
mat4 transform = T* R * S;
// Apply transformation to pos and store result in gl_Position
gl_Position = projection* camera* transform * pos;
mat3 normalMatrix = mat3(transpose(inverse(camera * transform)));
vs_out.tangent = normalize(vec3(projection * vec4(normalMatrix * tangent.xyz, 0.0)));
vs_out.normal = normalize(vec3(projection * vec4(normalMatrix * normal , 0.0)));
}
Here are the TBN vectors visualized. The slight angles on the points are due to an issue with how I'm applying the projection matrix, rather than mistakes in the actual vectors. The red lines just show where the arrows I've drawn on the texture are; they're not very clear from that angle, that's all.
Problem solved!
It actually had nothing to do with the code above, although thanks to everyone who helped.
I was importing the texture using my own texture loader, which by default used non-gamma-corrected sRGB colour in 32 bit. I switched it to 24-bit plain RGB colour and it worked straight away. Typical developer problems...
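A hedged sketch of what that change amounts to at the OpenGL level (illustrative, not the poster's loader code): data textures such as normal maps should be uploaded with a linear internal format, because an sRGB internal format makes the driver gamma-decode the sampled values.
// Sketch: upload a normal map with a linear internal format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, // linear, not GL_SRGB8
             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// GL_SRGB8 / GL_SRGB8_ALPHA8 are appropriate for colour (albedo) textures only.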
While implementing SSLR, I ran into the problem of objects being displayed incorrectly: they are projected infinitely "down" and are not displayed in the mirror at all. I give the code and a screenshot below.
Fragment SSLR shader:
#version 330 core
uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap; // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;
in vec2 texCoord;
layout (location = 0) out vec4 fragColor;
vec3 calcViewPosition(in vec2 texCoord) {
// Combine UV & depth into XY & Z (NDC)
vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);
// Convert from (0, 1) range to (-1, 1)
vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);
// Undo Perspective transformation to bring into view space
vec4 ViewPosition = inv_projection * ScreenSpacePosition;
// Perform perspective divide and return
return ViewPosition.xyz / ViewPosition.w;
}
vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
dir *= 0.25f;
for (int i = 0; i < 20; i++) {
hitCoord += dir;
vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
projectedCoord.xy /= projectedCoord.w;
projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
float depth = calcViewPosition(projectedCoord.xy).z;
dDepth = hitCoord.z - depth;
if(dDepth < 0.0) return projectedCoord.xy;
}
return vec2(-1.0);
}
void main() {
vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
vec3 viewPos = calcViewPosition(texCoord);
// Reflection vector
vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));
// Ray cast
vec3 hitPos = viewPos;
float dDepth;
float minRayStep = 0.1f;
vec2 coords = rayCast(reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);
if (coords != vec2(-1.0)) fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
else fragColor = texture(colorMap, texCoord);
}
Screenshot:
Also, the lamp is not reflected at all.
I will be grateful for any help.
UPDATE:
colorMap:
normalMap:
depthMap:
UPDATE: I solved the problem with the wrong reflection, but there are still problems.
I solved it as follows: ViewPosition.y *= -1
Now, as you can see in the screenshot, the lower parts of the objects are not reflected for some reason.
The question still remains open.
I'm struggling to get a fine SSR too. I found two things that could help.
To get the view-space normals you have to keep only the rotation of the camera and remove the translation, because if you don't, the normals get stretched in the direction opposite to the camera movement and no longer point the right way even if you normalize them again. For a column-major mat4 you can do it like this:
mat4 viewNoTranslation = view;
viewNoTranslation[3] = vec4(0.0, 0.0, 0.0, 1.0);
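Equivalently, on the CPU side with GLM (a hedged side note, assuming a standard glm::mat4 view matrix), the translation can be stripped by going through a 3*3 matrix:
#include <glm/glm.hpp>
// Sketch: keep only the rotational part of the view matrix.
glm::mat4 viewNoTranslation = glm::mat4(glm::mat3(view));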
The depth sampled from the depth image is non-linear, and if you linearize it you will indeed get values from 0 to 1, but they will not be precise enough. I tried getting the depth value straight from the vertex shader:
gl_Position = ubo.projection * ubo.view * ubo.model * inPos;
depth = gl_Position.z;
I don't know if it is right, but the depth is more accurate now.
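For comparison, here is a hedged sketch of the usual way to turn a [0, 1] depth-buffer sample back into linear view-space depth for a standard perspective projection (zNear/zFar are the clip planes used to build the projection matrix):
// Sketch: recover linear view-space depth from a depth-buffer sample.
float linearizeDepth(float depthSample, float zNear, float zFar)
{
    float zNdc = depthSample * 2.0f - 1.0f; // back to NDC [-1, 1]
    return (2.0f * zNear * zFar) / (zFar + zNear - zNdc * (zFar - zNear));
}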
If you make progress, please update :)