Quaternion inverts vector - C++

I'm using a glm quaternion to rotate a vector around a given axis, but the vector comes back inverted every time the method is called.
Here is my code to rotate my object around the side axis:
void PlayerController::rotateForward(float angle) {
    angle = (angle * M_PI) / 180.0f;
    fquat rot = fquat(angle, playerObject->getVecSide());
    normalize(rot);
    vec4 newUpVec = rot * vec4(playerObject->getVecUp(), 1.0f);
    normalize(newUpVec);
    vec3 upVec3 = normalize(vec3(newUpVec.x, newUpVec.y, newUpVec.z));
    playerObject->setVecUp(upVec3);
    playerObject->setVecForward(-normalize(cross(playerObject->getVecUp(), playerObject->getVecSide())));
    vec3 newPlayerPos = currentPlanet->getPosition() + (playerObject->getVecUp() * radius);
    playerObject->setPosition(newPlayerPos);
}
Every time I call this method, my up vector is rotated around the axis, but also inverted. I can work around this by using:
vec4 newUpVec = rot * -vec4(playerObject->getVecUp(), 1.0f);
but this is treating the symptom rather than finding the cause. Maybe someone here can help me understand what the quaternion is doing.

Well, here is the answer: the quaternion initialization was wrong. These lines were missing:
rotateAround = rotateAround * sinf(angle/2.0f);
angle = cosf(angle/2.0f);
So the correct version of the method looks like this:
vec3 GameObjectRotator::rotateVector(float angle, vec3 toRotate, vec3 rotateAround) {
    angle = (angle * M_PI) / 180.0f;
    rotateAround = rotateAround * sinf(angle / 2.0f);
    angle = cosf(angle / 2.0f);
    fquat rot = fquat(angle, rotateAround);
    rot = normalize(rot); // note: glm's normalize returns the result, it does not modify in place
    vec4 rotated = rot * vec4(toRotate, 1.0f);
    return normalize(vec3(rotated));
}
(the class has gone through quite a bit of refactoring since the first version, but the idea should be clear anyway)
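For reference, the root cause: glm's `fquat(w, vec3)` constructor takes raw quaternion components (w plus the xyz vector), not an angle and an axis, so the original code built a non-rotation quaternion. A unit rotation quaternion needs w = cos(angle/2) and xyz = axis·sin(angle/2), which is exactly what the two added lines compute (`glm::angleAxis(angle, axis)` does the same in one call). A minimal self-contained sketch of that construction in plain C++, without glm:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate v around a unit axis by angle (radians) using the quaternion
// q = (cos(a/2), axis * sin(a/2)) and v' = v + 2w(qv x v) + 2 qv x (qv x v).
Vec3 rotate(Vec3 v, Vec3 axis, float angle) {
    float s = std::sin(angle * 0.5f);
    float w = std::cos(angle * 0.5f);
    Vec3 q{axis.x * s, axis.y * s, axis.z * s};
    auto cross = [](Vec3 a, Vec3 b) {
        return Vec3{a.y * b.z - a.z * b.y,
                    a.z * b.x - a.x * b.z,
                    a.x * b.y - a.y * b.x};
    };
    Vec3 c1 = cross(q, v); // qv x v
    Vec3 c2 = cross(q, c1); // qv x (qv x v)
    return Vec3{v.x + 2.0f * (w * c1.x + c2.x),
                v.y + 2.0f * (w * c1.y + c2.y),
                v.z + 2.0f * (w * c1.z + c2.z)};
}
```

Rotating (0, 0, 1) around the y axis by 90 degrees yields (1, 0, 0), as expected for a proper rotation quaternion.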

Related

What is the best OpenGL self-shadowing technique?

I chose the quite old but sufficient method of shadow mapping, which works OK overall, but I quickly discovered some self-shadowing problems:
It seems this problem appears because of the bias offset, which is necessary to eliminate the shadow acne artifact.
After some googling, it seems there is no easy solution to this, so I tried some shader tricks which worked, but not very well.
My first idea was to compute the dot product of the light direction vector and the normal vector. If the result is lower than 0, the angle between the vectors is greater than 90 degrees, so the surface is facing away from the light source and hence not illuminated. This works well, except the shadows can come out too sharp and hard:
Since I wasn't satisfied with those results, I tried another trick: multiplying the shadow value by the absolute value of the dot product of the light direction and the normal vector (taken from the normal map). That worked (the hard shadows from the previous image got a smooth transition from shadow to the regular diffuse color), except it creates another artifact when the normal-map normal points somewhat towards the sun but the face normal does not. It also makes the self-shadows much brighter (but that is fixable):
Can I do something about it, or should I just choose the lesser evil?
Shadow shader code for example 1:
vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
float depthValue = abs(fragPosViewSpace.z);
vec4 fragPosLightSpace = lightSpaceMatrix * vec4(FragPos, 1.0);
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// keep the shadow at 0.0 when outside the far_plane region of the light's frustum
if (currentDepth > 1.0)
{
    return 0.0;
}
// calculate bias (based on depth map resolution and slope)
float bias = max(0.005 * (1.0 - dot(normal, lightDir)), 0.0005);
vec2 texelSize = 1.0 / vec2(textureSize(material.texture_shadow, 0));
const int sampleRadius = 2;
const float sampleRadiusCount = pow(sampleRadius * 2 + 1, 2);
float shadow = 0.0; // accumulator (declared earlier in the full shader)
for (int x = -sampleRadius; x <= sampleRadius; ++x)
{
    for (int y = -sampleRadius; y <= sampleRadius; ++y)
    {
        float pcfDepth = texture(material.texture_shadow, vec3(projCoords.xy + vec2(x, y) * texelSize, layer)).r;
        shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0;
    }
}
shadow /= sampleRadiusCount;
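For what it's worth, the PCF loop above is just an average of binary depth tests over a (2·sampleRadius+1)² neighbourhood. A CPU-side sketch of the same averaging, assuming a plain float array in place of the shadow sampler (all names here are made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// 5x5 PCF average: fraction of neighbouring shadow-map texels whose stored
// depth is closer to the light than the fragment depth (minus a small bias),
// each shadowed sample contributing ambientShadow.
float pcfShadow(const float* depthMap, int size, int cx, int cy,
                float currentDepth, float bias, float ambientShadow) {
    const int r = 2;
    float shadow = 0.0f;
    for (int x = -r; x <= r; ++x) {
        for (int y = -r; y <= r; ++y) {
            // clamp to the map edge (GL_CLAMP_TO_EDGE-like behaviour)
            int sx = std::min(std::max(cx + x, 0), size - 1);
            int sy = std::min(std::max(cy + y, 0), size - 1);
            float pcfDepth = depthMap[sy * size + sx];
            shadow += (currentDepth - bias) > pcfDepth ? ambientShadow : 0.0f;
        }
    }
    return shadow / ((2 * r + 1) * (2 * r + 1));
}
```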
Hard self-shadows trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
// "Normal" is the face normal vector, "normal" is computed from the normal map. I know there is a naming problem with that))
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = max(abs(vectorNormalDot), ambientShadow);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
Dot product multiplication trick code:
float shadow = 0.0f;
float ambientShadow = 0.9f;
float faceNormalDot = dot(Normal, lightDir);
float vectorNormalDot = dot(normal, lightDir);
if (faceNormalDot <= 0 || vectorNormalDot <= 0)
{
    shadow = ambientShadow * abs(vectorNormalDot);
}
else
{
    vec4 fragPosViewSpace = view * vec4(FragPos, 1.0);
    float depthValue = abs(fragPosViewSpace.z);
    ...
}
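The difference between the two tricks is confined to the back-facing branch: the first clamps the shadow to at least `ambientShadow` (hard edge), while the second scales it by |N·L| so it fades with the angle. A side-by-side sketch of just those two terms, in plain C++ for clarity:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Back-facing branch of the two tricks, for a given N.L value in [-1, 0].
float hardShadow(float ndotl, float ambientShadow) {
    return std::max(std::fabs(ndotl), ambientShadow); // trick 1: clamped, hard edge
}
float smoothShadow(float ndotl, float ambientShadow) {
    return ambientShadow * std::fabs(ndotl); // trick 2: fades with the angle
}
```

At a grazing angle (N·L near 0) the first trick still returns the full `ambientShadow`, while the second falls towards 0, which is why it produces the smooth transition (and the brighter self-shadows).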

OpenGL camera rotation around X

I'm working on an OpenGL project in Visual Studio. I'm trying to rotate the camera around the X and Y axes.
That's the math I should use:
I'm having trouble because I'm using glm::lookAt for the camera position and it takes glm::vec3 as arguments.
Can someone explain how I can implement this in OpenGL?
PS: I can't use quaternions.
The lookAt function takes three inputs:
vec3 cameraPosition
vec3 cameraLookAt
vec3 cameraUp
From my past experience, if you want to move the camera, first find the transform matrix of the movement, then apply that matrix to these three vectors; the results are three new vec3s, which are your new inputs to the lookAt function.
vec3 newCameraPosition = vec3(movementMat4 * vec4(cameraPosition, 1.0f));
// Same for the other two
Another approach is to find the inverse of the movement you want the camera to make and apply it to the whole scene, since moving the camera is equivalent to applying the inverse movement to every object while keeping the camera still :)
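The inverse-movement idea in its simplest form: for a pure translation, placing the camera at position p is equivalent to translating the whole scene by -p. A minimal sketch (rotation ignored, names made up):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// View-space position of a world point for a camera translated to camPos:
// apply the inverse camera translation to the scene instead of moving the camera.
Vec3 toViewSpace(Vec3 worldPoint, Vec3 camPos) {
    return Vec3{worldPoint.x - camPos.x,
                worldPoint.y - camPos.y,
                worldPoint.z - camPos.z};
}
```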
Below, the camera is rotated around the y-axis (it orbits the object in the x/z plane).
const float radius = 10.0f;
float camX = sin(time) * radius;
float camZ = cos(time) * radius;
glm::vec3 cameraPos = glm::vec3(camX, 0.0, camZ);
glm::vec3 objectPos = glm::vec3(0.0, 0.0, 0.0);
glm::vec3 up = glm::vec3(0.0, 1.0, 0.0);
glm::mat4 view = glm::lookAt(cameraPos, objectPos, up);
Check out https://learnopengl.com/, it's a great site to learn!
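Since quaternions are off the table, rotation about the X axis specifically can be done with the standard rotation matrix applied to the camera position (and up vector) before calling glm::lookAt. A sketch of just the vector rotation, in plain C++ without glm:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate v about the world X axis by angle radians (right-handed):
// y' = y*cos - z*sin, z' = y*sin + z*cos, x unchanged.
Vec3 rotateX(Vec3 v, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return Vec3{v.x, v.y * c - v.z * s, v.y * s + v.z * c};
}
```

Feed the rotated position and up vector into glm::lookAt; the same pattern with the Y-axis rotation matrix handles the rotation around Y.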

Blinn-Phong lighting creating strange results

I'm creating a raycaster and trying to use global illumination as a shading method.
I've calculated the intersections of a sphere and a cube, as well as their normals.
Computing each of the separate ambient, diffuse and specular terms shades the object as expected; however, adding them all together as shown in the code below gives strange results:
glm::vec3 n = surfaceNormal(position, intersect);
glm::vec3 lightStart = glm::vec3(-10, 1, 10);//light points in 3d space
glm::vec3 lightDir = glm::normalize(lightStart - intersect); // direction towards light source
glm::vec3 viewDir = glm::normalize(cam.pos - intersect); // direction towards camera
glm::vec3 midDir = glm::normalize(lightDir + viewDir); // half-vector between light and view directions
glm::vec3 lightColor = glm::vec3(1, 1, 1);//color of light
glm::vec3 objectColor = color ;
float shinyness = 10.0f;
float ambientStr = 0.1f;
///ambient
glm::vec3 ambient = lightColor * ambientStr;
///diffuse
glm::vec3 diffuse = lightColor * glm::max(glm::dot(n,lightDir), 0.0f);
///specular
// specular = light color * facing * max(n dot h, 0)^shininess
glm::vec3 specular = lightColor * facing(n, lightDir) *
    std::pow(glm::max(glm::dot(n, midDir), 0.0f), shinyness);
glm::vec3 outColor = ambient + diffuse + specular;
return outColor * objectColor * 255.0f;
The facing method returns 1 if the dot product of n and lightDir is > 0, else it returns 0.
This is to avoid lighting faces pointing in the wrong direction.
This is the result:
The *255f was suspicious already (typical usage of OpenGL works with color components in the range of [0...1]), and you added a comment
the output are float values for color being drawn straight to screen.
It is a simple overflow issue then. Your weight is a sum of 0.1+[0...1]+[0...1] (ambient, diffuse and specular), which falls into [0.1 ... 2.1], and when you multiply it with a color component larger than 0.5 (approximately, 1/2.1 is the precise limit), their product exceeds 1. Then this number is multiplied with 255, and the result will be above 255, and when truncated to a byte, it "starts over" from black, component-wise.
Based on the code you show, you could probably try something like
return glm::min(outColor * objectColor * 255.0f, glm::vec3(255.0f, 255.0f, 255.0f));
but there may be a better function for that.
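The wrap-around is easy to reproduce: a lit component of about 1.68 scales to roughly 428, which truncated to a byte wraps around to 172 (dark), while clamping first keeps it at 255. A two-function sketch (the conversion goes through int, since a direct out-of-range float-to-unsigned-char conversion is undefined behaviour in C++):

```cpp
#include <algorithm>
#include <cassert>

// Convert a float color component (already scaled by 255) to a byte,
// without and with clamping.
unsigned char toByteWrapped(float c) {
    return static_cast<unsigned char>(static_cast<int>(c)); // wraps modulo 256 past 255
}
unsigned char toByteClamped(float c) {
    return static_cast<unsigned char>(std::min(static_cast<int>(c), 255));
}
```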

Parallax mapping - only works in one direction

I'm working on parallax mapping (from this tutorial: http://sunandblackcat.com/tipFullView.php?topicid=28) and I only seem to get good results when I move along one axis (e.g. left to right) while looking at the parallaxed quad. The image below illustrates this:
You can see it clearly at the left and right steep edges. If I'm moving to the right, the right steep edge should be narrower than the left one (which looks correct in the left image) [camera is at the right side of the cube]. However, if I move along a different axis (top to bottom instead of west to east), you can see that this time the steep edges are incorrect [camera is again at the right side of the cube].
I'm using the most simple form of parallax mapping and even that has the same problems. The fragment shader looks like this:
void main()
{
    vec2 texCoords = fs_in.TexCoords;
    vec3 viewDir = normalize(viewPos - fs_in.FragPos);
    vec3 V = normalize(fs_in.TBN * viewDir);
    vec3 L = normalize(fs_in.TBN * lightDir);

    float height = texture(texture_height, texCoords).r;
    float scale = 0.2;
    vec2 texCoordsOffset = scale * V.xy * height;
    texCoords += texCoordsOffset;

    // calculate diffuse lighting
    vec3 N = texture(texture_normal, texCoords).rgb * 2.0 - 1.0;
    N = normalize(N); // normal already in tangent-space
    vec3 ambient = vec3(0.2f);
    float diff = clamp(dot(N, L), 0, 1);
    vec3 diffuse = texture(texture_diffuse, texCoords).rgb * diff;
    vec3 R = reflect(L, N);
    float spec = pow(max(dot(R, V), 0.0), 32);
    vec3 specular = vec3(spec);
    fragColor = vec4(ambient + diffuse + specular, 1.0);
}
TBN matrix is created as follows in the vertex shader:
vs_out.TBN = transpose(mat3(normalize(tangent), normalize(bitangent), normalize(vs_out.Normal)));
I use the transpose of the TBN to transform all relevant vectors into tangent space. Without offsetting the TexCoords, the lighting looks correct with the normal-mapped texture, so my guess is that it's not the TBN matrix causing the issues. What could cause this to only work in one direction?
Edit
Interestingly, if I invert the y coordinate of the TexCoords input variable, parallax mapping seems to work. I have no idea why this works, though, and I need it to work without the inversion.
vec2 texCoords = vec2(fs_in.TexCoords.x, 1.0 - fs_in.TexCoords.y);
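A plausible explanation for the v-flip: if the baked bitangent doesn't match the direction of the texture's v axis (a common handedness mismatch), the tangent-space V.y component comes out negated, so the parallax offset steps the texture coordinate the wrong way along that one axis. A toy sketch of the effect (the names and the explicit handedness factor are illustrative, not from the poster's code):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Simple parallax offset: step the UV along the tangent-space view
// direction, scaled by the sampled height. 'handedness' is +1 or -1
// depending on whether the bitangent agrees with the UV v direction.
Vec2 parallaxOffset(float vx, float vy, float height, float scale,
                    float handedness) {
    return Vec2{scale * vx * height, scale * vy * height * handedness};
}
```

With the wrong handedness, the y component of the offset is exactly negated while x is untouched, matching the "works in one direction only" symptom.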

Stable Shadow Mapping

I'm trying to stabilise the shadows in my 3D renderer. I'm using CSMs.
Here's the code I've got, making no attempt to stabilise. The size of the projection in world space should at least be constant though:
void SkyLight::update() {
    // direction is the direction that the light is facing
    vec3 tangent = sq::make_tangent(direction);
    for (int i = 0; i < 4; i++) {
        // .first is the far plane, .second is a struct of 8 vec3 making a world-space camera frustum
        const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
        // calculates the bounding box centre of the frustum
        vec3 frusCentre = sq::calc_frusCentre(csm.second);
        mat4 viewMat = glm::lookAt(frusCentre - direction, frusCentre, tangent);
        mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
        matArr[i] = projMat * viewMat;
    }
}
That works. But, the shadows flicker and swim like mad. So, here's an attempt at stabilising, by trying to snap them to texel-sized increments, as recommended everywhere but seemingly never explained:
void SkyLight::update() {
    vec3 tangent = sq::make_tangent(direction);
    for (int i = 0; i < 4; i++) {
        const std::pair<float, sq::Frustum>& csm = camera->csmArr[i];
        vec3 frusCentre = sq::calc_frusCentre(csm.second);
        double qStep = csm.first / 1024.0; // 1024 = shadow texture size
        frusCentre.x = std::round(frusCentre.x / qStep) * qStep;
        frusCentre.y = std::round(frusCentre.y / qStep) * qStep;
        frusCentre.z = std::round(frusCentre.z / qStep) * qStep;
        mat4 viewMat = glm::lookAt(frusCentre - direction, frusCentre, tangent);
        mat4 projMat = glm::ortho(-csm.first, csm.first, -csm.first, csm.first, -csm.first, csm.first);
        matArr[i] = projMat * viewMat;
    }
}
This makes a difference, in that the shadows now swim slowly, rather than bouncing too fast to notice any pattern. However, I'm pretty sure that's just by chance, and that I'm not at all snapping to the right thing, or even snapping the right thing.
To fix this, I need to do the snapping in light space, not world space:
viewMat[3][0] -= glm::mod(viewMat[3][0], 2.f * split / texSize);
viewMat[3][1] -= glm::mod(viewMat[3][1], 2.f * split / texSize);
viewMat[3][2] -= glm::mod(viewMat[3][2], 2.f * split / texSize);
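Concretely, snapping in light space means rounding the view matrix's translation down to a whole number of shadow-map texels, where one texel covers 2·split / texSize world units (the ortho width divided by the resolution). A standalone sketch of that rounding, using a GLSL-style floor-based mod to match glm::mod (std::fmod truncates instead, differing for negative values):

```cpp
#include <cassert>
#include <cmath>

// GLSL-style mod: x - y * floor(x / y), which is what glm::mod computes
// for floats.
float glslMod(float x, float y) { return x - y * std::floor(x / y); }

// Snap one light-space translation component to whole shadow-map texels.
// texelSize = ortho width / resolution = 2 * split / texSize.
float snapToTexel(float value, float split, float texSize) {
    float texelSize = 2.0f * split / texSize;
    return value - glslMod(value, texelSize);
}
```

With split = 50 and a 1024-texel map, one texel is 100/1024 ≈ 0.0977 world units, and every snapped value is an exact multiple of it, so the shadow map no longer slides by sub-texel amounts as the frustum centre moves.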
Old (Wrong) answer:
So, I revisited this today and managed to solve it in about 10 minutes :D
Just round frusCentre like this:
frusCentre -= glm::mod(frusCentre, 2.f * csm.first / 1024.f);
or, more generally:
frusCentre -= glm::mod(frusCentre, 2.f * split / texSize);
EDIT: No, that's no good... I'll keep trying.