Quaternion rotation getting warped or flickering - C++

I am trying to create my own quaternion class and I get weird results: the cube I am trying to rotate either flickers like crazy or gets warped.
This is my code:
void Quaternion::AddRotation(vec4 v)
{
    Quaternion temp(v.x, v.y, v.z, v.w);
    *this = temp * (*this);
}
mat4 Quaternion::GenerateMatrix(Quaternion &q)
{
    q.Normalize();
    // Row order
    mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z,  2*q.x*q.y - 2*q.w*q.z,  2*q.x*q.z + 2*q.w*q.y,  0,
            2*q.x*q.y + 2*q.w*q.z,  1 - 2*q.x*q.x - 2*q.z*q.z,  2*q.y*q.z + 2*q.w*q.x,  0,
            2*q.x*q.z - 2*q.w*q.y,  2*q.y*q.z - 2*q.w*q.x,  1 - 2*q.x*q.x - 2*q.y*q.y,  0,
            0, 0, 0, 1);
    // Col order
    // mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z, 2*q.x*q.y + 2*q.w*q.z, 2*q.x*q.z - 2*q.w*q.y, 0,
    //         2*q.x*q.y - 2*q.w*q.z, 1 - 2*q.x*q.x - 2*q.z*q.z, 2*q.y*q.z - 2*q.w*q.x, 0,
    //         2*q.x*q.z + 2*q.w*q.y, 2*q.y*q.z + 2*q.w*q.x, 1 - 2*q.x*q.x - 2*q.y*q.y, 0,
    //         0, 0, 0, 1);
    return m;
}
When I create the entity I give it a quaternion:
entity->Quat.AddRotation(vec4(1.0f, 1.0f, 0.0f, 45.f));
And each frame I try to rotate it additionally by a small amount:
for (int i = 0; i < Entities.size(); i++)
{
    if (Entities[i] != NULL)
    {
        Entities[i]->Quat.AddRotation(vec4(0.5f, 0.2f, 1.0f, 0.000005f));
        Entities[i]->DrawModel();
    }
    else
        break;
}
And finally this is how I draw each cube:
void Entity::DrawModel()
{
    glPushMatrix();
    // Rotation
    mat4 RotationMatrix;
    RotationMatrix = this->Quat.GenerateMatrix(this->Quat);
    // Position
    mat4 TranslationMatrix = glm::translate(mat4(1.0f), this->Pos);
    this->Trans = TranslationMatrix * RotationMatrix;
    glMultMatrixf(value_ptr(this->Trans));
    if (this->shape != NULL)
        this->shape->DrawShape();
    glPopMatrix();
}
EDIT: This is the tutorial I used to learn quaternions:
http://www.cprogramming.com/tutorial/3d/quaternions.html

Without studying your rotation matrix to the end, there are two possible bugs I can think of. The first is that your rotation matrix R is not orthogonal, i.e. the inverse of R is not equal to its transpose; that would cause warping of the object. The second place for a bug to hide is the multiplication of your quaternions.
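For comparison, here is what a correct Hamilton product looks like, as a minimal sketch with a plain struct (not the poster's class; the `mul` helper name is made up). Any sign slip here accumulates with each AddRotation call and distorts the rotation over time:

```cpp
#include <cassert>

struct Quat { float x, y, z, w; };

// Hamilton product a * b (components in x, y, z, w order).
Quat mul(const Quat& a, const Quat& b) {
    return Quat{
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z };
}
```

A quick sanity check for such a routine: multiplying by the identity quaternion (0, 0, 0, 1) must return the other operand unchanged, and i * j must equal k.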

There's a mistake in the rotation matrix: try exchanging element (2,3) with element (3,2).
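For reference, this is the row-major layout with those two entries swapped relative to the question's matrix; a sketch with a plain struct (the `quatToMatrix3` name is made up), assuming a unit quaternion and column vectors multiplied on the right:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

// Row-major 3x3 rotation block of a unit quaternion. Note that
// element (2,3) carries -2wx and element (3,2) carries +2wx,
// the opposite of the matrix in the question.
void quatToMatrix3(const Quat& q, float m[9]) {
    m[0] = 1 - 2*q.y*q.y - 2*q.z*q.z;
    m[1] =     2*q.x*q.y - 2*q.w*q.z;
    m[2] =     2*q.x*q.z + 2*q.w*q.y;
    m[3] =     2*q.x*q.y + 2*q.w*q.z;
    m[4] = 1 - 2*q.x*q.x - 2*q.z*q.z;
    m[5] =     2*q.y*q.z - 2*q.w*q.x;   // element (2,3)
    m[6] =     2*q.x*q.z - 2*q.w*q.y;
    m[7] =     2*q.y*q.z + 2*q.w*q.x;   // element (3,2)
    m[8] = 1 - 2*q.x*q.x - 2*q.y*q.y;
}
```

A 90-degree rotation about Z, q = (0, 0, sin 45°, cos 45°), should map the x-axis onto the y-axis; that makes a convenient spot check.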

Related

Picking with a physics library (OpenGL & Bullet 3D)

I'm trying to use Bullet physics to cast a ray at a game object in the scene so I can select it. I use the camera matrix to build the ray, pick a point in space, and then search a list of game objects for one at the same location.
On mouse press I run the following code; it seems to be off and only picks the items some of the time:
glm::vec4 lRayStart_NDC(
    ((float)lastX / (float)RECT_WIDTH - 0.5f) * 2.0f,
    ((float)lastY / (float)RECT_HEIGHT - 0.5f) * 2.0f,
    -1.0,
    1.0f
);
glm::vec4 lRayEnd_NDC(
    ((float)lastX / (float)RECT_WIDTH - 0.5f) * 2.0f,
    ((float)lastY / (float)RECT_HEIGHT - 0.5f) * 2.0f,
    0.0,
    1.0f
);
projection = glm::perspective(glm::radians(SceneManagement::getInstance()->MainCamera->GetVOW()), (float)RECT_WIDTH / (float)RECT_HEIGHT, 0.1f, 100.0f);
glm::mat4 InverseProjectionMatrix = glm::inverse(projection);
view = SceneManagement::getInstance()->MainCamera->GetViewMatrix();
glm::mat4 InverseViewMatrix = glm::inverse(view);
glm::vec4 lRayStart_camera = InverseProjectionMatrix * lRayStart_NDC;
lRayStart_camera /= lRayStart_camera.w;
glm::vec4 lRayStart_world = InverseViewMatrix * lRayStart_camera;
lRayStart_world /= lRayStart_world.w;
glm::vec4 lRayEnd_camera = InverseProjectionMatrix * lRayEnd_NDC;
lRayEnd_camera /= lRayEnd_camera.w;
glm::vec4 lRayEnd_world = InverseViewMatrix * lRayEnd_camera;
lRayEnd_world /= lRayEnd_world.w;
glm::vec3 lRayDir_world(lRayEnd_world - lRayStart_world);
lRayDir_world = glm::normalize(lRayDir_world);
glm::vec3 out_end = SceneManagement::getInstance()->MainCamera->GetCamPosition() + SceneManagement::getInstance()->MainCamera->GetCamFront() * 1000.0f;
btCollisionWorld::ClosestRayResultCallback RayCallback(
    btVector3(SceneManagement::getInstance()->MainCamera->GetCamPosition().x,
              SceneManagement::getInstance()->MainCamera->GetCamPosition().y,
              SceneManagement::getInstance()->MainCamera->GetCamPosition().z),
    btVector3(out_end.x, out_end.y, out_end.z)
);
PhysicsManager::getInstance()->dynamicsWorld->rayTest(
    btVector3(SceneManagement::getInstance()->MainCamera->GetCamPosition().x,
              SceneManagement::getInstance()->MainCamera->GetCamPosition().y,
              SceneManagement::getInstance()->MainCamera->GetCamPosition().z),
    btVector3(out_end.x, out_end.y, out_end.z),
    RayCallback
);
if (RayCallback.hasHit())
{
    btTransform position = RayCallback.m_collisionObject->getInterpolationWorldTransform();
    printf("Collision \n");
    for (int i = 0; i < SceneManagement::getInstance()->gObjects.size(); i++)
    {
        if (SceneManagement::getInstance()->gObjects.at(i)->transform.Position.x == position.getOrigin().getX() &&
            SceneManagement::getInstance()->gObjects.at(i)->transform.Position.y == position.getOrigin().getY() &&
            SceneManagement::getInstance()->gObjects.at(i)->transform.Position.z == position.getOrigin().getZ())
        {
            int select = i;
            SceneManagement::getInstance()->SelectedGameObject = SceneManagement::getInstance()->gObjects.at(select);
            SceneManagement::getInstance()->SelectedGameObject->DisplayInspectorUI();
            return;
        }
    }
}
This check
SceneManagement::getInstance()->gObjects.at(i)->transform.Position.x == position.getOrigin().getX() &&
SceneManagement::getInstance()->gObjects.at(i)->transform.Position.y == position.getOrigin().getY() &&
SceneManagement::getInstance()->gObjects.at(i)->transform.Position.z == position.getOrigin().getZ()
fails due to numerical precision issues.
You should check the "equality" of vectors only up to some precision.
Use a distance function and the following check instead of your if() condition:
const double EPSILON = 1e-4; // experiment with this value
auto objPos = SceneManagement::getInstance()->gObjects.at(i)->transform.Position;
bool isWithinRange = distance3D(objPos, position.getOrigin()) < EPSILON;
if (isWithinRange)
{
    int select = i;
    ...
}
The distance3D function is simply the Euclidean distance in 3D. glm::distance should do; just convert both vectors to the glm::vec3 format first.
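A minimal version of such a distance3D, written here with a plain struct so it stands alone (with glm and Bullet you would instead convert the btVector3 to glm::vec3 and call glm::distance):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Euclidean distance between two points in 3D.
float distance3D(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```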

Issue with Picking (custom unProject() function)

I'm currently working on an STL file viewer. It uses an arcball camera.
To provide more features in this viewer (which can handle more than one object), I would like to implement click-to-select. To achieve it, I have used picking (pseudo code I have used).
At this time, my code that checks for any 3D object between two points works. However, the conversion of the mouse position to a correct pair of vectors is far from working:
glm::vec3 range = transform.GetPosition() + ( transform.GetFront() * 1000.0f);
// x and y are cursor position on the screen
glm::vec3 start = UnProject(x,y, transform.GetPosition().z);
glm::vec3 end = UnProject(x,y,range.z);
/*
The code which iterate over all objects in the scene and checks for collision
between my start / end and the object hitbox
*/
As you can see, I have tried (maybe it is naive) to set the z distance between my start and my end using 1000 * the front vector of my camera. But it's not working: the set of vectors I get is incoherent.
For example, placing the camera at (0, 0, 0) with a front of (0, 0, -1) gives me this set of vectors:
Start : 0.0000~ , 0.0000~ , 0.0000~
End : 0.0000~ , 0.0000~ , 0.0000~
which is (by my logic) incoherent; I would have expected something more like Start: (0, 0, 0) and End: (0, 0, -1000).
I think there's an issue with my UnProject function :
glm::vec3 UnProject(float winX, float winY, float winZ)
{
    // Compute (projection x modelView) ^ -1:
    glm::mat4 modelView = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    const glm::mat4 m = glm::inverse(projection * modelView);
    // Need to invert Y since the screen Y-origin points down,
    // while the 3D Y-origin points up (this is an OpenGL-only requirement):
    winY = ScreenSize.cy - winY;
    // Transformation to normalized coordinates between -1 and 1:
    glm::vec4 in;
    in.x = winX / ScreenSize.cx * 2.0 - 1.0;
    in.y = winY / ScreenSize.cy * 2.0 - 1.0;
    in.z = 2.0 * winZ - 1.0;
    in.w = 1.0;
    // To world coordinates:
    glm::vec4 out(m * in);
    if (out.w == 0.0) // Avoid a division by zero
    {
        return glm::vec3(0.0f);
    }
    out.w = 1.0 / out.w;
    return glm::vec3(out.x * out.w, out.y * out.w, out.z * out.w);
}
Since this function is a basic rewrite of the pseudocode (from here) and I'm far from good at mathematics, I don't really see what could go wrong...
PS: my view matrix (provided by GetViewMatrix()) is correct, since I use it to render my scene; my projection matrix is also correct; and the ScreenSize object carries my viewport size.
I have found what's wrong: the returned vec3 should be built by dividing each component by the perspective term (w) instead of multiplying by it. Here is the new UnProject function:
glm::vec3 UnProject2(float winX, float winY, float winZ) {
    glm::mat4 View = GetViewMatrix() * glm::mat4(1.0f);
    glm::mat4 projection = GetProjectionMatrix(ScreenSize);
    glm::mat4 viewProjInv = glm::inverse(projection * View);
    winY = ScreenSize.cy - winY;
    glm::vec4 clickedPointOnSreen;
    clickedPointOnSreen.x = ((winX - 0.0f) / ScreenSize.cx) * 2.0f - 1.0f;
    clickedPointOnSreen.y = ((winY - 0.0f) / ScreenSize.cy) * 2.0f - 1.0f;
    clickedPointOnSreen.z = 2.0f * winZ - 1.0f;
    clickedPointOnSreen.w = 1.0f;
    glm::vec4 clickedPointOrigin = viewProjInv * clickedPointOnSreen;
    return glm::vec3(clickedPointOrigin.x / clickedPointOrigin.w,
                     clickedPointOrigin.y / clickedPointOrigin.w,
                     clickedPointOrigin.z / clickedPointOrigin.w);
}
I also changed the way start and end are calculated :
glm::vec3 start = UnProject2(x,y,0.0f);
glm::vec3 end = UnProject2(x,y,1.0f);
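From the two unprojected points, the picking ray itself is just the normalized difference; with glm this is `glm::normalize(end - start)`. A standalone sketch with a plain struct (the `rayDirection` helper name is made up):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Unit direction from the near-plane point to the far-plane point.
Vec3 rayDirection(const Vec3& start, const Vec3& end) {
    Vec3 d{end.x - start.x, end.y - start.y, end.z - start.z};
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{d.x / len, d.y / len, d.z / len};
}
```

For the camera-at-origin, front (0, 0, -1) example above, a start of (0, 0, 0) and an end of (0, 0, -1000) yields the direction (0, 0, -1), as expected.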

OpenGL moving vertices with mouse

I am using legacy OpenGL and trying to move vertices around with the mouse. To test whether a vertex was clicked, I loop through all vertices and multiply each one by the model and projection matrices before dividing by the w value. This works fine and is shown below:
for (Vertex *vertex : context->getMesh().vertices) {
    QVector4D vert(vertex->xPos, vertex->yPos, vertex->zPos, 1.0f);
    QVector4D transformedVert = projectionMatrix * modelMatrix * vert;
    transformedVert /= transformedVert.w();
    if ((mappedX < (transformedVert.x() + 0.1) && mappedX > (transformedVert.x() - 0.1)) &&
        (mappedY < (transformedVert.y() + 0.1) && mappedY > (transformedVert.y() - 0.1))) {
        std::cout << "SUCCESS" << std::endl;
        vertexPicked = true;
        currentVertex = vertex;
    }
}
Then, when I move the mouse, I try to work backwards: I first multiply the current mouse coordinates by the same w value as in the first step and then multiply by the inverses of the projection and model matrices. This moves the vertex around, but not to where the mouse is.
float mouseX = ((2.0f * event->x()) / width() - 1.0f);
float mouseY = -((2.0f * event->y()) / height() - 1.0f);
float w = (projectionMatrix * modelMatrix *
           QVector4D(MousePicker::currentVertex->xPos,
                     MousePicker::currentVertex->yPos,
                     MousePicker::currentVertex->zPos, 1)).w();
float x = (modelMatrix.inverted() * projectionMatrix.inverted() *
           (QVector4D(mouseX, mouseY, 1, 1) * w)).x();
MousePicker::currentVertex->xPos = x;
I am currently only trying to change the X coordinate.
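The "multiply by the same w" idea can be sketched in one dimension; this is a toy model, not the poster's Qt code. With a standard perspective transform, clip.w equals -eyeZ, so mapping a new mouse NDC x back to eye space at the vertex's unchanged depth is just a multiply by that original w:

```cpp
#include <cassert>

// Toy 1D perspective: ndc = eyeX / w, with w = -eyeZ.
float projectX(float eyeX, float eyeZ) { return eyeX / (-eyeZ); }

// Inverse at a fixed depth: reuse the vertex's original w.
float unprojectX(float ndcX, float w) { return ndcX * w; }
```

A vertex at eyeX = 2, eyeZ = -4 projects to NDC 0.5 with w = 4; dragging the mouse to NDC 0.25 then lands the vertex at eyeX = 1 in the same depth plane.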

DirectX 11 Translation/Rotation Issue

I can translate my 2D image to (0, 0) using the code below.
D3DXMATRIX worldMatrix, viewMatrix, orthoMatrix, rotation, movement;
// Get the world, view, and ortho matrices from the camera.
m_camera.GetViewMatrix(viewMatrix);
m_camera.GetWorldMatrix(worldMatrix);
m_camera.GetOrthoMatrix(orthoMatrix);
// Move the texture to the new position
D3DXMatrixTranslation(&movement, ((m_VerticeProperties->screenWidth / 2) * -1) + m_posX,
(m_VerticeProperties->screenHeight / 2) - m_posY, 0.0f);
worldMatrix = movement;
//float m_rotationZ = -90 * 0.0174532925f;
//D3DXMatrixRotationYawPitchRoll(&rotation, 0, 0, m_rotationZ);
//worldMatrix = rotation;
// Give the bitmap class what it needs to make source rect
m_bitmap->SetVerticeProperties(m_VerticeProperties->screenWidth, m_VerticeProperties->screenHeight,
m_VerticeProperties->frameWidth, m_VerticeProperties->frameHeight, m_VerticeProperties->U, m_VerticeProperties->V);
//Render the model (the vertices)
m_bitmap->Render(m_d3dManager.GetDeviceContext(), flipped);
//Render the shader
m_shader->Render(m_d3dManager.GetDeviceContext(), m_bitmap->GetIndexCount(), worldMatrix, viewMatrix,
orthoMatrix, m_bitmap->GetTexture(), m_textureTranslationU, m_VerticeProperties->translationPercentageV);
The result:
I can also rotate the image with this code:
D3DXMATRIX worldMatrix, viewMatrix, orthoMatrix, rotation, movement;
// Get the world, view, and ortho matrices from the camera.
m_camera.GetViewMatrix(viewMatrix);
m_camera.GetWorldMatrix(worldMatrix);
m_camera.GetOrthoMatrix(orthoMatrix);
//// Move the texture to the new position
//D3DXMatrixTranslation(&movement, ((m_VerticeProperties->screenWidth / 2) * -1) + m_posX,
// (m_VerticeProperties->screenHeight / 2) - m_posY, 0.0f);
//worldMatrix = movement;
float m_rotationZ = 90 * 0.0174532925f;
D3DXMatrixRotationYawPitchRoll(&rotation, 0, 0, m_rotationZ);
worldMatrix = rotation;
// Give the bitmap class what it needs to make source rect
m_bitmap->SetVerticeProperties(m_VerticeProperties->screenWidth, m_VerticeProperties->screenHeight,
m_VerticeProperties->frameWidth, m_VerticeProperties->frameHeight, m_VerticeProperties->U, m_VerticeProperties->V);
//Render the model (the vertices)
m_bitmap->Render(m_d3dManager.GetDeviceContext(), flipped);
//Render the shader
m_shader->Render(m_d3dManager.GetDeviceContext(), m_bitmap->GetIndexCount(), worldMatrix, viewMatrix,
orthoMatrix, m_bitmap->GetTexture(), m_textureTranslationU, m_VerticeProperties->translationPercentageV);
The result:
I thought that multiplying the translation and rotation matrices together and assigning the product to the world matrix would let me see both effects at once:
D3DXMatrixTranslation(&movement, ((m_VerticeProperties->screenWidth / 2) * -1) + m_posX,
(m_VerticeProperties->screenHeight / 2) - m_posY, 0.0f);
float m_rotationZ = 90 * 0.0174532925f;
D3DXMatrixRotationYawPitchRoll(&rotation, 0, 0, m_rotationZ);
worldMatrix = rotation * movement;
It doesn't: the image no longer appears on the screen.
Can anyone tell me what I'm doing wrong? Thanks.
Just do world * -translate * rotation * translate; it will make the rotation local.
Here is my code as an example:
void Ojbect::RotZ(float angle, Vec3 origin)
{
    Mat4 w, rz, t, tBack;
    rz.RotZ(angle);
    t.Translation(-origin.x, -origin.y, 0);    // move origin to (0, 0)
    tBack.Translation(origin.x, origin.y, 0);  // and move it back afterwards
    w = t * rz * tBack;
    Vec4 newPos;
    for (int i = 0; i < countV; i++)
    {
        Vec3 pos(vertex[i].x, vertex[i].y, 1);
        newPos.Transform(pos, w);
        vertex[i].x = newPos.x;
        vertex[i].y = newPos.y;
    }
    UpdateVertex(countV);
}
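The same translate-rotate-translate-back composition, written out with plain 2D math so the order of operations is easy to verify independently of any matrix class (the `rotateAbout` helper name is made up):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate p about an arbitrary origin: shift the origin to (0, 0),
// rotate, then shift back -- the "-translate * rotation * translate"
// idea from the answer above.
Vec2 rotateAbout(Vec2 p, Vec2 origin, float angleRad) {
    float c = std::cos(angleRad), s = std::sin(angleRad);
    float x = p.x - origin.x, y = p.y - origin.y;  // to the origin
    return Vec2{origin.x + x * c - y * s,          // rotate, then back
                origin.y + x * s + y * c};
}
```

Rotating (2, 1) by 90 degrees about (1, 1) gives (1, 2): the point orbits the pivot instead of the world origin, which is exactly the "rotate local" behavior the answer describes.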

calculating vertex normals in opengl with c++

Could anyone please help me calculate vertex normals in OpenGL?
I am loading an obj file and adding Gouraud shading by calculating vertex normals, without using glNormal3f or the glLight functions.
I have declared helpers such as operators, cross product, inner product, etc.
I understand that in order to get vertex normals, I first need to calculate the surface normal (the face normal) with the cross product. Since I am loading an obj file, I am placing the three vertex indices of each face in id1, id2, id3, something like that.
I would be grateful if anyone could help me write the code or give me a guideline on how to start. Thanks.
This is the drawing code:
FACE cur_face = cube.face[i];
glColor3f(cube.vertex_color[cur_face.id1].x,cube.vertex_color[cur_face.id1].y,cube.vertex_color[cur_face.id1].z);
glVertex3f(cube.vertex[cur_face.id1].x,cube.vertex[cur_face.id1].y,cube.vertex[cur_face.id1].z);
glColor3f(cube.vertex_color[cur_face.id2].x,cube.vertex_color[cur_face.id2].y,cube.vertex_color[cur_face.id2].z);
glVertex3f(cube.vertex[cur_face.id2].x,cube.vertex[cur_face.id2].y,cube.vertex[cur_face.id2].z);
glColor3f(cube.vertex_color[cur_face.id3].x,cube.vertex_color[cur_face.id3].y,cube.vertex_color[cur_face.id3].z);
glVertex3f(cube.vertex[cur_face.id3].x,cube.vertex[cur_face.id3].y,cube.vertex[cur_face.id3].z);
}
This is the equation for the color calculation:
VECTOR kd;
VECTOR ks;
kd = VECTOR(0.8, 0.8, 0.8);
ks = VECTOR(1.0, 0.0, 0.0);
double inner = kd.InnerProduct(ks);
int i, j;
for (i = 0; i < cube.vertex.size(); i++)
{
    VECTOR n = cube.vertex_normal[i];
    VECTOR l = VECTOR(100,100,0) - cube.vertex[i];
    VECTOR v = VECTOR(0,0,1) - cube.vertex[i];
    float xl = n.InnerProduct(l) / n.Magnitude();
    VECTOR x = (n * (1.0 / n.Magnitude())) * xl;
    VECTOR r = x - (l - x);
    VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)), 10);
    cube.vertex_color[i] = color;
}
This answer is for a triangular mesh and can be extended to a poly mesh as well.
tempVertices stores the list of all vertices.
vertexIndices stores the faces (triangles) of the mesh in a flat vector.
std::vector<glm::vec3> v_normal;
// initialize vertex normals to 0
for (int i = 0; i != tempVertices.size(); i++)
{
    v_normal.push_back(glm::vec3(0.0f, 0.0f, 0.0f));
}
// For each face, calculate the normal and add it to each of the face's vertices
for (unsigned int i = 0; i < vertexIndices.size(); i += 3)
{
    // vertexIndices[i], [i+1], [i+2] are the three vertices of one triangle
    glm::vec3 A = tempVertices[vertexIndices[i] - 1];
    glm::vec3 B = tempVertices[vertexIndices[i + 1] - 1];
    glm::vec3 C = tempVertices[vertexIndices[i + 2] - 1];
    glm::vec3 AB = B - A;
    glm::vec3 AC = C - A;
    glm::vec3 ABxAC = glm::cross(AB, AC);
    v_normal[vertexIndices[i] - 1] += ABxAC;
    v_normal[vertexIndices[i + 1] - 1] += ABxAC;
    v_normal[vertexIndices[i + 2] - 1] += ABxAC;
}
Now normalize each v_normal and use it.
Note that the number of vertex normals is equal to the number of vertices of the mesh.
This code works fine on my machine
glm::vec3 computeFaceNormal(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3) {
    // Uses p2 as a new origin for p1, p3
    auto a = p3 - p2;
    auto b = p1 - p2;
    // Compute the cross product a x b to get the face normal
    return glm::normalize(glm::cross(a, b));
}

void Mesh::calculateNormals() {
    this->normals = std::vector<glm::vec3>(this->vertices.size());
    // For each face calculate the normal and append it
    // to each of the face's vertices
    for (unsigned int i = 0; i < this->indices.size(); i += 3) {
        glm::vec3 A = this->vertices[this->indices[i]];
        glm::vec3 B = this->vertices[this->indices[i + 1LL]];
        glm::vec3 C = this->vertices[this->indices[i + 2LL]];
        glm::vec3 normal = computeFaceNormal(A, B, C);
        this->normals[this->indices[i]] += normal;
        this->normals[this->indices[i + 1LL]] += normal;
        this->normals[this->indices[i + 2LL]] += normal;
    }
    // Normalize each normal
    for (unsigned int i = 0; i < this->normals.size(); i++)
        this->normals[i] = glm::normalize(this->normals[i]);
}
It seems all you need to implement is a function to get the average of N vectors. This is one way to do it:
struct Vector3f {
    float x, y, z;
};
typedef struct Vector3f Vector3f;

Vector3f averageVector(Vector3f *vectors, int count) {
    Vector3f toReturn;
    toReturn.x = .0f;
    toReturn.y = .0f;
    toReturn.z = .0f;
    // sum all the vectors
    for (int i = 0; i < count; i++) {
        Vector3f toAdd = vectors[i];
        toReturn.x += toAdd.x;
        toReturn.y += toAdd.y;
        toReturn.z += toAdd.z;
    }
    // divide by the number of vectors
    // TODO: check (count == 0)
    float scale = 1.0f / count;
    toReturn.x *= scale;
    toReturn.y *= scale;
    toReturn.z *= scale;
    return toReturn;
}
I am sure you can port that to your C++ class. The result should then be normalized, unless its length is zero.
Find all the surface normals for every vertex you have, then use averageVector and normalize the result to get the smooth normals you are looking for.
Still, as already mentioned, you should know that this is not appropriate for the edged parts of the shape. In those cases you should use the surface normals directly. You can probably handle most such cases by simply ignoring any surface normal that is too different from the others. Extremely edgy shapes like a cube will be impossible with this procedure. For a cube corner, for instance, you would get the three face normals:
{
    1.0f,  .0f,  .0f,
     .0f, 1.0f,  .0f,
     .0f,  .0f, 1.0f
}
With the normalized average {.58f, .58f, .58f}. The result would pretty much be an extremely low-resolution sphere rather than a cube.
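That corner case is easy to verify numerically: averaging the three axis-aligned face normals and normalizing gives 1/sqrt(3) ≈ 0.577 in each component. A compact sketch combining the two steps (the `averagedNormal` helper name is made up; it folds the averaging and normalization above into one function):

```cpp
#include <cassert>
#include <cmath>

struct Vector3f { float x, y, z; };

// Sum `count` vectors and normalize the result
// (assumes the sum is not the zero vector).
Vector3f averagedNormal(const Vector3f* v, int count) {
    Vector3f r{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; i++) {
        r.x += v[i].x; r.y += v[i].y; r.z += v[i].z;
    }
    float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
    return Vector3f{r.x / len, r.y / len, r.z / len};
}
```

Note that dividing by the count before normalizing (as averageVector does) changes nothing here, since normalization removes any uniform scale.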