I've been trying to produce glossy shading using the Phong model, but instead of a glossy appearance, all I get is a big white blotch on the front of the sphere. Initially the model worked for a single sphere, but since I updated the code to draw multiple spheres the model has started to fail, despite applying the same logic, and I don't know why.
[Image: single sphere — diffuse and specular]
[Image: multiple spheres — diffuse only]
[Image: multiple spheres — diffuse + specular]

The main part of the code:
vec color(const ray& r)
{
    vector<sphere> objects;
    vector<Light> lighting;
    objects.push_back(sphere(vec(0, -100.5, -3), 100, vec(0, 1, 0)));
    objects.push_back(sphere(vec(0, 0, -1), 0.5, vec(1, 0, 0)));
    objects.push_back(sphere(vec(0, 1, -1), 0.5, vec(1, 0, 1)));
    lighting.push_back(Light(vec(0, 0, -1), vec(0, -1, 0)));

    float infinity = 2000.0;
    sphere* closest = NULL;
    vec background_color(.678, .847, .902);
    vec totalLight(0.0, 0.0, 0.0);
    int pos = 0;

    for(int j = 0; j < objects.size(); j++)
    {
        float t = objects[j].intersect(r);
        if(t > 0.0)
        {
            if(t < 2000.0)
            {
                infinity = t;
                closest = &objects[j];
                pos = j;
            }
        }
    }

    if(infinity == 2000.0)
        return background_color;
    else
    {
        float a = objects[pos].intersect(r);
        vec view_dir = vec(-2, 2, 10) - r.p_at_par(a);
        vec normal = unit_vector((r.p_at_par(a) - closest->centre)/closest->radius);
        vec light = unit_vector(vec(-2, 0, 0) - r.p_at_par(a));
        vec reflection = 2.0*dot(light, normal)*normal - light;
        vec specular = vec(1, 1, 1)*pow(max(0.f, dot(reflection, view_dir)), 256);
        vec diffuse = (closest->color)*max(0.f, dot(normal, light));
        vec total = diffuse + specular;
        return total;
    }
}
As I understand it, specular = white * ks * dot(view_dir, reflection_dir)^n, and the total lighting is specular + diffuse + ambient.
You are indeed right about your specular contribution. You can see it as how much light is reflected into the viewing direction.
First of all, I don't see you normalising view_dir. Make sure all vectors are normalised: if a and b both have length 1, then dot(a, b) = cos(θ), where θ is the angle between them, so the result stays in [-1, 1]. An unnormalised view_dir inflates the dot product, and the pow() then saturates to white.
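For reference, here is a minimal sketch of that shading block with every direction normalised (it reuses the same vec, unit_vector, dot, max and pow helpers as the question's code; nothing else changed):

vec hit_point = r.p_at_par(a);
vec normal = unit_vector(hit_point - closest->centre);
vec view_dir = unit_vector(vec(-2, 2, 10) - hit_point); // this was not normalised before
vec light = unit_vector(vec(-2, 0, 0) - hit_point);
// reflecting a unit light vector about a unit normal keeps unit length
vec reflection = 2.0*dot(light, normal)*normal - light;
vec specular = vec(1, 1, 1)*pow(max(0.f, dot(reflection, view_dir)), 256);
vec diffuse = (closest->color)*max(0.f, dot(normal, light));
return diffuse + specular;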
Also, to help with debugging in the future, you may want to generate false-color images. These images can help you see what's going on: e.g. you can render just the flat color, the surface normals (xyz mapped to rgb), the number of light sources affecting a certain pixel, and so on. This may help you spot unexpected behaviour, as sketched below.
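For instance, a normals debug view can be as simple as this (a sketch, assuming the question's vec type supports the scalar and vector arithmetic already used above):

// Visualise surface normals as colors: map each component from [-1, 1] to [0, 1].
vec debug_normal_color(const vec& normal)
{
    return 0.5*(normal + vec(1, 1, 1));
}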
Hope this helps.
Related
I have a 3D WebGL scene. I am using Regl (http://regl.party/), which is a thin wrapper over WebGL, so I am essentially writing straight GLSL.
This is a game project. I have an array of 3D positions [[x,y,z] ...] which are bullets, or projectiles. I want to draw these bullets as a simple cube, sphere, or particle. No requirement on the appearance.
How can I make shaders and a draw call for this without having to create a repeated duplicate set of geometry for the bullets?
I would prefer an answer with a vertex and fragment shader example that demonstrates the expected data input and can be reverse-engineered to handle the CPU binding layer.
You create a regl command, which encapsulates a bunch of data, and then call it with an object.
Each uniform can take an optional function to supply its value. That function is passed a regl context as the first argument and the object you called the command with as the second argument, so you can call the command multiple times with different objects to draw the same thing (same vertices, same shader) somewhere else.
var regl = createREGL()
const objects = [];
const numObjects = 100;
for (let i = 0; i < numObjects; ++i) {
  objects.push({
    x: rand(-1, 1),
    y: rand(-1, 1),
    speed: rand(.5, 1.5),
    direction: rand(0, Math.PI * 2),
    color: [rand(0, 1), rand(0, 1), rand(0, 1), 1],
  });
}
function rand(min, max) {
  return Math.random() * (max - min) + min;
}
const starPositions = [[0, 0, 0]];
const starElements = [];
const numPoints = 5;
for (let i = 0; i < numPoints; ++i) {
  for (let j = 0; j < 2; ++j) {
    const a = (i * 2 + j) / (numPoints * 2) * Math.PI * 2;
    const r = 0.5 + j * 0.5;
    starPositions.push([
      Math.sin(a) * r,
      Math.cos(a) * r,
      0,
    ]);
  }
  starElements.push([
    0, 1 + i * 2, 1 + i * 2 + 1,
  ]);
}
const drawStar = regl({
  frag: `
    precision mediump float;
    uniform vec4 color;
    void main () {
      gl_FragColor = color;
    }`,
  vert: `
    precision mediump float;
    attribute vec3 position;
    uniform mat4 mat;
    void main() {
      gl_Position = mat * vec4(position, 1);
    }`,
  attributes: {
    position: starPositions,
  },
  elements: starElements,
  uniforms: {
    mat: (ctx, props) => {
      const {viewportWidth, viewportHeight} = ctx;
      const {x, y} = props;
      const aspect = viewportWidth / viewportHeight;
      return [.1 / aspect, 0, 0, 0,
              0, .1, 0, 0,
              0, 0, 0, 0,
              x, y, 0, 1];
    },
    color: (ctx, props) => props.color,
  }
})
regl.frame(function () {
  regl.clear({
    color: [0, 0, 0, 1]
  });
  objects.forEach((o) => {
    o.direction += rand(-0.1, 0.1);
    o.x += Math.cos(o.direction) * o.speed * 0.01;
    o.y += Math.sin(o.direction) * o.speed * 0.01;
    o.x = (o.x + 3) % 2 - 1;
    o.y = (o.y + 3) % 2 - 1;
    drawStar(o);
  });
})
<script src="https://cdnjs.cloudflare.com/ajax/libs/regl/1.3.11/regl.min.js"></script>
You can draw all of the bullets as point sprites, in which case you just need to provide the position and size of each bullet and draw them as GL_POINTS. Each "point" is rasterized to a square based on the output of your vertex shader (which runs once per point). Your fragment shader is then called for each fragment in that square and can color it however you want: with a flat color, by sampling a texture, etc.
Or you can provide a single model for all bullets plus a separate transform for each bullet, and draw them as instanced GL_TRIANGLES or GL_TRIANGLE_STRIP or whatever; a sketch of this approach follows below. Read about instancing on the OpenGL wiki.
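In desktop OpenGL terms (WebGL 2's drawArraysInstanced and vertexAttribDivisor map onto the same idea), a hypothetical instanced draw could look like this C++ sketch. Here bullets is an assumed std::vector<glm::vec3> of positions, and the shared cube VBO, VAO and shaders are assumed to be set up elsewhere:

// One offset per instance: upload the bullet positions into their own VBO.
GLuint offsetVBO;
glGenBuffers(1, &offsetVBO);
glBindBuffer(GL_ARRAY_BUFFER, offsetVBO);
glBufferData(GL_ARRAY_BUFFER, bullets.size() * sizeof(glm::vec3),
             bullets.data(), GL_DYNAMIC_DRAW);

// Attribute 1 holds the per-instance offset; a divisor of 1 makes it
// advance once per instance instead of once per vertex.
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);
glVertexAttribDivisor(1, 1);

// Draw the shared 36-vertex cube once per bullet.
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, (GLsizei)bullets.size());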
Not a WebGL coder so read with prejudice...
1. Encode the vertexes in a texture

   Beware of clamping: use a texture format that does not clamp to <0.0, +1.0>, like GL_LUMINANCE32F_ARB, or use vertexes in that range only. To check for clamping use:

   GLSL debug prints

2. Render a single rectangle covering the whole screen

   Use the texture from #1 as input. This will ensure that the fragment shader is called for each pixel of the screen/view exactly once.

3. Inside the fragment shader, read the texture and check the distance of the fragment to your vertexes

   Based on that distance, render your stuff or discard() the fragment... Spheres are easy, but boxes and other shapes might be complicated to render based on the distance to a vertex, especially if they can be arbitrarily oriented (which needs additional info in the input texture). To ease this up you can prerender them into some texture and use the distance as texture coordinates...
This answer of mine is using this technique:
raytrace through 3D mesh
You can sometimes get away with using GL_POINTS with a large gl_PointSize and a customized fragment shader.
The example below uses the distance to the point center for the fragment alpha. (You could just as well sample a texture.)
Support for large point sizes might be limited, though, so check that before deciding on this route.
var canvas = document.getElementById('cvs');
var gl = canvas.getContext('webgl');
var vertices = [
  -0.5,  0.75, 0.0,
   0.0,  0.5,  0.0,
  -0.75, 0.25, 0.0,
];
var vertex_buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
var vertCode =
  `attribute vec3 coord;
   void main(void) {
     gl_Position = vec4(coord, 1.0);
     gl_PointSize = 50.0;
   }`;
var vertShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertShader, vertCode);
gl.compileShader(vertShader);
var fragCode =
  `void main(void) {
     mediump float ds = distance(gl_PointCoord.xy, vec2(0.5, 0.5)) * 2.0;
     mediump vec4 fg_color = vec4(0.0, 0.0, 0.0, 1.0 - ds);
     gl_FragColor = fg_color;
   }`;
var fragShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragShader, fragCode);
gl.compileShader(fragShader);
var shaderProgram = gl.createProgram();
gl.attachShader(shaderProgram, vertShader);
gl.attachShader(shaderProgram, fragShader);
gl.linkProgram(shaderProgram);
gl.useProgram(shaderProgram);
gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
var coord = gl.getAttribLocation(shaderProgram, "coord");
gl.vertexAttribPointer(coord, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(coord);
gl.viewport(0,0,canvas.width,canvas.height);
gl.drawArrays(gl.POINTS, 0, 3);
<!doctype html>
<html>
  <body>
    <canvas width="400" height="400" id="cvs"></canvas>
  </body>
</html>
Could anyone please help me with calculating vertex normals in OpenGL?
I am loading an obj file and adding Gouraud shading by calculating vertex normals, without using glNormal3f or the glLight functions.
I have declared functions like operators, cross product, inner product, etc.
I understand that in order to get vertex normals, I first need to calculate the surface normal (the face normal) with the cross product. Since I am loading an obj file, I place the three vertex indices of each face in id1, id2, id3, something like that.
I would be grateful if anyone could help me write the code or give me a guideline on how to start. Thanks.
This is the drawing code:

for (int i = 0; i < cube.face.size(); i++)
{
    FACE cur_face = cube.face[i];
    glColor3f(cube.vertex_color[cur_face.id1].x, cube.vertex_color[cur_face.id1].y, cube.vertex_color[cur_face.id1].z);
    glVertex3f(cube.vertex[cur_face.id1].x, cube.vertex[cur_face.id1].y, cube.vertex[cur_face.id1].z);
    glColor3f(cube.vertex_color[cur_face.id2].x, cube.vertex_color[cur_face.id2].y, cube.vertex_color[cur_face.id2].z);
    glVertex3f(cube.vertex[cur_face.id2].x, cube.vertex[cur_face.id2].y, cube.vertex[cur_face.id2].z);
    glColor3f(cube.vertex_color[cur_face.id3].x, cube.vertex_color[cur_face.id3].y, cube.vertex_color[cur_face.id3].z);
    glVertex3f(cube.vertex[cur_face.id3].x, cube.vertex[cur_face.id3].y, cube.vertex[cur_face.id3].z);
}
This is the code for the color calculation:

VECTOR kd;
VECTOR ks;
kd = VECTOR(0.8, 0.8, 0.8);
ks = VECTOR(1.0, 0.0, 0.0);
double inner = kd.InnerProduct(ks);
int i, j;
for (i = 0; i < cube.vertex.size(); i++)
{
    VECTOR n = cube.vertex_normal[i];
    VECTOR l = VECTOR(100, 100, 0) - cube.vertex[i];
    VECTOR v = VECTOR(0, 0, 1) - cube.vertex[i];
    float xl = n.InnerProduct(l) / n.Magnitude();
    VECTOR x = (n * (1.0 / n.Magnitude())) * xl;
    VECTOR r = x - (l - x);
    VECTOR color = kd * (n.InnerProduct(l)) + ks * pow((v.InnerProduct(r)), 10);
    cube.vertex_color[i] = color;
}
This answer is for triangular meshes and can be extended to polygonal meshes as well.
tempVertices stores the list of all vertices.
vertexIndices stores the faces (triangles) of the mesh as a flat vector of indices.
std::vector<glm::vec3> v_normal;

// initialize vertex normals to 0
for (int i = 0; i != tempVertices.size(); i++)
{
    v_normal.push_back(glm::vec3(0.0f, 0.0f, 0.0f));
}

// For each face calculate the normal and append it to the corresponding vertices of the face
for (unsigned int i = 0; i < vertexIndices.size(); i += 3)
{
    // vertexIndices[i], [i+1], [i+2] are the three vertices of one triangle
    // (the "- 1" converts from obj's 1-based indices)
    glm::vec3 A = tempVertices[vertexIndices[i] - 1];
    glm::vec3 B = tempVertices[vertexIndices[i + 1] - 1];
    glm::vec3 C = tempVertices[vertexIndices[i + 2] - 1];
    glm::vec3 AB = B - A;
    glm::vec3 AC = C - A;
    glm::vec3 ABxAC = glm::cross(AB, AC);
    v_normal[vertexIndices[i] - 1] += ABxAC;
    v_normal[vertexIndices[i + 1] - 1] += ABxAC;
    v_normal[vertexIndices[i + 2] - 1] += ABxAC;
}
Now normalize each v_normal and use it; a snippet for this final step is below.
Note that the number of vertex normals is equal to the number of vertices of the mesh.
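The final normalisation step could look like this (a small sketch operating on the same v_normal vector; guarding against zero-length normals is left out for brevity):

// Normalise the accumulated per-vertex normals before use.
for (auto& n : v_normal)
    n = glm::normalize(n);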
This code works fine on my machine
glm::vec3 computeFaceNormal(glm::vec3 p1, glm::vec3 p2, glm::vec3 p3) {
    // Uses p2 as a new origin for p1, p3
    auto a = p3 - p2;
    auto b = p1 - p2;
    // Compute the cross product a X b to get the face normal
    return glm::normalize(glm::cross(a, b));
}

void Mesh::calculateNormals() {
    this->normals = std::vector<glm::vec3>(this->vertices.size());
    // For each face calculate normals and append it
    // to the corresponding vertices of the face
    for (unsigned int i = 0; i < this->indices.size(); i += 3) {
        glm::vec3 A = this->vertices[this->indices[i]];
        glm::vec3 B = this->vertices[this->indices[i + 1LL]];
        glm::vec3 C = this->vertices[this->indices[i + 2LL]];
        glm::vec3 normal = computeFaceNormal(A, B, C);
        this->normals[this->indices[i]] += normal;
        this->normals[this->indices[i + 1LL]] += normal;
        this->normals[this->indices[i + 2LL]] += normal;
    }
    // Normalize each normal
    for (unsigned int i = 0; i < this->normals.size(); i++)
        this->normals[i] = glm::normalize(this->normals[i]);
}
It seems all you need to implement is the function to get the average vector from N vectors. This is one of the ways to do it:
struct Vector3f {
    float x, y, z;
};
typedef struct Vector3f Vector3f;

Vector3f averageVector(Vector3f *vectors, int count) {
    Vector3f toReturn;
    toReturn.x = .0f;
    toReturn.y = .0f;
    toReturn.z = .0f;
    // sum all the vectors
    for (int i = 0; i < count; i++) {
        Vector3f toAdd = vectors[i];
        toReturn.x += toAdd.x;
        toReturn.y += toAdd.y;
        toReturn.z += toAdd.z;
    }
    // divide by the number of vectors
    // TODO: check (count == 0)
    float scale = 1.0f / count;
    toReturn.x *= scale;
    toReturn.y *= scale;
    toReturn.z *= scale;
    return toReturn;
}
I am sure you can port that to your C++ class. The result should then be normalized, unless its length is zero.
Find all the surface normals for every vertex you have, then use averageVector and normalize the result to get the smooth normals you are looking for; see the usage sketch below.
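As a usage sketch (n0, n1, n2 are hypothetical face normals of the faces sharing one vertex):

// Average the face normals incident to a vertex; normalize the result afterwards.
Vector3f incident[3] = { n0, n1, n2 };
Vector3f smooth = averageVector(incident, 3);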
Still, as already mentioned, you should know that this is not appropriate for the edged parts of a shape. In those cases you should use the surface vectors directly. You would probably be able to solve most such cases by simply ignoring any surface normals that are too different from the others. Extremely edgy shapes like a cube, for instance, will be impossible with this procedure. At a cube's corner you would get the three surface normals:

{
    1.0f, .0f, .0f,
    .0f, 1.0f, .0f,
    .0f, .0f, 1.0f
}

with the normalized average {.58f, .58f, .58f}. The result would pretty much be an extremely low-resolution sphere rather than a cube.
Ok. So, I've been messing around with shadows in my game engine for the last week. I've mostly implemented cascading shadow maps (CSM), but I'm having a bit of a problem with shadowing that I just can't seem to solve.
The only light in this scene is a directional light (sun) pointing along {-0.1, -0.25, -0.65}. I calculate four sets of frustum bounds for the four splits of my CSM with this code:
// each projection matrix calculated with same near plane, different far
Frustum make_worldFrustum(const glm::mat4& _invProjView) {
    Frustum fr; glm::vec4 temp;
    temp = _invProjView * glm::vec4(-1, -1, -1, 1);
    fr.xyz = glm::vec3(temp) / temp.w;
    temp = _invProjView * glm::vec4(-1, -1, 1, 1);
    fr.xyZ = glm::vec3(temp) / temp.w;
    ...etc 6 more times for ndc cube
    return fr;
}
For the light, I get a view matrix like this:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
I then create each ortho matrix from the bounds of each frustum:
lightMatVec.clear();
for (auto& frus : cam.frusVec) {
    glm::vec3 arr[8] {
        glm::vec3(viewMat * glm::vec4(frus.xyz, 1)),
        glm::vec3(viewMat * glm::vec4(frus.xyZ, 1)),
        etc...
    };
    glm::vec3 minO = {INFINITY, INFINITY, INFINITY};
    glm::vec3 maxO = {-INFINITY, -INFINITY, -INFINITY};
    for (auto& vec : arr) {
        minO = glm::min(minO, vec);
        maxO = glm::max(maxO, vec);
    }
    glm::mat4 projMat = glm::ortho(minO.x, maxO.x, minO.y, maxO.y, minO.z, maxO.z);
    lightMatVec.push_back(projMat * viewMat);
}
I have a 4 layer TEXTURE_2D_ARRAY bound to 4 framebuffers that I draw the scene into with a very simple vertex shader (frag disabled or punchthrough alpha).
I then draw the final scene. The vertex shader outputs four shadow texcoords:
out vec3 slShadcrd[4];

// stuff

for (int i = 0; i < 4; i++) {
    vec4 sc = WorldBlock.skylMatArr[i] * vec4(world_pos, 1);
    slShadcrd[i] = sc.xyz / sc.w * 0.5f + 0.5f;
}
And a fragment shader, which determines the split to use with:
int csmIndex = 0;
for (uint i = 0u; i < CameraBlock.csmCnt; i++) {
    if (-view_pos.z > CameraBlock.csmSplits[i]) csmIndex++;
    else break;
}
And samples the shadow map array with this function:
float sample_shadow(vec3 _sc, int _csmIndex, sampler2DArrayShadow _tex) {
    return texture(_tex, vec4(_sc.xy, _csmIndex, _sc.z));
}
And this is the scene I get (with each split slightly tinted and the 4 depth layers overlaid):
Great! Looks good.
But, if I turn the camera slightly to the right:
Then shadows start disappearing (and depending on the angle, appearing where they shouldn't be).
I have GL_DEPTH_CLAMP enabled, so that isn't the issue. I'm culling front faces, but turning that off doesn't make a difference to this issue.
What am I missing? I feel like it's an issue with one of my projections, but they all look right to me. Thanks!
EDIT:
All four of the light's frustums drawn. They are all there, but only z is changing relative to the camera (see comment below):
EDIT:
Probably more useful, this is how the frustums look when I only update them once, when the camera is at (0,0,0) and pointing forwards (0,1,0). Also I drew them with depth testing this time.
IMPORTANT EDIT:
It seems that this issue is directly related to the light's view matrix, currently:
glm::mat4 viewMat = glm::lookAt(cam.pos, cam.pos + lightDir, {0,0,1});
Changing the values for eye and target seems to affect the buggered shadows, but I don't know what I should actually be setting them to. Should be easy for someone with a better understanding than me :D
Solved it! It was indeed an issue with the light's view matrix! All I had to do was replace cam.pos with the centre point of each frustum, meaning each split's light matrix needs a different view matrix. So I create each view matrix like this...
glm::mat4 viewMat = glm::lookAt(frusCentre, frusCentre+lightDir, {0,0,1});
And get frusCentre simply...
glm::vec3 calc_frusCentre(const Frustum& _frus) {
    glm::vec3 min(INFINITY, INFINITY, INFINITY);
    glm::vec3 max(-INFINITY, -INFINITY, -INFINITY);
    for (auto& vec : {_frus.xyz, _frus.xyZ, _frus.xYz, _frus.xYZ,
                      _frus.Xyz, _frus.XyZ, _frus.XYz, _frus.XYZ}) {
        min = glm::min(min, vec);
        max = glm::max(max, vec);
    }
    return (min + max) / 2.f;
}
And bam! Everything works spectacularly!
EDIT (Last one!):
What I had was not quite right. The view matrix should actually be:
glm::lookAt(frusCentre-lightDir, frusCentre, {0,0,1});
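Putting the pieces together, the per-split loop from the question would then be built roughly like this (a sketch using the same names as above; the ortho fitting is unchanged):

lightMatVec.clear();
for (auto& frus : cam.frusVec) {
    glm::vec3 frusCentre = calc_frusCentre(frus);
    // each cascade gets its own view matrix, centred on its own frustum
    glm::mat4 viewMat = glm::lookAt(frusCentre - lightDir, frusCentre, {0, 0, 1});
    // ...then fit the ortho bounds to the frustum corners in light space,
    // exactly as in the loop above, and push projMat * viewMat.
}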
I found a few strange HLSL bugs - or PIX is talking nonsense:
I have 2 orthogonal vectors: A = { 0.0f, -1.0f, 0.0f } and B = { 0.0f, 0.0f, 1.0f }.
If I use the HLSL dot function, the output is -0.0f, which makes sense. BUT the acos of that output is then -0.0000675917 (that's what PIX says, and what the shader outputs), which is not what I had expected.
Even if I compute the dot product myself (A.x*B.x + A.y*B.y + ...), the result is still 0.0f, but the acos of my result isn't zero.
I need the result of acos to be as precise as possible, because I want to color my vertices according to the angle between the triangle normal and a given vector.
float4 PS_MyPS(VS_OUTPUT input) : COLOR
{
    // compute the lighting
    float Light = saturate(dot(input.Normal, g_LightDir)) + saturate(dot(-input.Normal, g_LightDir));

    // if the angle between the normal and the camera direction is greater than 90 degrees
    if (dot(input.Vector, CameraDirection) < 0)
    {
        input.Vector = -input.Vector; // use a mirrored normal
    }

    float angle = acos(0.0f) - acos(dot(input.Vector, Vector));

    float4 Color;
    if (angle > Angles.x) // Set the color according to the Angle
    {
        Color = Color1;
    }
    else if (angle > Angles.y)
    {
        Color = Color2;
    }
    else if (angle >= -abs(Angles.y))
    {
        Color = Color3;
    }
    else if (angle >= Angles.z)
    {
        Color = Color4;
    }
    else
    {
        Color = Color5;
    }
    return Light * Color;
}
It works fine for angles above 0.01 degrees, but gives wrong results for smaller values.
The other bugs I found: the HLSL "length" function returns 1 for the vector (0, -0, -0, 0) in PIX, and the HLSL function "any" on that vector returns true as well. This would mean that -0.0f != 0.0f.
Has anyone else encountered these and maybe has a workaround for my problem?
I tested it on an Intel HD Graphics 4600 and a Nvidia card with the same results.
One of the primary reasons acos may return bad results: always remember that acos only accepts values between -1.0 and 1.0.
Hence if the value exceeds 1.0 even slightly (1.00001 instead of 1.0), it may return an incorrect result.
I deal with this problem by forced capping, i.e. putting in a check:

if (something > 1.0)
    something = 1.0;
else if (something < -1.0)
    something = -1.0;
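In C++ terms the same guard could be wrapped up like this (a minimal sketch; safeAcos is just an illustrative name):

#include <algorithm>
#include <cmath>

// Clamp the cosine into acos's valid domain before calling it, so tiny
// floating-point overshoots don't produce garbage angles.
float safeAcos(float c)
{
    return std::acos(std::min(1.0f, std::max(-1.0f, c)));
}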
I am trying to create my own quaternion class, and I get weird results: either the cube I am trying to rotate flickers like crazy, or it gets warped.
This is my code:
void Quaternion::AddRotation(vec4 v)
{
    Quaternion temp(v.x, v.y, v.z, v.w);
    *this = temp * (*this);
}

mat4 Quaternion::GenerateMatrix(Quaternion &q)
{
    q.Normalize();

    // Row order
    mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z, 2*q.x*q.y - 2*q.w*q.z,     2*q.x*q.z + 2*q.w*q.y,     0,
            2*q.x*q.y + 2*q.w*q.z,     1 - 2*q.x*q.x - 2*q.z*q.z, 2*q.y*q.z + 2*q.w*q.x,     0,
            2*q.x*q.z - 2*q.w*q.y,     2*q.y*q.z - 2*q.w*q.x,     1 - 2*q.x*q.x - 2*q.y*q.y, 0,
            0,                         0,                         0,                         1);

    // Col order
    // mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z, 2*q.x*q.y + 2*q.w*q.z,     2*q.x*q.z - 2*q.w*q.y,     0,
    //         2*q.x*q.y - 2*q.w*q.z,     1 - 2*q.x*q.x - 2*q.z*q.z, 2*q.y*q.z - 2*q.w*q.x,     0,
    //         2*q.x*q.z + 2*q.w*q.y,     2*q.y*q.z + 2*q.w*q.x,     1 - 2*q.x*q.x - 2*q.y*q.y, 0,
    //         0,                         0,                         0,                         1);

    return m;
}
When I create the entity I give it a quaternion:
entity->Quat.AddRotation(vec4(1.0f, 1.0f, 0.0f, 45.f));
And each frame I try to rotate it additionally by a small amount:
for (int i = 0; i < Entities.size(); i++)
{
    if (Entities[i] != NULL)
    {
        Entities[i]->Quat.AddRotation(vec4(0.5f, 0.2f, 1.0f, 0.000005f));
        Entities[i]->DrawModel();
    }
    else
        break;
}
And finally this is how I draw each cube:
void Entity::DrawModel()
{
    glPushMatrix();

    // Rotation
    mat4 RotationMatrix;
    RotationMatrix = this->Quat.GenerateMatrix(this->Quat);

    // Position
    mat4 TranslationMatrix = glm::translate(mat4(1.0f), this->Pos);

    this->Trans = TranslationMatrix * RotationMatrix;
    glMultMatrixf(value_ptr(this->Trans));

    if (this->shape != NULL)
        this->shape->DrawShape();

    glPopMatrix();
}
EDIT: This is the tutorial I used to learn quaternions:
http://www.cprogramming.com/tutorial/3d/quaternions.html
Without studying your rotation matrix to the end, there are two possible bugs I can think of. The first is that your rotation matrix R is not orthogonal, i.e. the inverse of R is not equal to its transpose; this could cause the warping of the object. The second place for a bug to hide is the multiplication of your quaternions; a standard Hamilton product to check your operator* against is sketched below.
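This is only a sketch; it assumes your Quaternion has w, x, y, z members and a default constructor:

// Hamilton product: applies rotation b, then rotation a.
Quaternion operator*(const Quaternion& a, const Quaternion& b)
{
    Quaternion r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}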
There's a mistake in the rotation matrix: try exchanging element (2,3) with element (3,2), as in the corrected matrix below.
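For reference, the corrected row-order matrix (same mat4 constructor as in the question, with only the two swapped terms changed) would read:

// Standard quaternion-to-matrix conversion; (2,3) is now minus and (3,2) plus.
mat4 m( 1 - 2*q.y*q.y - 2*q.z*q.z, 2*q.x*q.y - 2*q.w*q.z,     2*q.x*q.z + 2*q.w*q.y,     0,
        2*q.x*q.y + 2*q.w*q.z,     1 - 2*q.x*q.x - 2*q.z*q.z, 2*q.y*q.z - 2*q.w*q.x,     0,
        2*q.x*q.z - 2*q.w*q.y,     2*q.y*q.z + 2*q.w*q.x,     1 - 2*q.x*q.x - 2*q.y*q.y, 0,
        0,                         0,                         0,                         1);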