Why are the transformation matrices behaving differently? - OpenGL

I get different results when scaling objects.
Each object has four glm::vec3 values: Position, Rotation, Scaling and Center Point (pivot).
This is the transformation matrix of the object:
TransformationMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
The rotation and scaling matrices are both built around the pivot; the scaling one looks like this:
glm::vec3 pivotVector(pivotx, pivoty, pivotz);
glm::mat4 TransPivot = glm::translate(glm::mat4x4(1.0f), pivotVector);
glm::mat4 TransPivotInverse = glm::translate(glm::mat4x4(1.0f), -pivotVector);
glm::mat4 TransformationScale = glm::scale(glm::mat4(1.0), glm::vec3(scax, scay, scaz));
return TransPivot * TransformationScale * TransPivotInverse;
In the first case, I move the rectangle object 200 units in x, then scale the group, which is at position x = 0.0. So the final matrix for the rectangle object is
finalMatrix = rectangleTransformationMatrix * groupTransformationMatrix
The result is what I expected: the rectangle scales and moves towards the center of the screen.
Now I do the same thing with three containers. Here I move the group container to 200 and scale the Top container, which is at position 0.0:
finalMatrix = rectangleTransformationMatrix * groupTransformationMatrix * TopTransformationMatrix
The rectangle scales at its own position, as if the center point of the screen had also moved 200 units. If I add -200 units to the pivot point x of the Top container, then I get the result I expected, where the rectangle moves towards the center of the screen and scales.
Can someone please explain why I need to add -200 units to the center point of the Top container, whereas in the first case I did not need to add any value to the pivot point of the scaling container, when both operations are identical in nature?
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
First case
Rectangle -> position( x = 200, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), rotation( 0.0, 0.0, 0.0 )
glm::mat4 PositionMatrix = glm::translate( // fill the values );
glm::mat4 ScalingMatrix = glm::scale( // fill the values );
glm::mat4 RotationMatrix = glm::rotate( // fill the values );
RectangleMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
the matrix for the group
group -> position( x = 0.0, y = 0, z = 0 ), scaling( 0.5, 1.0, 1.0 ), rotation( 0.0, 0.0, 0.0 )
groupMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
final result
finalMatrix = RectangleMatrix * groupMatrix
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Second case
Rectangle -> position( x = 0, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), rotation( 0.0, 0.0, 0.0 )
glm::mat4 PositionMatrix = glm::translate( // fill the values );
glm::mat4 ScalingMatrix = glm::scale( // fill the values );
glm::mat4 RotationMatrix = glm::rotate( // fill the values );
RectangleMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
the matrix for the group
group -> position( x = 200.0, y = 0, z = 0 ), scaling( 1.0, 1.0, 1.0 ), rotation( 0.0, 0.0, 0.0 )
groupMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
the matrix for Top
Top -> position( x = 0.0, y = 0, z = 0 ), scaling( 0.5, 1.0, 1.0 ), rotation( 0.0, 0.0, 0.0 )
TopMatrix = PositionMatrix * RotationMatrix * ScalingMatrix;
final result
finalMatrix = RectangleMatrix * groupMatrix * TopMatrix

Matrix multiplication is not commutative: scale * translate is not the same as translate * scale.
If you have a translation of 200 and a scale of 0.2, then
translate(200) * scale(0.2)
gives an object scaled by 0.2 and translated by 200. But
scale(0.2) * translate(200)
gives an object scaled by 0.2 and translated by 40 (0.2 * 200).
If you have 2 matrices:
groupMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
TopMatrix = PositionMatrix() * RotationMatrix() * ScalingMatrix();
Then groupMatrix * TopMatrix is the same as
groupPositionMatrix * groupRotationMatrix * groupScalingMatrix * topPositionMatrix * topRotationMatrix * topScalingMatrix
The result is different depending on whether the scale is encoded in groupScalingMatrix or in topScalingMatrix, and likewise whether the translation is encoded in groupPositionMatrix or in topPositionMatrix.
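The 200-versus-40 example can be reduced to one dimension. This is a plain-float sketch rather than glm matrices, so it is self-contained, but the order of operations is the same: M * p applies the rightmost factor first.

```cpp
#include <cassert>
#include <cmath>

// translate(200) * scale(0.2): the point is scaled first, then translated.
float translate_times_scale(float x) {
    return 0.2f * x + 200.0f;
}

// scale(0.2) * translate(200): the point is translated first, then scaled,
// so the translation itself gets scaled too.
float scale_times_translate(float x) {
    return 0.2f * (x + 200.0f);
}
```

A point at the origin ends up at 200 in the first case but only at 40 in the second, which is exactly the difference described above.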

How to bevel a stretched rectangle

I am able to bevel the corners of a rectangle.
When I stretch the rectangle and then try to bevel it, the result does not look smooth; it should look like the rectangle on the right side.
How do I calculate the points for the triangle fan when the rectangle is stretched?
Currently this is how I am calculating the points for the quarter circle:
std::vector<float> bevelData;
bevelData.push_back(0.0); // first the centre vertex of the fan: position (x, y, z), then normal (0, 0, 1) and texcoord (0, 0)
bevelData.push_back(0.0);
bevelData.push_back(0.0);
bevelData.push_back(0);
bevelData.push_back(0);
bevelData.push_back(1);
bevelData.push_back(0);
bevelData.push_back(0);
for (int i = 0; i <= segments; ++i) {
float x, y;
float angle = start_angle + 0.5 * M_PI * i / static_cast<float>(segments);
x = circX + cos(angle) * rad; // circX is the centre of the circle as marked in yellow in the first image
y = circY + sin(angle) * rad; // circY is the centre of the circle as marked in yellow in the first image , rad is the radius of the circle
bevelData.push_back(x);
bevelData.push_back(y);
bevelData.push_back(0.0);
bevelData.push_back(0);
bevelData.push_back(0);
bevelData.push_back(1);
bevelData.push_back(0);
bevelData.push_back(0);
}
After applying the solution, this is the result I get:
//Bevel Bottom Right
float rightWidthBottom = (width / 2) - rightBottomBevel;
float rightHeightBottom = (height / 2) - rightBottomBevel;
std::vector<float> bottomRightBevelData = draw_bevel(rightWidthBottom, rightHeightBottom, rightBottomBevel, 1, -1, iSegmentsRightBottom);
std::vector<float> SuperRectangle::draw_bevel(float p_x, float p_y, float rad, int dir_x, int dir_y , int segments)
{
std::vector<float> bevelData;
float c_x, c_y; // the center of the circle
float start_angle; // the angle where to start the arc
bevelData.push_back(0.0);
bevelData.push_back(0.0);
bevelData.push_back(0.0);
bevelData.push_back(0);
bevelData.push_back(0);
bevelData.push_back(1);
bevelData.push_back(0);
bevelData.push_back(0);
c_x = p_x * dir_x;
c_y = p_y * dir_y;
if (dir_x == 1 && dir_y == 1)
start_angle = 0.0;
else if (dir_x == 1 && dir_y == -1)
start_angle = -M_PI * 0.5f;
else if (dir_x == -1 && dir_y == 1)
start_angle = M_PI * 0.5f;
else if (dir_x == -1 && dir_y == -1)
start_angle = M_PI;
for (int i = 0; i <= segments; ++i) {
float x, y;
float angle = start_angle + 0.5 * M_PI * i / static_cast<float>(segments);
x = c_x + cos(angle) * rad;
y = c_y + sin(angle) * rad;
float fscale = (y / (float)(height / 2.0f));
x = (x + (strech * fscale));
bevelData.push_back(x);
bevelData.push_back(y);
bevelData.push_back(0.0);
bevelData.push_back(0);
bevelData.push_back(0);
bevelData.push_back(1);
bevelData.push_back(0);
bevelData.push_back(0);
}
return bevelData;
}
//////////////////////////////////////////////////////////////////
float xWidth = width / 2;
float yHeight = height / 2;
float TriangleRight[] = {
// positions // Normals // Texture Coord
0.0f , 0.0f , 0.0f , 0.0f,0.0,1.0, 0.0,0.0,
xWidth + strech , yHeight - rightTopBevel,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
xWidth - strech , -yHeight + rightBottomBevel,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
};
float TriangleLeft[] = {
// positions
0.0f , 0.0f , 0.0f , 0.0f,0.0,1.0, 0.0,0.0,
-xWidth + strech , yHeight - leftTopBevel ,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
-xWidth - strech , -yHeight + leftBottomBevel,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
};
float TriangleTop[] = {
// positions
0.0f , 0.0f , 0.0f , 0.0f,0.0,1.0, 0.0,0.0,
xWidth - rightTopBevel + strech , yHeight ,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
-xWidth + leftTopBevel + strech , yHeight,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
};
float TriangleBottom[] = {
// positions
0.0f , 0.0f , 0.0f , 0.0f,0.0,1.0, 0.0,0.0,
xWidth - rightBottomBevel - strech , -yHeight ,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
-xWidth + leftBottomBevel - strech , -yHeight,0.0f, 0.0f,0.0,1.0 , 0.0,0.0,
};
You have a rectangle with a width w and a height h:
(-w/2, h/2) (w/2, h/2)
+----------------+
| |
| |
| |
| |
+----------------+
(-w/2, -h/2) (w/2, -h/2)
The points for the rounded corner of the rectangle are calculated by:
x = circX + cos(angle) * rad;
y = circY + sin(angle) * rad;
Then the rectangle is displaced by d. At the top, d is added to the x component of the corner points, and at the bottom, d is subtracted from the x component of the corner points:
(-w/2 + d, h/2) (w/2 + d, h/2)
+----------------+
/ /
/ /
/ /
/ /
+----------------+
(-w/2 - d, -h/2) (w/2 - d, -h/2)
You have to apply the displacement d to the points along the arc, too. The displacement has to be scaled in relation to the y coordinate of the point: points near the bottom edge have to be displaced by a larger (negative) amount than points near the center of the left edge:
x = circX + cos(angle) * rad
y = circY + sin(angle) * rad
scale = y / (h/2)
x = x + d * scale
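This displacement rule is what `draw_bevel` above applies per arc point. A self-contained sketch of just the scaled offset (names are illustrative, not the class's members): the scale is +1 on the top edge, -1 on the bottom edge, and 0 at mid-height, so bottom points shift the opposite way from top points.

```cpp
#include <cassert>
#include <cmath>

// Shear a point's x coordinate in proportion to its height within the
// rectangle: y = +h/2 moves by +d, y = -h/2 by -d, y = 0 not at all.
float shear_x(float x, float y, float h, float d) {
    float scale = y / (h / 2.0f);
    return x + d * scale;
}
```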

Rubik cube quaternion rotation gives error on simultaneous rotation

I wanted to use quaternions to rotate about a specific axis regardless of the position of the cube, but when I rotate 90 degrees along the x axis and then try to rotate along the y axis, instead of rotating about y it rotates about the z axis. Any idea how to solve this? My code is the following:
quaternion permquat;
permquat.x = 0;
permquat.y = 0;
permquat.z = 0;
permquat.w = 1;
quaternion permquat2;
permquat2.x = 0;
permquat2.y = 0;
permquat2.z = 0;
permquat2.w = 1;
quaternion local_rotation;
local_rotation.w = cosf( beta/(2.0*180.0));
local_rotation.x = 1.0 * sinf( beta/(2.0*180.0) );
local_rotation.y = 0.0 * sinf( beta/(2.0*180.0) );
local_rotation.z = 0.0 * sinf( beta/(2.0*180.0) );
quaternion local_rotation2;
local_rotation2.w = cosf( gamma/(2.0*180.0));
local_rotation2.x = 0.0 * sinf( gamma/(2.0*180.0));
local_rotation2.y = 1.0 * sinf( gamma/(2.0*180.0));
local_rotation2.z = 0.0 * sinf( gamma/(2.0*180.0));
permquat = mult(local_rotation, permquat);
normalize(permquat);
permquat2 = mult(local_rotation2, permquat2);
normalize(permquat);
matTransl3D = matrix(permquat);
glMultMatrixf(*matTransl3D);
matTransl3D = matrix(permquat2);
glMultMatrixf(*matTransl3D);
glTranslatef(-9, -9, -9); // bottom left
drawcube();
glPopMatrix();
Beta and Gamma are incremented by 2 for each key press.
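The symptom described (y turning into z after a 90-degree x rotation) likely comes down to the order in which the rotations are combined, since quaternion multiplication is not commutative. This is a minimal standalone sketch (not the asker's quaternion library, and with an explicit degrees-to-radians conversion) showing that the two composition orders give different results:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product: applies b first, then a.
Quat mult(Quat a, Quat b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

// Quaternion for a rotation of deg degrees about the (unit) axis (ax, ay, az).
Quat axis_angle(float ax, float ay, float az, float deg) {
    float r = deg * 3.14159265358979f / 180.0f; // degrees -> radians
    float s = std::sin(r / 2.0f);
    return { std::cos(r / 2.0f), ax * s, ay * s, az * s };
}
```

Composing a 90-degree x rotation with a 90-degree y rotation in the two possible orders produces different quaternions (their z components even have opposite signs), which is why accumulating "world axis" rotations needs a consistent multiplication order.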

Math behind finding screen coordinates in a path tracer

I have been provided with a framework where a simple path tracer is implemented. What I am trying to do so far is understand the whole code, because I'll need to put my hands on it. Unfortunately I have arrived at a step where I don't actually get what's happening, and since I am a newbie in the advanced graphics field I don't manage to "decrypt" this part. The developer is trying to get the coordinates of the screen corners, as per the comments. What I need to understand is the math behind it, and therefore some of the variables that are used. Here is the code:
// setup virtual screen plane
vec3 E( 2, 8, -26 ), V( 0, 0, 1 );
static float r = 1.85f;
mat4 M = rotate( mat4( 1 ), r, vec3( 0, 1, 0 ) );
float d = 0.5f, ratio = SCRWIDTH / SCRHEIGHT, focal = 18.0f;
vec3 p1( E + V * focal + vec3( -d * ratio * focal, d * focal, 0 ) ); // top-left screen corner
vec3 p2( E + V * focal + vec3( d * ratio * focal, d * focal, 0 ) ); // top-right screen corner
vec3 p3( E + V * focal + vec3( -d * ratio * focal, -d * focal, 0 ) ); // bottom-left screen corner
p1 = vec3( M * vec4( p1, 1.0f ) );
p2 = vec3( M * vec4( p2, 1.0f ) );
p3 = vec3( M * vec4( p3, 1.0f ) );
For example:
what is the "d" variable, and why are both "d" and "focal" fixed?
is "focal" the focal length?
What do you think the "E" and "V" vectors are?
is the matrix "M" the CameraToWorldCoordinates matrix?
I need to understand, if possible, every step of those formulas: the variables, and the math used in those few lines of code. Thanks in advance.
My guesses:
E: eye position—position of the eye/camera in world space
V: view direction—the direction the camera is looking, in world coordinates
d: named constant for one half—corners are half the screen size away from the centre (where the camera is looking)
focal: distance of the image plane from the camera. Given its use in screen corner offsets, it also seems to be the height of the image plane in world coordinates.
M: I'd say this is the WorldToCamera matrix. It's used to transform a point which is based on E
How the points are computed:
Start at the camera: E
Move focal distance along the view direction, effectively moving to the centre of the image plane: + V * focal
Add offsets on X & Y which will move half a screen distance: + vec3( ::: )
Given that V does not figure in the vec3() arguments (nor does any up or right vector), this seems to hard-code the assumption that V is collinear with the Z axis.
Finally, the points are transformed as points (as opposed to directions, since their homogeneous coordinate is 1) by M.
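For context, three corners are enough because a path tracer typically interpolates between them to find the point on the virtual screen plane for each pixel. That later code is not shown in the question, so this is an assumption about the framework, sketched with a minimal vec3 stand-in:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// p1 = top-left, p2 = top-right, p3 = bottom-left screen corner.
// u, v are the normalized pixel coordinates in [0, 1].
Vec3 screen_point(Vec3 p1, Vec3 p2, Vec3 p3, float u, float v) {
    return { p1.x + u * (p2.x - p1.x) + v * (p3.x - p1.x),
             p1.y + u * (p2.y - p1.y) + v * (p3.y - p1.y),
             p1.z + u * (p2.z - p1.z) + v * (p3.z - p1.z) };
}
```

A primary ray for that pixel would then start at E with direction from E towards this point, normalized.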

OpenGL: Deform (scale) stencil shadow from light source

I have a basic stencil shadow functioning in my game engine. I'm trying to deform the shadow based on the lighting direction, which I have:
/*
* @brief Applies translation, rotation and scale for the shadow of the specified
* entity. In order to reuse the vertex arrays from the primary rendering
* pass, the shadow origin must be transformed into model-view space.
*/
static void R_RotateForMeshShadow_default(const r_entity_t *e) {
vec3_t origin, delta;
if (!e) {
glPopMatrix();
return;
}
R_TransformForEntity(e, e->lighting->shadow_origin, origin);
VectorSubtract(e->lighting->shadow_origin, e->origin, delta);
const vec_t scale = 1.0 + VectorLength(delta) / LIGHTING_MAX_SHADOW_DISTANCE;
/*const vec_t dot = DotProduct(e->lighting->shadow_normal, e->lighting->dir);
const vec_t sy = sin(Radians(e->angles[YAW]));
const vec_t cy = cos(Radians(e->angles[YAW]));*/
glPushMatrix();
glTranslatef(origin[0], origin[1], origin[2] + 1.0);
glRotatef(-e->angles[PITCH], 0.0, 1.0, 0.0);
glScalef(scale, scale, 0.0);
}
I've commented out the dot product of the ground plane (shadow_normal) and the lighting direction, as well as the sin and cos of the model's yaw, because while I'm pretty sure they are what I need to augment the scale of the shadow, I don't know the correct formula to yield a perspective-correct deformation. For someone who better understands projections this is probably child's play; for me, I'm stabbing in the dark.
I was eventually able to achieve the desired effect by managing my own matrices and adapting code from SGI's OpenGL Cookbook. The code uses LordHavoc's matrix library from his DarkPlaces Quake engine. Inline comments call out the major steps. Here's the full code:
/*
* @brief Projects the model view matrix for the given entity onto the shadow
* plane. A perspective shear is then applied using the standard planar shadow
* deformation from SGI's cookbook, adjusted for Quake's negative planes:
*
* ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node192.html
*/
static void R_RotateForMeshShadow_default(const r_entity_t *e, r_shadow_t *s) {
vec4_t pos, normal;
matrix4x4_t proj, shear;
vec_t dot;
if (!e) {
glPopMatrix();
return;
}
const cm_bsp_plane_t *p = &s->plane;
// project the entity onto the shadow plane
vec3_t vx, vy, vz, t;
Matrix4x4_ToVectors(&e->matrix, vx, vy, vz, t);
dot = DotProduct(vx, p->normal);
VectorMA(vx, -dot, p->normal, vx);
dot = DotProduct(vy, p->normal);
VectorMA(vy, -dot, p->normal, vy);
dot = DotProduct(vz, p->normal);
VectorMA(vz, -dot, p->normal, vz);
dot = DotProduct(t, p->normal) - p->dist;
VectorMA(t, -dot, p->normal, t);
Matrix4x4_FromVectors(&proj, vx, vy, vz, t);
glPushMatrix();
glMultMatrixf((GLfloat *) proj.m);
// transform the light position and shadow plane into model space
Matrix4x4_Transform(&e->inverse_matrix, s->illumination->light.origin, pos);
pos[3] = 1.0;
const vec_t *n = p->normal;
Matrix4x4_TransformPositivePlane(&e->inverse_matrix, n[0], n[1], n[2], p->dist, normal);
// calculate shearing, accounting for Quake's negative plane equation
normal[3] = -normal[3];
dot = DotProduct(pos, normal) + pos[3] * normal[3];
shear.m[0][0] = dot - pos[0] * normal[0];
shear.m[1][0] = 0.0 - pos[0] * normal[1];
shear.m[2][0] = 0.0 - pos[0] * normal[2];
shear.m[3][0] = 0.0 - pos[0] * normal[3];
shear.m[0][1] = 0.0 - pos[1] * normal[0];
shear.m[1][1] = dot - pos[1] * normal[1];
shear.m[2][1] = 0.0 - pos[1] * normal[2];
shear.m[3][1] = 0.0 - pos[1] * normal[3];
shear.m[0][2] = 0.0 - pos[2] * normal[0];
shear.m[1][2] = 0.0 - pos[2] * normal[1];
shear.m[2][2] = dot - pos[2] * normal[2];
shear.m[3][2] = 0.0 - pos[2] * normal[3];
shear.m[0][3] = 0.0 - pos[3] * normal[0];
shear.m[1][3] = 0.0 - pos[3] * normal[1];
shear.m[2][3] = 0.0 - pos[3] * normal[2];
shear.m[3][3] = dot - pos[3] * normal[3];
glMultMatrixf((GLfloat *) shear.m);
Matrix4x4_Copy(&s->matrix, &proj);
}
The full implementation of this lives here:
https://github.com/jdolan/quake2world/blob/master/src/client/renderer/r_mesh_shadow.c
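The shear block above is the SGI cookbook's planar-shadow projection written out element by element. As a standalone illustration (using the positive-plane convention n.p + d = 0 rather than Quake's negative one, and illustrative names rather than the engine's types), the same math can be applied directly to a point: out = (n.l) * p - l * (n.p) squashes p onto the plane along the line from the point light l through p.

```cpp
#include <cassert>
#include <cmath>

// 4-component dot product (plane and position are both homogeneous).
static float dot4(const float a[4], const float b[4]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

// n = plane (nx, ny, nz, d), l = light position (x, y, z, 1),
// p = point to project (x, y, z, 1), out = projected point (homogeneous).
void shadow_project(const float n[4], const float l[4],
                    const float p[4], float out[4]) {
    const float nl = dot4(n, l);
    const float np = dot4(n, p);
    for (int i = 0; i < 4; ++i)
        out[i] = nl * p[i] - l[i] * np;
}
```

Since dot4(n, out) is always zero by construction, the projected point (after dividing by its w component) lies exactly on the shadow plane.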

Trying draw cylinder in directx through D3DXCreateCylinder

I am very novice in DirectX and want to know more. I was trying the code from directxtutorial.com. Is there any example/sample for D3DXCreateCylinder? Thanks.
Alright then,
D3DXCreateCylinder can be used as such
LPD3DXMESH cylinder; // Define a pointer to the mesh.
D3DXCreateCylinder(d3ddev, 2.0f, 0.0f, 10.0f, 10, 10, &cylinder, NULL);
So what is going on?
d3ddev should be your Direct3D device, which I will assume you have created. The remaining arguments, in order, are:
The radius at the negative Z end.
The radius at the positive Z end.
The length of the shape along the Z axis.
The number of polygons (or subdivisions) around the Z axis.
The number of polygons along the Z axis.
The address of the pointer which receives the created mesh.
The last argument (here NULL) optionally receives adjacency information.
Tinker around with the values; experimenting can't hurt.
These resources will help supplement the answer provided:
https://directxtutorial.com/Tutorial11/B-A/BA2.aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476880(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/windows/desktop/hh780339(v=vs.85).aspx
By default, the D3DXCreateCylinder API doesn't generate texture coordinates for mapping a texture onto the created cylindrical mesh.
Alternatively, you can build your own cylindrical geometry for texture mapping, like below:
for( DWORD i = 0; i < Sides; i++ )
{
FLOAT theta = ( 2 * D3DX_PI * i ) / ( Sides - 1 );
pVertices[2 * i + 0].position = D3DXVECTOR3(radius*sinf( theta ), -height, radius*cosf( theta ) );
pVertices[2 * i + 0].color = 0xffffffff;
pVertices[2 * i + 0].tu = ( ( FLOAT )i ) / ( Sides - 1 );
pVertices[2 * i + 0].tv = 1.0f;
pVertices[2 * i + 1].position = D3DXVECTOR3( radius*sinf( theta ), height, radius*cosf( theta ) );
pVertices[2 * i + 1].color = 0xff808080;
pVertices[2 * i + 1].tu = ( ( FLOAT )i ) / ( Sides - 1 );
pVertices[2 * i + 1].tv = 0.0f;
}
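The loop above can be sketched in plain C++ without the D3DX types (struct and function names here are illustrative): two rings of vertices around the axis, with tu running from 0 to 1 so a texture wraps exactly once, and the last column repeating the first to close the seam.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct CylVert { float x, y, z, tu, tv; };

// Generate the side vertices of a cylinder as a bottom/top vertex pair per
// column, matching the interleaving of the D3DX-style loop above.
std::vector<CylVert> cylinder_side(float radius, float height, int sides) {
    std::vector<CylVert> v;
    const float PI = 3.14159265358979f;
    for (int i = 0; i < sides; ++i) {
        // theta reaches a full 2*PI at i = sides - 1, duplicating column 0
        float theta = (2.0f * PI * i) / (sides - 1);
        float tu = (float)i / (sides - 1); // texture u wraps 0 -> 1
        v.push_back({ radius * std::sin(theta), -height,
                      radius * std::cos(theta), tu, 1.0f }); // bottom ring
        v.push_back({ radius * std::sin(theta),  height,
                      radius * std::cos(theta), tu, 0.0f }); // top ring
    }
    return v;
}
```

Rendered as a triangle strip, each consecutive bottom/top pair forms one quad of the cylinder wall.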