Bending a wire to form a circle and ellipse - OpenGL

I am given N points on a straight line, let's say (x1, y1), (x2, y2), ..., (xn, yn); these points represent a wire in 3D. I want this wire to bend into the shape of a circle or an ellipse, so the points on the line should map to points on the circle or ellipse. Can anyone suggest a mapping technique that maps points on a straight line onto points on a circle or an ellipse?

Reduce the line points to scalar parametric coordinates 0 <= t <= 1.
Multiply the t coordinates by 2*pi (giving theta) and plug them into the parametric circle equation:
x = cos( theta )
y = sin( theta )
Example:
Given 4 points (0,0), (1,1), (5,5), and (10,10) convert to parametric coordinates like so:
length = | (10,10) - (0,0) | = sqrt( 10^2 + 10^2 ) = sqrt( 200 )
t0 = 0.0 = | (0,0) - (0,0) | / length = 0
t1 = 0.1 = | (1,1) - (0,0) | / length = sqrt( 2 ) / length
t2 = 0.5 = | (5,5) - (0,0) | / length = sqrt( 50 ) / length
t3 = 1.0 = | (10,10) - (0,0) | / length = sqrt( 200 ) / length
p0.x = cos( t0 * 2 * pi ) = 1
p0.y = sin( t0 * 2 * pi ) = 0
p1.x = cos( t1 * 2 * pi ) = 0.80901699437
p1.y = sin( t1 * 2 * pi ) = 0.58778525229
...
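A minimal sketch of this mapping, assuming a simple Point struct and an illustrative helper name (bendOntoEllipse and the semi-axes a, b are not from the original post; a = b = r gives a circle, a != b an ellipse):

#include <cmath>
#include <vector>

struct Point { double x, y; };

// Map points lying on a straight line onto an ellipse with semi-axes a and b.
// Each point's distance along the line, normalized to [0, 1], becomes the
// parameter t, and t * 2 * pi is the angle on the ellipse.
std::vector<Point> bendOntoEllipse(const std::vector<Point>& line, double a, double b)
{
    std::vector<Point> result;
    if (line.size() < 2)
        return result;
    const double pi = std::acos(-1.0);
    const Point& first = line.front();
    const Point& last = line.back();
    const double length = std::hypot(last.x - first.x, last.y - first.y);
    for (const Point& p : line) {
        double t = std::hypot(p.x - first.x, p.y - first.y) / length; // 0 <= t <= 1
        double theta = t * 2.0 * pi;
        result.push_back({a * std::cos(theta), b * std::sin(theta)});
    }
    return result;
}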

Related

Rotating line inside rectangle bounds

What I am trying to achieve is to rotate a line around the rectangle's center so that it always stays within the bounds, touching them (or keeping some padding).
Right now I have the following routine for this. As you can see, I use tan calculations, dividing my rectangle into 8 parts (red lines).
It works so far, but for some reason I get an inconsistency with the other calculation I use for drawing the radius (green line): the lines won't always match as expected, and I wonder why.
Basically the same could be achieved using just sin/cos calculations and finding the intersection points between the line and the rectangle borders, but for some reason I could not get that to work.
std::pair<Point, Point>
MathUtils::calculateRotatingLine(Size size, double degrees)
{
    auto width = size.width;
    auto height = size.height;

    double diagonalAngleTopRight = radiansToDegrees(atan((width / 2) / (height / 2)));
    double diagonalAngleBottomRight = 90 + (90 - diagonalAngleTopRight);
    double diagonalAngleBottomLeft = 180 + diagonalAngleTopRight;
    double diagonalAngleTopLeft = 180 + diagonalAngleBottomRight;

    double x, y;

    /*
     * *8*1*
     * 7* *2
     * 6* *3
     * *5*4*
     */
    // 1
    if (degrees >= 0 && degrees <= diagonalAngleTopRight) {
        x = width / 2 + height / 2 * tan(degreesToRadians(degrees));
        y = 0;
    }
    // 2
    else if (degrees > diagonalAngleTopRight && degrees <= 90) {
        x = width;
        y = width / 2 * tan(degreesToRadians(degrees - diagonalAngleTopRight));
    }
    // 3
    else if (degrees > 90 && degrees <= diagonalAngleBottomRight) {
        x = width;
        y = height / 2 + width / 2 * tan(degreesToRadians(degrees - 90));
    }
    // 4
    else if (degrees > diagonalAngleBottomRight && degrees <= 180) {
        x = width - height / 2 * tan(degreesToRadians(degrees - diagonalAngleBottomRight));
        y = height;
    }
    // 5
    else if (degrees > 180 && degrees <= diagonalAngleBottomLeft) {
        x = width / 2 - height / 2 * tan(degreesToRadians(degrees - 180));
        y = height;
    }
    // 6
    else if (degrees > diagonalAngleBottomLeft && degrees <= 270) {
        x = 0;
        y = height - width / 2 * tan(degreesToRadians(degrees - diagonalAngleBottomLeft));
    }
    // 7
    else if (degrees > 270 && degrees <= diagonalAngleTopLeft) {
        x = 0;
        y = height / 2 - width / 2 * tan(degreesToRadians(degrees - 270));
    }
    // 8
    else {
        x = height / 2 * tan(degreesToRadians(degrees - diagonalAngleTopLeft));
        y = 0;
    }
    return {Point{width / 2, height / 2}, Point{x, y}};
}
Green line calculation
Point
MathUtils::calculateCirclePoint(double radius, double degrees)
{
    return {radius * cos(degreesToRadians(degrees)), radius * sin(degreesToRadians(degrees))};
}
EDIT
Awesome, it works now thanks to @MBo:
Point
MathUtils::calculateCrossPoint(Size size, double degrees)
{
    auto x0 = size.width / 2;
    auto y0 = size.height / 2;
    auto vx = cos(degreesToRadians(degrees - 90));
    auto vy = sin(degreesToRadians(degrees - 90));

    // potential border positions
    auto ex = vx > 0 ? size.width : 0;
    auto ey = vy > 0 ? size.height : 0;

    // check for horizontal/vertical directions
    if (vx == 0) {
        return {x0, ey};
    }
    if (vy == 0) {
        return {ex, y0};
    }

    // in the general case, find the times of intersection with the horizontal and vertical edge lines
    auto tx = (ex - x0) / vx;
    auto ty = (ey - y0) / vy;

    // and take the intersection with the smaller parameter value
    if (tx <= ty) {
        return {ex, y0 + tx * vy};
    }
    return {x0 + ty * vx, ey};
}
Pseudocode to find the intersection of a ray emitted from the rectangle center (with angle an in radians) with the edges. (It also works for other (x0, y0) positions.)
x0 = width / 2
y0 = height / 2
vx = cos(an)
vy = sin(an)
// potential border positions
ex = vx > 0 ? width : 0
ey = vy > 0 ? height : 0
// check for horizontal/vertical directions
if vx = 0 then
    return cx = x0, cy = ey
if vy = 0 then
    return cx = ex, cy = y0
// in the general case, find the times of intersection with the horizontal and vertical edge lines
tx = (ex - x0) / vx
ty = (ey - y0) / vy
// and take the intersection with the smaller parameter value
if tx <= ty then
    return cx = ex, cy = y0 + tx * vy
else
    return cx = x0 + ty * vx, cy = ey
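With the cross-point helper in place, the rotating-line routine can presumably be reduced to returning the segment from the rectangle centre to that cross point; a sketch using the same Point/Size types as above:

std::pair<Point, Point>
MathUtils::calculateRotatingLine(Size size, double degrees)
{
    // The rotating line runs from the rectangle centre to the point where the
    // ray at the given angle hits the rectangle border.
    Point center{size.width / 2, size.height / 2};
    return {center, calculateCrossPoint(size, degrees)};
}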

What does this shader code return for two vectors a, b, given an angle of 45 degrees between them?

What is the value returned by:
dot(normalize(a), normalize(b))
given that the angle between the vectors a and b is 45°?
0
1
sqrt(2)
1 / sqrt(2)
In general, the dot product of 2 vectors is equal to the cosine of the angle between the 2 vectors multiplied by the magnitudes (lengths) of both vectors:
dot( A, B ) == | A | * | B | * cos( angle_A_B )
It follows that the dot product of 2 unit vectors is equal to the cosine of the angle between the 2 vectors, because the length of a unit vector is 1.
uA = normalize( A )
uB = normalize( B )
cos( angle_A_B ) == dot( uA, uB )
This means that, if the angle between the vectors a and b is 45 degrees, then:
dot(normalize(a), normalize(b)) = cos(45°) = 1 / sqrt(2)
Note that the length of the diagonal of a square with side length 1 is sqrt(2). If the length of the diagonal is 1, then the length of one side is 1 / sqrt(2).
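A quick numerical check of this, written here as a small standalone C++ program rather than shader code (the two vectors are arbitrary examples chosen to be 45° apart):

#include <cmath>
#include <cstdio>

int main() {
    // Two 2D vectors 45 degrees apart: (1, 0) and (1, 1).
    double ax = 1.0, ay = 0.0;
    double bx = 1.0, by = 1.0;
    // Normalize both vectors.
    double la = std::hypot(ax, ay), lb = std::hypot(bx, by);
    ax /= la; ay /= la;
    bx /= lb; by /= lb;
    // dot(normalize(a), normalize(b)) == cos(45°) == 1 / sqrt(2), about 0.707107
    std::printf("%f %f\n", ax * bx + ay * by, 1.0 / std::sqrt(2.0));
    return 0;
}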

calculate pixel coordinates for 8 equidistant points on a circle

I have a circle centred at the origin with radius 80. Using Python, how do I calculate the coordinates of 8 equidistant points around the circumference of the circle?
import math
r = 80
numPoints = 8
points = []
for index in range(numPoints):
    points.append([r * math.cos((index * 2 * math.pi) / numPoints),
                   r * math.sin((index * 2 * math.pi) / numPoints)])
print(points)
you can simplify this somewhat if you know you are always going to have only 8 points, with something like:
import math
r = 80
x = (r * math.sqrt(2)) / 2
points = [[r, 0], [x, x], [0, r], [-x, x], [-r, 0], [-x, -x], [0, -r], [x, -x]]
print(points)
where x is the x/y coordinate of the point 45 degrees around the circle and 80 units away from the origin.
(The original answer refers to a figure, not reproduced here, that labels the 8 points 1-8 on the circle, the centre X at (0, 0), and the auxiliary points L, Y and Z used below.)
The coordinates 1, 2, 3, 4, 5, 6, 7, 8 are equidistant points on the circumference of a circle of radius R whose centre X is at (0, 0).
Take the triangle XLZ; it is right-angled at L.
Let LZ = h and LY = A.
XL + LY = R  =>  XL + A = R  =>  XL = R - A
Since XLZ is right-angled: XZ^2 = XL^2 + LZ^2, so
R^2 = (R - A)^2 + h^2    ... (1)
Since these 8 points form a regular octagon, theta = 360° / 8 = 45°, and
tan 45° = h / XL = h / (R - A)  =>  1 = h / (R - A)  =>  h = R - A    ... (2)
So the coordinates of Z are (R - A, h) = (h, h).
From equations (1) and (2):
R^2 = h^2 + h^2  =>  2h^2 = R^2  =>  h = R / sqrt(2)
So the coordinates of point 2 (Z) are (R/sqrt(2), R/sqrt(2)).
The remaining points can be derived easily, since they are just the same values with opposite signs.
So all the coordinates are:
1 (0, R)
2 (R/sqrt(2), R/sqrt(2))
3 (R, 0)
4 (R/sqrt(2), -R/sqrt(2))
5 (0, -R)
6 (-R/sqrt(2), -R/sqrt(2))
7 (-R, 0)
8 (-R/sqrt(2), R/sqrt(2))
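The same computation in C++, plus a quick check that neighbouring points really are equidistant (a standalone sketch, not part of the original answer):

#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const double r = 80.0;
    const int numPoints = 8;
    const double pi = std::acos(-1.0);
    // Generate the 8 points exactly as the Python loop above does.
    std::vector<std::pair<double, double>> points;
    for (int i = 0; i < numPoints; ++i) {
        double theta = (i * 2.0 * pi) / numPoints;
        points.push_back({r * std::cos(theta), r * std::sin(theta)});
    }
    // Every pair of neighbouring points is the same distance apart,
    // namely the chord length 2 * r * sin(pi / 8), about 61.23.
    for (int i = 0; i < numPoints; ++i) {
        const auto& p = points[i];
        const auto& q = points[(i + 1) % numPoints];
        std::printf("chord %d-%d: %f\n", i, (i + 1) % numPoints,
                    std::hypot(q.first - p.first, q.second - p.second));
    }
    return 0;
}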

Rotation matrix to quaternion (and back): what is wrong?

I copied code for converting a 3D rotation matrix to a quaternion and back. The same code is used in jMonkey (I just rewrote it into my C++ class). However, it does not work properly (at least not as I would expect).
For example, I made this test:
matrix (a,b,c):
a : 0.707107 0.000000 0.707107
b : 0.000000 -1.000000 0.000000
c : -0.707107 0.000000 0.707107
>>> ortonormality:
a.a b.b c.c 1.000000 1.000000 1.000000
a.b a.c b.c 0.000000 0.000000 0.000000
>>> matrix -> quat
quat: 0.000000 0.594604 0.000000 0.594604 norm(quat) 0.707107
>>> quat -> matrix
matrix (a,b,c):
a: 0.000000 0.000000 1.000000
b: 0.000000 1.000000 0.000000
c: -1.000000 0.000000 0.000000
I think the problem is in matrix -> quat, because I have used the quat -> matrix procedure before and it worked fine. It is also strange that the quaternion made from an orthonormal matrix is not a unit quaternion.
The matrix -> quat procedure:
inline void fromMatrix( TYPE m00, TYPE m01, TYPE m02,
                        TYPE m10, TYPE m11, TYPE m12,
                        TYPE m20, TYPE m21, TYPE m22) {
    // Use the Graphics Gems code, from
    // ftp://ftp.cis.upenn.edu/pub/graphics/shoemake/quatut.ps.Z
    TYPE t = m00 + m11 + m22;
    // we protect the division by s by ensuring that s >= 1
    if (t >= 0) { // by w
        TYPE s = sqrt(t + 1);
        w = 0.5 * s;
        s = 0.5 / s;
        x = (m21 - m12) * s;
        y = (m02 - m20) * s;
        z = (m10 - m01) * s;
    } else if ((m00 > m11) && (m00 > m22)) { // by x
        TYPE s = sqrt(1 + m00 - m11 - m22);
        x = s * 0.5;
        s = 0.5 / s;
        y = (m10 + m01) * s;
        z = (m02 + m20) * s;
        w = (m21 - m12) * s;
    } else if (m11 > m22) { // by y
        TYPE s = sqrt(1 + m11 - m00 - m22);
        y = s * 0.5;
        s = 0.5 / s;
        x = (m10 + m01) * s;
        z = (m21 + m12) * s;
        w = (m02 - m20) * s;
    } else { // by z
        TYPE s = sqrt(1 + m22 - m00 - m11);
        z = s * 0.5;
        s = 0.5 / s;
        x = (m02 + m20) * s;
        y = (m21 + m12) * s;
        w = (m10 - m01) * s;
    }
}
The quat -> matrix procedure:
inline void toMatrix( MAT& result) const {
    TYPE r2 = w*w + x*x + y*y + z*z;
    //TYPE s = (r2 > 0) ? 2d / r2 : 0;
    TYPE s = 2 / r2;
    // compute xs/ys/zs first to save 6 multiplications, since xs/ys/zs
    // will be used 2-4 times each.
    TYPE xs = x * s;  TYPE ys = y * s;  TYPE zs = z * s;
    TYPE xx = x * xs; TYPE xy = x * ys; TYPE xz = x * zs;
    TYPE xw = w * xs; TYPE yy = y * ys; TYPE yz = y * zs;
    TYPE yw = w * ys; TYPE zz = z * zs; TYPE zw = w * zs;
    // using s = 2/norm (instead of 1/norm) saves 9 multiplications by 2 here
    result.xx = 1 - (yy + zz);
    result.xy = (xy - zw);
    result.xz = (xz + yw);
    result.yx = (xy + zw);
    result.yy = 1 - (xx + zz);
    result.yz = (yz - xw);
    result.zx = (xz - yw);
    result.zy = (yz + xw);
    result.zz = 1 - (xx + yy);
};
Sorry for TYPE, VEC, MAT, QUAT; they are part of class templates and should be read as double, Vec3d, Mat3d, Quat3d or float, Vec3f, Mat3f, Quat3f.
EDIT:
I also checked whether I get the same behaviour with jMonkey directly (in case I introduced a bug in the Java-to-C++ conversion), and I do, using this code:
Matrix3f Min = new Matrix3f( 0.707107f, 0.000000f, 0.707107f, 0.000000f, -1.000000f, 0.000000f, -0.707107f, 0.000000f, 0.707107f );
Matrix3f Mout = new Matrix3f( );
Quaternion q = new Quaternion();
q.fromRotationMatrix(Min);
System.out.println( q.getX()+" "+q.getY()+" "+q.getZ()+" "+q.getW() );
q.toRotationMatrix(Mout);
System.out.println( Mout.get(0,0) +" "+Mout.get(0,1)+" "+Mout.get(0,2) );
System.out.println( Mout.get(1,0) +" "+Mout.get(1,1)+" "+Mout.get(1,2) );
System.out.println( Mout.get(2,0) +" "+Mout.get(2,1)+" "+Mout.get(2,2) );
Your matrix:
matrix (a,b,c):
a : 0.707107 0.000000 0.707107
b : 0.000000 -1.000000 0.000000
c : -0.707107 0.000000 0.707107
is orthogonal but it is not a rotation matrix. A rotation matrix has determinant 1; your matrix has determinant -1 and is thus an improper rotation.
I think your code is likely correct and the issue is in your data. Try it with a real rotation matrix.
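A cheap way to catch this before converting is to check the determinant of the matrix; a small standalone sketch (det3 is just an illustrative helper, not part of the class above):

#include <cstdio>

// Determinant of a 3x3 matrix given in row-major order.
double det3(const double m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

int main() {
    const double m[3][3] = {
        {  0.707107, 0.0, 0.707107 },
        {  0.0,     -1.0, 0.0      },
        { -0.707107, 0.0, 0.707107 }
    };
    // Prints roughly -1: the matrix is orthogonal but an improper rotation
    // (a reflection), so it cannot be represented by a unit quaternion.
    std::printf("det = %f\n", det3(m));
    return 0;
}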

How to switch from left handed system to right handed system?

I am implementing perspective from scratch for an academic project. I am using "Computer Graphics: Principles and Practice" by Foley, van Dam, Feiner and Hughes (second edition in C).
I just followed the book, implementing all the matrix transformations needed to translate, rotate, shear, scale, project, transform from the perspective to the parallel canonical view volume, and clip. The book apparently uses a right-handed coordinate system. However, I ended up with primitives appearing in a left-handed coordinate system and I cannot explain why.
Here are the matrices that I used:
Translation:
1, 0, 0, dx
0, 1, 0, dy
0, 0, 1, dz
0, 0, 0, 1
Rotation (to align a coordinate system (rx, ry, rz) to XYZ):
rx1, rx2, rx3, 0
ry1, ry2, ry3, 0
rz1, rz2, rz3, 0
0 , 0 , 0 , 1
Scale:
sx, 0 , 0 , 0
0 , sy, 0 , 0
0 , 0 , sz, 0
0 , 0 , 0 , 1
Shear XY:
1, 0, shx, 0
0, 1, shy, 0
0, 0, 1 , 0
0, 0, 0 , 1
Projecting onto a plane at z = d, with PRP at origin, looking in the positive z direction:
1, 0, 0 , 0
0, 1, 0 , 0
0, 0, 1 , 0
0, 0, 1/d, 0
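For reference, applying this matrix to a homogeneous point (x, y, z, 1) yields (x, y, z, z/d); after the homogeneous divide by w = z/d this becomes (x*d/z, y*d/z, d), so the point lands on the projection plane z = d.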
Then given VRP, VPN, PRP, VUP, f and b (and the direction of projection dop), reduce the space to the canonical viewing volume for perspective using P:
rz = VPN / |VPN|
rx = (VUP x rz) / |VUP x rz|
ry = rz x rx
P = ScaleUniform(-1 / (vrp1Z + b)) *
Scale(-2 * vrp1Z / deltaU, -2 * vrp1Z / deltaV, 1) *
Shear(-dopX / dopZ, -dopY / dopZ) *
T(PRP) *
R(rx, ry, rz) *
T(-VRP)
Where vrp1 is ShearXY * T(-PRP) * (0, 0, 0, 1), and deltaU and deltaV are the width and height of the viewing window. dop is computed as CW - PRP, where CW is the center of the viewing window.
Then Projection(d) * P gives me the projection matrix.
I projected simple lines representing the unit vectors on x, y and z, but the representation drawn on the screen was clearly a left-handed coordinate system... I need to work in a right-handed coordinate system, so is there a way to tell where I went wrong?
Here is the code I used:
As you can see, the Z component of the scale matrix has the opposite sign. Clipping wasn't working properly because something was right-handed and something else left-handed, but I couldn't tell what exactly, so I flipped the sign of the scale, since the negation isn't needed in a left-handed system.
Vector rz = vpn.toUnitVector();
Vector rx = vup.cross(rz).toUnitVector();
Vector ry = rz.cross(rx).toUnitVector();
Vector cw = viewWindow.getCenter();
Vector dop = cw - prp;

Matrix t1 = Matrix::traslation(-vrp[X], -vrp[Y], -vrp[Z]);
Matrix r = Matrix::rotation(rx, ry, rz);
Matrix t2 = Matrix::traslation(-prp[X], -prp[Y], -prp[Z]);
Matrix partial = t2 * r * t1;

Matrix shear = Matrix::shearXY(-dop[X] / dop[Z], -dop[Y] / dop[Z]);
Matrix inverseShear = Matrix::shearXY(dop[X] / dop[Z], dop[Y] / dop[Z]);
Vector vrp1 = shear * t2 * Vector(0, 0, 0, 1);

Matrix scale = Matrix::scale(
    2 * vrp1[Z] / ((viewWindow.xMax - viewWindow.xMin) * (vrp1[Z] + b)),
    2 * vrp1[Z] / ((viewWindow.yMax - viewWindow.yMin) * (vrp1[Z] + b)),
    1 / (vrp1[Z] + b)); // HERE <--- WAS NEGATIVE
Matrix inverseScale = Matrix::scale(
    ((viewWindow.xMax - viewWindow.xMin) * (vrp1[Z] + b)) / (2 * vrp1[Z]),
    ((viewWindow.yMax - viewWindow.yMin) * (vrp1[Z] + b)) / (2 * vrp1[Z]),
    (vrp1[Z] + b));

float zMin = -(vrp1[Z] + f) / (vrp1[Z] + b);
Matrix parallel = Perspective::toParallelCvv(zMin);
Matrix inverseParallel = Perspective::inverseToParallelCvv(zMin);
Matrix perspective = Perspective::copAtOrigin(-vrp1[Z]);

projection = perspective * shear * partial;
canonicalView = parallel * scale * shear * partial;
canonicalToProjection = perspective * inverseScale * inverseParallel;
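A general note rather than an analysis of the specific bug above: a left-handed and a right-handed frame differ by a reflection, so switching between them amounts to negating one axis, for example with the scale matrix
1, 0, 0 , 0
0, 1, 0 , 0
0, 0, -1, 0
0, 0, 0 , 1
This matrix has determinant -1, so any odd number of such sign flips in a pipeline (like the negated Z scale mentioned above) changes the handedness of the final image.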