2D Rectangle rotated in 3D coordinates - c++

I have a 2D rectangle defined by 4 points ordered counter-clockwise - e.g. point 0 (x0, y0), point 1 (x1, y1), etc. I would like to know how to rotate each of these points in 3D space (even though the rectangle itself is 2D).
I would like to randomly choose the axis (x, y or z) to rotate around. Something along the lines of the following C++ code for each point in the Rectangle:
struct Point { float x, y; };

// Rotate around X-axis
// p is the current point in the rectangle
// rz is a randomly chosen z-coordinate value in [-1, 1]
void rotateXaxis(Point &p, float angle, float rz) {
    float rads = PI * angle / 180.0f;
    float ry = p.y * cos(rads) + rz * sin(rads);
    p.y = ry;
}

// Rotate around Y-axis
// p is the current point in the rectangle
// rz is a randomly chosen z-coordinate value in [-1, 1]
void rotateYaxis(Point &p, float angle, float rz) {
    float rads = PI * angle / 180.0f;
    float rx = rz * sin(rads) + p.x * cos(rads);
    p.x = rx;
}

// Rotate around Z-axis
// p is the current point in the rectangle
void rotateZaxis(Point &p, float angle) {
    float rads = PI * angle / 180.0f;
    float rx = p.x * cos(rads) - p.y * sin(rads);
    float ry = p.x * sin(rads) + p.y * cos(rads);
    p.x = rx;
    p.y = ry;
}
Is the above code correct for what I would like to do?
Thank you in advance for any help.

Your implementation doesn't look correct to me. If I were you, I would just write one function for rotation around any axis through the origin of the coordinate system pointing in a general direction; then you can choose the specific directions as you please. Here is the code in Python (it is more concise and presents the idea more clearly); you can implement it in C++.
import numpy as np
import math

def rotation(axis, angle, Vector):
    '''
    Rodrigues' rotation formula.
    axis should be a unit vector; angle is in radians.
    '''
    axis_X_Vector = np.cross(axis, Vector)
    rotated_Vector = Vector
    rotated_Vector = rotated_Vector + math.sin(angle)*axis_X_Vector
    rotated_Vector = rotated_Vector + (1 - math.cos(angle))*np.cross(axis, axis_X_Vector)
    return rotated_Vector
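Since the question is in C++, a direct translation might look like the sketch below. The Vec3 type and cross helper are assumptions (the question's Point is 2D; treat each point as (x, y, 0)), and the angle is in radians:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Cross product of two 3D vectors.
Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Rodrigues' rotation formula: rotate v by `angle` radians around the
// unit vector `axis`:
//   v' = v + sin(a)*(k x v) + (1 - cos(a))*(k x (k x v))
Vec3 rotate(const Vec3 &axis, float angle, const Vec3 &v) {
    Vec3 kxv = cross(axis, v);
    Vec3 kxkxv = cross(axis, kxv);
    float s = std::sin(angle);
    float c = 1.0f - std::cos(angle);
    return { v.x + s * kxv.x + c * kxkxv.x,
             v.y + s * kxv.y + c * kxkxv.y,
             v.z + s * kxv.z + c * kxkxv.z };
}
```

To rotate around the x, y, or z axis, pass {1,0,0}, {0,1,0}, or {0,0,1} as the axis.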

Related

I have a device reporting a left-handed polar angle and magnitude; how do I represent that as a line on the screen drawn from the center?

The device I am using generates vectors like this:
How do I translate polar coordinates (angle and magnitude) from a left-handed coordinate system to a Cartesian line drawn on a screen, where the origin is the middle of the screen?
I am displaying the line on a WT32-SC01 screen using C++. There is a tft.drawLine function, but its arguments are ordinary pixel locations, where (0,0) is the upper left corner of the screen.
This is what I have so far (abbreviated)
....
int screen_height = tft.height();
int screen_width = tft.width();
// Device can read to 12m and reports in mm
float zoom_factor = (screen_width / 2.0) / 12000.0;
int originY = (int)(screen_height / 2);
int originX = (int)(screen_width / 2);
// Offset is for screen scrolling. No screen offset to start
int offsetX = 0;
int offsetY = 0;
...
// ld06 holds the reported angles and distances.
Coord coord = polarToCartesian(ld06.angles[i], ld06.distances[i]);
drawVector(coord, WHITE);
Coord polarToCartesian(float theta, float r) {
    // cos() and sin() take radians
    float rad = theta * 0.017453292519;
    Coord converted = {
        (int)(r * cos(rad)),
        (int)(r * sin(rad))
    };
    return converted;
}

void drawVector(Coord coord, int color) {
    // Cartesian relative to the center of the screen, factoring in zoom and pan
    int destX = (int)(zoom_factor * coord.x) + originX + offsetX;
    int destY = originY - (int)(zoom_factor * coord.y) + offsetY;
    // From the middle of the screen (originX, originY) to destination (x, y)
    tft.drawLine(originX, originY, destX, destY, color);
}
I have something drawing on the screen, but I still have to translate from the left-handed coordinate system, and the whole plane is rotated 90 degrees. How do I do that?
If I understood correctly, your coordinate system has x pointing to the right and y to the bottom, and you used the formula for the standard math coordinate system where y points up, so multiplying your sin by -1 should do the trick (if it doesn't, try multiplying terms by -1 until it does; it often works for this kind of problem).
Assuming (from your image) that your coordinate system has x going right, y going up, the angle measured clockwise from the y axis, (0,0) at the center of your polar coordinates, and trigonometric functions that accept radians, then:
#include <math.h>
float x,y,ang,r;
const float deg = M_PI/180.0;
// ang = <0,360> // your angle
// r >= 0 // your radius (magnitude)
x = r*sin(ang*deg);
y = r*cos(ang*deg);
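Folding that into the question's polarToCartesian gives a sketch like the one below, under the same assumptions (angle clockwise from the +y axis, in degrees; the Coord struct is taken from the question). Note that drawVector already flips y when mapping to pixels (destY = originY - ...), so no extra sign change should be needed there:

```cpp
#include <cmath>

struct Coord { int x, y; };

// Convert a left-handed polar reading (angle clockwise from the +y axis,
// in degrees; radius in mm) to Cartesian coordinates with y up.
Coord polarToCartesian(float theta, float r) {
    const float deg = 3.14159265358979f / 180.0f;  // degrees -> radians
    Coord converted = {
        (int)std::lround(r * std::sin(theta * deg)),  // x = r*sin, not r*cos
        (int)std::lround(r * std::cos(theta * deg))
    };
    return converted;
}
```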

C++ raytracer: camera and ray

I'm coding a raytracer for the Linux terminal in C++. First I decided to describe a sphere; here are the class and the algorithm:
class Sphere
{
public:
    float radius;
    vector3 center;

    bool is_intersect(vector3 camera, vector3 ray)
    {
        // vector from the camera to the center
        vector3 v = center - camera;
        // length (magnitude) of the vector
        float abs_v = v.length();
        // ray must be normalized (done in main)
        float pr_v_on_ray = ray.dot_product(v);
        // squared distance from the center to the ray
        float l2 = abs_v * abs_v - pr_v_on_ray * pr_v_on_ray;
        return l2 - radius * radius <= 0;
    }
};
algorithm
vector2 and vector3 are self-written types for 2D and 3D vectors with all the standard vector operations (normalization, length, dot product, and others).
I'm creating a sphere with center (0,0,0) and some radius, and everything works:
// because terminal pixels are not square
float distortion = (8.0 / 16) * (width / height);
Sphere sphere = {0.5, vector3(0,0,0)};
for (int i = 0; i < width; ++i)
{
    for (int j = 0; j < height; ++j)
    {
        vector2 xy = (vector2((float)i, (float)j) / vector2(width, height))
                     * vector2(2,2) - vector2(1,1); // x,y ∈ [-1.0; 1.0]
        xy.x *= distortion;
        vector3 camera = vector3(0,0,1);
        // ray from camera
        vector3 ray = vector3(xy.x, xy.y, -1).normalize();
        if (sphere.is_intersect(camera, ray)) mvaddch(j, i, '#');
    }
}
result1-ok
But when I change the coordinates of the center, distortion appears:
Sphere sphere = {0.5, vector3(-0.5,-0.5,0)};
result2-distortion
Do I understand the ray "shot" algorithm correctly? If I need to "shoot" a ray from point (1,2,3) to point (5,2,1), then the ray direction is (5-1, 2-2, 1-3) = (4,0,-2)?
I understand that ray.x and ray.y cover all the pixels on the screen, but what about ray.z?
I don't understand how the camera's coordinates work. (x,y,z) is an offset relative to the origin, and if I change z the size of the sphere's projection changes, which works; but if I change x or y everything goes wrong. How can I look at my sphere from all 6 sides? (I will add rotation matrices once I understand how the camera works.)
What causes the distortion when changing the coordinates of the center of the sphere?
My final goal is a camera that rotates around the sphere. (I will add lighting later.)
Sorry for my bad English, thank you for your patience.
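As a sanity check of the ray construction described above (direction = destination minus origin, then normalized, since is_intersect assumes a unit-length ray), here is a minimal sketch with an assumed Vec3 type. For (1,2,3) to (5,2,1) it gives (4,0,-2) scaled to unit length, confirming the subtraction in the question:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Unit direction of a ray that starts at `from` and passes through `to`.
Vec3 rayThrough(const Vec3 &from, const Vec3 &to) {
    Vec3 d = { to.x - from.x, to.y - from.y, to.z - from.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}
```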

Radian of vector perpendicular to vector

I have a vector between (posX, posY) and (mouseX, mouseY), and I get the mouse position as positive integers from Allegro's event library. From this vector, using the arc tangent, I get the angle of (deltaX, deltaY) in radians. I then plug that angle into the al_draw_rotated_bitmap function. I expect the bitmap to point towards the mouse cursor, but the issue is that it ends up rotated perpendicular to the cursor.
Here is the relevant code:
void setRotation(int dx, int dy)
{
    float deltax = posX - mouseX;
    float deltay = posY - mouseY;
    rotation = atan2(deltay, deltax);
}

void Player::draw()
{
    al_draw_rotated_bitmap(player, al_get_bitmap_width(player) / 2, al_get_bitmap_height(player) / 2, posX, posY, rotation, 0);
}

int main()
{
    while (true)
    {
        player.setRotation(mouseX, mouseY);
        player.draw();
        al_flip_display();
    }
}
Imagine (deltax, deltay) = (0, 100); that is, the mouse is 100 pixels above the object, so the picture (I believe) shouldn't be rotated at all. But atan2(deltay, deltax) = atan2(100, 0) = π/2, which is why your picture is rotated perpendicularly.
To fix this you should change it to atan2(deltax, deltay), possibly negating one or both arguments depending on the direction of the X and Y axes in Allegro, which I don't know.
In other words, atan2 measures the angle relative to the X axis, but in your case the angle should be measured relative to the Y axis (because alignment with the Y axis means no rotation), so you should swap its arguments.
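A sketch of the swapped-argument form described above; the sign of each delta here is an assumption that may need flipping for Allegro's y-down screen coordinates:

```cpp
#include <cmath>

// Rotation angle (radians, 0 when the mouse is straight "up") so that a
// sprite at (posX, posY) points toward the mouse at (mouseX, mouseY).
// The atan2 arguments are swapped (x first) to measure from the Y axis.
float aimRotation(float posX, float posY, float mouseX, float mouseY) {
    float deltax = mouseX - posX;
    float deltay = posY - mouseY;  // screen y grows downward, so flip it
    return std::atan2(deltax, deltay);
}
```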

3D Coordinate System Transformation (X,Y,Z) to (X',Y',Z')

I'm working with the new Kinect v2, and I have a working coordinate system for a given frame, with coordinates (x,y,z) in mm. What I am trying to do is line up, transform, or relate the coordinate systems of the Kinect camera and of an object it is looking at.
This object has its own coordinate frame and moves only along its x, y, and z axes. The Kinect tracks the object, returning the world x,y,z coordinates with the Kinect at the origin. However, I can also specify a new origin within the same coordinate frame, just by taking the x, y, and z offsets into account.
I was thinking that if I start the object in a position with the same origin, I could figure out how to translate its x', y', and z' movements using the Kinect-given coordinates.
You can see what I'm talking about here with this (bad) drawing.
Is there a way I can set up a coordinate frame, given a new set of x', y' and z' values? Let's say I have 3 sets of coordinates in BOTH the object's frame AND the kinect's frame.
So, how can I translate (x,y,z) to the (x',y',z') frame if I know the initial values of 3 pairs of (x,y,z) and (x',y',z')?
I actually solved my own problem using a simple change-of-basis method. Since the two coordinate frames are both orthonormal and share the same origin, it was as simple as constructing a change-of-basis matrix and using it to go from one coordinate system to the other.
Here is my code, in case anyone is looking to do this with C++ / OpenCV. I use cv::Mat for the matrix manipulation.
// Object coordinate frame with orthonormal basis vectors u,v,w.
// Each basis vector has components x,y,z.
float U[3] = { ux, uy, uz };
float V[3] = { vx, vy, vz };
float W[3] = { wx, wy, wz };
// Compute lengths to normalize the vectors.
float ulength = sqrt(ux*ux + uy*uy + uz*uz);
float vlength = sqrt(vx*vx + vy*vy + vz*vz);
float wlength = sqrt(wx*wx + wy*wy + wz*wz);
// Setting up the change of basis matrix.
float data[3][3] = { { ux / ulength, uy / ulength, uz / ulength },
{ vx / vlength, vy / vlength, vz / vlength },
{ wx / wlength, wy / wlength, wz / wlength } };
// Store array into cv::Mat
cv::Mat M = cv::Mat(3, 3, CV_32FC1, &data);
// Create vector Mat of coordinates in kinect frame.
float kinectcoords[3] = { x, y, z};
cv::Mat D = cv::Mat(3, 1, CV_32FC1, &kinectcoords);
// Find coordinates in object frame.
// If D is the coordinate vector in the kinect frame, P is the coordinate vector
// in the object frame, and M is the change of basis matrix, then the method is
// P = Minv * D. cv::Mat objectcoords is my 'P' vector.
cv::Mat Minv = M.inv();
cv::Mat objectcoords = Minv * D;
float objx = objectcoords.at<float>(0);
float objy = objectcoords.at<float>(1);
float objz = objectcoords.at<float>(2);

C++ / OpenGL - 2D - How to clip a circle in a rectangle boundary box

I was just wondering how would I go about clipping a circle in a rectangular boundary box? I am currently using the Cohen–Sutherland algorithm for line clipping in my program and so far I've managed to get rectangles and polygons to clip. However, for circle clipping, I have no idea how I would accomplish this. I'm using the following to construct my circle:
glBegin(GL_POLYGON);
double radius = 50;
for (int angle = 0; angle <= 360; angle++) {
    float const curve = 2 * PI * (float)angle / (float)360;
    glVertex2f(point.x + sin(curve) * radius, point.y + cos(curve) * radius);
}
glEnd();
My clipping algorithm is the same as the one here: http://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland_algorithm. However, it returns 2 points representing a new line to later be used to draw the clipped shape. So basically I've tried to do this:
line Lines[360]; // an array of 360 "line" structs, each holding the two points (x1, y1, x2, y2) of the new line returned by my clipping function.
double radius = 50;
for (int angle = 0; angle < 360; angle++) {
    float const currentCurve = 2 * PI * (float)angle / (float)360;
    float const nextCurve = 2 * PI * (float)(angle + 1) / (float)360;
    int x1 = (int)(point[i].x + sin(currentCurve) * radius); // point is another struct holding only a single point.
    int y1 = (int)(point[i].y + cos(currentCurve) * radius);
    int x2 = (int)(point[i+1].x + sin(nextCurve) * radius);
    int y2 = (int)(point[i+1].y + cos(nextCurve) * radius);
    // Clip the points with the clipping algorithm:
    Lines[i] = Clipper(x1, y1, x2, y2);
}
// Once all lines have been clipped (or not), draw:
glBegin(GL_POLYGON);
for (int i = 0; i < 360; i++) {
    glVertex2f(Lines[i].x1, Lines[i].y1);
    glVertex2f(Lines[i].x2, Lines[i].y2);
}
glEnd();
Note that I've drawn a circle on the screen with the mouse and stored each of the 360 points into a struct array called point, which is part of a linked list, so I have one node representing one circle on the screen.
Anyway, with the above, my circle is not drawn clipped (or drawn at all, for that matter), and my application crashes after a few mouse clicks.
Use the scissor test - read up on glScissor(): http://www.opengl.org/sdk/docs/man/xhtml/glScissor.xml
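For an axis-aligned clip rectangle this is much simpler than clipping each segment yourself: enable the scissor test and OpenGL discards every fragment outside the box while you draw the circle unmodified. A minimal sketch; the clipBox helper is an assumption that converts a corner pair into the (x, y, width, height) form glScissor expects (note glScissor uses window coordinates with the origin at the bottom-left):

```cpp
#include <algorithm>
#include <cstdlib>

// glScissor takes (x, y, width, height), not two corners.
struct ScissorBox { int x, y, w, h; };

ScissorBox clipBox(int xmin, int ymin, int xmax, int ymax) {
    ScissorBox b;
    b.x = std::min(xmin, xmax);   // lower-left corner of the box
    b.y = std::min(ymin, ymax);
    b.w = std::abs(xmax - xmin);  // width and height are non-negative
    b.h = std::abs(ymax - ymin);
    return b;
}

// Usage in the drawing code (sketch):
//   ScissorBox b = clipBox(xmin, ymin, xmax, ymax);
//   glEnable(GL_SCISSOR_TEST);
//   glScissor(b.x, b.y, b.w, b.h);
//   ...draw the circle exactly as in the first snippet...
//   glDisable(GL_SCISSOR_TEST);
```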