Let's say I have 2 coordinate systems, as shown in the attached image.
How can I align these coordinate systems? I know that I need to rotate the second coordinate system by 180 degrees around X, and then translate it to (0, 0) of the first coordinate system, but I'm having trouble doing this and keep getting wrong results. I would really appreciate any detailed answer.
EDIT: Actually, (0, 0) of the second coordinate system is at the same point as Y of the first coordinate system.
The important piece of information is where the second coordinate system's origin is, namely (a, b).
Once you know that, all you need is:
#include <QPointF>
#include <QTransform>

QPointF pos1; // original position in the second coordinate system
QTransform t;
t.scale(1, -1);         // flip the Y axis
t.translate(a, -b + 1); // shift by the origin offset
QPointF pos2 = pos1 * t; // position in the first coordinate system
You have to find correct values of a,b,c,d,Ox,Oy in:
X = a * x + b * y + Ox
Y = c * x + d * y + Oy
where x, y are the coordinates of the point in one system and X, Y in the other one.
In your case, a = 1, b = 0, c = 0, d = -1.
Ox, Oy is the offset between the origins.
See https://en.wikipedia.org/wiki/Change_of_basis
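As a minimal sketch of that mapping in C++ (the coefficients match the case above; the offsets Ox, Oy are assumed placeholder values, so substitute your actual origin offset):

#include <cstdio>

// Affine change of coordinates: X = a*x + b*y + Ox, Y = c*x + d*y + Oy.
void toFirstSystem(double x, double y, double &X, double &Y) {
    const double a = 1, b = 0, c = 0, d = -1; // axis mapping from above
    const double Ox = 0, Oy = 100;            // assumed origin offset
    X = a * x + b * y + Ox;
    Y = c * x + d * y + Oy;
}

int main() {
    double X, Y;
    toFirstSystem(10, 20, X, Y);
    std::printf("(%g, %g)\n", X, Y); // -> (10, 80) with the values above
}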
Here's the problem with rotating 180 degrees: it not only affects the direction of your Y coordinate, but also X.
Y             X <--+
^      =>          |
|                  v
+--> X             Y
What you probably meant to do was translate the point to the new origin and then invert your Y coordinate. You can do this like so:
Translate to the new origin
Scale by (1, -1)
Y             +--> X
^      =>     |
|             v
+--> X        Y
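In Qt, for example, that translate-then-flip could look like the following sketch (the new origin at (0, h) is an assumption; h would be your viewport height):

#include <QPointF>
#include <QTransform>

// Map a point from a Y-up system into Qt's Y-down system: move the
// origin, then invert the Y axis.
QPointF yUpToYDown(const QPointF &p, qreal h) {
    QTransform t;
    t.translate(0, h); // 1. translate to the new origin
    t.scale(1, -1);    // 2. scale by (1, -1)
    return t.map(p);
}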
After more thought, I have to wonder: why are you doing this transformation in the first place? Is it because of the difference between OpenGL coordinates and the coordinates Qt uses for its window?
If this is the case, you can actually alter your projection matrix... If you're using an orthographic matrix, for instance:
Try glOrtho (0, X, 0, Y, -1, 1); instead of the traditional glOrtho (0, X, Y, 0, -1, 1);
If you opt to do this, you will probably need to change the winding direction of your polygon "front faces". OpenGL defaults to CCW, change it to glFrontFace (GL_CW) and this should prevent weird lighting / culling behavior.
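For instance, here is a minimal fixed-function setup doing both (w and h are assumed to be your viewport size):

#include <GL/gl.h>

// Call with a current GL context; w, h are the viewport dimensions.
void setupYUpOrtho(int w, int h) {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1); // bottom = 0, top = h: Y now points up
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glFrontFace(GL_CW); // the flipped axis reverses the apparent winding
}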
Related
I have a rotation matrix where right is +x, up is +y, and forward is -z. These are the standard OpenGL principal axes.
I need to express this rotation matrix in a new coordinate system where down is +y and forward is +z. So, the new system has axes for y and z flipped.
My current strategy was to use this formula.
rotation a = get_rotation();
rotation b = rotation(
    1,  0,  0,
    0, -1,  0,
    0,  0, -1
);
a = b * a * transpose(b);
Though this seems to yield incorrect results.
What would be the proper way to transform a rotation matrix from one reference frame to another?
In a = b * a * transpose(b), the multiplications by b and transpose(b) cancel each other out: since b is diagonal, each element a_ij is multiplied by b_ii * b_jj, so the center matrix element, for example, is multiplied by (-1) twice.
Have you tried a = b * a instead?
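As a quick sketch of that (plain C++, with a throwaway 3x3 array type standing in for your rotation class):

#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product.
Mat3 multiply(const Mat3 &m, const Mat3 &n) {
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += m[i][k] * n[k][j];
    return r;
}

// Re-express a rotation in the frame with Y and Z flipped: a = b * a.
Mat3 toFlippedFrame(const Mat3 &a) {
    const Mat3 b = {{{1, 0, 0}, {0, -1, 0}, {0, 0, -1}}};
    return multiply(b, a);
}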
I have a completely implemented, working engine in OpenGL that supports a projection camera with raycasting. Recently, I implemented an orthogonal camera type, and visually, it's working just fine. For reference, here's how I compute the orthographic matrix:
double l = -viewportSize.x / 2 * zoom_;
double r = -l;
double t = -viewportSize.y / 2 * zoom_;
double b = -t;
double n = getNear();
double f = getFar();
m = Matrix4x4(
    2 / (r - l), 0,           0,            -(r + l) / (r - l),
    0,           2 / (t - b), 0,            -(t + b) / (t - b),
    0,           0,           -2 / (f - n), -(f + n) / (f - n),
    0,           0,           0,            1);
However, my issue now is that raycasting does not work with the orthographic camera. The issue seems to be that the raycasting engine was coded with perspective-type cameras in mind, so it stops functioning when the orthographic matrix is used instead. For reference, here's a high-level description of how the raycasting is implemented:
Get the world-space origin vector
Get normalized screen coordinate from input screen coordinates
Build mouseVector = (normScreenCoords.x, normScreenCoords.y, 0 if "near" or 1 if "far")
Build view-projection matrix (get view and projection matrices from Camera and multiply them)
Multiply the mouseVector by the inverse of the view-projection matrix.
Get the world-space forward vector
Get mouse world coordinates (far) and subtract them from mouse world coordinates (near)
Send the world-space origin and world-space forward vectors to the raycasting engine, which handles the logic of comparing these vectors to all the visible objects in the scene efficiently by using bounding boxes.
How do I modify this algorithm to work with orthographic cameras?
Your steps are fine and should work as expected with an orthographic camera. There may be a problem with the way you are calculating the origin and direction.
1.) Get the origin vector. First calculate the mouse position in world-space units, i.e. float rayX = (mouseX - halfResolution) / viewport.width * (r - l) or similar. It should be offset so the center of the screen is (0, 0), and the extreme values the mouse can reach translate to the edges of the viewport l, r, t, b. Then start with the camera position in world space and add the two vectors rayX * camera.local.right and rayY * camera.local.up, where right and up are unit vectors in the camera's local coordinate system (see the sketch after these steps).
2.) The world space forward vector is always the camera forward vector for any mouse position.
3.) This should work without modification as long as you have the correct vectors for 1 and 2.
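Here's a sketch of steps 1 and 2 in C++ (all the names, camPos, camRight and so on, are assumed stand-ins for your camera state):

struct Vec3 { double x, y, z; };

Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }

struct Ray { Vec3 origin, direction; };

// Build a picking ray for an orthographic camera. mouseX/mouseY are in
// pixels, l/r/t/b are the ortho extents, and right/up/forward are the
// camera's unit axes in world space.
Ray orthoPickRay(double mouseX, double mouseY,
                 double viewportW, double viewportH,
                 double l, double r, double b, double t,
                 Vec3 camPos, Vec3 camRight, Vec3 camUp, Vec3 camForward) {
    // Offset so the screen center is (0, 0), then scale to world units.
    double rayX = (mouseX - viewportW / 2) / viewportW * (r - l);
    double rayY = (viewportH / 2 - mouseY) / viewportH * (t - b); // Y flipped
    Vec3 origin = add(camPos, add(scale(camRight, rayX), scale(camUp, rayY)));
    return {origin, camForward}; // direction is always the camera forward
}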
Here is my reasoning:
OpenGL draws everything within a 2x2x2 cube.
The x, y values inside this cube determine where a point is drawn on the screen. The z value is used for other stuff...
If you want the z value to have some effect on perspective, you need to mutate the scene (usually with a matrix) so that it gives the illusion of distant objects being smaller.
The z values of the cube go from -1 to 1.
Now I want it so that objects that are at z = 1 are infinitely zoomed, and objects that are at z = 0 are normal size, and objects that are at z = -1 are 1/2 size.
When I say an object is zoomed, I mean that the (x, y) coordinates of its points are multiplied by a scalar zoom factor, which is based on its z coordinate.
If a point lies outside the 2x2x2 cube I want the calculations to still be done on it if it is between z = 1 and z = -1. Since the z value doesn't change I don't care what happens to any points that are not within this range, as long as their z value is not changed.
Generalized point transformation:
If I have a point P = (x, y, z), and -1 <= z <= 1 then:
the Zoom Factor, S = 1 / (1 - z)
so the transformation is as follows:
(x, y, z) ==> (x * S, y * S, z)
Creating the matrix?
This is where I am having issues. I don't know how to create a matrix so that it will transform a generalized point to have the desired effect.
I am considering not using a matrix and instead applying this transformation via a function in GLSL...
If someone has insight on how to create such a matrix I would like to know.
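For reference, here's the point transformation I'm describing as a small C++ test (a GLSL function would do the same thing per vertex):

#include <cstdio>

struct Point { double x, y, z; };

// S = 1 / (1 - z): scale x and y, leave z untouched.
// Only meaningful for z < 1, since S blows up at z = 1.
Point zoomPoint(Point p) {
    double S = 1.0 / (1.0 - p.z);
    return {p.x * S, p.y * S, p.z};
}

int main() {
    Point p = zoomPoint({1.0, 1.0, -1.0});
    std::printf("(%g, %g, %g)\n", p.x, p.y, p.z); // -> (0.5, 0.5, -1): half size
}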
I'm using the Left and Right audio channels to create a Lissajous vectorscope. Left is x and Right is y, both of which never go beyond the values 1 and -1. These coordinates are also rotated by 45 degrees to give me the following view.
So I'm doing something very simple:
// converting x and y values from (-1 .. 1) to (0 .. 1)
float x = LeftChannelValue / 2 + 0.5;
float y = RightChannelValue / 2 + 0.5;
// multiplying x and y by the width and height to get a proper square
// (width and height have to be the same value)
float new_X = x * (width * 0.5);
float new_Y = y * (height * 0.5);
// rotating by 45 degrees in two dimensions so it's easier to read
const float pi = 3.14159265f;
float cosVal = cos(0.25f * pi);
float sinVal = sin(0.25f * pi);
float finalX = ((new_X * cosVal) - (new_Y * sinVal)) + (width * 0.5); // adding to translate back to origin
float finalY = (new_X * sinVal) + (new_Y * cosVal);
This gives me the results on that picture.
How would I graph the polar coordinates so that the result looks like a circle rather than a square?
I'm trying to get this view, but am absolutely confused about how that would correlate with the left and right channels. I'm using https://en.wikipedia.org/wiki/Polar_coordinate_system as a reference.
I figured out what I wanted.
I was trying to plot those coordinates in a polar graph. I was going about it all wrong.
I eventually realized that in order to convert the x, y coordinates, I needed my own definition of what a radius and an angle should represent in my x, y chart. In my case, I wanted the radius to be the largest absolute value of x and y.
The only problem was trying to figure out how to calculate an angle using x and y values.
This is how I wanted my circle to work,
when x = y, the angle is 0.
when x = 1 & y = 0, then angle is 45.
when x = 1 & y = -1, then angle is 90.
when x = 0 & y = 1, then angle is -45.
when x = -1 & y = 1, then angle is -90.
Given this information, you can figure out the rest of the coordinates for the circle up to the 180 and -180 degree angles.
I had to use conditions (if else statements) to properly figure out the correct angle given x and y.
And then to graph the polar coordinate, you just convert using the cos and sin conversion to x, y coordinates.
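Here's that last conversion step as a small C++ sketch (radius and angle are whatever my definitions above produce):

#include <cmath>

struct Point2D { float x, y; };

// Standard polar -> Cartesian conversion for plotting.
Point2D polarToCartesian(float radius, float angleDegrees) {
    float rad = angleDegrees * 3.14159265f / 180.0f;
    return {radius * std::cos(rad), radius * std::sin(rad)};
}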
I like to program; I'm just not good with calculus.
I understand the basic concept of how to unproject:
let mut z: f32 = 0.0;
unsafe {
    // ReadPixels needs a mutable raw pointer to write into.
    gl::ReadPixels(x as i32, y as i32, 1, 1, gl::DEPTH_COMPONENT, gl::FLOAT,
                   &mut z as *mut f32 as *mut _);
}
// window position to screen position
let screen_position = self.to_screen_position(x, y);
// screen position to world position
let world_position = self.projection_matrix().invert() *
Vector4::new(screen_position.x, screen_position.y, z, 1.0);
But this doesn't handle the W coordinate properly: when I render things from world space to screen space, they end up with W != 1 because of the perspective transformation (https://www.opengl.org/sdk/docs/man2/xhtml/gluPerspective.xml). When I transform back from screen space to world space (with an assumption of W = 1), the objects are in the wrong position.
As I understand it, W is a scaling factor for all the other coordinates. If this is the case, doesn't it mean screen vectors (0, 0, -1, 1) and (0, 0, -2, 2) will map to the same window coordinates, and that unprojecting doesn't necessarily produce unique results without further work?
Thanks!
Because of the perspective transformation, you can't really ignore W.
I would suggest looking at the source code for the gluUnProject function here: http://www.opengl.org/wiki/GluProject_and_gluUnProject_code. You'll see that what it does is:
Calculate the projection * modelView matrix and invert it.
Multiply a vector made from the screen position by the inverted matrix (in the code, winZ = 0 corresponds to the near plane, winZ = 1 to the far plane of your perspective projection; W is always 1).
Divide the resulting vector's X, Y and Z by its W.
Note that if you do it like this, the result's W should be ignored (i.e. assumed to be 1).
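In C++, using GLM for the matrix types (an assumption; any math library with a 4x4 inverse works), the whole thing is roughly:

#include <glm/glm.hpp>

// winX, winY, winZ are window-normalized coordinates in [0, 1], as in
// the gluUnProject source; they are mapped to NDC in [-1, 1] here.
glm::vec4 unprojectPoint(const glm::mat4 &projection, const glm::mat4 &modelView,
                         float winX, float winY, float winZ) {
    glm::vec4 ndc(2.0f * winX - 1.0f, 2.0f * winY - 1.0f,
                  2.0f * winZ - 1.0f, 1.0f);
    glm::vec4 v = glm::inverse(projection * modelView) * ndc;
    // The divide by W is the step that makes this work; afterwards the
    // result's W is treated as 1.
    return glm::vec4(glm::vec3(v) / v.w, 1.0f);
}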
You can also look here to see how the transformations in OpenGL work - if you're not familiar with this, I'd suggest reading about Clip coordinates and Normalized Device coordinates.