I am trying to understand the mapping from window coordinates (origin at the top-left) to OpenGL coordinates (origin at the bottom-left) when using the mouse function. In the book I am using, this mapping is described by the following two lines:
points[count].x = (float) x / (w/2) - 1.0;
points[count].y = (float) (h-y) / (h/2) - 1.0;
I suspect that these two lines perform a scaling. Could you please give an intuitive, mathematical explanation of this mapping?
What book are you referring to? The origin in NDC space is the center of the viewport ((0,0) is the center; (-1,-1) is the bottom-left; (1,1) is the top-right). Any other coordinate space is defined by your projection matrix.
I believe what the book is trying to teach you is that NDC (-1,-1) is the bottom-left corner of your viewport and NDC (1,1) is the top-right corner.
A more complete mapping would include the X and Y location of your viewport:
NDCX = (2.0 * (ScreenX - ViewportX) / ViewportW) - 1.0;
NDCY = (2.0 * (ScreenY - ViewportY) / ViewportH) - 1.0;
This mapping is illustrated below (the square on the right is the viewport):
Of course one additional step is necessary, since the Y axis runs in the opposite direction in your mouse coordinate system. That is why you see the Y axis flipped in your mapping: h - y.
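Here is a small sketch of the same mapping with the viewport offset and the Y flip written out explicitly (the function and parameter names are purely illustrative):

// Map a mouse position (window origin top-left, Y down) to NDC (origin at the
// center of the viewport, Y up). vpX, vpY, w, h describe the viewport.
void windowToNDC(int mouseX, int mouseY,
                 int vpX, int vpY, int w, int h,
                 float *ndcX, float *ndcY)
{
    // X: shift into the viewport, scale to 0..2, then shift to -1..1
    *ndcX = 2.0f * (float)(mouseX - vpX) / (float)w - 1.0f;
    // Y: flip first (window Y grows downward, NDC Y grows upward), then map the same way
    *ndcY = 2.0f * (float)(h - (mouseY - vpY)) / (float)h - 1.0f;
}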
Suppose I have this chicken model that I want to constantly look towards the viewer (the camera position), or, more simply, towards the origin (0,0,0).
How do I calculate the angles for each axis so that I can rotate the object with them?
Edit:
Sorry if my question was too general. I'm still struggling with this though.
Let's say that the 3D model position is (x,y,z) in model space, and I want the model to "look" towards the origin.
My first thought was to begin by rotating around the x axis (rotating vertically):
Consider the yellow circle as the y plane.
So I tried the following code, which doesn't rotate the model at all.
glm::vec3 camPos = camera.GetPosition();
float value = camPos.y / glm::sqrt(glm::pow(camPos.x, 2.0f) + glm::pow(camPos.y, 2.0f) + glm::pow(camPos.z, 2.0f));
float angle = glm::asin(value);
cow.SetModelMatrix(glm::translate(camPos - glm::vec3(0, 0, 1.5)) * // then translate so the cow appears a little bit in front of the camera
                   glm::rotate(glm::radians(angle), glm::vec3(-1, 0, 0)) * // then rotate vertically by the angle
                   glm::scale(glm::vec3(0.1, 0.1, 0.1)) // scale first, because the cow (I mean chicken) is too big
                   );
The camera starts at position (0, 0, 5), looking towards the negative z axis.
What am I doing wrong?
If the chicken is at the origin c = (0,0,0), the camera is at r = (x,y,z), and the ground is at y = 0, then what you want is a sequence of rotations that gets the local x axis of the chicken pointing towards the camera.
First orient your x axis in the horizontal plane with a rotation about the vertical y axis by an angle φ = -atan(z/x), and then apply a rotation about the z axis by an angle ψ = atan(y/√(x^2+z^2)).
This creates a 3×3 rotation matrix E = ROT_Y(φ) * ROT_Z(ψ):
    | x/d   -x*y/(d*√(x^2+z^2))   -z/√(x^2+z^2) |
E = | y/d    √(x^2+z^2)/d           0           |
    | z/d   -y*z/(d*√(x^2+z^2))    x/√(x^2+z^2) |
where d = √(x^2+y^2+z^2). You can see the local x axis (the first column of E) pointing towards (x,y,z). Also, the local z axis has no vertical component, so it always lies in the ground plane.
But this depends on the implementation: if, for example, you need to keep the chicken's y axis vertical (as opposed to keeping z in the ground plane), you will need a different set of rotations and angles. So to answer fully, you would need to provide more information.
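In case it helps, here is a rough GLM sketch of the two rotations described here (my own transcription; atan2 handles the quadrants, and the angles are already in radians, so no degree conversion is applied):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build E = ROT_Y(phi) * ROT_Z(psi) so the local x axis points towards (x, y, z).
glm::mat4 lookTowards(const glm::vec3 &camPos)
{
    float x = camPos.x, y = camPos.y, z = camPos.z;
    float phi = std::atan2(-z, x);                        // = -atan(z/x), rotation about the vertical y axis
    float psi = std::atan2(y, std::sqrt(x * x + z * z));  // rotation about the z axis
    glm::mat4 E(1.0f);
    E = glm::rotate(E, phi, glm::vec3(0, 1, 0));          // ROT_Y(phi)
    E = glm::rotate(E, psi, glm::vec3(0, 0, 1));          // ROT_Z(psi)
    return E;
}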
The problem: I have two points in 3D space, where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them whose length is the distance between the two points, so that the center of each of its end caps touches one of the two points. I have the cylinder translating to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last; that way I can get it into the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);

    if(point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x)/2.0, (point.y + parent.y)/2.0, (point.z + parent.z)/2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried the solution given in one of the answers below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves the base of the cylinder to the origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;

    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically, I am trying to implement the space colonization algorithm for random tree generation. I have most of it working, but I want to map the branches onto it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there are arbitrarily many rotation matrices satisfying your constraints, but any one will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. Then you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this into a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t: take (1,0,0), (0,1,0) and (0,0,1), compute the scalar product (also called inner or dot product) of each with z_t, and use the vector for which the absolute value of that product is smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local y axis: y_t = normalize(u - (z_t · u) z_t).
Finally, create the x axis by taking the cross product x_t = y_t × z_t (this order keeps the resulting basis right-handed).
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from", written down as seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t written side by side as columns. OpenGL uses so-called homogeneous matrices, so you have to pad this to a 4×4 form with a (0,0,0,1) bottommost row and rightmost column.
You can then load that into OpenGL: if you're using the fixed-function pipeline, apply the rotation with glMultMatrix; if you're using shaders, multiply it onto the matrix you eventually pass to glUniform.
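A minimal GLM sketch of that basis construction, under the assumptions above (the function name is mine; combine the result with a translation matrix as described):

#include <cmath>
#include <glm/glm.hpp>

// Rotation that maps the local Z axis onto normalize(p1 - p2).
glm::mat4 rotationAligningZ(const glm::vec3 &p1, const glm::vec3 &p2)
{
    glm::vec3 z_t = glm::normalize(p1 - p2);

    // Pick the cardinal axis least parallel to z_t (smallest |dot product|).
    glm::vec3 candidates[3] = { glm::vec3(1, 0, 0), glm::vec3(0, 1, 0), glm::vec3(0, 0, 1) };
    glm::vec3 u = candidates[0];
    float best = std::fabs(glm::dot(z_t, u));
    for (int i = 1; i < 3; ++i) {
        float d = std::fabs(glm::dot(z_t, candidates[i]));
        if (d < best) { best = d; u = candidates[i]; }
    }

    // Orthogonalize u against z_t (Gram-Schmidt) to get the local Y axis.
    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t);
    // Complete a right-handed basis.
    glm::vec3 x_t = glm::cross(y_t, z_t);

    // Write the basis vectors side by side as columns and pad to 4x4.
    glm::mat4 R(1.0f);
    R[0] = glm::vec4(x_t, 0.0f);
    R[1] = glm::vec4(y_t, 0.0f);
    R[2] = glm::vec4(z_t, 0.0f);
    return R;
}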
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2, and we want to move C1 onto P1 and C2 onto P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system, it's like GPS coordinates: two angles, one around the pole axis (in your case the world's Y axis), which we typically call yaw, and a pitch angle (in your case around the X axis in model space). These two angles can be computed by converting P2 - P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object by the pitch angle around X, then by the yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.
I have a C application where I have loaded my image (GIF) object onto the screen. Now I want the image object to rotate on one axis along with my pointer.
Meaning, wherever I move the pointer on the screen, my image should rotate about a fixed point. How do I do that?
I have seen formulae like
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
but it takes an angle as input, and I don't have the angle; I only have the pointer coordinates. How do I make the object move according to the mouse pointer?
Seriously... You have learnt trigonometry in secondary school, right?
angle = arctan((pointerY - centerY) / (pointerX - centerX))
in C:
// obtain pointerX and pointerY; calculate centerX as the width of the image / 2,
// centerY as the height of the image / 2
double angle = atan2(pointerY - centerY, pointerX - centerX);
double newX = cos(angle) * oldX - sin(angle) * oldY;
double newY = sin(angle) * oldX + cos(angle) * oldY;
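Note that this formula rotates the point about the window origin; if the image is supposed to pivot about its own center (the fixed point mentioned in the question), you usually rotate the offsets from that center instead. A small sketch, with purely illustrative names:

#include <math.h>

/* Rotate (oldX, oldY) about (centerX, centerY) so the image tracks the pointer. */
void rotateAroundCenter(double oldX, double oldY,
                        double centerX, double centerY,
                        double pointerX, double pointerY,
                        double *newX, double *newY)
{
    /* angle of the pointer relative to the pivot point */
    double angle = atan2(pointerY - centerY, pointerX - centerX);
    /* rotate the offset from the center, not the absolute coordinates */
    double dx = oldX - centerX;
    double dy = oldY - centerY;
    *newX = centerX + cos(angle) * dx - sin(angle) * dy;
    *newY = centerY + sin(angle) * dx + cos(angle) * dy;
}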
First of all, that formula is perfectly fine if your rotation is in 2D space. You cannot remove angle from your formula because rotation without an angle is meaningless!! Think about it.
What you really need is to learn more basic stuff before doing what you are trying to do. For example, you should learn about:
How to get the mouse location from your window management system (for example SDL)
How to find an angle based on the mouse location
How to draw quads with texture on them (For example using OpenGL)
How to perform transformation, either manually or for example using OpenGL itself
Update
If you have no choice but to draw straight rectangles, you need to rotate the image manually, creating a new image. This link contains all the keywords you need to look up to do that. In short, it goes something like this:
for every point (dr, dc) in destination image
    find the inverse transform of (dr, dc) in the original image, named (or, oc)
    // note that or and oc are most probably fractional numbers
    from the colors of:
        - (floor(or), floor(oc))
        - (floor(or), ceil(oc))
        - (ceil(or), floor(oc))
        - (ceil(or), ceil(oc))
    compute a color (r, g, b) using bilinear interpolation
    dest_image[dr][dc] = (r, g, b)
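As a rough C++ sketch of that loop (the Image container and its accessors are made up here just so the example is self-contained):

#include <cmath>
#include <vector>

struct Color { unsigned char r, g, b; };

// A tiny RGB image container, invented for this sketch.
struct Image {
    int width = 0, height = 0;
    std::vector<Color> pixels;                                       // row-major, height*width entries
    Color at(int row, int col) const { return pixels[row * width + col]; }
    void  set(int row, int col, Color c) { pixels[row * width + col] = c; }
};

// Rotate src by `angle` (radians) about (centerRow, centerCol) into dst.
void rotateImage(const Image &src, Image &dst, double angle,
                 double centerRow, double centerCol)
{
    double c = std::cos(angle), s = std::sin(angle);
    for (int dr = 0; dr < dst.height; ++dr) {
        for (int dc = 0; dc < dst.width; ++dc) {
            // Inverse transform: rotate the destination offset back into the source.
            double rowOff = dr - centerRow, colOff = dc - centerCol;
            double orow = centerRow + rowOff * c - colOff * s;
            double ocol = centerCol + rowOff * s + colOff * c;

            int r0 = (int)std::floor(orow), c0 = (int)std::floor(ocol);
            if (r0 < 0 || c0 < 0 || r0 + 1 >= src.height || c0 + 1 >= src.width)
                continue;                                            // falls outside the source image
            double fr = orow - r0, fc = ocol - c0;

            // Bilinear interpolation of the four neighbouring source pixels.
            Color p00 = src.at(r0, c0),     p01 = src.at(r0, c0 + 1);
            Color p10 = src.at(r0 + 1, c0), p11 = src.at(r0 + 1, c0 + 1);
            Color out;
            out.r = (unsigned char)((1 - fr) * ((1 - fc) * p00.r + fc * p01.r)
                                  +      fr  * ((1 - fc) * p10.r + fc * p11.r));
            out.g = (unsigned char)((1 - fr) * ((1 - fc) * p00.g + fc * p01.g)
                                  +      fr  * ((1 - fc) * p10.g + fc * p11.g));
            out.b = (unsigned char)((1 - fr) * ((1 - fc) * p00.b + fc * p01.b)
                                  +      fr  * ((1 - fc) * p10.b + fc * p11.b));
            dst.set(dr, dc, out);
        }
    }
}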
The angle is the one you calculate between where the user clicks on the screen and the old coordinates.
e.g.
on screen you have a square
( 0,10)-----(10,10)
| |
| |
| |
( 0, 0)-----(10, 0)
and the user clicks at, say, (15,5),
you can, for example, calculate the angle relative to your square from either a corner or from the intersection of its diagonals, and then just use the formulas you already have for each corner coordinate of the square.
I am currently trying to use OpenGL (with SDL) to draw a cube at the location where I left-click on the screen, and then get it to point at the position on the screen where I right-click.
I can successfully draw a cube at my desired location using gluUnProject, meaning I already know the coordinates at which my cube is situated.
However I do not know how to calculate all of the angles required to make my cube point at the new location.
Of course I am still using gluUnProject to find the coordinates of my right-click, but I only know how to rotate around the Z axis, from working with 2D graphics.
For example before, if I wanted to rotate a quad around the Z axis (Of course, this would be a top down view where the Z axis is still "going through" the screen) in 2D I would do something like:
angle = atan2(mouseCoordsY - quadPosY, mouseCoordsX - quadPosX);
glRotatef(angle*180/PI, 0, 0, 1);
My question is, how would I go about doing this in 3D?
Do I need to calculate the angles for each axis as I did above?
If so how do I calculate the angle for rotation around the X and Y axis?
If not, what method should I use to achieve my desired results?
Any help is greatly appreciated.
If your cube is at A = (x0,y0,z0)
If your cube is currently looking at B=(x1,y1,z1)
and if you want it to look at C = (x2,y2,z2), then:
let v1 be the vector from A to B
v1 = B - A
and v2 be the one from A to C
v2 = C - A
First normalize them.
v1 = v1 / |v1|
v2 = v2 / |v2|
then calculate the rotation angle and the rotation axis as
angle = acos(v1*v2) //dot product
axis = v1 X v2 //cross product
You can call glRotate with
glRotate(angle, axis[0], axis[1], axis[2])
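Two caveats worth keeping in mind: acos returns radians while glRotate expects degrees, and the cross product degenerates to a zero axis when v1 and v2 are parallel. A small GLM sketch with both handled (the function name is my own):

#include <cmath>
#include <GL/gl.h>
#include <glm/glm.hpp>

// Rotate an object at A, currently looking at B, so that it looks at C instead.
void rotateTowards(const glm::vec3 &A, const glm::vec3 &B, const glm::vec3 &C)
{
    glm::vec3 v1 = glm::normalize(B - A);                  // current look direction
    glm::vec3 v2 = glm::normalize(C - A);                  // desired look direction
    float d = glm::clamp(glm::dot(v1, v2), -1.0f, 1.0f);   // clamp against rounding error
    float angle = glm::degrees(std::acos(d));              // glRotatef expects degrees
    glm::vec3 axis = glm::cross(v1, v2);                   // zero when v1 and v2 are parallel
    if (glm::length(axis) > 1e-6f)
        glRotatef(angle, axis.x, axis.y, axis.z);
}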
I am currently dealing with several thousand boxes that I'd like to project onto the screen to determine their sizes and distances to the camera.
My current approach is to get a sphere representing the box and project that using view and projection matrices and the viewport values.
// PSEUDOCODE
// transform the box center from world space into view space
boxCenterInViewSpace = viewMatrix * boxCenter;
// get two points left and right of the center
leftPoint = boxCenterInViewSpace - radius;
rightPoint = boxCenterInViewSpace + radius;
// project the points from view space into clip space
leftPoint = projectionMatrix * leftPoint;
rightPoint = projectionMatrix * rightPoint;
// perspective divide (clip space -> NDC)
leftPoint /= leftPoint.w;
rightPoint /= rightPoint.w;
// move from -1..1 to the 0..1 range
leftPoint = leftPoint * 0.5 + 0.5;
rightPoint = rightPoint * 0.5 + 0.5;
// scale to the viewport
leftPoint.x = leftPoint.x * viewPort.right + viewPort.left;
leftPoint.y = leftPoint.y * viewPort.bottom + viewPort.top;
rightPoint.x = rightPoint.x * viewPort.right + viewPort.left;
rightPoint.y = rightPoint.y * viewPort.bottom + viewPort.top;
// at this point I check whether the node is visible on screen by comparing the points to the viewport
// calculate the size
length(rightPoint - leftPoint)
At another point I calculate the distance from the box to the camera.
The first problem is that I won't know if the box is just below the viewport, since I only calculate horizontally. Is there a way to project a real sphere onto the screen somehow? Some method that looks like:
float getSizeOfSphereProjectedOnScreen(vec3 midpoint, float radius)
The other question is simpler: in which coordinate space does the z coordinate correspond to the distance to the camera?
To sum it up, I want to calculate:
Is the Box in the view frustum?
What is the size of the Box on the screen?
What is the distance from Box to camera?
To simplify the calculations I'd like to use a sphere representation for this, but I don't know how to project a sphere.
[Updated]
What is the distance from Box to camera?
In [which] coordinate space is the z coordinate corresponding to the distance to the camera?
The answer is none of the usual spaces. The closest one would be in view space (i.e. after you apply the view matrix but not the projection matrix). In view space, the distance to the camera should be sqrt(x*x + y*y + z*z), because the camera is at the origin. (z would be a reasonable approximation only if |x| and |y| were really small relative to |z|.) This is assuming that knowing the distance from the camera to the center of the box is good enough.
I think if you really wanted a space in which the z coordinate corresponds to the distance to the camera, you'd need to map a spherical locus of points sqrt(x*x + y*y + z*z) = d to a plane z = d. I don't know that you can do that with a matrix.
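As a minimal GLM sketch of that distance computation (assuming, as above, that the distance to the center of the box is good enough):

#include <glm/glm.hpp>

// Distance from the camera to the box center, measured in view space
// (after the view matrix is applied, the camera sits at the origin).
float distanceToCamera(const glm::mat4 &viewMatrix, const glm::vec3 &boxCenter)
{
    glm::vec3 centerInView = glm::vec3(viewMatrix * glm::vec4(boxCenter, 1.0f));
    return glm::length(centerInView);   // sqrt(x*x + y*y + z*z)
}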
Is the Box in the view frustum?
What is the size of the Box on the screen?
I think you're on the right track with this, but depending on which direction the camera is facing, your left and right points might not determine how wide the box looks or whether the box intersects the view frustum. See my answer to your other question for a long way to do this.
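As an aside (a standard trick, not something from the linked answer): projecting all eight corners of the box and taking the min/max in NDC gives both the on-screen width and height and a conservative visibility test, regardless of which way the camera faces. A sketch:

#include <glm/glm.hpp>

// Project the 8 corners of an axis-aligned box and return its bounding rectangle in NDC.
// Returns false if every corner ends up behind the camera (w <= 0).
bool boxToNDCRect(const glm::mat4 &viewProj,
                  const glm::vec3 &boxMin, const glm::vec3 &boxMax,
                  glm::vec2 &ndcMin, glm::vec2 &ndcMax)
{
    ndcMin = glm::vec2( 1e9f);
    ndcMax = glm::vec2(-1e9f);
    bool anyInFront = false;
    for (int i = 0; i < 8; ++i) {
        glm::vec3 corner((i & 1) ? boxMax.x : boxMin.x,
                         (i & 2) ? boxMax.y : boxMin.y,
                         (i & 4) ? boxMax.z : boxMin.z);
        glm::vec4 clip = viewProj * glm::vec4(corner, 1.0f);
        if (clip.w <= 0.0f)
            continue;                       // behind the camera; a full solution would clip the edges
        glm::vec2 ndc = glm::vec2(clip) / clip.w;
        ndcMin = glm::min(ndcMin, ndc);
        ndcMax = glm::max(ndcMax, ndc);
        anyInFront = true;
    }
    return anyInFront;
}

If the resulting rectangle lies entirely outside the -1..1 range (or nothing was in front of the camera), the box is not visible; otherwise its extent times half the viewport size approximates the on-screen size in pixels.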