Given the figure shown below, if I wanted to scale the top rectangle by some factor such that its left side still touches the circle like it does now, how would I go about doing that? This is being done in C++, where the rectangles are represented by four vertices and the circle is represented by a center and radius.
To scale, I simply multiply all the vertices by the scale factor, but then I need to translate the rectangle back so it still touches the circle. I'm not sure how to do the translation.
Thanks.
First, find the point at which the circle touches the rectangle. You can do this by working out the angle of one of the long rectangle edges, which is parallel to the line from the center of the circle to the touch point. Take the x and y values of the far corner of that edge and subtract the near corner's x and y from them. Then the angle is
angle = atan2(y difference, x difference).
Then use that along with the circle center and circle radius to calculate the point where they touch:
touch.x = center.x + cos(angle) * radius;
touch.y = center.y + sin(angle) * radius;
Then, for each corner point of the rectangle:
Subtract the touch point from the rectangle corner point
Multiply by the scale value
Add the touch point
This scales the rectangle around the touch point, so the touch point is itself unaffected.
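Putting those steps together, here's a minimal C++ sketch (the Point struct and the parameter names are illustrative, not from your code):
#include <cmath>

struct Point { double x, y; };

// Scales the rectangle about the circle/rectangle touch point so the
// touch point stays fixed. nearCorner/farCorner are the two corners of
// one of the long edges, as described above.
void scaleAboutTouchPoint(Point corners[4], Point nearCorner, Point farCorner,
                          Point center, double radius, double scale) {
    // Angle of the long edge, pointing from the circle center toward the touch point.
    double angle = std::atan2(farCorner.y - nearCorner.y,
                              farCorner.x - nearCorner.x);

    // Point where the circle touches the rectangle.
    Point touch = { center.x + std::cos(angle) * radius,
                    center.y + std::sin(angle) * radius };

    // Scale every corner about the touch point, leaving it fixed.
    for (int i = 0; i < 4; ++i) {
        corners[i].x = (corners[i].x - touch.x) * scale + touch.x;
        corners[i].y = (corners[i].y - touch.y) * scale + touch.y;
    }
}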
I am rendering a tile map to an FBO, then copying the resulting buffer to a texture and rendering it on a full-screen quad (FSQ). From mouse click events I get the screen coordinates and map them to clip space [-1,1]:
glm::vec2 posMouseClipSpace((2.0f * myCursorPos.x) / myDeviceWidth -
1.0f, 1.0f - (2.0f * myCursorPos.y) / myDeviceHeight);
I have logic in my program that, based on those coordinates, selects a specific tile on the texture.
Now, moving to 3D, I am texturing a semi-cylinder with the FBO I used in the previous step:
In this case I am using ray-triangle intersection to find the point where a ray hits the cylinder of radius r and height h. The idea is to map this intersection point into [-1,1] space so I can keep the tile-selection logic in my program.
I use the Möller–Trumbore algorithm to find the points on the cylinder hit by a ray. Let's say the intersected point is (x, y); as far as I can tell it is in world space.
I want to map that point to the space x:[-1,1], y:[-1,1].
I know the height of my cylinder, which equals a quarter of the circle's circumference:
cylinderHeight = myRadius * (PI/2);
so the point on the Y axis can be mapped into [-1,1] space:
vec2.y = (2.f * (intersectedPoint.y - myCylinder->position().y) ) /
(myCylinder->height()) - 1.f
and that works perfectly.
However, how do I compute the horizontal axis, which depends on the two variables x and z?
Currently my cylinder's radius is 1, so by coincidence a semi-cylinder centered at the origin spans from -1 to 1 on the X axis, which made me think it was [-1,1] space; it turns out it is not.
My next approach was to use the arc length of a semicircle, s = r * PI, and to plug that value into the equation:
vec2.x = (2.f * (intersectedPoint.x - myCylinder->position().x) ) /
(myCylinder->arcLength()) - 1.f
but clearly it is off by one unit in the negative direction.
I appreciate the help.
From your description, it seems that you want to convert the world-space intersection coordinate to its corresponding normalized texture coordinate.
For this you need the Z coordinate as well: there must be two "horizontal" coordinates, since the horizontal position on the cylinder depends on the angle around its axis. You don't need the arc length, however.
Using the relative X and Z coordinates of intersectedPoint, calculate the polar angle using atan2 and divide by PI (the angular range of the semicircular arc):
vec2.x = atan2(intersectedPoint.z - myCylinder->position().z,
myCylinder->position().x - intersectedPoint.x) / PI;
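Combining that with the Y mapping from the question, a minimal sketch (the Cylinder stand-in below just mirrors the accessors you already use):
#include <cmath>
#include <glm/glm.hpp>

// Stand-in for your cylinder class, exposing the accessors used above.
struct Cylinder {
    glm::vec3 pos;
    float h;
    glm::vec3 position() const { return pos; }
    float height() const { return h; }
};

// Maps a world-space hit point on the semi-cylinder into [-1,1] x [-1,1].
glm::vec2 toTileSpace(const glm::vec3& hit, const Cylinder& cyl) {
    glm::vec2 uv;
    // Horizontal: polar angle around the cylinder axis, divided by PI
    // (the angular range of the semicircle).
    uv.x = std::atan2(hit.z - cyl.position().z,
                      cyl.position().x - hit.x) / 3.14159265358979f;
    // Vertical: linear height above the base, mapped into [-1,1].
    uv.y = 2.0f * (hit.y - cyl.position().y) / cyl.height() - 1.0f;
    return uv;
}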
How can I pull the x and y coordinates of a rotated rectangle's center?
In the documentation it says that the rotated rectangle "...returns a Box2D structure which contains the following details: (center (x, y), (width, height), angle of rotation)".
I couldn't find anything out there on referencing the center, and my attempts to figure it out on my own were unsuccessful.
Ultimately, I'm trying to pull the center point of each rotated rectangle to find clusters of similarly angled, nearby rectangles.
According to the CvBox2D documentation:
struct CvBox2D - Stores coordinates of a rotated rectangle.
CvPoint2D32f center: Center of the box
So you need CvPoint2D32f
struct CvPoint2D32f -
2D point with floating-point coordinates.
x: floating-point x-coordinate of the point.
y: floating-point y-coordinate of the point.
Given a CvBox2D named box, you can then access the center's x and y with:
box.center.x // the rotated rectangle center's x
box.center.y // the rotated rectangle center's y
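If you're on the C++ API rather than the legacy C structs, the same data lives in cv::RotatedRect; here is a rough sketch of collecting the centers for your clustering step (the contour input is just an illustration):
#include <opencv2/imgproc.hpp>
#include <vector>

// cv::minAreaRect returns a cv::RotatedRect whose center is a cv::Point2f.
std::vector<cv::Point2f> collectCenters(
        const std::vector<std::vector<cv::Point>>& contours) {
    std::vector<cv::Point2f> centers;
    for (const auto& contour : contours) {
        cv::RotatedRect box = cv::minAreaRect(contour);
        centers.push_back(box.center);  // box.center.x, box.center.y
    }
    return centers;
}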
I need a method to find a set of homogeneous transformation matrices that describe positions and orientations on a sphere.
The idea is that I have an object at the center of this sphere, which has a radius of dz. Since I know the 3D coordinate of the object, I know all the 3D coordinates of the sphere. Is it possible to determine the RPY of any point on the sphere such that the point always points toward the object at the center?
illustration:
At the origin of this sphere we have an object. The radius of the sphere is dz.
The red dot is a point on the sphere, together with the vector from this point toward the object at the origin.
The position should be relatively easy to extract, since a sphere can be described by a function, but how do I determine the vector, or rotation matrix, that points toward the origin?
Using the center of the sphere as the origin, you could compute the unit vector of the line from the origin to the point on the surface of the sphere, and then multiply that unit vector by -1 to obtain the vector pointing from the surface point toward the center.
Example:
vec pointToCenter(Point surfacePoint, Point origin) {
    vec outward = surfacePoint - origin;        // from the center out to the surface point
    vec unitVec = outward / vecLength(outward); // normalize
    return unitVec * -1;                        // flip so it points back at the center
}
Once you have the vector, you can convert it to Euler angles for the RPY; an example is here.
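As a rough illustration (conventions vary between frames; this assumes a normalized direction vector and leaves roll at zero, since a single vector does not constrain it):
#include <cmath>

// Converts a unit direction vector to yaw and pitch; roll is arbitrary.
void directionToRPY(double dx, double dy, double dz,
                    double& roll, double& pitch, double& yaw) {
    yaw   = std::atan2(dy, dx);                           // heading in the XY plane
    pitch = std::atan2(dz, std::sqrt(dx * dx + dy * dy)); // elevation from the XY plane
    roll  = 0.0;                                          // unconstrained by a single vector
}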
Off the top of my head, I would suggest using quaternions to define the rotation of any point at the origin relative to the point you want on the surface of the sphere:
Pick the desired point on the sphere's surface, say the north pole.
Translate that point to the origin (assuming the radius of the sphere is known), using the 3D Pythagorean theorem: x_comp^2 + y_comp^2 + z_comp^2 = hypotenuse^2.
Create a rotation that points an axis at the original surface point. This will just be a scaled multiple of the x, y and z components making up the hypotenuse; I would reduce them to unit components. Capture the resulting axis and rotation in a quaternion (q, x, y, z), where x, y, z are the components of your axis and q is the rotation about that axis. Hard-code q to one. You want to use quaternions because they will make your resulting rotation matrices easier to work with.
Translate the point back to the sphere's surface and negate the values of your axis components to get (q, -x, -y, -z).
This gives you your point on the surface of the sphere, with an axis pointing back to the origin. With the north pole as an example, you would have a quaternion of (1, 0, -1, 0) at point (0, radius_length, 0) on the sphere's surface. See quatrotation.c in my github repository below for the resulting rotation matrix.
I don't have time to write code for this but I wrote a little tutorial with compilable code examples in a github repository a while back, which should get you started:
https://github.com/brownwa/opengl
Do the mat_rotation tutorial first, then the quaternions one. It's doable in a weekend, or a day if you're focused.
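For reference, the standard conversion from a quaternion (w, x, y, z) to a rotation matrix looks like the sketch below; note it normalizes first, since the (1, x, y, z) form described above is not unit length in general:
#include <cmath>

// Fills R (row-major) with the rotation encoded by quaternion (w, x, y, z).
void quatToMatrix(double w, double x, double y, double z, double R[3][3]) {
    double n = std::sqrt(w * w + x * x + y * y + z * z);
    w /= n; x /= n; y /= n; z /= n;   // normalize to a unit quaternion
    R[0][0] = 1 - 2 * (y * y + z * z);
    R[0][1] = 2 * (x * y - w * z);
    R[0][2] = 2 * (x * z + w * y);
    R[1][0] = 2 * (x * y + w * z);
    R[1][1] = 1 - 2 * (x * x + z * z);
    R[1][2] = 2 * (y * z - w * x);
    R[2][0] = 2 * (x * z - w * y);
    R[2][1] = 2 * (y * z + w * x);
    R[2][2] = 1 - 2 * (x * x + y * y);
}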
I have two points on a circle. I know the angle between them from the center and the coordinates of one point. I want to find the coordinates of the other point. I think I need to multiply by a rotation matrix to find the point. How can I do this in C++? Is there a function for it?
You can calculate it directly using the rotation formulas:
x' = x * cos(angle) - y * sin(angle)
y' = x * sin(angle) + y * cos(angle)
The cos and sin functions are available in math.h (<cmath> in C++).
Note that the rotation is counterclockwise and about the origin, and 'angle' should be in radians.
If the center of the circle is not at the origin, first shift the origin to the center of the circle, apply the rotation, then shift the origin back to get the other point.
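Putting it all together (there is no single standard-library function for this, but it is only a few lines with <cmath>):
#include <cmath>

struct Point { double x, y; };

// Rotates p counterclockwise by 'angle' radians about 'center':
// shift the center to the origin, rotate, then shift back.
Point rotateAbout(Point p, Point center, double angle) {
    double dx = p.x - center.x;
    double dy = p.y - center.y;
    return { center.x + dx * std::cos(angle) - dy * std::sin(angle),
             center.y + dx * std::sin(angle) + dy * std::cos(angle) };
}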
I am currently working on ray-tracing techniques and I think I've done a pretty good job so far, but I haven't covered the camera yet.
Until now, I used a plane fragment as the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value and can be changed].
In addition to that, I mostly place the camera at e = (0, 0, 1000), and I use a perspective projection.
I send rays from point e through the pixels, and after calculating the pixel color I write it to the image's corresponding pixel.
Here is an image I created. Hopefully you can guess where the eye and the view plane are by looking at it.
My question starts here. It's time to move my camera around, but I don't know how to map 2D view-plane coordinates to the canonical coordinates. Is there a transformation matrix for that?
The method I have in mind requires knowing the 3D coordinates of the pixels on the view plane. I am not sure it's the right method to use. So, what do you suggest?
There are a variety of ways to do it. Here's what I do:
Choose a point to represent the camera location (camera_position).
Choose a vector that indicates the direction the camera is looking (camera_direction). (If you know a point the camera is looking at, you can compute this direction vector by subtracting camera_position from that point.) You probably want to normalize camera_direction, in which case it's also the normal vector of the image plane.
Choose another normalized vector that's (approximately) "up" from the camera's point of view (camera_up).
camera_right = Cross(camera_direction, camera_up)
camera_up = Cross(camera_right, camera_direction) (This corrects for any slop in the choice of "up".)
Visualize the "center" of the image plane at camera_position + camera_direction. The up and right vectors lie in the image plane.
You can choose a rectangular section of the image plane to correspond to your screen. The ratio of the width or height of this rectangular section to the length of camera_direction determines the field of view. To zoom in you can lengthen camera_direction or decrease the width and height. Do the opposite to zoom out.
So given a pixel position (i, j), you want the (x, y, z) of that pixel on the image plane. From that you can subtract camera_position to get a ray vector (which then needs to be normalized).
// Assumes camera_position, camera_direction (normalized), camera_up and
// camera_right are globals set up as described in the steps above.
Ray ComputeCameraRay(int i, int j) {
    const float width = 512.0;  // pixels across
    const float height = 512.0; // pixels high
    // Map the pixel indices into [-0.5, 0.5] on each axis of the image plane.
    double normalized_i = (i / width) - 0.5;
    double normalized_j = (j / height) - 0.5;
    // Start at the center of the image plane (camera_position +
    // camera_direction) and walk along the right and up vectors.
    Vector3 image_point = normalized_i * camera_right +
                          normalized_j * camera_up +
                          camera_position + camera_direction;
    // The ray runs from the eye through that point on the image plane.
    Vector3 ray_direction = image_point - camera_position;
    return Ray(camera_position, ray_direction);
}
This is meant to be illustrative, so it is not optimized.
For rasterizing renderers, you tend to need a transformation matrix because that's how you map directly from 3D coordinates to 2D screen coordinates.
For ray tracing, it's not necessary because you're typically starting from a known pixel coordinate in 2D space.
Given the eye position, a point in 3-space at the center of the screen, and vectors for "up" and "right", it's quite easy to calculate the 3D ray that goes from the eye position through the specified pixel.
I've previously posted some sample code from my own ray tracer at https://stackoverflow.com/a/12892966/6782