Orientation of figures in space - C++

I have a sphere in my program and I intend to draw some rectangles at a distance x from the centre of this sphere. The figure looks something like this:
The rectangles are drawn at (x,y,z) points that I already have in a vector of 3d points.
Let's say the distance x from the centre is 10. Notice the orientation of these rectangles: they are tangential to an imaginary sphere of radius 10 (perpendicular to an imaginary line from the centre of the sphere to the centre of each rectangle).
Currently, I do something like the following:
For n points in vector<vec3f> pointsInSpace where the rectangles have to be plotted:
for (size_t i = 0; i < pointsInSpace.size(); ++i) {
    // draw rectangle at (x, y, z)
}
which does not produce the tangential orientation that I am looking for.
It looked to me like I would need to apply roll, pitch, and yaw rotations to each of these rectangles, somehow using quaternions, to make them tangential in the way I am looking for.
However, it looked a bit complex to me and I wanted to ask about some better method to do this.
Also, the rectangle might change to some other shape in the future, so a generic solution would be appreciated.

I think you essentially want the same transformation as would be accomplished with a LookAt() function (you want the rectangle to 'look at' the sphere, along a vector from the rectangle's center to the sphere's origin).
If your rectangle is formed of the points:
(-1, -1, 0)
(-1, 1, 0)
( 1, -1, 0)
( 1, 1, 0)
Then the rectangle's normal will be pointing along Z. This axis needs to be oriented towards the sphere.
So the normalised vector from your point to the center of the sphere is the Z-axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z-axis is pointing in the same direction.
The cross of the 'up' and 'z' axes gives the 'x' axis, and then the cross of the 'z' and 'x' axes gives the 'y' axis.
These three axes (x,y,z) directly form a rotation matrix.
This resulting transformation matrix will orient the rectangle appropriately. Either use GL's fixed function pipeline (yuk), in which case you can just use gluLookAt(), or build and use the matrix above in whatever fashion is appropriate in your own code.
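For reference, here is a minimal sketch of that basis construction using GLM (the function name is mine, not from the post); it assumes the rectangle is modelled on the XY plane with its normal along +Z, as in the point list above:

#include <cmath>
#include <glm/glm.hpp>

// Returns a rotation matrix that orients a rectangle at `point` so that its
// +Z normal points at the sphere centre.
glm::mat3 faceSphere(const glm::vec3& point, const glm::vec3& sphereCenter)
{
    // Normalised vector from the rectangle's position to the sphere centre
    // becomes the new Z axis.
    glm::vec3 z = glm::normalize(sphereCenter - point);

    // Pick an 'up' vector; switch to a different one if Z is (nearly) parallel to it.
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    if (std::abs(glm::dot(up, z)) > 0.999f)
        up = glm::vec3(1.0f, 0.0f, 0.0f);

    glm::vec3 x = glm::normalize(glm::cross(up, z)); // cross of up and z gives x
    glm::vec3 y = glm::cross(z, x);                  // cross of z and x gives y

    // The three axes form the columns of the rotation matrix.
    return glm::mat3(x, y, z);
}

Each vertex of the rectangle can then be placed at point + R * vertex, or the matrix can be folded into the model transform.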

Personally I think JasonD's answer is enough, but here is some info on the calculation involved.
Mathematically speaking this is a rather simple problem. What you have is two known vectors: the position vector and the sphere's normal vector. Since the square can still be rotated arbitrarily around the vector from the center of your sphere, you need to define one more vector, the up vector. Without defining an up vector the problem has no unique solution.
Once you define an up vector, the problem becomes simple. Assuming your square is on the XY-plane as JasonD suggests above, your matrix becomes:
(up × n × n).x   (up × n × n).y   (up × n × n).z   0
n.x              n.y              n.z              0
(up × n).x       (up × n).y       (up × n).z       0
p.x              p.y              p.z              1
Where × denotes the cross product, n is the unit normal vector of p - center of sphere (which is trivial if the sphere is at the center of the coordinate system), up is an arbitrary unit vector, and p is the position, which follows from the definition.
The solution has a bit of a singularity at the up direction of the sphere. An alternate solution is to rotate first 360 around up, then 180 around the rotated axis dot up. It produces the same thing with a different approach and no singularity problem.
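Here is an illustrative GLM version of that matrix (names are mine, not from the answer). GLM uses column vectors, so the rows above become the columns of the model matrix, with the local Z axis mapped to the sphere normal n as in JasonD's layout:

#include <glm/glm.hpp>

// Build the full 4x4 transform for a square at point p on a sphere centred at
// sphereCenter; `up` is any unit vector not parallel to the normal.
glm::mat4 squareTransform(const glm::vec3& p, const glm::vec3& sphereCenter,
                          const glm::vec3& up)
{
    glm::vec3 n = glm::normalize(p - sphereCenter);   // unit normal at p
    glm::vec3 s = glm::normalize(glm::cross(up, n));  // side axis, up x n
    glm::vec3 u = glm::cross(n, s);                   // corrected up axis

    glm::mat4 m(1.0f);
    m[0] = glm::vec4(s, 0.0f);   // local X
    m[1] = glm::vec4(u, 0.0f);   // local Y
    m[2] = glm::vec4(n, 0.0f);   // local Z -> sphere normal
    m[3] = glm::vec4(p, 1.0f);   // translation: the position on the sphere
    return m;
}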


Rotating plane such that it has a certain normal vector

I've got the following problem:
In 3D there's a vector from the fixed center of a plane to the origin. This plane can be oriented arbitrarily around this center, thus its normal vector is not necessarily the mentioned vector. Therefore I have to rotate the plane around this fixed center such that the mentioned vector is the plane's normal vector.
My first idea was to compute the angle between the vector and the normal vector, but the problem then is how to rotate the plane.
Any ideas?
A plane is a mathematical entity which satisfies the following equation:
n · (x - a) = 0
where n is the normal, and a is any point on the plane (in this case the center point as above). It makes no sense to "rotate" this equation - if you want the plane to face a certain direction, just make the normal equal to that direction (i.e. the "mentioned" vector).
You later mentioned in the comments that the "plane" is an OpenGL quad, in which case you can use Quaternions to compute the rotation.
This Stackoverflow post tells you how to compute the rotation quaternion from your current normal vector to the "mentioned" vector. This site tells you how to convert a quaternion into a rotation matrix (whose dimensions are 3x3).
Let's suppose the center point is called q, and that the rotation matrix you obtain from the quaternion is a 3x3 matrix R.
On its own, R can only rotate geometry about the origin. A rotation about a general point requires a 4x4 matrix (what OpenGL uses), which can be constructed by translating q to the origin, rotating, and translating back: M = T(q) · R · T(-q), i.e. a 4x4 matrix whose upper-left 3x3 block is R and whose translation part is q - R·q.
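A hedged sketch of both steps with GLM (this is my own code, not taken from the linked posts): build the quaternion rotating the quad's current normal onto the desired one, then sandwich it between translations so it rotates about the center point q:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 rotateNormalAboutPoint(glm::vec3 from, glm::vec3 to, const glm::vec3& q)
{
    from = glm::normalize(from);
    to   = glm::normalize(to);

    glm::vec3 axis = glm::cross(from, to);
    float     c    = glm::clamp(glm::dot(from, to), -1.0f, 1.0f);

    glm::quat rot;
    if (glm::length(axis) < 1e-6f)
        // Vectors are parallel (identity) or opposite (180 degrees about any
        // axis perpendicular to `from`; a robust pick is omitted for brevity).
        rot = (c > 0.0f) ? glm::quat(1.0f, 0.0f, 0.0f, 0.0f)
                         : glm::angleAxis(3.14159265f, glm::vec3(1.0f, 0.0f, 0.0f));
    else
        rot = glm::angleAxis(std::acos(c), glm::normalize(axis));

    // Rotation about a general point: translate q to the origin, rotate, translate back.
    return glm::translate(glm::mat4(1.0f), q)
         * glm::mat4_cast(rot)
         * glm::translate(glm::mat4(1.0f), -q);
}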

How to check if a point is inside a quad in perspective projection?

I want to test if any given point in the world is on a quad/plane. The quad/plane can be translated/rotated/scaled by any values, but I should still be able to detect if the given point is on it. I also need to get the location where the point would have been if no rotation/scale/translation had been applied to the quad.
For example, consider a quad at 0, 0, 0 with size 100x100, rotated by 45 degrees about the z axis. If my mouse location in the world is at (x, y, 0), I need to know if that point falls on that quad in its current transformation. If yes, then I need to know where that point would have been on the quad if no transformations had been applied. Any code sample would be of great help.
A ray-casting approach is probably simplest:
1. Use gluUnProject() to get the world-space direction of the ray to cast into the scene. The ray's origin is the camera position.
2. Put this ray into object space by transforming it by the inverse of your rectangle's transform. Note that you need to transform both the ray's origin point and direction vector.
3. Compute the intersection point between this ray and the XY plane with a standard ray-plane intersection test.
4. Check that the intersection point's x and y values are within your rectangle's bounds; if they are, that's your desired result.
A math library such as GLM will be very helpful if you aren't confident about some of the math involved here, it has corresponding functions such as glm::unProject() as well as functions to invert matrices and do all the other transformations you'd need.
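Putting the steps together, here is a rough GLM sketch (the names are mine; it assumes the untransformed quad lies on the XY plane, centred at the origin, with half-extents halfW x halfH):

#include <cmath>
#include <glm/glm.hpp>

// `quadTransform` is the quad's model matrix. On a hit, `localHit` is where
// the point would be if no transformation had been applied to the quad.
bool rayHitsQuad(const glm::vec3& rayOrigin, const glm::vec3& rayDir,
                 const glm::mat4& quadTransform,
                 float halfW, float halfH, glm::vec2& localHit)
{
    // Transform the ray into the quad's object space (inverse of its transform).
    glm::mat4 inv = glm::inverse(quadTransform);
    glm::vec3 o = glm::vec3(inv * glm::vec4(rayOrigin, 1.0f)); // point, w = 1
    glm::vec3 d = glm::vec3(inv * glm::vec4(rayDir,    0.0f)); // direction, w = 0

    // Intersect with the z = 0 plane.
    if (std::abs(d.z) < 1e-8f) return false;  // ray parallel to the quad
    float t = -o.z / d.z;
    if (t < 0.0f) return false;               // quad is behind the ray

    glm::vec3 p = o + t * d;
    localHit = glm::vec2(p.x, p.y);

    // Bounds check against the untransformed rectangle.
    return std::abs(p.x) <= halfW && std::abs(p.y) <= halfH;
}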

C++ raytracer and normalizing vectors

So far my raytracer:
- Sends out a ray and returns a new vector if a collision with a sphere was made.
- Pixel color is then added based on the color of the sphere[id] it collided with.
- Repeats for all spheres in the scene description.
For this example, let's say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is, given two un-normalized vectors, how can I detect if their normalized units are pointing in opposite directions? And with a point light like this, how would that work, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based off the two normal angles in relationship to each other, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it will be negative if they point in opposite directions, i.e. the projection of one vector onto the other is going in the opposite direction.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
If u.v < 0 then the angle between them is obtuse.
If u.v == 0 they point at exactly a 90 degree angle.
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the normal of the sphere's surface at your collision xyz and the vector from your light source to the same xyz.
So if the sphere has center at xs, ys, zs, and the light source is at xl, yl, zl, and the collision is at xyz then
vector 1 is x-xs, y-ys, z-zs and
vector 2 is xl-x, yl-y, zl-z
if the dot product between these is < 0 then the light ray hit the opposite side of the sphere and can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (which will be in shadow, and thus get only ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinate with the eye point coordinate, and once again if the dot product is < 0 the point is not visible from the eye.
Then, compute the shading based on the dot product of the vector from eye, to surface, and surface to light.
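A small self-contained sketch of those dot-product checks (hand-rolled vector type for illustration; substitute your own):

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// True if the hit point `p` on the sphere centred at `c` faces the position
// `towards` (the light source, or the eye): vector 1 is the surface normal
// direction p - c, vector 2 is towards - p; a negative dot product means the
// point is on the far (shadowed / hidden) side.
bool facesPoint(const Vec3& p, const Vec3& c, const Vec3& towards)
{
    return dot(sub(p, c), sub(towards, p)) > 0.0f;
}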
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.

projected surface normal vector onto xy plane

I have a plane whose equation is ax+by-z+d=0; hence, its normal vector is (a, b, -1). Now, I need to project this vector onto the xy plane in order to compute its direction from the North axis (I guess it is the Y axis here). Please help me get the projected vector. Thank you.
I think what you're looking for is the dot product. Finding the direction the plane is facing is pretty easy that way.
// generic code, actual code depends on your engine.
// BasePlane.GetNormal() would equal to (0,0,1) for the X/Y plane
float dir = YourPlane.GetNormal().Dot(BasePlane.GetNormal());
If it equals 1, your plane faces the same direction as the plane you're testing against. If it equals -1, it's facing the opposite direction. A value of 0 means the plane is orthogonal to the tested plane. Hope this helps.
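As for the projection the question asks about, a tiny hedged sketch: dropping the z component projects the normal (a, b, -1) onto the xy plane, giving (a, b, 0), and atan2 then measures its heading from the +Y ("North") axis towards +X:

#include <cmath>

// Heading (in radians) of the projected normal, measured from the +Y axis.
float headingFromNorth(float a, float b)
{
    // Projected vector is simply (a, b, 0).
    return std::atan2(a, b);   // 0 when the projection points along +Y
}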

3d Camera Position given some points

Heyo,
I'm currently working on a project where I need to place the camera such that the full motion of a character would be viewable without moving the camera. I have the position where the character starts, as well as the maximum distance that the character will travel in all three directions (X,Y, & Z). I also have the field of view (which is 90 degrees).
Is there an equation that'll figure out where I need to place the camera so it won't have to move to see the full motion?
Note: this is using OpenGL.
Clarification: The camera should be "in front" of the character that's in the motion, not above.
It'll also be moving along a ground plane.
If you make a bounding sphere of the points, all you need to do is keep the camera at a distance greater than or equal to the radius of the bounding sphere / sin(FOV/2).
For example, if you have a bounding sphere with radius Radius, and a specified Field of View FOV, your camera just needs to be at a point "Dist" away, pointing towards the center of the bounding sphere.
The equation for calculating the distance is:
Dist = Radius / sin( FOV/2 );
This will work in 3D, for a camera at any orientation.
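A rough sketch of that calculation (the helper names are my own): build a simple bounding sphere from the extreme points of the motion, then apply Dist = Radius / sin(FOV/2):

#include <algorithm>
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct BoundingSphere { glm::vec3 center; float radius; };

// Crude bounding sphere: centroid of the points plus the largest distance to it.
// Assumes `points` is non-empty.
BoundingSphere boundingSphere(const std::vector<glm::vec3>& points)
{
    glm::vec3 center(0.0f);
    for (const glm::vec3& p : points) center += p;
    center /= static_cast<float>(points.size());

    float radius = 0.0f;
    for (const glm::vec3& p : points)
        radius = std::max(radius, glm::length(p - center));
    return { center, radius };
}

// Minimum distance of the camera from the sphere centre, FOV in radians.
float cameraDistance(const BoundingSphere& s, float fovRadians)
{
    return s.radius / std::sin(fovRadians * 0.5f);
}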
Simply having the maximum range of (X, Y, Z) is not on its own sufficient, because the viewing port is essentially pyramid shaped, with the apex of the pyramid being at the eye position.
For the sake of argument, let's assume that all movement is in the (X, Z) plane (i.e. the ground), and the eye is directly above the origin 10m along the Y axis.
Assuming a square viewport, with your 90˚ field of view you'd be able to see from ±10m along both the X and Z axis, but only for objects that are on the ground (Y = 0). As soon as they come off the ground your view is reduced. If an object is 1m off the ground then your (X, Z) extent is only ±9m.
Clearly a real camera could be placed anywhere in the scene, facing any direction. Even the "roll" angle of the camera could change how much is visible. There are actually infinitely many such camera points, so you will need to constrain your criteria somewhat.
Take the line segment from the start point to the end point. Construct a plane orthogonal to this line segment through the midpoint of the line segment. Then position the camera somewhere in this plane, at a distance of more than the following from the intersection point of the plane and the line, looking at the intersection point. The up vector of the camera must be in the plane and the horizontal field of view must be 90 degrees.
distance = sqrt(dx^2 + dy^2 + dz^2) / 2
These camera positions will all have the start point and the end point on the left or right border of the viewport, vertically centered.
Another solution might be to write a function that takes the startpoint, the endpoint, and the desired position of both points on the screen. Then just solve the projection equation for the camera transformation.
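A sketch of the segment/plane construction above (illustrative names; a 90-degree horizontal FOV is assumed, and `preferredSide` hints which side of the motion counts as "in front"):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 viewForSegment(const glm::vec3& start, const glm::vec3& end,
                         const glm::vec3& preferredSide)
{
    glm::vec3 seg  = end - start;
    glm::vec3 mid  = 0.5f * (start + end);
    float     dist = 0.5f * glm::length(seg);   // sqrt(dx^2 + dy^2 + dz^2) / 2, the minimum; use more for margin

    // Project the preferred viewing direction into the plane orthogonal to the segment.
    glm::vec3 axis   = glm::normalize(seg);
    glm::vec3 offset = glm::normalize(preferredSide - glm::dot(preferredSide, axis) * axis);

    glm::vec3 eye = mid + offset * dist;

    // The up vector must also lie in that plane so the segment stays horizontal.
    glm::vec3 up = glm::normalize(glm::cross(axis, offset));
    return glm::lookAt(eye, mid, up);
}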
It depends. For example, if the object is going to move in a plane, you can just place the camera outside a ball circumscribing its movement area (this relies on the fact that the FOV is 90, which is a fortunate angle).
If the object is going to move in 3D, it's much more difficult. It would help if you'd specify the region where the object moves (cube vs. ball...) and the direction you want to see it from.