I have a plane whose equation is ax + by - z + d = 0; hence its normal vector is (a, b, -1). Now I need to project this normal onto the XY plane in order to compute its direction from the north axis (I guess that is the Y axis here). Please help me get the projected vector. Thank you.
I think what you're looking for is the dot product. Finding the direction the plane is facing is pretty easy that way.
// generic code, actual code depends on your engine.
// BasePlane.GetNormal() would be (0,0,1) for the X/Y plane
float dir = YourPlane.GetNormal().Dot(BasePlane.GetNormal());
If it equals 1, your plane faces the same direction as the plane you're testing against. If it equals -1, it faces the opposite direction. A value of 0 means the plane is orthogonal to the tested plane. Hope this helps.
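To get the projected vector itself, note that projecting onto the XY plane just drops the z component, so the normal (a, b, -1) projects to (a, b, 0). A minimal sketch of the angle-from-north computation under that assumption (the helper name is made up):

#include <cmath>

// For the plane a*x + b*y - z + d = 0, the normal (a, b, -1) projects to (a, b).
// std::atan2(x, y) measures the signed angle from the +Y (north) axis.
double directionFromNorth(double a, double b)
{
    return std::atan2(a, b); // radians; 0 = due north, positive towards +X
}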
Related
I have a constructor for a Plane that takes a vector3d and a position3d. I want to get a horizontal plane at a desired height (say z1), so I think my plane normal should be (0,0,1). I don't have any other information.
Plane::Plane(const position3d &point, const vector3d &normal)
I am now really confused about what my plane would be, since I'm not sure how to specify the position3d with only that z1.
I need quick help soon. Thanks.
Your position needs to be a point in the plane, no matter which.
Since you said it's parallel to XY, you can choose the x and y of the position3d arbitrarily.
position3d(0,0,z1);
normal(0,0,1);
would do the job just fine. Note that you can choose n and m randomly to create position3d(n,m,z1), and still get the same plane.
point can be any point on the plane, for instance (0,0,Z1).
A plane can be determined either by 3 points in space, or by a point in space and a normal (a normalized vector indicating the direction perpendicular to the plane). Your Plane constructor uses the latter definition, so you need to give a point (for example, one at (0,0,z1)) and the vector (0,0,1) along the Z axis.
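Putting both answers together, a minimal usage sketch (assuming position3d and vector3d can be constructed from three components, as the constructor signature suggests):

// Horizontal plane at height z1: any point with z = z1, plus the +Z normal.
Plane horizontal(position3d(0, 0, z1), vector3d(0, 0, 1));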
I want to test whether any given point in the world is on a quad/plane. The quad/plane can be translated/rotated/scaled by any values, but the test should still detect whether the given point is on it. I also need to get the location where the point would have been if no rotation/scale/translation had been applied to the quad.
For example, consider a quad at (0, 0, 0) with size 100x100, rotated 45 degrees about the z axis. If my mouse location in the world is at (x, y, 0), I need to know whether that point falls on the quad in its current transformation. If yes, I need to know where that point would have been on the quad if no transformations had been applied. Any code sample would be of great help.
A ray-casting approach is probably simplest:
Use gluUnProject() to get the world-space direction of the ray to cast into the scene. The ray's origin is the camera position.
Put this ray into object space by transforming it by the inverse of your rectangle's transform. Note that you need to transform both the ray's origin point and direction vector.
Compute the intersection point between this ray and the XY plane with a standard ray-plane intersection test.
Check that the intersection point's x and y values are within your rectangle's bounds, if they are then that's your desired result.
A math library such as GLM will be very helpful if you aren't confident about some of the math involved here, it has corresponding functions such as glm::unProject() as well as functions to invert matrices and do all the other transformations you'd need.
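A sketch of those steps with GLM - untested, and pickQuad, halfW/halfH, and the matrix/viewport parameters are assumptions standing in for your engine's own plumbing:

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Returns true if the mouse ray hits the quad; localHit receives the position
// on the quad as if no transformations had been applied.
bool pickQuad(const glm::vec2 &mousePx, const glm::vec3 &cameraPos,
              const glm::mat4 &view, const glm::mat4 &proj,
              const glm::vec4 &viewport, const glm::mat4 &quadTransform,
              float halfW, float halfH, glm::vec2 &localHit)
{
    // 1. Un-project the mouse at the far plane to get a world-space ray.
    glm::vec3 farPoint = glm::unProject(glm::vec3(mousePx, 1.0f), view, proj, viewport);
    glm::vec3 dir = glm::normalize(farPoint - cameraPos);

    // 2. Move the ray into object space: points transform with w = 1,
    //    direction vectors with w = 0.
    glm::mat4 inv = glm::inverse(quadTransform);
    glm::vec3 o = glm::vec3(inv * glm::vec4(cameraPos, 1.0f));
    glm::vec3 d = glm::vec3(inv * glm::vec4(dir, 0.0f));

    // 3. Standard ray-plane test against the local XY plane (z = 0).
    if (std::fabs(d.z) < 1e-6f) return false; // ray parallel to the quad
    float t = -o.z / d.z;
    if (t < 0.0f) return false;               // quad is behind the camera

    // 4. Bounds check in the quad's own, untransformed coordinates.
    glm::vec3 hit = o + t * d;
    localHit = glm::vec2(hit.x, hit.y);
    return std::fabs(hit.x) <= halfW && std::fabs(hit.y) <= halfH;
}

For the 100x100 quad in the example, halfW and halfH would both be 50.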
I have a sphere in my program and I intend to draw some rectangles at a distance x from the centre of this sphere. The figure looks something like the one below:
The rectangles are drawn at (x, y, z) points that I already have in a vector of 3D points.
Let's say the distance x from the centre is 10. Notice the orientation of these rectangles: they are tangential to an imaginary sphere of radius 10 (each perpendicular to an imaginary line from the centre of the sphere to the centre of the rectangle).
Currently, I do something like the following:
For the n points in vector<vec3f> pointsInSpace where the rectangles have to be plotted:
for(int i=0;i<pointsInSpace.size();++i){
//draw rectangle at (x, y, z)
}
which does not have this kind of tangential orientation that I am looking for.
It looked to me like a matter of applying roll/pitch/yaw rotations to each of these rectangles, somehow using quaternions to make them tangential in the way I'm looking for.
However, it looked a bit complex to me and I wanted to ask about some better method to do this.
Also, the rectangle in future might change to some other shape, so a kind of generic solution would be appreciated.
I think you essentially want the same transformation as would be accomplished with a LookAt() function (you want the rectangle to 'look at' the sphere, along a vector from the rectangle's center, to the sphere's origin).
If your rectangle is formed of the points:
(-1, -1, 0)
(-1, 1, 0)
( 1, -1, 0)
( 1, 1, 0)
Then the rectangle's normal will be pointing along Z. This axis needs to be oriented towards the sphere.
So the normalised vector from your point to the center of the sphere is the Z-axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z-axis is pointing in the same direction.
The cross of the 'up' and 'z' axes gives the x axis, and then the cross of the 'z' and 'x' axes gives the 'y' axis.
These three axes (x,y,z) directly form a rotation matrix.
This resulting transformation matrix will orient the rectangle appropriately. Either use GL's fixed function pipeline (yuk), in which case you can just use gluLookAt(), or build and use the matrix above in whatever fashion is appropriate in your own code.
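A minimal sketch of building that matrix with GLM (column-major, as GLM stores matrices; the function name is made up):

#include <cmath>
#include <glm/glm.hpp>

// Orients a shape modelled on the XY plane, centred at `point`, so that its
// local +Z axis looks at the sphere's centre.
glm::mat4 orientTowardSphere(const glm::vec3 &point, const glm::vec3 &sphereCentre)
{
    glm::vec3 z = glm::normalize(sphereCentre - point); // 'look at' direction
    glm::vec3 up(0.0f, 1.0f, 0.0f);
    if (std::fabs(glm::dot(z, up)) > 0.999f)  // z almost parallel to up:
        up = glm::vec3(1.0f, 0.0f, 0.0f);     // choose a different up vector
    glm::vec3 x = glm::normalize(glm::cross(up, z));
    glm::vec3 y = glm::cross(z, x);           // already unit length

    glm::mat4 m(1.0f);
    m[0] = glm::vec4(x, 0.0f);     // the three axes form the rotation part
    m[1] = glm::vec4(y, 0.0f);
    m[2] = glm::vec4(z, 0.0f);
    m[3] = glm::vec4(point, 1.0f); // translation to the rectangle's centre
    return m;
}

Since the matrix depends only on the centre point, this also works unchanged if the rectangle later becomes some other planar shape.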
Personally I think the answer of JasonD is enough. But here is some info of the calculation involved.
Mathematically speaking this is a rather simple problem. What you have is two known vectors: the position vector and the sphere's normal vector. Since the square can be rotated arbitrarily around the vector from the centre of your sphere, you need to define one more vector, the up vector. Without defining an up vector the problem has no unique solution.
Once you define an up vector, the problem becomes simple. Assuming your square is on the XY plane as JasonD suggests above, your matrix becomes:
((up × n) × n).x   ((up × n) × n).y   ((up × n) × n).z   0
n.x                n.y                n.z                0
(up × n).x         (up × n).y         (up × n).z         0
p.x                p.y                p.z                1
Here × denotes the cross product. n is the unit normal vector of p minus the centre of the sphere (which is trivial if the sphere is at the centre of the coordinate system), up is an arbitrary unit vector, and p, following from the definition, is the position.
The solution has a bit of a singularity at the sphere's up direction. An alternative is to compose two rotations instead: first up to 360 degrees around up, then up to 180 degrees around the cross of the rotated axis and up. That produces the same result with a different approach and no singularity problem.
So far my raytracer:
Sends out a ray and returns a new vector if a collision with a sphere was made.
Pixel color is then added based on the color of the sphere[id] it collided with.
Repeats for all spheres in the scene description.
For this example, lets say:
sphere[0] = Light source
sphere[1] = My actual sphere
So now, inside my nested resolution for loops, I have a returned vector that gives me the xyz coordinates of the current ray's collision with sphere[1].
I now want to send a new ray from this collision vector position to the vector position of the light source sphere[0] so I can update the pixel's color based off this light's color / emission.
I have read that I should normalize the two points, and first check if they point in opposite directions. If so, don't worry about this calculation because it's in the light's shadow.
So my question is: given two un-normalized vectors, how can I detect whether their normalized forms point in opposite directions? And how does that work with a point light like this, since each point on the light sphere has a different normal direction? This concept makes much more sense with a directional light.
Also, after I run this check, should I do my shading calculations based on the angle between the two normals, or should I send out a new ray towards the light source and continue from there?
You can use the dot product of the two vectors; it is negative if they point in opposite directions, i.e. the projection of one vector onto the other points the opposite way.
For question 1, I think you want the dot product between the vectors.
u.v = x1*x2 + y1*y2 + z1*z2
If u.v > 0 then the angle between them is acute.
if u.v < 0 then the angle between them is obtuse.
If u.v == 0 they point at exactly a 90 degree angle.
But what I think you really mean is not to normalize the vectors, but to compute the dot product between the normal of the surface of the sphere at your collision xyz and the vector from that same xyz to your light source.
So if the sphere has its center at (xs, ys, zs), the light source is at (xl, yl, zl), and the collision is at (x, y, z), then
vector 1 is (x - xs, y - ys, z - zs) and
vector 2 is (xl - x, yl - y, zl - z).
If the dot product between these is < 0, the light ray hit the opposite side of the sphere and can be discarded.
Once you know this light ray hit the sphere on the non-shadowed side, I think you need to do the same calculation for the eye point, depending on the location of the light source and the viewpoint. If the eye point and the light source are the same point, then the value of that dot product can be used in the shading calculation.
If the eye and light are at different positions, the light could hit a point the eye can't see (which will then be in shadow, and thus get only ambient illumination, if any), so you need to do the same vector calculation replacing the light source coordinate with the eye point coordinate, and once again if the dot product is < 0 the point faces away from the eye and isn't visible.
Then compute the shading based on the dot product of the vector from eye to surface and the vector from surface to light.
OK, someone else came along and edited the question while I was writing this, I hope the answer is still clear.
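A minimal sketch of the back-side test and the resulting diffuse factor described above (standalone illustrative types, not your ray tracer's):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Returns the diffuse factor at the collision point, or 0 when the point lies
// on the side of the sphere facing away from the light (dot product < 0).
float diffuseAt(Vec3 hit, Vec3 sphereCenter, Vec3 lightPos)
{
    Vec3 n = normalize(sub(hit, sphereCenter));   // vector 1: surface normal
    Vec3 toLight = normalize(sub(lightPos, hit)); // vector 2: hit point to light
    float d = dot(n, toLight);
    return d < 0.0f ? 0.0f : d;
}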
I have many 3D planes. The thing I need to know is how to compute the aspect angle.
I hope I can compute the aspect angle by projecting each plane's normal vector onto the XY plane (my plane equation is ax + by - z + c = 0, so the normal vector of this plane is (a, b, -1)). Then, from the Y axis, I can compute the aspect angle. But I don't know how to get the projected normal vector after projecting onto the XY plane. Then, can I apply the equation that gives the angle between two vectors to compute the angle of my desired vector from the Y axis?
On the other hand, I found that the aspect angle is defined as the angle between north (here, the Y axis) and any line that passes along the steepest slope of the plane. Does this definition agree with my proposed way of taking normal vectors? I mean, is the projected normal vector always along the steepest slope of the plane? Also, someone told me that this problem should be treated as a 2D problem.
Please comment and send me the relevant formulae for computing the aspect angle. Thank you.
Some quick googling reveals the definition of the aspect angle.
http://www.answers.com/topic/aspect-angle
It's the angle measured from geographic north on the northern hemisphere and from geographic south on the southern hemisphere. So basically it's a measure of how much a slope faces the closest pole.
If your world is planar as opposed to spherical it will simplify things, so yes - a 2D problem. I'll make this assumption, which has the following implications:
In a spherical world the north pole is a point on the sphere. In a planar world the "pole" is a plane at infinity. Think about a plane somewhere far away in your world denoting "north". Only the normal of this plane is important in this task. The unit normal of this plane is N(nx, ny, nz).
Up is a vector pointing up, U(ux, uy, uz). This is the unit normal vector of the ground plane.
The unit normal vector of the plane V(a,b,c) can now be projected onto a vector P on the ground plane as usual: P = V - (V dot U) U
Now it's easy to measure the aspect angle of the plane - it's the angle between the "pole"-plane normal N and the projected plane normal P, given by acos(P dot N) once P is normalized.
Since north is the positive Y axis for you, we have N = (0, 1, 0). And then I guess your up is U = (0, 0, 1), positive Z. This simplifies things even more - to project onto the ground plane we just strip the Z part. The aspect angle is then the angle between (a, b) and (0, 1).
aspectAngle = acos(b / sqrt(a*a + b*b))
Note that planes parallel to the ground plane do not have a well-defined aspect angle, since there is no slope to measure the aspect angle from.
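A minimal sketch of that last formula (a hypothetical helper, assuming north = +Y and up = +Z):

#include <cmath>

// Aspect angle of the plane a*x + b*y - z + c = 0, whose normal projects to
// (a, b) on the ground plane. Returns radians in [0, pi], or -1 for a
// horizontal plane (a = b = 0), where the aspect is undefined.
double aspectAngle(double a, double b)
{
    double len = std::sqrt(a * a + b * b);
    if (len == 0.0) return -1.0;
    return std::acos(b / len); // angle between (a, b) and (0, 1)
}

If you need to distinguish east-facing from west-facing slopes, std::atan2(a, b) gives a signed angle from north instead.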
What kind of surfaces are you working with? TINs (Triangulated Irregular Networks) or DEMs (Digital Elevation Models)?
If you are using raster imagery to create your surfaces, the algorithm for calculating aspect is basically a moving window that checks a central pixel plus its 8 neighbors.
Compare the central pixel with each neighbor and check the difference in elevation over distance (rise over run). You can parametrize the distance checks (the north, south, east, and west neighbors are at distance 1, and the northwest, southwest, southeast, and northeast neighbors are at distance sqrt(2)) to make it faster, as in the sketch below.
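A minimal sketch of that window for one interior cell (a row-major elevation grid and made-up names; border handling is left out):

#include <cmath>
#include <vector>

struct Steepest { int dx, dy; double slope; };

// Finds the neighbor with the steepest downhill slope (rise over run) around
// the interior cell (x, y) of a w-wide, row-major elevation grid.
Steepest steepestNeighbor(const std::vector<double> &dem, int w, int x, int y)
{
    static const int OX[8] = { -1, 0, 1, -1, 1, -1, 0, 1 };
    static const int OY[8] = { -1, -1, -1, 0, 0, 1, 1, 1 };
    const double SQRT2 = std::sqrt(2.0);

    double center = dem[y * w + x];
    Steepest best = { 0, 0, 0.0 };
    for (int i = 0; i < 8; ++i) {
        // Straight neighbors are at distance 1, diagonal ones at sqrt(2).
        double run = (OX[i] != 0 && OY[i] != 0) ? SQRT2 : 1.0;
        double slope = (center - dem[(y + OY[i]) * w + (x + OX[i])]) / run;
        if (slope > best.slope) best = { OX[i], OY[i], slope };
    }
    return best;
}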
You can ask this question on gis.stackexchange also. Many people will be able to help you there.
Edit:
http://blog.geoprocessamento.net/2010/03/modelos-digitais-de-elevacao-e-hidrologia/
This website, although in Portuguese, will help you visualize the algorithm. After calculating the steepest slope between a central cell and its eight neighbors, you assign 1, 2, 4, 8, 16, 32, 64, or 128, depending on the location of the cell that presented the steepest slope between the center and its neighbors.