Screen projection and culling combined - OpenGL

I am currently dealing with several thousand boxes that I'd like to project onto the screen to determine their sizes and distances to the camera.
My current approach is to get a sphere enclosing each box and project that using the view and projection matrices and the viewport values.
// PSEUDOCODE
// transform the box center from world space into view space
boxCenterInViewSpace = viewMatrix * boxCenter;
// get two points left and right of the center, offset along the view-space x axis
leftPoint = boxCenterInViewSpace - vec4(radius, 0, 0, 0);
rightPoint = boxCenterInViewSpace + vec4(radius, 0, 0, 0);
// project the points from view space into clip space
leftPoint = projectionMatrix * leftPoint;
rightPoint = projectionMatrix * rightPoint;
// perspective divide into normalized device coordinates (-1..1)
leftPoint /= leftPoint.w;
rightPoint /= rightPoint.w;
// move to the 0..1 range
leftPoint = leftPoint * 0.5 + 0.5;
rightPoint = rightPoint * 0.5 + 0.5;
// scale to the viewport
leftPoint.x = leftPoint.x * (viewPort.right - viewPort.left) + viewPort.left;
leftPoint.y = leftPoint.y * (viewPort.bottom - viewPort.top) + viewPort.top;
rightPoint.x = rightPoint.x * (viewPort.right - viewPort.left) + viewPort.left;
rightPoint.y = rightPoint.y * (viewPort.bottom - viewPort.top) + viewPort.top;
// at this point I check whether the node is visible on screen by comparing the points to the viewport
// calculate the projected size
size = length(rightPoint.xy - leftPoint.xy);
At another point I calculate the distance from the box to the camera.
The first problem is that I won't know if the box is just above or below the viewport, since I only compute the horizontal extent. Is there a way to project a real sphere onto the screen somehow? Some method that looks like:
float getSizeOfSphereProjectedOnScreen(vec3 midpoint, float radius)
The other question is simpler: in which coordinate space does the z coordinate correspond to the distance to the camera?
To sum it up, I want to calculate:
Is the Box in the view frustum?
What is the size of the Box on the screen?
What is the distance from Box to camera?
To simplify the calculations I'd like to use a sphere representation for this, but I don't know how to project a sphere.

[Updated]
What is the distance from Box to camera?
In [which] coordinate space is the z coordinate corresponding to the distance to the camera?
The answer is none of the usual spaces. The closest one would be in view space (i.e. after you apply the view matrix but not the projection matrix). In view space, the distance to the camera should be sqrt(x*x + y*y + z*z), because the camera is at the origin. (z would be a reasonable approximation only if |x| and |y| were really small relative to |z|.) This is assuming that knowing the distance from the camera to the center of the box is good enough.
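For illustration, the view-space distance could be computed like this with GLM (a minimal sketch, assuming viewMatrix and boxCenter are the same quantities as in the question's pseudocode):
#include <glm/glm.hpp>

// Distance from the camera to the box center, measured in view space.
// In view space the camera sits at the origin, so the distance is simply
// the length of the transformed position: sqrt(x*x + y*y + z*z).
float distanceToCamera(const glm::mat4& viewMatrix, const glm::vec3& boxCenter) {
    glm::vec3 p = glm::vec3(viewMatrix * glm::vec4(boxCenter, 1.0f));
    return glm::length(p);
}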
I think if you really wanted a space in which the z coordinate corresponds to the distance to the camera, you'd need to map a spherical locus of points sqrt(x*x + y*y + z*z) = d to a plane z = d. I don't know that you can do that with a matrix.
Is the Box in the view frustum?
What is the size of the Box on the screen?
I think you're on the right track with this, but depending on which direction the camera is facing, your left and right points might not determine how wide the box looks or whether the box intersects the view frustum. See my answer to your other question for a long way to do this.
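As a rough sketch of the getSizeOfSphereProjectedOnScreen helper the question asks for (my own approximation, not an exact silhouette projection: it treats the sphere as if it were centered on the view axis, and assumes a perspective projection described by fovyRadians and viewportHeight):
#include <glm/glm.hpp>
#include <cmath>

// Approximate on-screen diameter, in pixels, of a sphere with the given
// world-space midpoint and radius.
float getSizeOfSphereProjectedOnScreen(const glm::mat4& viewMatrix,
                                       float fovyRadians, float viewportHeight,
                                       const glm::vec3& midpoint, float radius) {
    glm::vec3 p = glm::vec3(viewMatrix * glm::vec4(midpoint, 1.0f));
    float d = glm::length(p);                        // distance to the camera
    // screen-space radius in NDC units: r / (d * tan(fovy / 2))
    float ndcRadius = radius / (d * std::tan(fovyRadians * 0.5f));
    // NDC y spans 2 units over viewportHeight pixels; diameter = 2 * radius
    return ndcRadius * viewportHeight;
}
For the visibility question, a sphere-vs-frustum test (compare the signed distance of the center to each frustum plane against the radius) covers all directions, not just the horizontal one.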

Related

Trying to deform any mesh into a sphere. How to translate the vertex position to lie on a sphere?

I am trying to write a deformer script for Maya, using the Maya API, which deforms any mesh into a sphere by translating its vertices.
What I already have is a deformer which translates every vertex of the mesh in the direction of its normal by the amount specified. This is done using the equation below.
point += normals[itGeo.index()] * bulgeAmount * w * env;
Here, point is the vertex on the mesh, normals[itGeo.index()] is a vector array holding the normal of each vertex, and w and env control the weight of the deformation and the envelope.
What this code basically does is translate each vertex along its normal by the specified amount. While this works for a sphere, because a sphere's vertex normals point straight away from its center, it would not work for other meshes, whose normals do not generally line up with the center of the mesh.
float bulgeAmount = data.inputValue(aBulgeAmount).asFloat();
float env = data.inputValue(envelope).asFloat();
MPoint point;
float w;
for (; !itGeo.isDone(); itGeo.next())
{
    w = weightValue(data, geomIndex, itGeo.index());
    point = itGeo.position();
    point += normals[itGeo.index()] * bulgeAmount * w * env;
    itGeo.setPosition(point);
}
I initially thought changing the direction of translation would solve the problem: if we could find the vector from the center of the mesh to each vertex and translate the vertex along that direction by a specified amount, that should solve it. Like so:
point += (Center - point) * bulgeAmount * w * env;
Here, Center is the center of the mesh. But this does not give the desired result. I would also like the deformer to be set up so that the user can input a radius "r" value and change the amount attribute from 0 to 1 to deform the mesh from its original state to a spherical one, so that he can choose a value in between if he desires and get something between a sphere and the original shape.
This is my very first post on Stack Overflow; I apologize if the format does not meet the community's expectations. Any help on this will be greatly appreciated.
Thank you.
About the direction:
I think your line:
point += (Center - point) * bulgeAmount * w * env;
is a good starting point.
But instead of using (Center - point), you should use its opposite, (point - Center), and normalize it before using it. If you don't use a normalized version of this (point - Center) vector, every vertex will be translated to a wrong position.
About your variation from 0.0 (original) to 1.0 (sphere):
If Po is the original position
If Pf is the final position
If d is the original distance between the point Po and the Center C:
d=norm(Center - point) = norm(C-Po)
If Direction is (point - Center)/d = (Po - C)/d (so normalized, as explained above)
What we want:
At r=0.0 your vertex must stay at its original position: Pf = Center + Direction * d
At r=1.0 your vertex must stick to the sphere of radius R: Pf = Center + Direction * R
And if we generalize:
Pf = C + Direction * ( r*R + (1-r)*d )
With d = norm(Po - C)
Direction = (Po - C)/d
R the radius of your sphere
and r a user param between [0.0; 1.0]
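A minimal sketch of how this could slot into the asker's deformer loop (my adaptation, untested; it assumes an MPoint center and a double sphereRadius are already in scope, standing for C and R above, e.g. fetched from input attributes the same way bulgeAmount is):
for (; !itGeo.isDone(); itGeo.next())
{
    float w = weightValue(data, geomIndex, itGeo.index());
    MPoint point = itGeo.position();
    MVector dir = point - center;        // (Po - C)
    double d = dir.length();             // d = norm(Po - C)
    if (d < 1e-6) continue;              // skip vertices sitting at the center
    dir /= d;                            // Direction = (Po - C) / d
    double r = bulgeAmount * w * env;    // blend factor in [0, 1]
    // Pf = C + Direction * (r*R + (1 - r)*d)
    itGeo.setPosition(center + dir * (r * sphereRadius + (1.0 - r) * d));
}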
Not sure I am clear enough; I'm not used to answering here either :)
Best

Rotating a matrix in the direction of a vector?

I have a player in the shape of a sphere that can move around freely in the x and z directions.
The player's current speed is stored in a vector that is added to the player's position every frame:
m_position += m_speed;
I also have a rotation matrix that I'd like to rotate in the direction the player is moving (imagine how a ball would rotate if it rolled on the floor).
Here's a short video to help visualise the problem: http://imgur.com/YrTG2al
Notice in the video that when I start moving up and down (Z) as opposed to left and right (X), the rotation axis no longer matches the player's movement.
Code used to produce the results:
glm::vec3 UP = glm::vec3(0, 1, 0);
float rollSpeed = fabs(m_Speed.x + m_Speed.z);
if (rollSpeed > 0.0f) {
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, glm::cross(UP, glm::normalize(m_Speed)));
}
Thankful for any help.
Your rollSpeed computation is wrong -- e.g., if the signs of m_Speed.x and m_Speed.z are different, they will cancel out. You need to use the norm of the speed in the XZ plane:
float rollSpeed = sqrt(m_Speed.x * m_Speed.x + m_Speed.z * m_Speed.z);
To be more general about it, you can re-use your cross product instead. That way, your math is less likely to get out of sync -- something like:
glm::vec3 rollAxis = glm::cross(UP, m_Speed);
float rollSpeed = glm::length(rollAxis);
if (rollSpeed > 0.0f) {
    m_RotationMatrix = glm::rotate(m_RotationMatrix, rollSpeed, rollAxis);
}
rollSpeed should be the magnitude of the speed vector:
float rollSpeed = glm::length(m_Speed);
The matrix transform expects an angle. The angle of rotation will depend on the size of your ball; say its radius is r, then the angle (in radians) you need is
angle = rollSpeed/r;
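Putting the two answers together, the per-frame update might look like this (my combination of the suggestions above; radius is an assumed variable holding the ball's radius):
float dist = glm::length(m_Speed);       // distance rolled this frame
if (dist > 0.0f) {
    float angle = dist / radius;         // arc length s = r*theta, so theta = s/r
    glm::vec3 axis = glm::cross(UP, m_Speed / dist);
    // pre-multiply so the increment is applied about the world axes
    m_RotationMatrix = glm::rotate(glm::mat4(1.0f), angle, axis) * m_RotationMatrix;
}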
If I understood correctly, you need a matrix rotation that works for any axis direction (x, y, z).
One way is to write a rotate() method per axis (x, y, z) and choose which one to apply based on which axis the direction vector points along, i.e. by looking at direction.x, direction.y, or direction.z.

Need rotation matrix for OpenGL 3D transformation

The problem: I have two points in 3D space, where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is the length of the distance between the two points, so that the centers of its two ends touch the points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last so that I can get the cylinder into the correct orientation before translating it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;
    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if(point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }
    vec3 center = vec3((point.x + parent.x)/2.0, (point.y + parent.y)/2.0, (point.z + parent.z)/2.0);
    mat4 translation = Translate(center);
    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;
    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I have most of it working, but I want to map the branches to the points so it looks good. I could use GL_LINES just to make a generic connection, but if I get this working it will look much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there are an arbitrary number of rotation matrices satisfying your constraints, but any of them will do. Instead of trying to figure out a specific rotation, we're just going to write the matrix down directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to complete this into a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t: take each of (1,0,0), (0,1,0) and (0,0,1), form the scalar product (also called the inner or dot product) with z_t, and use the vector for which the absolute value is the smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector, yielding the local y transformation y_t = normalize(u - (z_t · u) z_t).
Finally, create the x transformation by taking the cross product x_t = y_t × z_t, which keeps the basis right-handed.
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from", written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as columns. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a 0,0,0,1 bottom row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, apply the rotation with glMultMatrix; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
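In GLM-flavored C++ the construction might look like this (a sketch of the recipe above under its assumptions, not the answerer's code; p1 and p2 are the two endpoints):
#include <glm/glm.hpp>
#include <cmath>

// Rotation whose local Z axis points from p2 toward p1, built from the
// basis construction described above.
glm::mat4 cylinderRotation(const glm::vec3& p1, const glm::vec3& p2) {
    glm::vec3 z_t = glm::normalize(p1 - p2);
    // pick the cardinal axis least aligned with z_t (smallest |dot product|)
    glm::vec3 u(1.0f, 0.0f, 0.0f);
    if (std::abs(z_t.y) < std::abs(z_t.x)) u = glm::vec3(0.0f, 1.0f, 0.0f);
    if (std::abs(z_t.z) < std::abs(glm::dot(z_t, u))) u = glm::vec3(0.0f, 0.0f, 1.0f);
    // Gram-Schmidt: remove the z_t component, leaving a perpendicular y axis
    glm::vec3 y_t = glm::normalize(u - glm::dot(z_t, u) * z_t);
    glm::vec3 x_t = glm::cross(y_t, z_t);   // completes a right-handed basis
    glm::mat4 m(1.0f);                      // identity; bottom row stays 0,0,0,1
    m[0] = glm::vec4(x_t, 0.0f);            // GLM matrices are column-major
    m[1] = glm::vec4(y_t, 0.0f);
    m[2] = glm::vec4(z_t, 0.0f);
    return m;
}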
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2; we want to locate C1 at P1 and C2 at P2.
Start with translating the cylinder by P1, which locates C1 at P1.
Then scale the cylinder by distance(P1, P2), since it originally has length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.x, direction.z);
    // the rightmost factor applies first: scale the unit cylinder along its
    // axis, pitch it around X, yaw it around Y, then translate to P1
    return translate(P1) * rotateY(yaw) * rotateX(pitch) * scaleY(length);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(B.x/B.y), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(B.z/B.y), and that's the angle of the second rotation.

3D coordinate of 2D point given camera and view plane

I wish to generate rays from the camera through the viewing plane. In order to do this, I need my camera position ("eye"); the up, right, and towards vectors (where towards is the vector from the camera in the direction of the object that the camera is looking at); and P, the point on the viewing plane. Once I have these, the ray that's generated is:
ray = camera_eye + t*(P-camera_eye);
where t is the distance along the ray (assume t = 1 for now).
My question is, how do I obtain the 3D coordinates of point P given that it is located at position (i,j) on the viewing plane? Assume that the upper left and lower right corners of the viewing plane are given.
NOTE: The viewing plane is not actually a plane in the sense that it doesn't extend infinitely in all directions. Rather, one may think of it as a width x height image. In the x direction the range is 0 --> width, and in the y direction the range is 0 --> height. I wish to find the 3D coordinate of the (i,j)th element, 0 <= i < width, 0 <= j < height.
For the general solution of the intersection of a line and a plane, see http://local.wasp.uwa.edu.au/~pbourke/geometry/planeline/
Your particular graphics lib (OpenGL/DirectX etc.) may have a standard way to do this.
edit: Are you trying to find the 3D intersection of a screen point (e.g. a mouse cursor) with a 3D object in your scene?
To work out P, you need the distance from the camera to the near clipping plane (the screen), the size of the window on the near clipping plane (or the view angle; you can work out the window size from the view angle), and the size of the rendered window.
Scale the screen position to the range -1 < x < +1 and -1 < y < +1 where +1 is the top/right and -1 is the bottom/left
Scale normalised x,y by the view window size
Scale by the right and up vectors of the camera and sum the results
Add the look at vector scaled by the clipping plane distance
In effect, you get, as an offset from the camera position:
p = at * near_clip_dist + x * right + y * up
where x and y are:
x = (screen_x - screen_centre_x) / (width / 2) * view_width
y = (screen_y - screen_centre_y) / (height / 2) * view_height
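A sketch of those steps as a function (my illustration with assumed parameter names; view_w and view_h are the size of the window on the near plane):
#include <glm/glm.hpp>

// Point on the viewing plane for pixel (i, j).
// eye: camera position; at, right, up: unit camera axes;
// near_dist: distance from the camera to the near plane.
glm::vec3 pointOnViewPlane(float i, float j, float width, float height,
                           const glm::vec3& eye, const glm::vec3& at,
                           const glm::vec3& right, const glm::vec3& up,
                           float near_dist, float view_w, float view_h) {
    // scale the screen position to -1..+1 (+1 = top/right, -1 = bottom/left)
    float x = (i - width * 0.5f) / (width * 0.5f);
    float y = (height * 0.5f - j) / (height * 0.5f); // pixel y grows downward
    // walk from the eye along the look axis, then along the scaled right/up axes
    return eye + at * near_dist
               + right * (x * view_w * 0.5f)
               + up * (y * view_h * 0.5f);
}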
When I plugged the suggested formulas directly into my program, I didn't obtain correct results (maybe some debugging was needed). My initial problem turned out to be a misunderstanding of the (x,y,z) coordinates of the interpolating corner points: I was treating the x, y, and z coordinates separately, which I should not have (this may be specific to the application, since the camera can be oriented in any direction). Instead, the solution turned out to be a simple interpolation of the corner points of the viewing plane:
interpolate the bottom corner points in the i direction to get P1
interpolate the top corner points in the i direction to get P2
interpolate P1 and P2 in the j direction to get the world coordinates of the final point
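For instance (a sketch of that interpolation with hypothetical corner names, not the asker's actual code):
#include <glm/glm.hpp>

// Bilinear interpolation of the viewing plane's corner points.
// tl, tr, bl, br: top-left, top-right, bottom-left and bottom-right corners.
glm::vec3 viewPlanePoint(float i, float j, float width, float height,
                         const glm::vec3& tl, const glm::vec3& tr,
                         const glm::vec3& bl, const glm::vec3& br) {
    float u = i / (width - 1.0f);        // 0 at the left edge, 1 at the right
    float v = j / (height - 1.0f);       // 0 at the top edge, 1 at the bottom
    glm::vec3 p1 = glm::mix(bl, br, u);  // bottom corners, interpolated in i
    glm::vec3 p2 = glm::mix(tl, tr, u);  // top corners, interpolated in i
    return glm::mix(p2, p1, v);          // interpolate P1 and P2 in j
}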

C++ OpenGL: converting model coordinates to world coordinates for collision detection

(This is all in ortho mode; the origin is in the top left corner, x is positive to the right, and y is positive down the y axis.)
I have a rectangle in world space, which can have a rotation m_rotation (in degrees).
I can work with the rectangle fine, it rotates, scales, everything you could want it to do.
The part that I am getting really confused on is calculating the rectangle's world coordinates from its local coordinates.
I've been trying to use the formula:
x' = x*cos(t) - y*sin(t)
y' = x*sin(t) + y*cos(t)
where (x, y) are the original points, (x', y') are the rotated coordinates, and t is the angle measured in radians from the x-axis. The rotation is counter-clockwise as written.
-- credits duffymo
I tried implementing the formula like this:
GLfloat Ax = getLocalVertices()[BOTTOM_LEFT].x * cosf(DEG_TO_RAD( m_orientation )) - getLocalVertices()[BOTTOM_LEFT].y * sinf(DEG_TO_RAD( m_orientation ));
GLfloat Ay = getLocalVertices()[BOTTOM_LEFT].x * sinf(DEG_TO_RAD( m_orientation )) + getLocalVertices()[BOTTOM_LEFT].y * cosf(DEG_TO_RAD( m_orientation ));
Vector3D BL = Vector3D(Ax, Ay, 0);
I create a vector to the translated point and store it in the rectangle's world_vertice member variable. That's fine. However, in my main draw loop I draw a line from (0,0,0) to the vector BL, and it seems as if the line is going in a circle from the point on the rectangle (the rectangle's bottom left corner) around the origin of the world coordinate system.
Basically, as m_orientation gets bigger it draws a huge circle around the (0,0,0) world origin. edit: when m_orientation = 360, it gets set back to 0.
I feel like I am doing this part wrong:
and t is the angle measured in radians from the x-axis.
Possibly I am not supposed to use m_orientation (the rectangle's rotation angle) in this formula?
Thanks!
edit: the reason I am doing this is collision detection. I need to know where the coordinates of the rectangles (soon to be rigid bodies) lie in the world coordinate space for collision detection.
What you are doing is a rotation [a special linear transformation] of a vector by angle Q in 2D. It keeps the vector's length and changes its direction around the origin.
[Linear transformation: additive, L(m + n) = L(m) + L(n) for vectors m and n; homogeneous, L(k·m) = k·L(m) for vector m and scalar k.] So:
You divide your vector into two pieces, so that m[1, 0] + n[0, 1] = your vector.
Then the rotation is applied to these two pieces, after which your vector takes the form:
m[cosQ, sinQ] + n[-sinQ, cosQ] = [m·cosQ - n·sinQ, m·sinQ + n·cosQ]
You can also look at the Wikipedia article on rotation matrices.
If you are trying to obtain the eye coordinates corresponding to your object coordinates, you should multiply your object coordinates by the model-view matrix in OpenGL.
With M the model-view matrix and [x y z w]^T your object coordinates as a column vector:
M [x y z w]^T = eye coordinates of [x y z w]^T
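For example, with GLM (a minimal illustration of the multiplication above; modelViewMatrix is an assumed name):
#include <glm/glm.hpp>

// Eye coordinates of an object-space position; w = 1 marks it as a point.
glm::vec4 toEyeCoordinates(const glm::mat4& modelViewMatrix, const glm::vec3& obj) {
    return modelViewMatrix * glm::vec4(obj, 1.0f);
}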
This seems to be overcomplicating things somewhat: typically you would store an object's world position and orientation separately from its own set of local coordinates. Rotating the object is done in model space, so the position is unchanged. The world position of each coordinate is the same whether you rotate or not - add the world position to the (rotated) local position to translate the local coordinates to world space.
Any rotation occurs around a specific origin, and the typical sin/cos formula presumes (0,0) is your origin. If the coordinate system in use doesn't currently have (0,0) as the origin, you must translate it to one that does, perform the rotation, then transform back. Usually model space is defined so that (0,0) is the origin for the model, making this step trivial.
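A minimal sketch of that approach (my illustration with hypothetical names; the rotation formula is the one quoted in the question, with the angle in radians):
#include <cmath>

struct Vec2 { float x, y; };

// Rotate a local-space corner about the model origin (0,0), then translate
// by the object's world position to get its world-space coordinates.
Vec2 localToWorld(Vec2 local, Vec2 worldPos, float angleRadians) {
    float c = std::cos(angleRadians);
    float s = std::sin(angleRadians);
    Vec2 rotated = { local.x * c - local.y * s,    // x' = x*cos(t) - y*sin(t)
                     local.x * s + local.y * c };  // y' = x*sin(t) + y*cos(t)
    return { rotated.x + worldPos.x, rotated.y + worldPos.y };
}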