Suppose I would like to draw the following lines:
const GLfloat lineX[] = {
    FrustumData.left  * FrustumData.ratio, (FrustumData.top + FrustumData.bottom) / 2 * FrustumData.ratio, FrustumData.zFar, // point A
    FrustumData.right * FrustumData.ratio, (FrustumData.top + FrustumData.bottom) / 2 * FrustumData.ratio, FrustumData.zFar  // point B
};
const GLfloat lineY[] = {
    (FrustumData.left + FrustumData.right) / 2 * FrustumData.ratio, FrustumData.bottom * FrustumData.ratio, FrustumData.zFar, // point A
    (FrustumData.left + FrustumData.right) / 2 * FrustumData.ratio, FrustumData.top    * FrustumData.ratio, FrustumData.zFar  // point B
};
const GLfloat lineZ[] = {
    (FrustumData.left + FrustumData.right) / 2 * FrustumData.ratio, (FrustumData.top + FrustumData.bottom) / 2 * FrustumData.ratio, FrustumData.zFar, // point A
    (FrustumData.left + FrustumData.right) / 2,                     (FrustumData.top + FrustumData.bottom) / 2,                     FrustumData.zNear // point B
};
where ratio = zFar/zNear and everything else is glFrustum parameters.
Shouldn't I see the lines for any selection of frustum parameters, or does it depend somehow on glViewport?
Because I don't see them right now and I can't understand why.
Thank you.
There's no direct connection between the two. But they both have a direct influence on where things will appear on the screen.
Essentially glFrustum is a function to create a projection transformation of a certain kind, namely one that creates perspective. I recommend reading another answer of mine, which explains the transformation stages of OpenGL.
In that answer I wrote:
This transformation maps some volume in eye space to a specific volume with certain boundaries, to which the geometry is clipped.
glFrustum defines such a volume. It looks like a pyramid with its tip cut off (a frustum), where the tip would be at the eye-space origin, the "cut" is the near clip plane, and the pyramid's base is the far clip plane. The parameters left, right, bottom and top define the extents of the near clipping plane in eye-space units.
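As an illustration (my own sketch, not part of the original answer), here is how such a frustum is typically set up in legacy OpenGL, deriving left/right/bottom/top from a vertical field of view and aspect ratio; the function and variable names are placeholders:
// A minimal sketch: derive glFrustum parameters from a vertical FOV and aspect ratio.
// Assumes legacy (fixed-function) OpenGL; names are illustrative only.
#include <GL/gl.h>
#include <cmath>

void setPerspectiveFrustum(double fovyDegrees, double aspect, double zNear, double zFar)
{
    const double kPi = 3.14159265358979323846;

    // Half-height of the near clipping plane in eye-space units: zNear * tan(fovy / 2).
    const double top    = zNear * std::tan(0.5 * fovyDegrees * kPi / 180.0);
    const double bottom = -top;
    const double right  = top * aspect;
    const double left   = -right;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left/right/bottom/top describe the near clipping plane; the far clipping plane
    // is the (scaled-up) base of the frustum at distance zFar from the eye.
    glFrustum(left, right, bottom, top, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}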
Related
I'm having some issues with my camera, where the near plane seems to be too far away even when I have it set to 0.1 or lower. It seems like there is already some offset value, so you can't really get close enough to an arbitrary object in the scene.
Below is a visual appearance of the bug.
The black triangle shown is the clipping.
Currently I'm using a perspective matrix and here is the code for that.
Matrix4x4 Matrix4x4::Perspective(Float fov, Float aspectRatio, Float near, Float far)
{
    Matrix4x4 result(1.0f);

    Float q = 1.0f / tan(toRadians(0.5f * fov));   // cotangent of half the vertical FOV
    Float a = q / aspectRatio;
    Float b = (near + far) / (near - far);         // depth-range mapping terms
    Float c = (2.0f * near * far) / (near - far);

    result.elements[0 + 0 * 4] = a;
    result.elements[1 + 1 * 4] = q;
    result.elements[2 + 2 * 4] = b;
    result.elements[2 + 3 * 4] = -1.0f;
    result.elements[3 + 2 * 4] = c;

    return result;
}
I don't think the bug is in the maths class, because the maths code is mostly taken from another project I've been working on, where it works perfectly fine.
I also don't suspect the way I render it (a forward renderer). I believe my pointers for that are good, since I am able to move and rotate the camera via mouse and keyboard.
But! What I suspect is the buffers. The OpenGL buffers. But I'm not entirely sure.
I hope someone can give me some advice on how to hunt this bug down.
Thank you in advance.
Well, one image is worth more than a thousand words, so:
As I mentioned in my comments, if your cube has half-size r = 1.0 and you set up your camera at distance r from the cube's center, you will still get cut by z_near, because the cube's edges are at distances in the range <sqrt(2), sqrt(3)> from the center, so all edges turned towards the camera will get cut off ...
But, as also mentioned, this is just a guess, because you did not provide any test data relevant to this (matrices and mesh content)...
PS: on the right side is a side view of the cube with the camera settings I am guessing you have set. The green (+/-)Z is your viewing direction.
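To make the geometry concrete (my own illustration, not part of the original answer): for a cube of half-size r whose center sits at distance d in front of the camera, a corner can come as close as d - sqrt(3) * r, so keeping d >= z_near + sqrt(3) * r guarantees nothing crosses the near plane. A minimal, self-contained sanity check with placeholder names:
#include <cmath>
#include <cstdio>

// Rough sanity check: can any corner of a cube with half-size `halfSize`, whose center
// is at distance `d` in front of the camera, end up in front of the near plane?
bool cubeMayBeClippedByNearPlane(float d, float halfSize, float zNear)
{
    // The farthest a corner can stick out towards the camera is sqrt(3) * halfSize
    // (the center-to-corner distance), regardless of the cube's orientation.
    const float closestPossibleDepth = d - std::sqrt(3.0f) * halfSize;
    return closestPossibleDepth < zNear;
}

int main()
{
    // Example from the answer: half-size 1.0, camera at distance 1.0, zNear 0.1.
    std::printf("clipped: %d\n", cubeMayBeClippedByNearPlane(1.0f, 1.0f, 0.1f)); // prints 1
}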
"glEnable(GL_DEPTH_CLAMP)" Solved The Issue
After reading datenwolf's 2011 answer concerning tile-based render setup in OpenGL, I attempted to implement his solution. The source image looks like this (at 800 x 600)
The resulting image with 2x2 tiles, each tile at 800 x 600, looks like this.
As you can see they don't exactly match, though I can see something vaguely interesting has happened. I'm sure I've made an elementary error somewhere but I can't quite see it.
I'm doing 4 passes where:
w, h are 2,2 (2x2 tiles)
x, y are (0,0) (1,0) (0,1) and (1,1) in each of the 4 passes
MyFov is 1.30899692 (75 degrees)
MyWindowWidth, MyWindowHeight are 800, 600
MyNearPlane, MyFarPlane are 0.1, 200.0
The algorithm to calculate the frustum for each tile is:
auto aspect = static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight);
auto right = -0.5f * Math::Tan(MyFov) * MyShaderData.Camera_NearPlane;
auto left = -right;
auto top = aspect * right;
auto bottom = -top;
auto shift_X = (right - left) / static_cast<float>(w);
auto shift_Y = (top - bottom) / static_cast<float>(h);
auto frustum = Math::Frustum(left + shift_X * static_cast<float>(x),
left + shift_X * static_cast<float>(x + 1),
bottom + shift_Y * static_cast<float>(y),
bottom + shift_Y * static_cast<float>(y + 1),
MyShaderData.Camera_NearPlane,
MyShaderData.Camera_FarPlane);
, where Math::Frustum is:
template<class T>
Matrix4x4<T> Frustum(T left, T right, T bottom, T top, T nearPlane, T farPlane)
{
    Matrix4x4<T> r(InitialiseAs::InitialiseZero);
    r.m11 = (static_cast<T>(2) * nearPlane) / (right - left);
    r.m22 = (static_cast<T>(2) * nearPlane) / (top - bottom);
    r.m31 = (right + left) / (right - left);
    r.m32 = (top + bottom) / (top - bottom);
    r.m33 = -(farPlane + nearPlane) / (farPlane - nearPlane);
    r.m34 = static_cast<T>(-1);
    r.m43 = -(static_cast<T>(2) * farPlane * nearPlane) / (farPlane - nearPlane);
    return r;
}
For completeness, my Matrx4x4 layout is:
struct
{
T m11, m12, m13, m14;
T m21, m22, m23, m24;
T m31, m32, m33, m34;
T m41, m42, m43, m44;
};
Can anyone spot my error?
Edit:
So derhass explained it to me: a much easier way of doing things is to simply scale and translate the projection matrix. For testing I scaled up by 2x, as follows (changing the translate for each tile):
auto scale = Math::Scale(2.f, 2.f, 1.f);
auto translate = Math::Translate(0.5f, 0.5f, 0.f);
auto projection = Math::Perspective(MyFov,
static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight),
MyShaderData.Camera_NearPlane,
MyShaderData.Camera_FarPlane);
MyShaderData.Camera_Projection = scale * translate * projection;
The resulting image is below (stitching 4 together) - the discontinuities in the image are caused by the post processing I think, so that's another issue I might have to deal with at some point.
This isn't a real answer to the question, but it might be a useful alternative approach to what you are trying to solve here. In my opinion, datenwolf's solution in his answer to the stackoverflow question you are referring to is more complicated than it needs to be, so I'm presenting my alternative here.
Foreword: I assume standard OpenGL matrix conventions, so that the vertex transformation with a matrix M is done as v' = M * v (like the fixed-function pipeline did).
When a scene is rendered with some projection matrix P, you can extract any axis-aligned sub-rectangle of said scene by applying a scale and translation operation after the projection matrix is applied.
The key point is that the viewing volume is defined as the [-1,1]^3 cube in NDC space. Clip space (which is what P transforms the data into) is just the homogeneous representation of that volume. As the typical 4x4 transformation matrices all work in homogeneous space, we don't really need to care about w at all and can simply define the transformations as if we were in NDC space.
Since you only need some 2D tiling, z should be left as-is, and only some scale and translation in x and y is required. When composing transformations A and B into a single Matrix C as C=A*B, following the aforementioned conventions this results in B being applied first, and A last (since C*v == A*B*v == A*(B*v)). So to modify the result after projection, we have to pre-multiply some transformations to P and we are done:
P'=S(sx,sy,1) * T(tx,ty,0) * P
The construction of P' will work with any valid projection matrix P, no matter if it is a perspective or ortho transform. In the ortho case, what this does is quite clear. In the perspective case, this actually modifies both the field of view and also shifts the frustum to an asymmetric one.
When you want to tile the image into a grid of m times n segments, it is clear that sx = m and sy = n. As I used the S * T order (by choice), T is applied before the scale, so for each tile (tx, ty) is just the vector moving the center of the tile to the new center (which will be the origin). As NDC space is 2 units wide and tall, for a tile (x, y) the transformation is
tx = - (-1 + 2/(2*m) + (2/m) * x)
ty = - (-1 + 2/(2*n) + (2/n) * y)
//      ^    ^         ^
//      |    |         |
//      |    |         +- size of each tile in NDC space
//      |    |
//      |    +- half the size (as the center offset)
//      |
//      +- left/bottom border of NDC space
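Putting the pieces together, here is a small sketch of my own (not code from the answer) that builds the per-tile projection P' = S * T * P for tile (x, y) of an m-by-n grid. It reuses the Math:: helpers and globals already shown in the question, so treat the exact signatures as assumptions:
// Sketch only: per-tile projection P' = S * T * P for tile (x, y) of an m-by-n grid,
// written with the same Math:: helpers and globals the question uses.
void SetTileProjection(int x, int y, int m, int n)
{
    auto projection = Math::Perspective(MyFov,
                                        static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight),
                                        MyShaderData.Camera_NearPlane,
                                        MyShaderData.Camera_FarPlane);

    // NDC is 2 units wide and tall: move the center of tile (x, y) to the origin ...
    const float tx = -(-1.0f + 2.0f / (2.0f * m) + (2.0f / m) * x);
    const float ty = -(-1.0f + 2.0f / (2.0f * n) + (2.0f / n) * y);
    auto translate = Math::Translate(tx, ty, 0.0f);

    // ... then blow that tile up so it fills the whole [-1, 1] range.
    auto scale = Math::Scale(static_cast<float>(m), static_cast<float>(n), 1.0f);

    MyShaderData.Camera_Projection = scale * translate * projection;   // P' = S * T * P
}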
The problem is that I have two points in 3D space, where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them whose length is the distance between the two points, so that the centers of both of its ends touch the two points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last, that way I can get it to the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if(point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x)/2.0, (point.y + parent.y)/2.0, (point.z + parent.z)/2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given down below, but it did not seem to work at all
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;

    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);

    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit onto the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I got most of it, but I want to map the branches onto it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there's an arbitrary number of rotation matrices satisfying your constraints. But any will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. So you have to transform the local space Z axis toward the direction between those two points. I.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate base. We start with an arbitrary vector that's not parallel to z_t. Say, one of (1,0,0), (0,1,0) or (0,0,1); take the scalar product · (also called inner or dot product) of each with z_t and use the vector for which the absolute value is smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local y transformation y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product x_t = y_t × z_t.
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from" written down as if seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t written side by side as the columns of a matrix. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form using a 0,0,0,1 bottommost row and rightmost column.
You can then load that into OpenGL: if you are using the fixed-function pipeline, apply the rotation with glMultMatrix; if you are using shaders, multiply it onto the matrix you eventually pass to glUniform.
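As a concrete illustration of "the vectors side by side as a matrix", here is a small sketch of my own (not code from the answer): minimal Vec3 helpers and a function that fills a column-major 4x4 array, which is the layout glMultMatrixf and glUniformMatrix4fv expect under the conventions above:
#include <cmath>
#include <cstring>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                              a.z * b.x - a.x * b.z,
                                              a.x * b.y - a.y * b.x }; }
static Vec3  normalize(Vec3 a)     { float l = std::sqrt(dot(a, a));
                                     return { a.x / l, a.y / l, a.z / l }; }

// Builds the 4x4 column-major rotation matrix whose local Z axis points from p2
// towards p1, following the construction described in the answer above.
void buildCylinderRotation(Vec3 p1, Vec3 p2, float out[16])
{
    Vec3 z_t = normalize(sub(p1, p2));

    // Pick the cardinal axis least aligned with z_t ...
    Vec3 candidates[3] = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
    Vec3 u = candidates[0];
    float best = std::fabs(dot(z_t, u));
    for (int i = 1; i < 3; ++i) {
        float d = std::fabs(dot(z_t, candidates[i]));
        if (d < best) { best = d; u = candidates[i]; }
    }

    // ... orthogonalize it against z_t (Gram-Schmidt) to get the local Y axis ...
    float k = dot(z_t, u);
    Vec3 y_t = normalize({ u.x - k * z_t.x, u.y - k * z_t.y, u.z - k * z_t.z });

    // ... and complete the right-handed basis with a cross product for the local X axis.
    Vec3 x_t = cross(y_t, z_t);

    // Column-major 4x4: the columns are x_t, y_t, z_t and (0, 0, 0, 1).
    const float m[16] = {
        x_t.x, x_t.y, x_t.z, 0.0f,
        y_t.x, y_t.y, y_t.z, 0.0f,
        z_t.x, z_t.y, z_t.z, 0.0f,
        0.0f,  0.0f,  0.0f,  1.0f
    };
    std::memcpy(out, m, sizeof(m));
}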
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'd like to call your two points in world coordinates P1 and P2 and we want to locate C1 on P1 and C2 to P2.
Start with translating the cylinder by P1, which successfully locates C1 to P1.
Then scale the cylinder by distance(P1, P2), since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates: two angles; one around the pole axis (in your case the world's Y-axis) which we typically call yaw, the other one is a pitch angle (in your case the X axis in model space). These two angles can be computed by converting P2-P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object with the pitch angle around X, then with yaw around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);
    return translate(P1) * scaleY(length) * rotateX(pitch) * rotateY(yaw);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.
I am building a ray tracer from scratch. My question is:
When I change camera coordinates the Sphere changes to ellipse. I don't understand why it's happening.
Here are some images to show the artifacts:
Sphere: 1 1 -1 1.0 (Center, radius)
Camera: 0 0 5 0 0 0 0 1 0 45.0 1.0 (eyepos, lookat, up, fovy, aspect)
But when I changed the camera coordinates, the sphere looked distorted, as shown below:
Camera: -2 -2 2 0 0 0 0 1 0 45.0 1.0
I don't understand what is wrong. If someone can help that would be great!
I set my imagePlane as follows:
//Computing u,v,w axes coordinates of Camera as follows:
{
    Vector a = Normalize(eye - lookat); // Camera_eye - Camera_lookAt
    Vector b = up;                      // camera up vector
    m_w = a;
    m_u = b.cross(m_w);
    m_u.normalize();
    m_v = m_w.cross(m_u);
}
After that I compute directions for each pixel from the Camera position (eye) as mentioned below:
//Then Computing direction as follows:
int half_w = m_width * 0.5;
int half_h = m_height * 0.5;
double half_fy = fovy() * 0.5;
double angle = tan( (M_PI * half_fy) / (double)180.0 );

for(int k=0; k<pixels.size(); k++){
    double j = pixels[k].x(); // width
    double i = pixels[k].y(); // height
    double XX = aspect() * angle * ( (j - half_w) / (double)half_w );
    double YY = angle * ( (half_h - i) / (double)half_h );
    Vector dir = (m_u * XX + m_v * YY) - m_w;
    directions.push_back(dir);
}
After that:
for each dir:
    Ray ray(eye, dir);
    int depth = 0;
    t_color += Trace(g_primitive, ray, depth);
After playing around a lot, and with the help of everyone's comments, I was able to get my rayTracer working properly. Sorry for answering late, but I would like to close this thread with a few remarks.
So, the above-mentioned code is perfectly correct. Based on my own assumptions (as mentioned in the comments above) I decided to set my camera parameters like that.
The problem I mentioned above is normal behaviour of the camera (as also mentioned in the comments).
I have got good results now, but there are a few things to check while coding a rayTracer:
1) Always make sure to handle the radians-to-degrees (or degrees-to-radians) conversion when computing the FOV and aspect ratio. I did it as follows:
double angle = tan((M_PI * 0.5 * fovy) / 180.0);
double y = angle;
double x = aspect * angle;
2) While computing triangle intersections, make sure to implement the cross product properly.
3) While using intersections of different objects, make sure to keep the intersection that is at the minimum distance from the camera; a small sketch of this follows below.
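For point 3, this is roughly what "keep the intersection at minimum distance" looks like in code. This is my own illustration with placeholder types (Ray, Hit, intersect), not the actual classes from the code above:
#include <optional>

// Placeholder types and helper (a real ray tracer would use its own classes).
struct Ray { /* origin, direction */ };
struct Hit { double t; int objectIndex; };   // t = distance along the ray

// Stub standing in for the per-object intersection test: returns the nearest
// positive hit distance for this ray against object `objectIndex`, if any.
std::optional<double> intersect(const Ray& /*ray*/, int /*objectIndex*/) { return std::nullopt; }

// Keep only the closest hit in front of the ray origin; the small epsilon avoids
// immediately re-hitting the surface a secondary ray starts on.
std::optional<Hit> closestHit(const Ray& ray, int objectCount)
{
    const double epsilon = 1e-6;
    std::optional<Hit> best;
    for (int i = 0; i < objectCount; ++i) {
        if (std::optional<double> t = intersect(ray, i)) {
            if (*t > epsilon && (!best || *t < best->t))
                best = Hit{ *t, i };
        }
    }
    return best;
}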
Here's the result I got:
Above is a very simple model (courtesy UCBerkeley), which I rayTraced.
This is the correct behavior. Get a camera with a wide angle lens, put the sphere near the edge of the field of view and take a picture. Then in a photo app draw a circle on top of the photo of the sphere and you will see that it's not a circular projection.
This effect will be magnified by the fact that you set aspect to 1.0 but your image is not square.
A few things to fix:
A direction vector is (to - from). You have (from - to), so a is pointing backward. You'll want to add m_w at the end rather than subtract it. Also, this fix flips m_u, so you will also need to change (j - half_w) to (half_w - j).
In addition, putting all the pixels and all the directions into lists is not as efficient as just looping over the x, y values.
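For illustration, here is a sketch of the camera setup and per-pixel loop with those fixes applied. It reuses the question's own names (Vector, Ray, m_u/m_v/m_w, aspect(), angle, half_w, half_h), so treat it as a sketch rather than drop-in code:
// Sketch with the suggested fixes applied, using the question's own names.
Vector a = Normalize(lookat - eye);   // "to - from": w now points from the eye towards the scene
m_w = a;
m_u = up.cross(m_w);
m_u.normalize();
m_v = m_w.cross(m_u);

for (int i = 0; i < m_height; ++i) {          // loop over rows ...
    for (int j = 0; j < m_width; ++j) {       // ... and columns instead of a pixel list
        double XX = aspect() * angle * ((half_w - j) / (double)half_w); // sign flipped to match the new m_u
        double YY = angle * ((half_h - i) / (double)half_h);
        Vector dir = (m_u * XX + m_v * YY) + m_w;   // add m_w now that it points forward
        Ray ray(eye, dir);
        // Trace(g_primitive, ray, depth); ...
    }
}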
I have a class tetromino (a Tetris block) that has four QRect members (named first, second, third, fourth respectively). I draw each tetromino using build_tetronimo_L type functions.
These build the tetromino in a certain orientation, but since in Tetris you're supposed to be able to rotate the tetrominoes, I'm trying to rotate a tetromino by rotating each of its individual squares.
I have found the following formula to apply to each (x, y) coordinate of a particular square.
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
Now, the QRect type of Qt only seems to have a setCoords function that takes the (x, y) coordinates of the top-left and bottom-right points of the respective square.
Here is an example (which doesn't seem to produce the correct result) of rotating the first two squares of my tetromino.
Can anyone tell me how I'm supposed to rotate these squares correctly, using runtime rotation calculation?
void tetromino::rotate(double angle) // angle in degrees
{
    std::map<std::string, rect_coords> coords = get_coordinates();

    // FIRST SQUARE
    rect_coords first_coords = coords["first"];
    // top left x and y
    int newx_first_tl = (cos(to_radians(angle)) * first_coords.top_left_x) - (sin(to_radians(angle)) * first_coords.top_left_y);
    int newy_first_tl = (sin(to_radians(angle)) * first_coords.top_left_x) + (cos(to_radians(angle)) * first_coords.top_left_y);
    // bottom right x and y
    int newx_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) - (sin(to_radians(angle)) * first_coords.bottom_right_y);
    int newy_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) + (sin(to_radians(angle)) * first_coords.bottom_right_y);
    // CHANGE COORDINATES
    first->setCoords(newx_first_tl, newy_first_tl, newx_first_tl + tetro_size, newy_first_tl - tetro_size);

    // SECOND SQUARE
    rect_coords second_coords = coords["second"];
    int newx_second_tl = (cos(to_radians(angle)) * second_coords.top_left_x) - (sin(to_radians(angle)) * second_coords.top_left_y);
    int newy_second_tl = (sin(to_radians(angle)) * second_coords.top_left_x) + (cos(to_radians(angle)) * second_coords.top_left_y);
    // CHANGE COORDINATES
    second->setCoords(newx_second_tl, newy_second_tl, newx_second_tl - tetro_size, newy_second_tl + tetro_size);
first and second are QRect types. rect_coords is just a struct with four ints in it, that store the coordinates of the squares.
The first square and second square calculations are different, as I was playing around trying to figure it out.
I hope someone can help me figure this out?
(Yes, I can do this much simpler, but I'm trying to learn from this)
It seems more like a math question than a programming question. Just plug in values like 90 degrees for the angle to figure this out. For 90 degrees, a point (x,y) is mapped to (-y, x). You probably don't want to rotate around the origin but around a certain pivot point c.x, c.y. For that you need to translate first, then rotate, then translate back:
(x,y) := (x-c.x, y-c.y) // translate into coo system w/ origin at c
(x,y) := (-y, x) // rotate
(x,y) := (x+c.x, y+c.y) // translate into original coo system
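As a compact illustration of that translate-rotate-translate sequence for an arbitrary angle, here is a small self-contained sketch in plain C++ (my own, independent of QRect):
#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Rotate point p by angleDegrees counter-clockwise around pivot c:
// translate so c is at the origin, rotate, then translate back.
Point rotateAround(Point p, Point c, double angleDegrees)
{
    const double kPi = 3.14159265358979323846;
    const double a  = angleDegrees * kPi / 180.0;
    const double dx = p.x - c.x;   // translate into pivot-centered coordinates
    const double dy = p.y - c.y;
    return { c.x + std::cos(a) * dx - std::sin(a) * dy,   // rotate, then translate back
             c.y + std::sin(a) * dx + std::cos(a) * dy };
}

int main()
{
    // 90-degree example from above: (x, y) relative to c maps to (-y, x).
    Point p = rotateAround({ 2.0, 0.0 }, { 0.0, 0.0 }, 90.0);
    std::printf("%.1f %.1f\n", p.x, p.y);   // prints 0.0 2.0
}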
Before rotating you have to translate so that the piece is centered in the origin:
Translate your block centering it to 0, 0
Rotate the block
Translate again the center of the block to x, y
If you rotate without translating, you will always rotate around (0, 0); since the block is not centered there, it will not rotate around its own center. Centering your block is quite simple:
Compute the center of your points (the mean of their X coordinates and the mean of their Y coordinates); let's call it m
Subtract m.X and m.Y from the coordinates of all points
Rotate
Add m.X and m.Y back to the points.
Of course you can use linear algebra and vector * matrix multiplication but maybe it is too much :)
Translation
Let's say we have a segment with coordinates A(3,5) B(10,15).
If you want to rotate it around its center, we first translate it to our origin. Let's compute mx and my:
mx = (3 + 10) / 2 = 6.5
my = (5 + 15) / 2 = 10
Now we compute points A1 and B1 translating the segment so it is centered to the origin:
A1(A.X - mx, A.Y - my)
B1(B.X - mx, B.Y - my)
Now we can perform our rotation of A1 and B1 (you know how).
Then we have to translate again to the original position:
A = (rotatedA1.X + mx, rotatedA1.y + my)
B = (rotatedB1.X + mx, rotatedB1.y + my)
If instead of two points you have n points, you of course do the same for all n points.
You could use Qt Graphics View which does all the geometric calculations for you.
Or are you just wanting to learn basic linear geometrical transformations? Then reading a math textbook would probably be more appropriate than coding.