Perspective Projection, Canonical Viewing Volume - OpenGL

// w = near-plane distance, x = far-plane distance,
// y = frustum width at the near plane, z = frustum height at the near plane
Vector3d nearC(0, 0, -w);   // center of the near plane
Vector3d farC(0, 0, -x);    // center of the far plane
double width  = y / 2;
double height = z / 2;
double angleOfHeight = atan(height / w);        // vertical half-angle
double angleOfWidth  = atan(width / w);         // horizontal half-angle
double adjustedHeight = tan(angleOfHeight) * x; // half-height at the far plane
double adjustedWidth  = tan(angleOfWidth) * x;  // half-width at the far plane
// near-plane corners:
nearC[0] - width, nearC[1] - height, nearC[2]
nearC[0] - width, nearC[1] + height, nearC[2]
nearC[0] + width, nearC[1] + height, nearC[2]
nearC[0] + width, nearC[1] - height, nearC[2]
// far-plane corners:
farC[0] - adjustedWidth, farC[1] - adjustedHeight, farC[2]
farC[0] - adjustedWidth, farC[1] + adjustedHeight, farC[2]
farC[0] + adjustedWidth, farC[1] + adjustedHeight, farC[2]
farC[0] + adjustedWidth, farC[1] - adjustedHeight, farC[2]
Above is my frustum in view coordinates. View Matrix is:
0 0 -1 0
0 1 0 -1
1 0 0 -10
0 0 0 1
All of this is correct; we have a reference sheet to check against.
I can't for the life of me figure out how to get that frustum into the canonical viewing volume. I've run through every perspective projection matrix I could find. My current one is this:
s, 0, 0, 0,
0, s, 0, 0,
0, 0, -(f+ne)/(f-ne), 2*f*ne/(f-ne),
0, 0, 1, 0;
double s = 1/tan(angleOfView * 0.5 * M_PI / 180);
I'm missing a step or something, right? Or a few steps?
Sorry to sound so hopeless; I've been spinning my wheels on this for a while.
Any help is appreciated.

Let's start with the perspective projection. The common way in legacy OpenGL is to use gluPerspective.
For that we need znear, zfar, the FOV, and the aspect ratio of the view. For more info see:
Calculating the perspective projection matrix according to the view plane
I am used to working with FOVx (the viewing angle along the x axis). To compute it, look at your frustum from above, down onto the xz plane (in camera space):
so:
tan(FOVx/2) = znear_width / (2*focal_length)
FOVx = 2*atan(znear_width / (2*focal_length))
The focal length can be computed by intersecting the frustum edge lines, or by using triangle similarity. The second is easier to write down:
zfar_width / (2*(|zfar-znear|+focal_length)) = znear_width / (2*focal_length)
zfar_width / (|zfar-znear|+focal_length) = znear_width / focal_length
focal_length = (|zfar-znear|+focal_length)*znear_width/zfar_width
focal_length - focal_length*znear_width/zfar_width = |zfar-znear|*znear_width/zfar_width
focal_length*(1-(znear_width/zfar_width)) = |zfar-znear|*znear_width/zfar_width
focal_length = (|zfar-znear|*znear_width/zfar_width) / (1-(znear_width/zfar_width))
and that is all we need so:
focal_length = (|zfar-znear|*znear_width/zfar_width) / (1-(znear_width/zfar_width))
FOVx = 2*atan(znear_width / (2*focal_length))
FOVx*=180.0/M_PI; // convert to degrees
aspect = znear_width / znear_height;
gluPerspective(FOVx/aspect, aspect, znear, zfar);
Just be aware that |zfar-znear| is the perpendicular distance between the planes! So if your planes are not axis-aligned, you need to compute that distance using the dot product with the plane normal...
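Putting the whole recipe into code, here is a minimal C++ sketch (my own, not from the original answer) that derives the gluPerspective parameters from the plane sizes used above. One refinement: instead of the linear FOVx/aspect approximation it uses the exact relation tan(FOVy/2) = tan(FOVx/2)/aspect for the vertical field of view:

#include <cmath>
#include <GL/glu.h>

// Sketch: set up gluPerspective() from the frustum plane sizes.
// Assumes axis-aligned planes at perpendicular distances znear, zfar.
void setPerspectiveFromFrustum(double znear, double zfar,
                               double znear_width, double znear_height,
                               double zfar_width)
{
    // Triangle similarity: distance from the frustum apex to the near plane.
    double focal_length = (std::fabs(zfar - znear) * znear_width / zfar_width)
                        / (1.0 - (znear_width / zfar_width));
    double FOVx   = 2.0 * std::atan(znear_width / (2.0 * focal_length)); // radians
    double aspect = znear_width / znear_height;
    // Exact horizontal-to-vertical FOV conversion.
    double FOVy = 2.0 * std::atan(std::tan(FOVx * 0.5) / aspect);
    gluPerspective(FOVy * 180.0 / M_PI, aspect, znear, zfar);
}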

Related

PTB-OpenGL stereo rendering and eye separation value

I tried to draw a 3D dot cloud using an OpenGL asymmetric frustum parallel-axis projection. The general principle can be found on this website: http://paulbourke.net/stereographics/stereorender/#. The problem is that when I use the real eye separation (0.06 m), my eyes do not fuse the two images well; with eye separation = 1/30 * focal length, there is no strain. I don't know if there is a problem with the calculation or with the parameters. Part of the code is posted below. Thank you all.
for view = 0:stereoViews
    % Select 'view' to render (left- or right-eye):
    Screen('SelectStereoDrawbuffer', win, view);
    % Manually reenable 3D mode in preparation of eye draw cycle:
    Screen('BeginOpenGL', win);
    % Set the eye separation:
    eye = 0.06; % in meters
    % Calculate the frustum shift at the near plane:
    fshift = 0.5 * eye * depthrangen/(vdist/100); % vdist is the focal length, 56 cm; vdist/100 converts to meters
    right_near = depthrangen * tand(FOV/2); % depthrangen is the near-plane depth (0.4); FOV is the field of view (18 deg)
    left_near = -right_near;
    top_near = right_near * aspectr;
    bottom_near = -top_near;
    % Setup frustum projection for this eye's 'view':
    glMatrixMode(GL.PROJECTION)
    glLoadIdentity;
    eyeside = 1 + (-2*view); % 1 for left eye, -1 for right eye
    glFrustum(left_near + eyeside * fshift, right_near + eyeside * fshift, bottom_near, top_near, depthrangen, depthrangefObj);
    % Setup camera for this eye's 'view':
    glMatrixMode(GL.MODELVIEW);
    glLoadIdentity;
    gluLookAt(0 - eyeside * 0.5 * eye, 0, 0, 0 - eyeside * 0.5 * eye, 0, -1, 0, 1, 0);
    % Clear color and depth buffers:
    glClear;
    moglDrawDots3D(win, xyz(:,:,iframe), 10, [], [], 1);
    moglDrawDots3D(win, xyzObj(:,:,iframe), 10, [], [], 1);
    % Manually disable 3D mode before calling Screen('Flip')!
    Screen('EndOpenGL', win);
    % Repeat for other eye's view if in stereo presentation mode...
end
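For comparison, here is a compact C++ sketch of the same parallel-axis asymmetric frustum described on Paul Bourke's page, with all distances in meters. The function and parameter names (setupEyeFrustum, eyeSep, focalDist and so on) are illustrative, not from the Psychtoolbox code; the key line is the frustum shift, which scales half the eye separation by nearDist/focalDist:

#include <cmath>
#include <GL/glu.h>

// Sketch: parallel-axis asymmetric frustum for one eye (units: meters).
// eyeSide is +1 for the left eye, -1 for the right eye.
void setupEyeFrustum(double fovDeg, double aspect,
                     double nearDist, double farDist,
                     double eyeSep, double focalDist, int eyeSide)
{
    double top    = nearDist * std::tan(fovDeg * 0.5 * M_PI / 180.0);
    double right  = top * aspect;
    // Shift of the near-plane window that keeps the two view axes parallel:
    double fshift = 0.5 * eyeSep * nearDist / focalDist;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right + eyeSide * fshift, right + eyeSide * fshift,
              -top, top, nearDist, farDist);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Translate the camera sideways by half the eye separation, no toe-in:
    gluLookAt(-eyeSide * 0.5 * eyeSep, 0.0, 0.0,
              -eyeSide * 0.5 * eyeSep, 0.0, -1.0,
              0.0, 1.0, 0.0);
}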

2D rotation of a quad

I am working on a basic simulation program using C++, with a renderer that uses OpenGL. I am rendering quads on the screen whose locations change dynamically during the simulation. My goal is to change the orientation of a quad while it is moving. For each quad I have a variable (m_Rotation) holding its current rotation, and I calculate the required rotation using trigonometry and store the value in another variable (m_ProjectedRotation). In the render loop I use the following code to update the orientation during movement:
if (abs(m_ProjectedRotation - m_Rotation) > 5.0f)
{
    if ((360.0f - m_Rotation + m_ProjectedRotation) > (m_ProjectedRotation - m_Rotation))
    {
        m_Rotation += 5.0f;
        if (m_Rotation > 360)
        {
            m_Rotation = fmod(m_Rotation, 360);
        }
    }
    else
    {
        m_Rotation -= 5.0f;
    }
}
I want the quad to rotate toward the target along the closest angular path (e.g. if the current angle is 330 and the destination angle is 30, the quad should increase its angle, wrapping through 360, until it reaches 30, rather than decrease it, because that is the shorter rotation). In some situations my quad rotates counterclockwise even though the clockwise rotation is shorter, and vice versa. I believe the condition for the rotation direction:
(360.0f - m_Rotation + m_ProjectedRotation) > (m_ProjectedRotation - m_Rotation)
should be something different to produce the required behavior, but I couldn't figure out what. How should I update this code to get what I want?
I believe the correct solution should be as follows:
Let's call the two angles from and to. I assume both are in positive degrees, as in your question. There are two cases:
The absolute distance |to - from| is less than 180.
This means there are fewer degrees to travel going directly from from to to than going the other way around, so that is the direction you should choose.
In this case you should rotate by sign(to - from) * deltaRotation, where sign(x) = 1 if x > 0 and -1 otherwise. To see why the sign function is needed, look at the following two examples, where |to - from| < 180:
from = 10, to = 20. to - from = 10 > 0, so you should increase the rotation.
from = 20, to = 10. to - from = -10 < 0, so you should decrease the rotation.
|to - from| is more than 180. In this case the direction should be inverted, and you should rotate by -sign(to - from) * deltaRotation; note the minus sign. You could also express this as sign(from - to) * deltaRotation, swapping from and to, but I left them as before for explicitness.
from = 310, to = 10. Then to - from = -300 < 0, so you should increase the rotation (formally, -sign(to - from) = -sign(-300) = -(-1) = 1).
from = 10, to = 310. Then to - from = 300 > 0, so you should decrease the rotation (formally, -sign(to - from) = -sign(300) = -1).
Writing this in C++, you can encapsulate the logic in a function like this:
int shorterRotationSign(float from, float to) {
    // +1 means rotate in the increasing direction, -1 in the decreasing one.
    if (fabs(to - from) <= 180) {
        return to - from > 0 ? 1 : -1;
    }
    return to - from > 0 ? -1 : 1;
}
Which you will use like this:
m_Rotation += 5.0f * shorterRotationSign(m_Rotation, m_ProjectedRotation);
m_Rotation = fmod(m_Rotation + 360, 360);
The goal of the last line is to normalize negative angles and ones greater than 360.
(IMO this is more of a mathematical question than one about OpenGL.)
I would do something like this:
auto diff = std::fabs(m_ProjectedRotation - m_Rotation); // std::fabs from <cmath>; plain abs would truncate to int
if (diff > 5.0f)
{
    diff = fmod(diff, 360.0f);
    auto should_rotate_forward = (diff > 180.0f) ^ (m_ProjectedRotation > m_Rotation);
    auto offset = 360.0f + 5.0f * (should_rotate_forward ? 1.0f : -1.0f);
    m_Rotation = fmod(m_Rotation + offset, 360.0f);
}
diff is the absolute angle of your rotation. You then bring it into the [0, 360) range with diff = fmod(diff, 360.0f).
should_rotate_forward determines whether you should increase or decrease the current angle. Note that ^ is the XOR operation.
offset is basically either -5.0 or 5.0 depending on the condition, but 360.0f is added so that the fmod result stays positive: if, for example, m_Rotation == 1.0 and offset == -5.0, then fmod(m_Rotation + offset, 360.0f) would be -4.0 when you want 356.0; adding a full 360-degree turn means that after fmod everything is positive and in the [0, 360) range.

Implement an Ellipse Structural Element

I want to implement the following using OpenCV (I'll post my attempt at the bottom of the post). I am aware that OpenCV has a function for something like this, but I want to try to write my own.
In an image (Mat) of width width and height height (the coordinate system has its origin at the top left, since it is an image), I want to display a filled ellipse with the following properties:
it should be centered at (width/2, height/2)
the image should be binary, so the points corresponding to the ellipse should have a value of 1 and others should be 0
the ellipse should be rotated by angle radians around the origin (or degrees, this does not matter all that much, I can convert)
ellipse: the semi-major axis parameter is a and the semi-minor axis parameter is b, and these two parameters also give the actual size of the axes in the picture, so no matter the width and height, the ellipse should have a major axis of length 2*a and a minor axis of length 2*b
Ok, so I've found an equation similar to this one (https://math.stackexchange.com/a/434482/403961) for my purpose. My code is below. It does seem to do pretty well on the rotation side, but, sadly, depending on the rotation angle, the size (of the major axis at least; I'm not sure about the minor) visibly increases/decreases, which is not right, since the ellipse should keep the same size independent of the rotation angle.
NOTE: the biggest size is seemingly reached when the angle is 45 or -45 degrees, and the smallest for angles like -90, 0, 90.
Code:
inline double sqr(double x)
{
    return x * x;
}

Mat ellipticalElement(double a, double b, double angle, int width, int height)
{
    // just to make sure I don't use some bad values for my parameters
    assert(2 * a < width);
    assert(2 * b < height);
    Mat element = Mat::zeros(height, width, CV_8UC1);
    Point center(width / 2, height / 2);

    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            if (sqr((x - center.x) * cos(angle) - (y - center.y) * sin(angle)) / sqr(a) + sqr((x - center.x) * sin(angle) - (y - center.y) * cos(angle)) / sqr(b) <= 1)
                element.at<uchar>(y, x) = 1;
        }
    return element;
}
A pesky typo sneaked into your inequality. The first summand must be
sqr((x - center.x) * cos(angle) + (y - center.y) * sin(angle)) / sqr(a)
Note the plus sign instead of the minus.
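In context, the corrected test might read as follows; this is a sketch of that one-line fix applied to the loop from the question, nothing more:

// Rotate the point into the ellipse's frame (a proper rotation matrix).
double xr =  (x - center.x) * cos(angle) + (y - center.y) * sin(angle);
double yr = -(x - center.x) * sin(angle) + (y - center.y) * cos(angle);
// yr differs from the original second summand only in sign, which squaring
// removes, so the first summand was the only real bug.
if (sqr(xr) / sqr(a) + sqr(yr) / sqr(b) <= 1)
    element.at<uchar>(y, x) = 1;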

Determining coordinates for Mandelbrot zoom

I have a Mandelbrot set I want to zoom in on. The Mandelbrot image is calculated from a center coordinate, a Mandelbrot size, and a zoom level. The original is centered around
real=-0.6 and im=0.4 with a size of 2 in both real and im.
I want to be able to click on a point in the image and calculate a new one, zoomed in around that point.
The window containing it is 800x800 px, so I figured this would make a click in the lower right corner equal to a center of real=0.4 and im=-0.6, and a click in the upper left corner real=-1.6 and im=1.4.
I calculated it with:
for the real values:
800a + b = 0.4   =>  a = 0.0025
0a + b = -1.6    =>  b = -1.6
for the imaginary values:
800c + d = -0.6  =>  c = -0.0025
0c + d = 1.4     =>  d = 1.4
However, this does not work if I continue with a Mandelbrot size of 2 and a zoom level of 2. Am I missing something concerning the coordinates and the zoom levels?
I had similar problems zooming in my C# Mandelbrot. My solution was to compute the offset of the click position from the center as a percentage, multiply it by the maximum extent in coordinate units from the center (width / zoom * 0.5, with width = height and zoom = n * 100), and add the result to the current center value. So my code was this (assuming I get sx and sy as parameters from the click):
double[] o = new double[2];
double digressLRUD = width / zoom * 0.5;                     // max distance up or down from the center, in coordinates
double shiftCenterCursor_X = sx - width / 2.0;               // shift of cursor from the center, in pixels
double shiftCenterCursor_X_percentage = shiftCenterCursor_X / (width / 2.0); // shift as a fraction of the half-width
o[0] = x + digressLRUD * shiftCenterCursor_X_percentage;     // new position
double shiftCenterCursor_Y = sy - width / 2.0;
double shiftCenterCursor_Y_percentage = shiftCenterCursor_Y / (width / 2.0);
o[1] = y - digressLRUD * shiftCenterCursor_Y_percentage;
This works, but you'll have to update the zoom as well (I usually multiply it by 2).
Another point is to move the selected center to the center of the image. I did this using some calculations:
double maxRe = width / zoom;
double centerRe = reC - maxRe * 0.5;
double maxIm = height / zoom;
double centerIm = -imC - maxIm * 0.5;
This will give you the coordinates to pass to your algorithm so it renders the selected place.
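To make the zoom handling concrete, here is a small self-contained C++ sketch of the pixel-to-complex mapping with the zoom level folded in. The function and parameter names (pixelToComplex, centerRe, and so on) are mine for illustration, not from either post; the step the question was missing is dividing the per-pixel scale by the zoom level:

#include <complex>

// Map a pixel (px, py) in a widthPx x heightPx window to the complex plane,
// for a view centered at (centerRe, centerIm) spanning size/zoom units.
// With zoom = 1, size = 2 and an 800x800 window this reproduces the
// question's mapping (a = 2/800 = 0.0025).
std::complex<double> pixelToComplex(int px, int py, int widthPx, int heightPx,
                                    double centerRe, double centerIm,
                                    double size, double zoom)
{
    double span = size / zoom; // world units covered by the window
    double re = centerRe + (px - widthPx / 2.0) * span / widthPx;
    double im = centerIm - (py - heightPx / 2.0) * span / heightPx; // pixel y grows downward
    return std::complex<double>(re, im);
}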

Ray Tracing: Sphere distortion due to Camera Movement

I am building a ray tracer from scratch. My question is:
when I change the camera coordinates, the sphere turns into an ellipse, and I don't understand why this happens.
Here are some images to show the artifacts:
Sphere: 1 1 -1 1.0 (Center, radius)
Camera: 0 0 5 0 0 0 0 1 0 45.0 1.0 (eyepos, lookat, up, fovy, aspect)
But when I change the camera coordinates, the sphere looks distorted, as shown below:
Camera: -2 -2 2 0 0 0 0 1 0 45.0 1.0
I don't understand what is wrong. If someone can help that would be great!
I set up my image plane as follows:
// Computing the u,v,w axes of the camera as follows:
{
    Vector a = Normalize(eye - lookat); // Camera_eye - Camera_lookAt
    Vector b = up;                      // camera up vector
    m_w = a;
    m_u = b.cross(m_w);
    m_u.normalize();
    m_v = m_w.cross(m_u);
}
After that I compute the direction for each pixel from the camera position (eye), as shown below:
// Then computing directions as follows:
int half_w = m_width * 0.5;
int half_h = m_height * 0.5;
double half_fy = fovy() * 0.5;
double angle = tan((M_PI * half_fy) / (double)180.0);
for (int k = 0; k < pixels.size(); k++) {
    double j = pixels[k].x(); // width
    double i = pixels[k].y(); // height
    double XX = aspect() * angle * ((j - half_w) / (double)half_w);
    double YY = angle * ((half_h - i) / (double)half_h);
    Vector dir = (m_u * XX + m_v * YY) - m_w;
    directions.push_back(dir);
}
After that:
for each dir:
    Ray ray(eye, dir);
    int depth = 0;
    t_color += Trace(g_primitive, ray, depth);
After playing with this a lot, and with the help of the comments from all of you, I was able to get my ray tracer working properly. Sorry for answering late, but I would like to close this thread with a few remarks.
The code above is perfectly correct. Based on my own assumptions (as mentioned in the comments above), I decided to set my camera parameters that way.
The problem I mentioned above is normal camera behavior (as also noted in the comments).
I get good results now, but there are a few things to check while coding a ray tracer:
1) Always take care of the radians-to-degrees (or vice versa) conversion when computing the FOV and aspect ratio. I did it as follows:
double angle = tan((M_PI * 0.5 * fovy) / 180.0);
double y = angle;
double x = aspect * angle;
2) When computing triangle intersections, make sure to implement the cross product properly.
3) When intersecting multiple objects, make sure to pick the intersection at the minimum distance from the camera.
Here's the result I got:
Above is a very simple model (courtesy of UC Berkeley) which I ray traced.
This is the correct behavior. Take a camera with a wide-angle lens, put a sphere near the edge of the field of view, and take a picture. Then, in a photo app, draw a circle on top of the photographed sphere; you will see that its projection is not circular.
This effect is magnified by the fact that you set aspect to 1.0 although your image is not square.
A few things to fix:
A direction vector is (to - from). You have (from - to), so a points backward; you'll want to add m_w at the end rather than subtract it. This fix also rotates m_u and m_v by 180 degrees, which means you will have to change (j - half_w) to (half_w - j).
Also, putting all the pixels and all the directions into lists is not as efficient as simply looping over the x,y values.
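Putting those fixes together, a corrected direction loop might look like the sketch below. It keeps the question's variable names; the basis is assumed to have been rebuilt with a = Normalize(lookat - eye) so that m_w points toward the scene. This is my reading of the suggested changes, not the original poster's final code:

// m_w = Normalize(lookat - eye), i.e. (to - from), so it points forward;
// m_u and m_v are derived from it exactly as in the original snippet.
for (int i = 0; i < m_height; i++) {
    for (int j = 0; j < m_width; j++) {
        double XX = aspect() * angle * ((half_w - j) / (double)half_w); // sign flipped with the basis
        double YY = angle * ((half_h - i) / (double)half_h);
        Vector dir = (m_u * XX + m_v * YY) + m_w;   // add m_w instead of subtracting it
        Ray ray(eye, dir);
        int depth = 0;
        t_color += Trace(g_primitive, ray, depth);
    }
}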