I am trying to draw a 3D dot cloud using OpenGL asymmetric frustum parallel-axis projection. The general principle can be found on this website (http://paulbourke.net/stereographics/stereorender/#). The problem is that when I use the real eye separation (0.06 m), my eyes cannot fuse the two images well, whereas with eye separation = 1/30 * focal length, fusion is effortless. Is there a problem with the calculation, or with the parameters? Part of the code is posted below. Thank you all.
for view = 0:stereoViews
% Select 'view' to render (left- or right-eye):
Screen('SelectStereoDrawbuffer', win, view);
% Manually reenable 3D mode in preparation of eye draw cycle:
Screen('BeginOpenGL', win);
% Set the eye separation:
eye = 0.06; % in meter
% Calculate the frustum shift at the near plane:
fshift = 0.5 * eye * depthrangen/(vdist/100); % vdist is the focal length in cm (56 cm = 0.56 m)
right_near = depthrangen * tand(FOV/2); % depthrangen is the near-plane distance, 0.4 m; FOV is the field of view, 18°
left_near = -right_near;
top_near = right_near* aspectr;
bottom_near = -top_near;
% Setup frustum projection for this eye's 'view':
glMatrixMode(GL.PROJECTION)
glLoadIdentity;
eyeside = 1+(-2*view); % 1 for left eye, -1 for right eye
glFrustum(left_near + eyeside * fshift, right_near + eyeside * fshift, bottom_near, top_near, depthrangen, depthrangefObj);
% Setup camera for this eye's 'view':
glMatrixMode(GL.MODELVIEW);
glLoadIdentity;
gluLookAt(0 - eyeside * 0.5 * eye, 0, 0, 0 - eyeside * 0.5 * eye, 0, -1, 0, 1, 0);
% Clear color and depth buffers:
glClear;
moglDrawDots3D(win, xyz(:,:,iframe), 10, [], [], 1);
moglDrawDots3D(win, xyzObj(:,:,iframe), 10, [], [], 1);
% Manually disable 3D mode before calling Screen('Flip')!
Screen('EndOpenGL', win);
% Repeat for the other eye's view if in stereo presentation mode...
end
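For reference, here is a standalone C++ sketch of the same off-axis computation; zFar and the aspect ratio are assumed values here, the rest mirrors the parameters above. This is only an illustration of Bourke's formulation, not a fix:

// Sketch: Bourke-style parallel-axis asymmetric frustum for both eyes.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double eyeSep  = 0.06;   // physical eye separation, meters
    const double focal   = 0.56;   // distance to the zero-parallax (screen) plane, meters
    const double zNear   = 0.4;    // near clip plane distance, meters
    const double zFar    = 10.0;   // far clip plane distance (assumed)
    const double fov     = 18.0;   // field of view, degrees
    const double aspectr = 0.75;   // height/width ratio (assumed)

    const double right = zNear * std::tan(fov * PI / 360.0); // tand(FOV/2) at the near plane
    const double top   = right * aspectr;
    // Near-plane shift of the frustum window, paired with a camera offset
    // of -/+ eyeSep/2 along x for the left/right eye.
    const double shift = 0.5 * eyeSep * zNear / focal;

    for (int view = 0; view < 2; ++view) {
        const double eyeside = 1.0 - 2.0 * view;  // +1 left eye, -1 right eye
        std::printf("glFrustum(%g, %g, %g, %g, %g, %g), camera at x = %g\n",
                    -right + eyeside * shift, right + eyeside * shift,
                    -top, top, zNear, zFar, -eyeside * 0.5 * eyeSep);
    }
    return 0;
}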
Vector3d nearC(0, 0, -w);
Vector3d farC(0, 0, -x);
double width = y / 2;
double height = z / 2;
double angleOfHeight = atan(height / w);
double angleOfWidth = atan(width / w);
double adjustedHeight = tan(angleOfHeight) * x;
double adjustedWidth = tan(angleOfWidth) * x;
nearC[0] - width, nearC[1] - height, nearC[2]
nearC[0] - width, nearC[1] + height, nearC[2]
nearC[0] + width, nearC[1] + height, nearC[2]
nearC[0] + width, nearC[1] - height, nearC[2]
farC[0] - adjustedWidth, farC[1] - adjustedHeight, farC[2]
farC[0] - adjustedWidth, farC[1] + adjustedHeight, farC[2]
farC[0] + adjustedWidth, farC[1] + adjustedHeight, farC[2]
farC[0] + adjustedWidth, farC[1] - adjustedHeight, farC[2]
Above is my frustum in view coordinates. View Matrix is:
0 0 -1 0
0 1 0 -1
1 0 0 -10
0 0 0 1
All of it is right; we have checked it on paper.
I can't for the life of me figure out how to get that frustum into the canonical viewing volume. I've run through every perspective projection I could find. My current one is this:
s, 0, 0, 0,
0, s, 0, 0,
0, 0, -(f+ne)/(f-ne), 2*f*ne/(f-ne),
0, 0, 1, 0;
double s = 1/tan(angleOfView * 0.5 * M_PI / 180);
I'm missing a step or something, right? Or a few steps?
Sorry to sound so hopeless now; I've been spinning my wheels on this for a while.
Any help appreciated.
Let's start with the perspective projection. The common way in old GL is to use gluPerspective. For that we need znear, zfar, FOV and the aspect ratio of the view. For more info see:
Calculating the perspective projection matrix according to the view plane
I am used to using FOVx (the viewing angle along the x axis). To compute that you need to look at your frustum from above, onto the xz plane (in camera space):
so:
tan(FOVx/2) = (znear_width/2) / focal_length
FOVx = 2*atan(znear_width/(2*focal_length))
The focal length can be computed by intersecting the frustum edge lines, or by using triangle similarity. The second is easier to write down:
(zfar_width/2) / (|zfar-znear|+focal_length) = (znear_width/2) / focal_length
zfar_width/(|zfar-znear|+focal_length) = znear_width/(focal_length)
focal_length = (|zfar-znear|+focal_length)*znear_width/zfar_width
focal_length - focal_length*znear_width/zfar_width = |zfar-znear|*znear_width/zfar_width
focal_length*(1-(znear_width/zfar_width)) = |zfar-znear|*znear_width/zfar_width
focal_length = (|zfar-znear|*znear_width/zfar_width) / (1-(znear_width/zfar_width))
and that is all we need so:
focal_length = (|zfar-znear|*znear_width/zfar_width) / (1-(znear_width/zfar_width))
FOVx = 2*atan(znear_width/(2*focal_length))
FOVx*=180.0/M_PI; // convert to degrees
aspect=znear_width/znear_height;
gluPerspective(FOVx/aspect,aspect,znear,zfar);
Just be aware that |zfar-znear| is the perpendicular distance between the planes! So if you do not have axis-aligned planes, then you need to compute that distance using the dot product with the plane normal...
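A small self-contained C++ sketch of the above; the plane distances and widths are made-up example values (with these numbers the frustum apex sits one unit in front of the near plane, so FOVx comes out as 90 degrees):

// Sketch: recover focal length and FOVx from the near/far plane widths,
// assuming axis-aligned planes.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.14159265358979323846;
    const double znear = 1.0, zfar = 100.0;          // plane distances (example)
    const double znear_width = 2.0, znear_height = 1.5;
    const double zfar_width = 200.0;

    const double d = std::fabs(zfar - znear);        // perpendicular plane distance
    const double focal_length =
        (d * znear_width / zfar_width) / (1.0 - znear_width / zfar_width);

    double FOVx = 2.0 * std::atan(znear_width / (2.0 * focal_length));
    FOVx *= 180.0 / PI;                              // degrees, as gluPerspective expects
    const double aspect = znear_width / znear_height;

    // gluPerspective(FOVx / aspect, aspect, znear, zfar);
    std::printf("focal_length = %g, FOVx = %g deg, aspect = %g\n",
                focal_length, FOVx, aspect);
    return 0;
}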
I have a completely implemented, working engine in OpenGL that supports a projection camera with raycasting. Recently, I implemented an orthogonal camera type, and visually, it's working just fine. For reference, here's how I compute the orthographic matrix:
double l = -viewportSize.x / 2 * zoom_;
double r = -l;
double t = -viewportSize.y / 2 * zoom_;
double b = -t;
double n = getNear();
double f = getFar();
m = Matrix4x4(
    2 / (r - l), 0,           0,            -(r + l) / (r - l),
    0,           2 / (t - b), 0,            -(t + b) / (t - b),
    0,           0,           -2 / (f - n), -(f + n) / (f - n),
    0,           0,           0,            1);
However, my issue now is that raycasting does not work with the orthogonal camera. The problem seems to be that the raycasting engine was coded with perspective cameras in mind, so it stops functioning when the orthographic matrix is used instead. For reference, here's a high-level description of how the raycasting is implemented:
Get the world-space origin vector:
  - Get the normalized screen coordinate from the input screen coordinates
  - Build mouseVector = (normScreenCoords.x, normScreenCoords.y, 0 if "near" or 1 if "far")
  - Build the view-projection matrix (get the view and projection matrices from the Camera and multiply them)
  - Multiply the mouseVector by the inverse of the view-projection matrix
Get the world-space forward vector:
  - Get the mouse world coordinates (far) and subtract them from the mouse world coordinates (near)
Send the world-space origin and world-space forward vectors to the raycasting engine, which handles the logic of comparing these vectors to all the visible objects in the scene efficiently by using bounding boxes.
How do I modify this algorithm to work with orthographic cameras?
Your steps are fine and should work as expected with an orthographic camera. There may be a problem with the way you are calculating the origin and direction.
1.) Get the origin vector. First calculate the mouse position in world-space units, i.e. float rayX = (mouseX - halfResolution) / viewport.width * (r - l) or similar. It should be offset so the center of the screen is (0, 0), and the extreme values the mouse can reach translate to the edges of the viewport l, r, t, b. Then start with the camera position in world space and add two vectors rayX * camera.local.right and rayY * camera.local.up, where right and up are unit vectors in the camera's local co-ordinate system (see the sketch after this list).
2.) The world space forward vector is always the camera forward vector for any mouse position.
3.) This should work without modification as long as you have the correct vectors for 1 and 2.
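For illustration, here is a minimal C++ sketch of steps 1 and 2, assuming GLM for the vector math; the OrthoCamera fields and the pixel-space mouse coordinates (y growing downward) are assumptions about your engine, not taken from your code:

// Sketch: ray origin and direction for an orthographic camera (steps 1 and 2).
#include <glm/glm.hpp>

struct OrthoCamera {
    glm::vec3 position;            // camera position in world space
    glm::vec3 right, up, forward;  // unit vectors of the camera's local frame
    float l, r, b, t;              // world-space viewport extents
};

struct Ray { glm::vec3 origin, direction; };

Ray mouseRay(const OrthoCamera& cam, float mouseX, float mouseY,
             float viewportW, float viewportH) {
    // Offset so the screen center is (0, 0) and the screen edges map to the
    // viewport extents l, r, t, b. The y sign assumes pixel y grows downward.
    float rayX = (mouseX - viewportW * 0.5f) / viewportW * (cam.r - cam.l);
    float rayY = -(mouseY - viewportH * 0.5f) / viewportH * (cam.t - cam.b);
    Ray ray;
    ray.origin    = cam.position + rayX * cam.right + rayY * cam.up;
    ray.direction = cam.forward;  // constant for any mouse position (step 2)
    return ray;
}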
I've been trying for some time now to convert a screen-space pixel (provided by a deferred HLSL shader) to light space. The results have been surprising to me, as my light rendering seems to be tiling the depth buffer.
Importantly, the scene camera (or eye) and the light being rendered from start in the same position.
First, I extract the world position of the pixel using the code below:
float3 eye = Eye;
float4 position = {
IN.texCoord.x * 2 - 1,
(1 - IN.texCoord.y) * 2 - 1,
zbuffer.r,
1
};
float4 hposition = mul(position, EyeViewProjectionInverse);
position = float4(hposition.xyz / hposition.w, hposition.w);
float3 eyeDirection = normalize(eye - position.xyz);
The result seems to be correct as rendering the XYZ position as RGB respectively yields this (apparently correct) result:
The red component seems to be correctly outputting X as it moves to the right, and blue shows Z moving forward. The Y factor also looks correct as the ground is slightly below the Y axis.
Next (and to be sure I'm not going crazy), I decided to output the original depth buffer. Normally I keep the depth buffer in a Texture2D called DepthMap passed to the shader as input. In this case, however, I try to undo the pixel transformation by offsetting it back into the proper position and multiplying it by the eye's view-projection matrix:
float4 cpos = mul(position, EyeViewProjection);
cpos.xyz = cpos.xyz / cpos.w;
cpos.x = cpos.x * 0.5f + 0.5f;
cpos.y = 1 - (cpos.y * 0.5f + 0.5f);
float camera_depth = pow(DepthMap.Sample(Sampler, cpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(camera_depth, camera_depth, camera_depth, 1);
This yields a correct looking result as well (though I'm not 100% sure about the Z value). Also note that I've made the results exponential to better visualize the depth information (this is not done when attempting live comparisons):
So theoretically, I can use the same code to convert that pixel world position to light space by multiplying by the light's view-projection matrix. Correct? Here's what I tried:
float4 lpos = mul(position, ShadowLightViewProjection[0]);
lpos.xyz = lpos.xyz / lpos.w;
lpos.x = lpos.x * 0.5f + 0.5f;
lpos.y = 1 - (lpos.y * 0.5f + 0.5f);
float shadow_map_depth = pow(ShadowLightMap[0].Sample(Sampler, lpos.xy).r, 100); // Power 100 just to visualize the map since scales are really tiny
return float4(shadow_map_depth, shadow_map_depth, shadow_map_depth, 1);
And here's the result:
And another to show better how it's mapping to the world:
I don't understand what is going on here. It seems it might have something to do with the projection matrix, but I'm not good enough with the math to know for sure. It's definitely not the width/height of the light map, as I've tried multiple map sizes and the projection matrix is calculated from FOV and aspect ratio, never taking width/height as input.
Finally, here's some C++ code showing how my perspective matrix (used for both eye and light) is calculated:
const auto ys = std::tan((T)1.57079632679f - (fov / (T)2.0)); // tan(pi/2 - fov/2) = cot(fov/2)
const auto xs = ys / aspect;
const auto& zf = view_far;
const auto& zn = view_near;
const auto zfn = zf - zn;
row1(xs, 0, 0, 0);
row2(0, ys, 0, 0);
row3(0, 0, zf / zfn, 1);        // left-handed, depth mapped to [0, 1] (D3D style)
row4(0, 0, -zn * zf / zfn, 0);
return *this;
I'm completely at a loss here. Any guidance or recommendations would be greatly appreciated!
EDIT - I also forgot to mention that the tiled image is upside down, as if the y-flip broke it. That's strange to me, as the flip is required to get it back to eye texture space correctly.
I did some tweaking and fixed things here and there. Ultimately, my biggest issue was an unexpectedly transposed matrix. It's a bit complicated as to how the matrix got transposed, but that's why things were flipped. I also changed to D32 depth buffers (though I'm not sure that helped any) and made sure that any positions divided by their W affected all components (including W).
So code like this: hposition.xyz = hposition.xyz / hposition.w
became this: hposition = hposition / hposition.w
After all this tweaking, it's starting to look more like a shadow map.
Oh and the transposed matrix was the ViewProjection of the light.
After reading datenwolf's 2011 answer concerning tile-based render setup in OpenGL, I attempted to implement his solution. The source image looks like this (at 800 x 600)
The resulting image with 2x2 tiles, each tile at 800 x 600, looks like this.
As you can see they don't exactly match, though I can see something vaguely interesting has happened. I'm sure I've made an elementary error somewhere but I can't quite see it.
I'm doing 4 passes where:
w, h are 2,2 (2x2 tiles)
x, y are (0,0) (1,0) (0,1) and (1,1) in each of the 4 passes
MyFov is 1.30899692 (75 degrees)
MyWindowWidth, MyWindowHeight are 800, 600
MyNearPlane, MyFarPlane are 0.1, 200.0
The algorithm to calculate the frustum for each tile is:
auto aspect = static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight);
auto right = -0.5f * Math::Tan(MyFov) * MyShaderData.Camera_NearPlane;
auto left = -right;
auto top = aspect * right;
auto bottom = -top;
auto shift_X = (right - left) / static_cast<float>(w);
auto shift_Y = (top - bottom) / static_cast<float>(h);
auto frustum = Math::Frustum(left + shift_X * static_cast<float>(x),
left + shift_X * static_cast<float>(x + 1),
bottom + shift_Y * static_cast<float>(y),
bottom + shift_Y * static_cast<float>(y + 1),
MyShaderData.Camera_NearPlane,
MyShaderData.Camera_FarPlane);
where Math::Frustum is:
template<class T>
Matrix4x4<T> Frustum(T left, T right, T bottom, T top, T nearPlane, T farPlane)
{
Matrix4x4<T> r(InitialiseAs::InitialiseZero);
r.m11 = (static_cast<T>(2) * nearPlane) / (right - left);
r.m22 = (static_cast<T>(2) * nearPlane) / (top - bottom);
r.m31 = (right + left) / (right - left);
r.m32 = (top + bottom) / (top - bottom);
r.m33 = -(farPlane + nearPlane) / (farPlane - nearPlane);
r.m34 = static_cast<T>(-1);
r.m43 = -(static_cast<T>(2) * farPlane * nearPlane) / (farPlane - nearPlane);
return r;
}
For completeness, my Matrix4x4 layout is:
struct
{
T m11, m12, m13, m14;
T m21, m22, m23, m24;
T m31, m32, m33, m34;
T m41, m42, m43, m44;
};
Can anyone spot my error?
Edit:
So derhass explained it to me - a much easier way of doing things is to simply scale and translate the projection matrix. For testing I scaled up by 2x and modified the translation matrix as follows (changing the translation for each tile):
auto scale = Math::Scale(2.f, 2.f, 1.f);
auto translate = Math::Translate(0.5f, 0.5f, 0.f);
auto projection = Math::Perspective(MyFov,
static_cast<float>(MyWindowWidth) / static_cast<float>(MyWindowHeight),
MyShaderData.Camera_NearPlane,
MyShaderData.Camera_FarPlane);
MyShaderData.Camera_Projection = scale * translate * projection;
The resulting image is below (stitching 4 together) - the discontinuities in the image are caused by the post processing I think, so that's another issue I might have to deal with at some point.
This isn't a real answer to the question, but it might be a useful alternative approach to what you are trying to solve here. In my opinion, datenwolf's solution in his answer to the stackoverflow question you are referring to is more complicated than it needs to be, so I'm presenting my alternative here.
Foreword: I assume standard OpenGL matrix conventions, so that the vertex transformation with matrix M is done as v' = M * v (like the fixed-function pipeline did).
When a scene is rendered with some projection matrix P, you can extract any axis-aligned sub-rectangle of said scene by applying a scale and translation operation after the projection matrix is applied.
The key point is that the viewing volume is defined as the [-1,1]^3 cube in NDC space. Clip space (which is what P transforms the data into) is just the homogeneous representation of that volume. As the typical 4x4 transformation matrices all work in homogeneous space, we don't really need to care about w at all and can simply define the transformations as if we were in NDC space.
Since you only need some 2D tiling, z should be left as-is, and only some scale and translation in x and y is required. When composing transformations A and B into a single Matrix C as C=A*B, following the aforementioned conventions this results in B being applied first, and A last (since C*v == A*B*v == A*(B*v)). So to modify the result after projection, we have to pre-multiply some transformations to P and we are done:
P'=S(sx,sy,1) * T(tx,ty,0) * P
The construction of P' will work with any valid projection matrix P, no matter if it is a perspective or ortho transform. In the ortho case, what this does is quite clear. In the perspective case, this actually modifies both the field of view and also shifts the frustum to an asymmetric one.
When you want to tile the image into a grid of m times n segments, it is clear that sx=m and sy=n. As I used the S * T order (by choice), T is applied before the scale, so for each tile, (tx,ty) is just the vector moving the center of the tile to the new center (which will be the origin). As NDC space is 2 units wide and tall, for a tile x,y the transformation is
tx = - (-1 + 2/(2*m) + (2/m) * x)
ty = - (-1 + 2/(2*n) + (2/n) * y)
//      ^    ^         ^
//      |    |         |
//      |    |         +- size of each tile in NDC space
//      |    |
//      |    +- half the size (as the center offset)
//      |
//      +- left/bottom border of NDC space
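For illustration, a minimal C++ sketch of P' = S(sx,sy,1) * T(tx,ty,0) * P using GLM (the function name and grid parameters are mine; any valid projection matrix P works):

// Sketch: post-projection scale and translate for tile (x, y) of an m-by-n grid.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 tileProjection(const glm::mat4& P, int m, int n, int x, int y) {
    // Center of tile (x, y) in NDC, negated to move it to the origin.
    const float tx = -(-1.0f + 1.0f / m + (2.0f / m) * x);
    const float ty = -(-1.0f + 1.0f / n + (2.0f / n) * y);
    const glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(float(m), float(n), 1.0f));
    const glm::mat4 T = glm::translate(glm::mat4(1.0f), glm::vec3(tx, ty, 0.0f));
    return S * T * P;  // T applied to the clip-space result first, then S
}

For the 2x2 case in the edit above, tileProjection(projection, 2, 2, x, y) reproduces the scale * translate * projection composition.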
I'm essentially trying to mimic the way the camera rotates in Maya. The arcball in Maya is always aligned with the y-axis: no matter where the up-vector is pointing, it stays registered with its up-vector along the y-axis.
I've been able to implement an arcball in OpenGL using C++ and Qt, but I can't figure out how to keep its up-vector aligned. I've been able to keep it aligned at times with the code below:
void ArcCamera::setPos (Vector3 np)
{
Vector3 up(0, 1, 0);
Position = np;
ViewDir = (ViewPoint - Position); ViewDir.normalize();
RightVector = ViewDir ^ up; RightVector.normalize(); // '^' is the overloaded cross product
UpVector = RightVector ^ ViewDir; UpVector.normalize();
}
This works up until the position is at 90 degrees; then the right vector flips and everything is inverted.
So instead I've been maintaining the total rotation (as a quaternion) and rotating the original vectors (up, right, pos) by it. This works best to keep everything coherent, but now I simply can't align the up-vector to the y-axis. Below is the function for the rotation.
void CCamera::setRot (QQuaternion q)
{
tot = tot * q;
Position = tot.rotatedVector(PositionOriginal);
UpVector = tot.rotatedVector(UpVectorOriginal);
UpVector.normalize();
RightVector = tot.rotatedVector(RightVectorOriginal);
RightVector.normalize();
}
The QQuaternion q is generated from the axis-angle pair derived from the mouse drag. I'm confident this is done correctly. The rotation itself is fine; it just doesn't keep the orientation aligned.
I've noticed that in my chosen implementation, dragging in the corners produces a rotation around my view direction, and I can always realign the up-vector to straighten it out to the world's y-axis direction. So if I could figure out how much to roll, I could probably do two rotations each time to make sure it's all straight. However, I'm not sure how to go about this.
The reason this isn't working is because Maya's camera manipulation in the viewport does not use an arcball interface. What you want to do is Maya's tumble command. The best resource I've found for explaining this is this document from Professor Orr's Computer Graphics class.
Moving the mouse left and right corresponds to the azimuth angle, and specifies a rotation around the world space Y axis. Moving the mouse up and down corresponds to the elevation angle, and specifies a rotation around the view space X axis. The goal is to generate the new world-to-view matrix, then extract the new camera orientation and eye position from that matrix, based on however you've parameterized your camera.
Start with the current world-to-view matrix. Next, we need to define the pivot point in world space. Any pivot point will work to begin with, and it can be simplest to use the world origin.
Recall that pure rotation matrices generate rotations centered around the origin. This means that to rotate around an arbitrary pivot point, you first translate to the origin, perform the rotation, and translate back. Remember also that transformation composition happens from right to left, so the negative translation to get to the origin goes on the far right:
translate(pivotPosition) * rotate(angleX, angleY, angleZ) * translate(-pivotPosition)
We can use this to calculate the azimuth rotation component, which is a rotation around the world Y axis:
azimuthRotation = translate(pivotPosition) * rotateY(angleY) * translate(-pivotPosition)
We have to do a little additional work for the elevation rotation component, because it happens in view space, around the view space X axis:
elevationRotation = translate(worldToViewMatrix * pivotPosition) * rotateX(angleX) * translate(worldToViewMatrix * -pivotPosition)
We can then get the new view matrix with:
newWorldToViewMatrix = elevationRotation * worldToViewMatrix * azimuthRotation
Now that we have the new worldToView matrix, we're left with having to extract the new world space position and orientation from the view matrix. To do this, we want the viewToWorld matrix, which is the inverse of the worldToView matrix.
newOrientation = transpose(mat3(newWorldToViewMatrix))
newPosition = -((newOrientation * newWorldToViewMatrix).column(3))
At this point, we have the elements separated. If your camera is parameterized so that you're only storing a quaternion for your orientation, you just need to do the rotation matrix -> quaternion conversion. Of course, Maya is going to convert to Euler angles for display in the channel box, which will be dependent on the camera's rotation order (note that the math for tumbling doesn't change when the rotation order changes, just the way that the rotation matrix -> Euler angles conversion is done).
Here's a sample implementation in Python:
#!/usr/bin/env python
import numpy as np
from math import *
def translate(amount):
'Make a translation matrix, to move by `amount`'
t = np.matrix(np.eye(4))
t[3] = amount.T
t[3, 3] = 1
return t.T
def rotateX(amount):
'Make a rotation matrix, that rotates around the X axis by `amount` rads'
c = cos(amount)
s = sin(amount)
return np.matrix([
[1, 0, 0, 0],
[0, c,-s, 0],
[0, s, c, 0],
[0, 0, 0, 1],
])
def rotateY(amount):
'Make a rotation matrix, that rotates around the Y axis by `amount` rads'
c = cos(amount)
s = sin(amount)
return np.matrix([
[c, 0, s, 0],
[0, 1, 0, 0],
[-s, 0, c, 0],
[0, 0, 0, 1],
])
def rotateZ(amount):
'Make a rotation matrix, that rotates around the Z axis by `amount` rads'
c = cos(amount)
s = sin(amount)
return np.matrix([
[c,-s, 0, 0],
[s, c, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
])
def rotate(x, y, z, pivot):
'Make a XYZ rotation matrix, with `pivot` as the center of the rotation'
m = rotateX(x) * rotateY(y) * rotateZ(z)
I = np.matrix(np.eye(4))
t = (I-m) * pivot
m[0, 3] = t[0, 0]
m[1, 3] = t[1, 0]
m[2, 3] = t[2, 0]
return m
def eulerAnglesZYX(matrix):
'Extract the Euler angles from an ZYX rotation matrix'
x = atan2(-matrix[1, 2], matrix[2, 2])
cy = sqrt(1 - matrix[0, 2]**2)
y = atan2(matrix[0, 2], cy)
sx = sin(x)
cx = cos(x)
sz = cx * matrix[1, 0] + sx * matrix[2, 0]
cz = cx * matrix[1, 1] + sx * matrix[2, 1]
z = atan2(sz, cz)
return np.array((x, y, z),)
def eulerAnglesXYZ(matrix):
'Extract the Euler angles from an XYZ rotation matrix'
z = atan2(matrix[1, 0], matrix[0, 0])
cy = sqrt(1 - matrix[2, 0]**2)
y = atan2(-matrix[2, 0], cy)
sz = sin(z)
cz = cos(z)
sx = sz * matrix[0, 2] - cz * matrix[1, 2]
cx = cz * matrix[1, 1] - sz * matrix[0, 1]
x = atan2(sx, cx)
return np.array((x, y, z),)
class Camera(object):
def __init__(self, worldPos, rx, ry, rz, coi):
# Initialize the camera orientation. In this case the original
# orientation is built from XYZ Euler angles. orientation is the top
# 3x3 XYZ rotation matrix for the view-to-world matrix, and can more
# easily be thought of as the world space orientation.
self.orientation = \
(rotateZ(rz) * rotateY(ry) * rotateX(rx))
# position is a point in world space for the camera.
self.position = worldPos
# Construct the world-to-view matrix, which is the inverse of the
# view-to-world matrix.
self.view = self.orientation.T * translate(-self.position)
# coi is the "center of interest". It defines a point that is coi
# units in front of the camera, which is the pivot for the tumble
# operation.
self.coi = coi
def tumble(self, azimuth, elevation):
'''Tumble the camera around the center of interest.
Azimuth is the number of radians to rotate around the world-space Y axis.
Elevation is the number of radians to rotate around the view-space X axis.
'''
# Find the world space pivot point. This is the view position in world
# space minus the view direction vector scaled by the center of
# interest distance.
pivotPos = self.position - (self.coi * self.orientation.T[2]).T
# Construct the azimuth and elevation transformation matrices
azimuthMatrix = rotate(0, -azimuth, 0, pivotPos)
elevationMatrix = rotate(elevation, 0, 0, self.view * pivotPos)
# Get the new view matrix
self.view = elevationMatrix * self.view * azimuthMatrix
# Extract the orientation from the new view matrix
self.orientation = np.matrix(self.view).T
self.orientation.T[3] = [0, 0, 0, 1]
# Now extract the new view position
negEye = self.orientation * self.view
self.position = -(negEye.T[3]).T
self.position[3, 0] = 1
np.set_printoptions(precision=3)
pos = np.matrix([[5.321, 5.866, 4.383, 1]]).T
orientation = radians(-60), radians(40), 0
coi = 1
camera = Camera(pos, *orientation, coi=coi)
print('Initial attributes:')
print(np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3))
print(np.round(camera.position, 3))
print('Attributes after tumbling:')
camera.tumble(azimuth=radians(-40), elevation=radians(-60))
print(np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3))
print(np.round(camera.position, 3))
Keep track of your view and right vectors from the beginning, and update them with the rotation matrix. Then calculate your up vector from their cross product.
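A minimal sketch of that suggestion in C++ with GLM (the struct and the starting vectors are assumptions, not from the code above):

// Sketch: accumulate the rotation, re-rotate the original view and right
// vectors each time, and derive up from their cross product.
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct CameraBasis {
    glm::vec3 viewDir0{0.0f, 0.0f, -1.0f};   // original view direction
    glm::vec3 right0{1.0f, 0.0f, 0.0f};      // original right vector
    glm::quat total{1.0f, 0.0f, 0.0f, 0.0f}; // accumulated rotation (identity)

    glm::vec3 viewDir, right, up;

    void rotate(const glm::quat& q) {
        total   = total * q;
        viewDir = glm::normalize(total * viewDir0);
        right   = glm::normalize(total * right0);
        up      = glm::cross(right, viewDir); // derived each frame, never drifts
    }
};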