Would it be possible to represent a 3D camera using only a quaternion? I know that most cameras use an Up vector and a Forward vector to represent their rotation, but couldn't the same rotation be represented as a single quaternion, with the rotation axis being forward and the w component being the amount the camera was rotated away from the Y axis? If there is a way to do this, any resources would be appreciated. Thanks in advance.
In general, no, it's not possible to represent a 3D camera using only a quaternion. This is because a 3D camera not only has an orientation in space, but also a position and a projection. The quaternion only describes an orientation.
If you're actually asking whether the rotation component of the camera object could be represented as a quaternion, then the answer in general is yes. Quaternions can be easily converted into rotation matrices (Convert Quaternion rotation to rotation matrix?), and back, so anywhere a rotation matrix is used, a quaternion could also be used (and converted to a rotation matrix where appropriate).
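For illustration, here is a minimal NumPy sketch of the standard unit-quaternion-to-matrix conversion those links describe (the function name is my own):

    import numpy as np

    def quat_to_rotation_matrix(w, x, y, z):
        # Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix.
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    # Example: a 90-degree rotation about the Y axis
    theta = np.pi / 2
    q = (np.cos(theta / 2), 0.0, np.sin(theta / 2), 0.0)  # (w, x, y, z)
    R = quat_to_rotation_matrix(*q)
    print(np.round(R @ np.array([0.0, 0.0, -1.0]), 6))    # forward (-Z) maps to (-1, 0, 0)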
I've got the following problem:
In 3D there's a vector from the fixed center of a plane to the origin. The plane has an arbitrary orientation about this center, so its normal vector is not necessarily the mentioned vector. Therefore I have to rotate the plane around its fixed center such that the mentioned vector becomes the plane's normal vector.
My first idea was to compute the angle between the vector and the normal vector, but the problem then is how to rotate the plane.
Any ideas?
A plane is a mathematical entity which satisfies the following equation:

    n · (x − a) = 0

where n is the normal, and a is any point on the plane (in this case the center point as above). It makes no sense to "rotate" this equation - if you want the plane to face a certain direction, just make the normal equal to that direction (i.e. the "mentioned" vector).
You later mentioned in the comments that the "plane" is an OpenGL quad, in which case you can use Quaternions to compute the rotation.
This Stackoverflow post tells you how to compute the rotation quaternion from your current normal vector to the "mentioned" vector. This site tells you how to convert a quaternion into a rotation matrix (whose dimensions are 3x3).
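As a rough sketch of what that post computes, the shortest-arc quaternion from one unit vector to another can be built like this in NumPy (assumes both vectors are normalized and not exactly opposite):

    import numpy as np

    def rotation_between(a, b):
        # Shortest-arc unit quaternion (w, x, y, z) rotating unit vector a onto b.
        # Assumes a and b are normalized and not antiparallel.
        v = np.cross(a, b)        # rotation axis (unnormalized)
        w = 1.0 + np.dot(a, b)
        q = np.array([w, v[0], v[1], v[2]])
        return q / np.linalg.norm(q)

    current_normal = np.array([0.0, 0.0, 1.0])       # the quad's current normal
    target = np.array([1.0, 0.0, 0.0])               # the "mentioned" vector, normalized
    print(rotation_between(current_normal, target))  # 90-degree rotation about +Y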
Let's suppose the center point is called q, and that the rotation matrix you obtain has the following form:

    R = [ a b c ]
        [ d e f ]
        [ g h i ]

This can only rotate geometry about the origin. A rotation about a general point requires a 4x4 matrix (what OpenGL uses), which can be constructed as follows: translate q to the origin, rotate, then translate back:

    M = [ 1 0 0 qx ]   [ a b c 0 ]   [ 1 0 0 -qx ]
        [ 0 1 0 qy ] * [ d e f 0 ] * [ 0 1 0 -qy ]
        [ 0 0 1 qz ]   [ g h i 0 ]   [ 0 0 1 -qz ]
        [ 0 0 0 1  ]   [ 0 0 0 1 ]   [ 0 0 0  1  ]
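A minimal NumPy sketch of that construction (the function name is my own):

    import numpy as np

    def rotate_about_point(R3, q):
        # 4x4 matrix that rotates by the 3x3 matrix R3 about the point q:
        # translate q to the origin, apply R3, then translate back.
        q = np.asarray(q, dtype=float)
        T_to_origin = np.eye(4); T_to_origin[:3, 3] = -q
        T_back = np.eye(4);      T_back[:3, 3] = q
        R4 = np.eye(4);          R4[:3, :3] = R3
        return T_back @ R4 @ T_to_origin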
Assuming I have a camera mounted on a rail, I can move it back and forth to take photos of my scene.
Can I assume I have a Rotation Matrix equal to zero?
It depends on the coordinate system you choose. Assuming it's aligned with your camera orientation (e.g. the negative Z-axis points in the viewing direction of the camera and the positive Y-axis points upwards) and you only move your camera without rotating it, then the rotation matrix used to transform between these coordinate systems is the identity matrix.
A zero matrix makes no sense.
If you assume no rotation, then the rotation matrix is a 3x3 identity matrix, not zero.
Also, this may or may not be a good assumption depending on how accurate you want to be. Even if the camera is moving on a rail, there will be some small rotation.
So I was reading this tutorial's "Inverting the Camera Orientation Matrix" section and I don't understand why, when calculating the camera's up direction, I need to multiply the inverse of orientation by the up direction vector, and not just orientation.
I drew the following image to illustrate my understanding of the tutorial I read.
What did I get wrong?
Well, that tutorial explicitly states:
The way we calculate the up direction of the camera is by taking the "directly upwards" unit vector (0,1,0) and "unrotate" it by using the inverse of the camera's orientation matrix. Or, to explain it differently, the up direction is always (0,1,0) after the camera rotation has been applied, so we multiply (0,1,0) by the inverse rotation, which gives us the up direction before the camera rotation was applied.
The up direction which is calculated here is the up direction in world space. In eye space, the up vector is (0,1,0) (by convention; one could define it differently). As the view matrix transforms coordinates from world space to eye space, we need to use its inverse to transform the up vector from eye space to world space. Your image is wrong because it does not correctly relate eye space and world space.
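A small NumPy sketch of that idea, assuming "orientation" is a 3x3 rotation taking world space to eye space (so its inverse is just its transpose):

    import numpy as np

    def camera_up_world(orientation):
        # Pull the eye-space up vector (0, 1, 0) back into world space
        # through the inverse rotation (the transpose, for a pure rotation).
        return orientation.T @ np.array([0.0, 1.0, 0.0])

    # Example: a camera rolled 90 degrees about its viewing (Z) axis
    theta = np.pi / 2
    roll = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
    print(np.round(camera_up_world(roll), 6))  # (1, 0, 0): world +X is the camera's up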
A vector can be rotated and scaled, since it has direction and magnitude. But what does it mean to rotate a point? I thought a point could only be translated. Yet Wikipedia says: "For example the matrix

    R = [ cos θ  -sin θ ]
        [ sin θ   cos θ ]

rotates points in the xy-Cartesian plane counter-clockwise through an angle θ about the origin of the Cartesian coordinate system."
Also what does it mean by "since matrix multiplication has no effect on the zero vector (the coordinates of the origin), rotation matrices can only be used to describe rotations about the origin of the coordinate system."? Does this mean I cannot perform rotation around any point other than the origin?
Absolutely. To rotate about a point other than the origin, you have to create a matrix that translates your vertices from the rotation center to the origin, rotates, then translates back from the origin to the rotation center.
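For the 2D case from the question, a small NumPy sketch of that translate-rotate-translate composition (using 3x3 homogeneous matrices):

    import numpy as np

    def rotate_about(theta, center):
        # 3x3 homogeneous matrix rotating the xy-plane by theta about 'center':
        # translate the center to the origin, rotate, then translate back.
        cx, cy = center
        c, s = np.cos(theta), np.sin(theta)
        T_to = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        T_back = np.array([[1.0, 0.0, cx], [0.0, 1.0, cy], [0.0, 0.0, 1.0]])
        return T_back @ R @ T_to

    # Rotate the point (2, 1) by 90 degrees about (1, 1)
    p = np.array([2.0, 1.0, 1.0])                                # homogeneous coordinates
    print(np.round(rotate_about(np.pi / 2, (1.0, 1.0)) @ p, 6))  # -> (1, 2, 1)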
When describing transformations, Wikipedia and other sites often refer to the effect on "points"; however, this implicitly applies to every point in the coordinate system (with explicit exceptions, like the origin under rotation). These transformations - typically rotating, translating, and scaling - apply to the entire frame of reference and any derived frames of reference. The word "point" is used in the mathematical sense: a choice of coordinates within a coordinate system. It doesn't imply anything about a point in the graphical sense, like a "plotted" or "drawn" point (although graphing a point is just a visualization of this concept, so the distinction is moot).
While it's true that a rotation has no effect on the origin, you are free to translate the origin itself, or equivalently to translate your models relative to the origin. Once you have applied the rotation, you can reverse the translation to restore the original origin.
I am trying to create a dataset of images of objects at different poses, where each image is annotated with camera pose (or object pose).
Say, for example, I have a world coordinate system, I place the object of interest at the origin, and I place the camera at a known position (x,y,z) facing the origin. Given this information, how can I calculate the pose (rotation matrix) of the camera or of the object?
I had one idea: use a reference point, i.e. (0,0,z'), at which I define the rotation of the object (its tilt, pitch, and yaw). Then I can calculate the rotation between (0,0,z') and (x,y,z) to get a rotation matrix. The problem is how to combine the two rotation matrices.
BTW, I know the world position of the camera as I am rendering these with OpenGL from a CAD model as opposed to physically moving a camera around.
The homography matrix maps homogeneous screen coordinates (i,j) to homogeneous world coordinates (x,y,z).
Homogeneous coordinates are normal coordinates with a 1 appended, so (3,4) in screen coordinates becomes (3,4,1) in homogeneous screen coordinates.
If you have a set of homogeneous screen coordinates S and their associated homogeneous world locations W, the 4x4 homography matrix H satisfies

    S * H = transpose(W)
So it boils down to finding several features whose world coordinates you know and whose i,j positions you can identify in screen coordinates, then fitting a "best fit" homography matrix (OpenCV has a function findHomography).
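A minimal sketch of that fitting step with OpenCV, assuming the world points lie on a plane (findHomography handles that planar 2D-to-2D case and returns a 3x3 matrix); the coordinates below are made-up placeholders:

    import cv2
    import numpy as np

    # Matching screen (i, j) and planar world (x, y) points, four or more pairs
    screen_pts = np.array([[100, 120], [400, 118], [395, 310], [105, 305]], dtype=np.float32)
    world_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)

    H, mask = cv2.findHomography(screen_pts, world_pts, cv2.RANSAC)

    # Map a new screen point into world coordinates
    p = np.array([250.0, 210.0, 1.0])   # homogeneous screen coordinates
    w = H @ p
    print(w[:2] / w[2])                 # divide out the homogeneous scale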
Whilst knowing the camera's xyz position provides helpful info, it's not enough to fully constrain the equation, and you will have to generate more screen-world pairs anyway. Thus I don't think it's worth your time integrating the camera's position into the mix.
I have done a similar experiment here: http://edinburghhacklab.com/2012/05/optical-localization-to-0-1mm-no-problemo/