I am trying to understand the pinhole camera model and the geometry behind some computer vision and camera calibration stuff that I am looking at.
So, if I understand correctly, the pinhole camera model maps 3D real world coordinates to pixel coordinates. So the model looks like:
y = K [R|T]x
Here y is the pixel coordinates in homogeneous form, K is the intrinsic matrix, [R|T] is the extrinsic transformation matrix, and x is the 3D world coordinates, also in homogeneous form.
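For concreteness, here is a minimal numpy sketch of that forward mapping; the intrinsic matrix K, the pose [R|T] and the world point below are made-up values, just to show the shapes involved:

import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point (cx, cy).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: identity rotation, camera shifted 5 units along Z.
R = np.eye(3)
T = np.array([[0.0], [0.0], [5.0]])
RT = np.hstack([R, T])                      # the 3x4 matrix [R|T]

x = np.array([0.5, -0.2, 1.0, 1.0])         # homogeneous 3D world point

y = K @ RT @ x                              # homogeneous pixel coordinates
u, v = y[0] / y[2], y[1] / y[2]             # divide by the last component to get pixels
print(u, v)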
Now, I am looking at a presentation which says
project the center of the focus region onto the ground plane using [R|T]
Now, the center of the focus region is just taken to be the center of the image. I am not sure how I can estimate the ground plane. Assuming the point to be projected is in input space, should the projection be computed by inverting the [R|T] matrix and multiplying that point by the inverted matrix?
EDIT
Source here on page 29: http://romilbhardwaj.github.io/static/BuildSys_v1.pdf
I have a geometry with its vertices in Cartesian coordinates. These Cartesian coordinates are ECEF (Earth-centered, Earth-fixed) coordinates. This geometry is actually present on an ellipsoidal model of the Earth using WGS84 coordinates. The Cartesian coordinates were actually obtained by converting the set of latitudes and longitudes along which the geometries lie, but I no longer have access to them. What I have is an axis-aligned bounding box with xmax, ymax, zmax and xmin, ymin, zmin obtained by parsing the Cartesian coordinates. (There is obviously no Cartesian point of the geometry at xmax, ymax, zmax or xmin, ymin, zmin; the bounding box is just a cuboid enclosing the geometry.)
What I want to do is calculate the camera distance in an overview mode such that this geometry's bounding box perfectly fits the camera frustum.
I am not very clear on the approach to take here. A method like using a local-to-world matrix comes to mind, but it's not very clear.
@Specktre I referred to your suggestions on shifting points in 3D and that led me to another, improved solution, nevertheless not perfect.
Compute a matrix that can transform from ECEF to ENU. Refer to this: http://www.navipedia.net/index.php/Transformations_between_ECEF_and_ENU_coordinates
Rotate all eight corners of my original bounding box using this matrix.
Compute a new bounding box by finding the min and max of x,y,z of these rotated points
Compute the distance:
cameraDistance1 = ((newbb.ymax - newbb.ymin)/2)/tan(fov/2)
cameraDistance2 = ((newbb.xmax - newbb.xmin)/2)/(tan(fov/2)*aspectRatio)
cameraDistance = max(cameraDistance1, cameraDistance2)
This time I had to use the aspect ratio along x, as I had previously expected, since in my application the fov is along y. Although this works almost accurately, there is still a small bug, I guess. I am not very sure if it is a good idea to generate a new bounding box. Maybe it is more accurate to identify two points, point1(xmax, ymin, zmax) and point2(xmax, ymax, zmax), in the original bounding box, find their values after multiplying with the matrix, and then do (point2 - point1).length(). Similarly for y. Would that be more accurate?
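To make the steps above concrete, here is a rough numpy sketch of the whole procedure; the function name is mine, and it assumes a symmetric frustum with a vertical fov given in radians:

import numpy as np

def camera_distance(corners_ecef, R_ecef_to_enu, fov_y, aspect_ratio):
    # Rotate all eight corners of the original bounding box into ENU.
    rotated = corners_ecef @ R_ecef_to_enu.T            # corners_ecef is (8, 3)
    mins, maxs = rotated.min(axis=0), rotated.max(axis=0)

    half_h = (maxs[1] - mins[1]) / 2.0                  # half extent along y
    half_w = (maxs[0] - mins[0]) / 2.0                  # half extent along x

    d1 = half_h / np.tan(fov_y / 2.0)                   # distance to fit vertically
    d2 = half_w / (np.tan(fov_y / 2.0) * aspect_ratio)  # distance to fit horizontally
    return max(d1, d2)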
transform matrix
The first thing to understand is that a transform matrix represents a coordinate system. Look here: Transform matrix anatomy for another example.
In standard OpenGL notation, if you use the direct matrix then you are converting from the matrix's local coordinate space (LCS) to the world global coordinate space (GCS). If you use the inverse matrix then you are converting coordinates from GCS to LCS.
camera matrix
The camera matrix converts to camera space, so you need the inverse matrix. You get the camera matrix like this:
camera=inverse(camera_space_matrix)
Now, for info on how to construct your camera_space_matrix so it fits the bounding box, look here:
Frustum distance computation
So compute the midpoint of the top rectangle of your box, and compute the camera distance as the max of the distances computed from all vertices of the box, so:
camera position = midpoint + distance*midpoint_normal
The orientation depends on your projection matrix. If you use gluPerspective then you are viewing along -Z or +Z according to the selected glDepthFunc. So set the Z axis of the matrix to the normal; the Y and X vectors can be aligned to North/South and East/West, so for example:
Y = Z x (1,0,0)
X = Z x Y
Now put the position and the axis vectors X, Y, Z inside the matrix, compute the inverse matrix, and that is it.
[Notes]
Do not forget that the FOV can have different angles for the X and Y axes (aspect ratio).
The normal is just midpoint - Earth center, which is (0,0,0), so the normal is also the midpoint. Just normalize it to length 1.0.
For all computations use the Cartesian world GCS (global coordinate system).
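Putting the recipe above into a rough numpy sketch (the names are mine; it assumes the axis vectors go into the matrix columns, as in standard OpenGL notation, and that the normal is not parallel to the (1,0,0) reference direction):

import numpy as np

def camera_matrix(midpoint, distance):
    normal = midpoint / np.linalg.norm(midpoint)    # Earth center is (0,0,0)
    position = midpoint + distance * normal         # camera position = midpoint + distance*normal

    z = normal                                      # Z axis = surface normal
    y = np.cross(z, np.array([1.0, 0.0, 0.0]))      # Y = Z x (1,0,0)
    y = y / np.linalg.norm(y)                       # degenerate if normal is parallel to (1,0,0)
    x = np.cross(z, y)                              # X = Z x Y

    m = np.eye(4)                                   # camera_space_matrix (LCS -> GCS)
    m[:3, 0] = x
    m[:3, 1] = y
    m[:3, 2] = z
    m[:3, 3] = position
    return np.linalg.inv(m)                         # camera = inverse(camera_space_matrix)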
I am trying to create a dataset of images of objects at different poses, where each image is annotated with camera pose (or object pose).
Say, for example, I have a world coordinate system, I place the object of interest at the origin, and I place the camera at a known position (x,y,z) facing the origin. Given this information, how can I calculate the pose (rotation matrix) for the camera or for the object?
I had one idea, which was to have a reference coordinate, i.e. (0,0,z'), where I can define the rotation of the object, i.e. its tilt, pitch and yaw. Then I can calculate the rotation between (0,0,z') and (x,y,z) to give me a rotation matrix. The problem is, how do I now combine the two rotation matrices?
BTW, I know the world position of the camera as I am rendering these with OpenGL from a CAD model as opposed to physically moving a camera around.
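For reference, one common way to get such a rotation is a look-at construction. This is only a sketch under the assumption of a computer-vision style camera frame (X right, Y down, Z forward) and an arbitrary world up vector; it is not necessarily what the answer below uses:

import numpy as np

def look_at_rotation(camera_pos, target=(0.0, 0.0, 0.0), up=(0.0, 0.0, 1.0)):
    # World-to-camera rotation for a camera at camera_pos looking at target.
    camera_pos, target, up = map(np.asarray, (camera_pos, target, up))
    forward = target - camera_pos                  # camera Z axis points at the target
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)                  # camera X axis
    right = right / np.linalg.norm(right)          # degenerate if forward is parallel to up
    down = np.cross(forward, right)                # camera Y axis, completes the basis
    return np.vstack([right, down, forward])       # rows are the camera axes in world frame

R = look_at_rotation((3.0, 2.0, 4.0))              # camera at (x,y,z), facing the origin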
The homography matrix maps homogeneous screen coordinates (i,j) to homogeneous world coordinates (x,y,z).
Homogeneous coordinates are normal coordinates with a 1 appended. So (3,4) in screen coordinates is (3,4,1) as homogeneous screen coordinates.
If you have a set of homogeneous screen coordinates, S, and their associated homogeneous world locations, W, the homography matrix H satisfies
S * H = transpose(W)
So it boils down to finding several features whose world coordinates you know and whose i,j positions you can also identify in screen coordinates, then fitting a "best fit" homography matrix (OpenCV has a function, findHomography).
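For example, with OpenCV's Python bindings the fit looks roughly like this; the point correspondences are invented, and the world points are assumed to lie on a single plane (e.g. the ground), since findHomography fits a 3x3 plane-to-plane mapping:

import numpy as np
import cv2

# Hypothetical correspondences: pixel (i, j) positions and their world-plane locations.
screen_pts = np.array([[100, 120], [480, 130], [470, 360], [110, 350]], dtype=np.float32)
world_pts  = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.5], [0.0, 1.5]], dtype=np.float32)

H, mask = cv2.findHomography(screen_pts, world_pts)

# Map a new screen point onto the world plane: append 1, multiply, divide by w.
p = H @ np.array([300.0, 240.0, 1.0])
print(p[:2] / p[2])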
Whilst knowing the camera's xyz provides helpful info, it's not enough to fully constrain the equation and you will have to generate more screen-world pairs anyway. Thus I don't think it's worth your time integrating the camera's position into the mix.
I have done a similar experiment here: http://edinburghhacklab.com/2012/05/optical-localization-to-0-1mm-no-problemo/
As I am learning OpenGL I often stumble upon so-called eye space coordinates.
If I am right, you typically have three matrices: the model matrix, the view matrix and the projection matrix. Though I am not entirely sure how the mathematics behind that works, I do know that they convert coordinates to world space, view space and screen space.
But where is the eye space, and which matrices do I need to convert something to eye space?
Perhaps the following illustration showing the relationship between the various spaces will help:
Whether you're using the fixed-function pipeline (you are if you call glMatrixMode(), for example) or shaders, the operations are identical - it's just a matter of whether you code them directly in a shader, or the OpenGL pipeline aids in your work.
While there's distaste in discussing things in terms of the fixed-function pipeline, it makes the conversation simpler, so I'll start there.
In legacy OpenGL (i.e., versions before OpenGL 3.1, or using compatibility profiles), two matrix stacks are defined: model-view and projection, and when an application starts, the matrix at the top of each stack is an identity matrix (1.0 on the diagonal, 0.0 for all other elements). If you draw coordinates in that space, you're effectively rendering in normalized device coordinates (NDCs), which clips out any vertices outside of the range [-1,1] in X, Y, and Z. The viewport transform (as set by calling glViewport()) is what maps NDCs into window coordinates (well, viewport coordinates, really, but most often the viewport and the window are the same size and location), and the depth value to the depth range (which is [0,1] by default).
Now, in most applications, the first transformation that's specified is the projection transform, which comes in two varieties: orthographic and perspective projections. An orthographic projection preserves parallel lines and is usually used in scientific and engineering applications, since it doesn't distort the relative lengths of line segments. In legacy OpenGL, orthographic projections are specified by either glOrtho or gluOrtho2D. More commonly used are perspective transforms, which mimic how the eye works (i.e., objects far from the eye are smaller than those close), and are specified by either glFrustum or gluPerspective. A perspective projection defines a viewing frustum, which is a truncated pyramid anchored at the eye's location and specified in eye coordinates. In eye coordinates, the "eye" is located at the origin, looking down the -Z axis. Your near and far clipping planes are specified as distances along the -Z axis. If you render in eye coordinates, any geometry specified between the near and far clipping planes and inside of the viewing frustum will not be culled, and will be transformed to appear in the viewport. Here's a diagram of a perspective projection and its relationship to the image plane.
The eye is located at the apex of the viewing frustum.
The last transformation to discuss is the model-view transform, which is responsible for moving coordinate systems (and not objects; more on that in a moment) such that they are well positioned relative to the eye and the viewing frustum. Common modeling transforms are translations, scales, rotations, and shears (of which there's no native support in OpenGL).
Generally speaking, 3D models are modeled around a local coordinate system (e.g., specifying a sphere's coordinates with the origin at the center). Modeling transforms are used to move the "current" coordinate system to a new location so that when you render your locally-modeled object, it's positioned in the right place.
There's no mathematical difference between a modeling transform and a viewing transform. It's just that modeling transforms are usually applied to specific models and are controlled by glPushMatrix() and glPopMatrix() operations, while a viewing transformation is usually specified first and affects all of the subsequent modeling operations.
Now, if you're doing this in modern OpenGL (core profile versions 3.1 and forward), you have to do all these operations logically yourself (you might only specify one transform, folding both the model-view and projection transformations into a single matrix multiply). Matrices are usually specified as shader uniforms. There are no matrix stacks and no separation of model-view and projection transformations, and you need to get your math correct to emulate the pipeline. (BTW, the perspective division and viewport transform steps are performed by OpenGL after the completion of your vertex shader - you don't need to do that math; you can, and it doesn't hurt anything, unless you fail to set w to 1.0 in your gl_Position vertex shader output.)
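To make the "emulate the pipeline" part concrete, here is a rough numpy sketch of the math you would fold into a single uniform matrix; the perspective function follows the gluPerspective formula, and all the values are arbitrary:

import numpy as np

def perspective(fovy_deg, aspect, near, far):
    # Same matrix gluPerspective would build.
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0,  0.0,                          0.0],
        [0.0,        f,    0.0,                          0.0],
        [0.0,        0.0,  (far + near) / (near - far),  2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                          0.0],
    ])

model_view = np.eye(4)
model_view[2, 3] = -5.0                       # push the scene 5 units down -Z (into eye space)

mvp = perspective(60.0, 4.0 / 3.0, 0.1, 100.0) @ model_view   # the single combined matrix

vertex = np.array([0.5, 0.5, 0.0, 1.0])       # object-space position with w = 1.0
clip = mvp @ vertex                           # what the vertex shader writes to gl_Position
ndc = clip[:3] / clip[3]                      # perspective divide (OpenGL does this for you)
print(ndc)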
Eye space, view space, and camera space are all synonyms for the same thing: the world relative to the camera.
In a rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix. Finally, the projected scene is mapped to the viewport.
The projection, view and model matrix interact together to present the objects (meshes) of a scene on the viewport.
The model matrix defines the position, orientation and scale of a single object (mesh) in the world space of the scene.
The view matrix defines the position and viewing direction of the observer (viewer) within the scene.
The projection matrix defines the area (volume) with respect to the observer (viewer) which is projected onto the viewport.
Coordinate Systems:
Model coordinates (Object coordinates)
The model space is the coordinate system that is used to define or model a mesh. The vertex coordinates are defined in model space.
World coordinates
The world space is the coordinate system of the scene. Different models (objects) can be placed multiple times in the world space to form a scene together.
The model matrix defines the location, orientation and the relative size of a model (object, mesh) in the scene. The model matrix transforms the vertex positions of a single mesh to world space for a single specific positioning. There are different model matrices, one for each combination of a model (object) and a location of the object in the world space.
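As a small illustration (all values are arbitrary), a model matrix is often composed from a translation, a rotation and a scale, and applied to every vertex of the mesh:

import numpy as np

angle = np.radians(45.0)
c, s = np.cos(angle), np.sin(angle)

translate = np.eye(4)
translate[:3, 3] = [2.0, 0.0, -3.0]            # place the object in the scene
rotate = np.array([[  c, 0.0,   s, 0.0],       # rotate 45 degrees about the Y axis
                   [0.0, 1.0, 0.0, 0.0],
                   [ -s, 0.0,   c, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
scale = np.diag([0.5, 0.5, 0.5, 1.0])          # shrink the object to half size

model = translate @ rotate @ scale             # one model matrix for this placement

vertex_model = np.array([1.0, 0.0, 0.0, 1.0])  # a vertex in model (object) space
vertex_world = model @ vertex_model            # the same vertex in world space
print(vertex_world)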
View space (Eye coordinates)
The view space is the local system which is defined by the point of view onto the scene.
The position of the view, the line of sight and the upwards direction of the view define a coordinate system relative to the world coordinate system. The objects of a scene have to be drawn in relation to the view coordinate system to be "seen" from the viewing position. The inverse of the matrix of the view coordinate system is the view matrix. This matrix transforms from world coordinates to view coordinates.
In general, world coordinates and view coordinates are Cartesian coordinates.
The view coordinate system describes the direction and position from which the scene is looked at. The view matrix transforms from the world space to the view (eye) space.
If the coordinate system of the view space is a right-handed system, where the X-axis points to the right and the Y-axis points up, then the Z-axis points out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
Clip space coordinates are Homogeneous coordinates. In clip space the clipping of the scene is performed.
A point is in clip space if the x, y and z components are in the range defined by the inverted w component and the w component of the homogeneous coordinates of the point:
-w <= x, y, z <= w.
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. The projection matrix transforms from view space to clip space. The coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
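A tiny numeric check of that rule, with an invented clip coordinate:

import numpy as np

clip = np.array([0.5, -1.2, 2.0, 2.5])       # hypothetical clip-space position (x, y, z, w)

inside = bool(np.all((-clip[3] <= clip[:3]) & (clip[:3] <= clip[3])))
print(inside)                                 # True: -2.5 <= 0.5, -1.2, 2.0 <= 2.5

ndc = clip[:3] / clip[3]                      # perspective divide -> normalized device coords
print(ndc)                                    # every component now lies in [-1, 1]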
In an orthographic projection, this area (volume) is defined by 6 distances (left, right, bottom, top, near and far) from the viewer's position.
If the left, bottom and near distances are negative and the right, top and far distances are positive (as in normalized device space), this can be imagined as a box around the viewer.
All the objects (meshes) which are in the space (volume) are "visible" on the viewport. All the objects (meshes) which are out (or partly out) of this space are clipped at the borders of the volume.
This means that with an orthographic projection, objects "behind" the viewer may be "visible". This may seem unnatural, but this is how orthographic projection works.
In a perspective projection, the viewing volume is a frustum (a truncated pyramid), where the top of the pyramid is the viewing position.
The direction of view (line of sight) and the near and far distances define the planes which truncate the pyramid to a frustum (the direction of view is the normal vector of these planes).
The left, right, bottom and top distances define the distance from the intersection of the line of sight with the near plane to the side faces of the frustum (on the near plane).
This causes the scene to look as if it were seen from a pinhole camera.
One of the most common mistakes, when an object is not visible on the viewport (screen is all "black"), is that the mesh is not within the view volume which is defined by the projection and view matrix.
Normalized device coordinates
The normalized device space is a cube, with the left, bottom, near corner at (-1, -1, -1) and the right, top, far corner at (1, 1, 1).
The normalized device coordinates are the clip space coordinates divided by the w component of the clip coordinates. This is called the perspective divide.
Window coordinates (Screen coordinates)
The window coordinates are the coordinates of the viewport rectangle. The window coordinates are decisive for the rasterization process.
The normalized device coordinates are linearly mapped to the viewport rectangle (Window Coordinates / Screen Coordinates) and to the depth for the depth buffer.
The viewport rectangle is defined by glViewport. The depth range is set by glDepthRange and is by default [0, 1].
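A sketch of that final linear mapping, following the glViewport / glDepthRange conventions (the viewport values are made up):

import numpy as np

def ndc_to_window(ndc, vp_x, vp_y, vp_w, vp_h, depth_near=0.0, depth_far=1.0):
    # Map normalized device coordinates to window coordinates plus depth.
    x = vp_x + (ndc[0] + 1.0) * 0.5 * vp_w
    y = vp_y + (ndc[1] + 1.0) * 0.5 * vp_h
    z = depth_near + (ndc[2] + 1.0) * 0.5 * (depth_far - depth_near)
    return np.array([x, y, z])

print(ndc_to_window(np.array([0.0, 0.0, 0.5]), 0, 0, 640, 480))   # center of a 640x480 viewport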
http://www.cs.uaf.edu/2007/spring/cs481/lecture/01_23_matrices.html
I have just finished reading this, but I have 2 questions about multiplications.
gl_Position = gl_ProjectionMatrix*gl_ModelViewMatrix*gl_Vertex
This gives the final screen coordinates (x,y) when I multiply them all. What if I only multiply gl_ModelViewMatrix*gl_Vertex or gl_ProjectionMatrix*gl_Vertex? And what does gl_Vertex mean alone?
gl_Vertex is the vertex coordinate in world space.
Vertices and the eye may be arbitrarily placed and oriented in world space.
After multiplication with the ModelViewMatrix, you get a vertex coordinate in 'eye space', a coordinate system with the eye at 0,0,0. Multiplying by the projection matrix (and doing the homogeneous coordinate division) gives you coordinates in screen space. Those are not pixel coordinates yet, but a normalized coordinate system with 0,0,0 in the center of the screen/window. The viewport transform is last. It maps the image to window (pixel) coordinates.
An explanation is given in:
http://www.srk.fer.hr/~unreal/theredbook/
chapter 3.
What if I only multiply gl_ModelViewMatrix*gl_Vertex or gl_ProjectionMatrix*gl_Vertex?
Then the projection matrix (the 1st case) or the model-view matrix (the 2nd case) will not take effect. If they are identity matrices, you will not see any effect. If they are not, then your view angle or position might be wrong.
And what does gl_Vertex mean alone?
gl_Vertex is the vertex coordinate that is passed to the input of the pipeline.