DirectXTK SpriteBatch and Viewport Centering - C++

I am using DirectXTK for C++, and am making heavy use of the sprite batch function.
I am trying to develop an isometric rendering engine. I've already gotten the isometric rendering to work properly, along with offsetting. However, I run into one major problem.
I can't for the life of me figure out how to center the camera on the world's (0,0), or any other coordinate.
After injecting this transformation matrix, generated directly from the camera, into SpriteBatch.Begin()...
Matrix Camera::getTransformMatrix() {
    // Build the camera transform from its world position, rotation, and zoom.
    Matrix Transform = Matrix::CreateTranslation(Vector3(World_Pos.x, World_Pos.y, 0));
    Matrix Rotation = Matrix::CreateRotationZ(0);
    Matrix Scale = Matrix::CreateScale(ZoomFactor, ZoomFactor, 1);
    return Transform * Rotation * Scale;
}
Nothing seems to happen. This is with the Camera's world position set to (0,0) and the sprite's world position set to (0,0).
This is the resulting image.
http://i.imgur.com/Bep7l5a.png
I turned off alpha blending and sprite offsets for debugging reasons.
Can anyone help me please? Also, if anyone can tell me how to flip the y-axis, that would be great as well.

Remember that the default transformation for the screen viewport is still applied if you provide a custom transformation to SpriteBatch. The default transformation makes SpriteBatch work with pixel coordinates starting at the upper-left corner of the screen.
If you want the custom transformation to be used "as is" without concatenating with the final viewport transformation, you can use this special case for the orientation handling:
spriteBatch->SetRotation(DXGI_MODE_ROTATION_UNSPECIFIED);
Be sure to read the wiki.
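For reference, here is a minimal sketch of how the two pieces fit together with the DirectX 11 flavor of DirectXTK; the SpriteSortMode value and the camera instance name are assumptions, and the transform is simply the matrix from the question:
// Sketch only: assumes the Camera class from the question and a DirectX 11 SpriteBatch.
spriteBatch->SetRotation(DXGI_MODE_ROTATION_UNSPECIFIED); // only if the matrix should be used "as is"
spriteBatch->Begin(DirectX::SpriteSortMode_Deferred,
                   nullptr, nullptr, nullptr, nullptr, nullptr,
                   camera.getTransformMatrix());           // custom transform, last parameter of Begin
// ... sprite Draw calls ...
spriteBatch->End();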

I found my answer. For those who run into this problem as well: you'll need to compensate for the fact that the viewport needs to be translated so that (0,0) lands at the center of the screen.
That is to say that the matrix you need is...
View * Translation(width * 0.5, height * 0.5)
where View is the inverse matrix of the product of your world scale, world rotation, and world translation,
such that the product of the final world transformation matrix F and the view matrix V satisfies
I = F * V
where I is the identity matrix.
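A minimal sketch of this construction with DirectXTK SimpleMath (the getViewMatrix name and the screen-size parameters are illustrative additions, not part of the original code):
using namespace DirectX::SimpleMath;

Matrix Camera::getViewMatrix(float screenWidth, float screenHeight) {
    // World transform of the camera: scale (zoom), rotation, translation.
    Matrix world = Matrix::CreateScale(ZoomFactor, ZoomFactor, 1.0f)
                 * Matrix::CreateRotationZ(0.0f)
                 * Matrix::CreateTranslation(World_Pos.x, World_Pos.y, 0.0f);
    Matrix view = world.Invert(); // so that F * V = I
    // Shift so that world (0,0) lands in the middle of the screen.
    return view * Matrix::CreateTranslation(screenWidth * 0.5f, screenHeight * 0.5f, 0.0f);
}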

Related

How do I maintain the relative transformation between 2 objects after changing the transformation of either one, without a scene graph?

Say I have 2 objects, a camera and a cube, both on the XZ plane; the cube has some arbitrary rotation, and the camera is facing the cube.
Now a transformation R is applied to the camera so that it has a new rotation and position.
I want to move the cube in front of the camera using a transformation R1, such that in the view it looks exactly as it did before R was applied, meaning the relative distance, rotation, and scale between the 2 objects remain the same after both R and R1.
The following image gives a gist of the problem.
Assume that there's no scenegraph that we can use.
I've posed the problem mainly in 2D but I'm trying to solve it in 3D, so rotations can have all yaw, pitch and roll, and translations can be anywhere in 3D space.
EDIT:
I forgot to add what I have done so far.
I figured out how to maintain the relative distance between the camera and the cube: I can project the cube's position to get a world-to-screen point, then unproject that screen point with the new camera position to get the new world position.
However, for rotation I have tried the following:
I thought I could apply the same rotation as R in R1. This didn't work; it appears to work if the rotation happens about only one axis, but not if the rotation happens about more than one axis.
I thought I could take the delta rotation between the camera and the cube, apply the camera's rotation to the cube, and then multiply by the delta rotation. This also didn't work.
Let M and V be the model and view matrices before you move the camera, M2 and V2 be the matrices after you move the camera. To be clear: model matrix transforms the coordinates from object local coordinates into world coordinates; a view matrix transforms from world coordinates into clip-space camera coordinates. Consequently V*M*p transforms the position p into clip-space.
For the position on the screen to stay constant, we need V*M*p = V2*M2*p to be true for all p (that's assuming that the FOV doesn't change). Therefore V*M = V2*M2, or
M2 = inverse(V2)*V*M
If you apply the camera transformation on the right (V2 = V*R) then the above expression for M2 can be simplified:
M2 = inverse(R)*M
(that is you apply inverse(R) on the left of the model matrix to compensate).
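A minimal sketch of that formula with GLM (the function name and parameters are illustrative):
#include <glm/glm.hpp>

// Given the old model and view matrices and the new view matrix, return the
// new model matrix that keeps the object visually fixed: M2 = V2^-1 * V * M.
glm::mat4 keepRelativePose(const glm::mat4& M, const glm::mat4& V, const glm::mat4& V2)
{
    return glm::inverse(V2) * V * M;
}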
Alternatively, ask yourself if you really need to keep the object coordinates in the world reference frame. It may be easier not to apply the view matrix when rendering that object at all; that would effectively keep it relative to the camera at all times without any additional tweaks, and it would have better numerical stability too.

OpenGL Stereo View -- How to use GLM Math library for horizontal offset

I'm trying to use the GLM mathematics library to warp the projection matrix such that multiple cameras (e.g. a left eye and a right eye) are viewing the same view, but the camera frustum is distorted.
This is what I am trying to do with my cameras using GLM:
[image: the desired stereo camera setup]
Note: one challenge with my camera is that my OpenGL graphics API only exposes a read-only view matrix and projection matrix. So even once I have the view and projection matrices, I can only move my camera using x, y, z for the eye position and yaw, pitch, roll for the view direction. Beyond that I have standard OpenGL calls, but I am not sure how to shift or warp the projection matrix as shown below.
I think this second picture also accurately shows what I am trying to achieve with OpenGL.
Any tips on how to approach this problem? Do I need to implement a special lookAt function, or is there something built into GLM that can help me compute the right view?
A possible way to do this is:
Assume that X is the position of the "nose" (the half point between the 2 eyes).
Let V be the direction orthogonal to the left eye - right eye line.
The function X + t * V describes the "line of sight" in your last picture.
Take some t > 0 and set P = X + t * V as your look-at point.
Create 2 cameras with centers at the eyes, both with the exact same up direction, then use glm::lookAt to look at P.
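An illustrative sketch of that setup with GLM (all numeric values and the eyeSeparation variable are assumptions):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 X(0.0f, 1.6f, 0.0f);                 // "nose" position between the eyes
glm::vec3 V(0.0f, 0.0f, -1.0f);                // direction orthogonal to the eye line
glm::vec3 up(0.0f, 1.0f, 0.0f);
glm::vec3 right = glm::normalize(glm::cross(V, up));
float eyeSeparation = 0.065f;                  // assumed eye distance
float t = 10.0f;                               // any t > 0

glm::vec3 P = X + t * V;                       // common look-at point
glm::mat4 viewLeft  = glm::lookAt(X - right * (eyeSeparation * 0.5f), P, up);
glm::mat4 viewRight = glm::lookAt(X + right * (eyeSeparation * 0.5f), P, up);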

Implementing arcball rotation axis without projection matrix?

I have a question about implementing the arcball in opengl es, using Android Studio.
After calculating the rotation axis, I should transform the axis back through the rendering pipeline into object space, so that the rotation can be applied in object space.
This part would be written like:
obj_rotateAxis = normalize(vec3(inverse(mat3(camera->projMatrix) * mat3(camera->viewMatrix) * mat3(teapot->worldMatrix)) * rotateAxis));
However, I heard that the correct form should be like:
obj_rotateAxis = normalize(vec3(inverse(mat3(camera->viewMatrix) * mat3(teapot->worldMatrix)) * rotateAxis));
where projMatrix is discarded. Why do we not consider the projection matrix when we implement the arcball, although projection transform is done for the object?
As far as I know, you use the arcball to compute an axis and angle of rotation that you will apply to your object. When you want to rotate an object, you want it to rotate about the origin of the world (world matrix) or about the viewpoint (view matrix).
The projection matrix doesn't represent the actual location of your object; the farther away the object is, the smaller it appears, and you don't want depth to have an effect on your rotation.
So you compute the rotation from the viewpoint or the origin, and then you let the projection matrix do its job at render time.
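Putting the two pieces together, a hedged GLM sketch of what this answer describes (the matrix and axis names are taken from the question; the function itself is illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Map the arcball axis back through view * model only (no projection),
// then rotate the object about the resulting object-space axis.
glm::mat4 applyArcballRotation(const glm::mat4& viewMatrix, const glm::mat4& worldMatrix,
                               const glm::vec3& rotateAxis, float angleRadians)
{
    glm::vec3 objAxis = glm::normalize(
        glm::inverse(glm::mat3(viewMatrix) * glm::mat3(worldMatrix)) * rotateAxis);
    return glm::rotate(worldMatrix, angleRadians, objAxis);
}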

Camera to world transformation and world to camera transformation

I am kind of confused by the camera to world vs. world to camera transformation.
In the OpenGL rendering pipeline, the transformation is from the world to the camera, right?
Basically, a view coordinate frame is constructed at the camera, then the object in the world is first translated relative to the camera and then rotated with the camera coordinate frame. Is this what gluLookAt is performing?
If I want to go from camera view back to world view, how it should be done?
Mathematically, I am thinking of finding the inverses of the translation and rotation matrices, then applying the rotation before the translation, right?
Usually there are several transformations that map world positions to the screen. The following ones are the most common ones:
World transformation: Can be applied to objects in order to realign them relative to other objects.
View transformation: After this transformation the camera is at the origin and looks along the z axis (the negative z direction in OpenGL's convention).
Projection transformation: Performs e.g. perspective transformations to simulate a real camera.
Viewport adaptation: This is basically a scaling and translation that maps the positions from the range [-1, 1] to the viewport, which is usually the screen width and height.
gluLookAt is used to create a view transformation. You can imagine it as follows: Place the camera somewhere in your scene. Now transform the whole scene (with the camera) so that the camera is at the origin, it faces along the z axis (negative z in OpenGL) and the y axis represents the up direction. This is a simple rigid body transformation that can be represented as an arbitrary rotation (with three degrees of freedom) and an arbitrary translation (with another three degrees of freedom). Every rigid body transformation can be split into separate rotations and translations; even the sequence of evaluation can vary, if you choose the correct values. Transformations can be interpreted in different ways. I wrote a blog entry on that topic a while ago. If you're interested, take a look at it. Although it is for DirectX, the maths is pretty much the same for OpenGL. You just have to watch out for transposed matrices.
For the second question: Yes, you are right. You need to find the inverse transformation. This can be easily achieved with the inverse matrix. If you specified the view matrix V as follows:
V = R_xyz * T_xyz
then the inverse transformation V^-1 is
V^-1 = T_xyz^-1 * R_xyz^-1
However, this does not map screen positions to world positions, because there are more transformations (projection and viewport) going on. I hope that answers your questions.
Here is another interesting point. The view matrix is the inverse of the transformation that would align a camera model (at the origin, facing in z direction) at the specified position. This relation is called system transformation vs. model transformation.
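As an illustrative sketch of that relationship in GLM (the camera pose values are placeholders):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// A camera model matrix that places the camera in the world ...
glm::mat4 cameraModel = glm::translate(glm::mat4(1.0f), glm::vec3(3.0f, 2.0f, 5.0f))
                      * glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 1.0f, 0.0f));
// ... and the view matrix is its inverse (world -> camera):
glm::mat4 view = glm::inverse(cameraModel);
// Going from camera view back to world view is the inverse again (camera -> world):
glm::mat4 cameraToWorld = glm::inverse(view);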

OpenGL define vertex position in pixels

I've been writing a 2D basic game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly or can someone point to an article or something that'll explain this. Thanks!
This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway the standard OpenGL pipeline is as follows:
1. The vertex position is transformed from object-space (local to some object) into world-space (with respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.
2. Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.
3. Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther away objects being smaller). But in your case of a 2D view it will probably be an orthographic projection, which does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.
4. After these 3 (or 2) transformations (and the following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.
5. In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which are the vertex's final positions in the framebuffer and therefore its pixel coordinates.
When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.
The main thing to keep in mind is, that after the modelview and projection transforms every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.
So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.
These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. This documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v" = v' / v'(3)) after the v' = P x M x v step.
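For concreteness, a gluProject-style sketch of that per-vertex pipeline with GLM (the function name and parameters are illustrative, assuming a glViewport(0, 0, w, h) setup):
#include <glm/glm.hpp>

// Object-space position -> window/pixel coordinates, including the w division.
glm::vec3 vertexToPixels(const glm::vec4& objPos, const glm::mat4& modelview,
                         const glm::mat4& projection, float w, float h)
{
    glm::vec4 clip = projection * modelview * objPos;   // steps 1-3 (clip space)
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;           // perspective division -> [-1,1]
    return glm::vec3((ndc.x * 0.5f + 0.5f) * w,          // viewport transformation
                     (ndc.y * 0.5f + 0.5f) * h,
                     ndc.z * 0.5f + 0.5f);               // depth mapped to [0,1]
}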
EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practical and detailed.
It is called transformation.
Vertices are specified in 3D coordinates, which are transformed into viewport coordinates (into your window view). This transformation can be set up in various ways. An orthogonal (orthographic) projection is the easiest to understand as a starter.
http://www.songho.ca/opengl/gl_transform.html
http://www.opengl.org/wiki/Vertex_Transformation
http://www.falloutsoftware.com/tutorials/gl/gl5.htm
Firstly, be aware that OpenGL does not use standard pixel coordinates. By that I mean that for a particular resolution, e.g. 800x600, you don't have horizontal coordinates in the range 0-799 or 1-800 stepped by one. You rather have coordinates ranging from -1 to 1, which are later sent to the graphics card's rasterizing unit and after that mapped to the particular resolution.
I omitted one step here: before all that you have a ModelViewProjection matrix (or a ViewProjection matrix in some simple cases) which projects the coordinates you use onto a projection plane. Its default use is to implement a camera that converts the 3D space of the world (View for placing the camera into the right position, and Projection for casting 3D coordinates onto the screen plane; in ModelViewProjection there is also the step of placing a model into the right place in the world).
Another case (and you can use the Projection matrix this way to achieve what you want) is to use these matrices to convert one coordinate range to another.
And there's a trick you will need. You should read about the ModelViewProjection matrix and cameras in OpenGL if you want to get serious. But for now I will tell you that with the proper matrix you can map your own coordinate system (e.g. using ranges 0-799 horizontally and 0-599 vertically) to the standardized -1:1 range. That way you will not see that the underlying OpenGL API uses its own -1 to 1 system.
The easiest way to achieve this is the glOrtho function. Here's the link to the documentation:
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
This is an example of proper usage:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);
Now you can use your own ModelView matrix, e.g. for translating (moving) objects, but don't touch your projection matrix. This code should be executed before any drawing commands. (In fact it can go right after initializing OpenGL, if you won't use 3D graphics.)
And here's a working example: http://nehe.gamedev.net/tutorial/2d_texture_font/18002/
Just draw your figures instead of drawing text. And there is another thing: glPushMatrix and glPopMatrix for the chosen matrix (in this example the projection matrix) - you won't need those until you combine 3D with 2D rendering.
And you can still use the model matrix (e.g. for placing tiles somewhere in the world) and the view matrix (for example for zooming the view, or scrolling through the world - in this case your world can be larger than the resolution and you can crop the view with simple translations).
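A minimal fixed-function sketch of that scrolling/zooming view, meant to go in the per-frame drawing code, assuming the glOrtho projection above and application variables camX, camY and zoom (these names are assumptions):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(zoom, zoom, 1.0f);               // zoom the whole view
glTranslatef(-camX, -camY, 0.0f);         // scroll: move the world opposite the camera
// ... draw tiles/sprites in world (pixel) coordinates here ...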
After looking at my answer I see it's a little chaotic, but if you're confused, just read about the Model, View, and Projection matrices and try the example with glOrtho. If you're still confused, feel free to ask.
MSDN has a great explanation. It may be in terms of DirectX but OpenGL is more-or-less the same.
Google for "opengl rendering pipeline". The first five articles all provide good expositions.
The key transition from vertices to pixels (actually, fragments, but you won't be too far off if you think "pixels") is in the rasterization stage, which occurs after all vertices have been transformed from world-coordinates to screen coordinates and clipped.