Aspect ratios - how to go about them? (D3D viewport setup) - c++

Alright - it seems my question was as cloudy as my head. Let's try again.
I have three properties to work with when configuring viewports for a D3D device:
- The resolution the device is running in (full-screen).
- The physical aspect ratio of the monitor (as a fraction and as a float, e.g. 4:3 and 1.33).
- The aspect ratio of the source resolution (the source resolution itself is somewhat moot; it tells us little more than the aspect ratio the rendering wants and the kind of resolution that would be ideal to run in).
Then we run into this:
// -- figure out aspect ratio adjusted VPs --
m_nativeVP.Width = xRes;
m_nativeVP.Height = yRes;
m_nativeVP.X = 0;
m_nativeVP.Y = 0;
m_nativeVP.MaxZ = 1.f;
m_nativeVP.MinZ = 0.f;

FIX_ME // this does not cover all bases -- fix!
uint xResAdj, yResAdj;
if (g_displayAspectRatio.Get() < g_renderAspectRatio.Get())
{
    xResAdj = xRes;
    yResAdj = (uint) ((float) xRes / g_renderAspectRatio.Get());
}
else if (g_displayAspectRatio.Get() > g_renderAspectRatio.Get())
{
    xResAdj = (uint) ((float) yRes * g_renderAspectRatio.Get());
    yResAdj = yRes;
}
else // ==
{
    xResAdj = xRes;
    yResAdj = yRes;
}

m_fullVP.Width = xResAdj;
m_fullVP.Height = yResAdj;
m_fullVP.X = (xRes - xResAdj) >> 1;
m_fullVP.Y = (yRes - yResAdj) >> 1;
m_fullVP.MaxZ = 1.f;
m_fullVP.MinZ = 0.f;
Now as long as g_displayAspectRatio equals the ratio xRes/yRes (derived from the device resolution), all is well and this code does what's expected of it. But as soon as those two values are no longer related (for example, someone runs a 4:3 resolution on a 16:10 screen, hardware-stretched), another step is required to compensate, and I'm having trouble figuring out exactly how.
(And P.S.: I use C-style casts on primitive types, live with it :-) )

I'm assuming what you want to achieve is a "square" projection, e.g. when you draw a circle you want it to look like a circle rather than an ellipse.
The only thing you should play with is your projection (camera) aspect ratio. In normal cases, monitors keep pixels square and all you have to do is set your camera aspect ratio equal to your viewport's aspect ratio:
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio;
In the stretched case you describe (a 4:3 image stretched onto a 16:10 screen, for example), pixels are no longer square and you have to take that into account in your camera aspect ratio:
stretch_factor_x = screen_size_x / viewport_res_x;
stretch_factor_y = screen_size_y / viewport_res_y;
pixel_aspect_ratio = stretch_factor_x / stretch_factor_y;
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio * pixel_aspect_ratio;
where screen_size_x and screen_size_y are proportional to the physical size of the monitor (e.g. 16:10).
However, you should simply assume square pixels (unless you have a specific reason not to), as the monitor may report incorrect physical size information to the system, or no information at all. Also, monitors don't always stretch; mine, for example, keeps a 1:1 pixel aspect ratio and adds black borders at lower resolutions.
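If you do want to handle the stretched case, here is how it could tie back to the variables from the question (a sketch only, assuming g_displayAspectRatio holds the monitor's physical ratio, g_renderAspectRatio is the ratio the image should have physically on screen, and xRes/yRes is the mode the device actually runs in). For example, a 1024x768 mode (1.33) stretched onto a 16:10 panel gives pixel_aspect_ratio = 1.6 / 1.33 ≈ 1.2, which you fold into the render ratio before the letterbox/pillarbox split:
// Sketch only -- same globals as the question (xRes, yRes,
// g_displayAspectRatio, g_renderAspectRatio); not tested.
float modeAspect   = (float) xRes / (float) yRes;              // ratio of the mode we are running in
float pixelAspect  = g_displayAspectRatio.Get() / modeAspect;  // > 1 when pixels are stretched wide
float targetAspect = g_renderAspectRatio.Get() / pixelAspect;  // render ratio expressed in (non-square) pixels

uint xResAdj = xRes;
uint yResAdj = yRes;
if (modeAspect > targetAspect)        // mode is wider than the corrected target: pillarbox
    xResAdj = (uint) ((float) yRes * targetAspect);
else if (modeAspect < targetAspect)   // mode is narrower than the corrected target: letterbox
    yResAdj = (uint) ((float) xRes / targetAspect);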
Edit
If you want to adjust your viewport to some aspect ratio and fit it on an arbitrary resolution, then you could do it like this:
viewport_aspect_ratio = 16.0 / 10.0; // The aspect ratio you want your viewport to have
screen_aspect_ratio = screen_res_x / screen_res_y;
if (viewport_aspect_ratio > screen_aspect_ratio) {
    // Viewport is wider than screen, fit on X
    viewport_res_x = screen_res_x;
    viewport_res_y = viewport_res_x / viewport_aspect_ratio;
} else {
    // Screen is wider than viewport, fit on Y
    viewport_res_y = screen_res_y;
    viewport_res_x = viewport_res_y * viewport_aspect_ratio;
}
camera_aspect_ratio = viewport_res_x / viewport_res_y;
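In D3D terms, and tying it back to the m_fullVP setup from the question, a sketch of the same fit (with a hypothetical targetAspect standing in for whatever ratio you settle on, e.g. g_renderAspectRatio, possibly corrected for pixel aspect as above) might look like:
// Sketch: fit targetAspect into the current mode and centre the viewport.
float screenAspect = (float) xRes / (float) yRes;
uint  fitW = xRes;
uint  fitH = yRes;
if (targetAspect > screenAspect)        // wider than the screen: fit on X, letterbox
    fitH = (uint) ((float) xRes / targetAspect);
else if (targetAspect < screenAspect)   // narrower than the screen: fit on Y, pillarbox
    fitW = (uint) ((float) yRes * targetAspect);

m_fullVP.Width  = fitW;
m_fullVP.Height = fitH;
m_fullVP.X      = (xRes - fitW) >> 1;
m_fullVP.Y      = (yRes - fitH) >> 1;
m_fullVP.MinZ   = 0.f;
m_fullVP.MaxZ   = 1.f;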

Related

Getting absolute rectangle coordinates after direct2d translation and scale

I'm using Direct2D to draw a bitmap (play a video) in a window, and I want to get the absolute coordinates for any position in the playing space, whether transformations are applied or not. So if the resolution is 1280x720, then by hovering the cursor over the image I should get values like x = 0 ... 1280, y = 0 ... 720.
The positions of the total video area are in the variable m_rcLiveWindowPos, while m_rcDstVideoRect contains the position of the actual video after adjusting for the aspect ratio. Finally, m_rcSrcVideoRect is just the video resolution (e.g. left=0, top=0, right=1280, bottom=720).
Below, I apply a translation and then a scale to the render target. rawScaleFactor represents the amount to scale the video by: if rawScaleFactor = 1, the video should be played at 100%; if 2, then at 200%.
This all works great -- the video zooms in properly and I can click and drag it around. The problem is that I want the absolute x and y coordinates of the video resolution while my cursor hovers over the video. The first assignments to mousePosInImage below work for video with no zoom/panning, with m_rcDstVideoRect sitting in a "fitted" position, but the values are incorrect for a zoomed-in video.
if (rawScaleFactor != 0)
{
    // Make the dragging more precise based on the scaling factor.
    float dragPosX = (float)m_rawScaleOffsetX / (rawScaleFactor * 2.0f);
    float dragPosY = (float)m_rawScaleOffsetY / (rawScaleFactor * 2.0f);
    D2D1_MATRIX_3X2_F translation = D2D1::Matrix3x2F::Translation(dragPosX, dragPosY);

    // Get the center point of the current image.
    float centerPointX = float(m_rcLiveWindowPos.Width()) / 2;
    float centerPointY = float(m_rcLiveWindowPos.Height()) / 2;

    // Calculate the amount that the image must be scaled by.
    D2D1ScaleFactor = ((float)m_videoResolution.width / (float)(m_rcDstVideoRect.right - m_rcDstVideoRect.left)) * (float)rawScaleFactor;
    D2D1_MATRIX_3X2_F scale = D2D1::Matrix3x2F::Scale(D2D1::Size(D2D1ScaleFactor, D2D1ScaleFactor),
                                                      D2D1::Point2F(centerPointX, centerPointY));

    // First translate the image, then scale it.
    m_pJRenderTarget->SetTransform(translation * scale);

    int32_t width = ((int32_t)m_videoResolution.width);
    int32_t height = ((int32_t)m_videoResolution.height);

    // This works for non-zoomed-in video:
    m_mousePosInImageX = int32_t(width * (rawMousePosX - m_rcDstVideoRect.left) / (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
    m_mousePosInImageY = int32_t(height * (rawMousePosY - m_rcDstVideoRect.top) / (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));

    // Does not work for all cases...
    m_mousePosInImageX = int32_t((centerPointX * D2D1ScaleFactor) - (centerPointX) + (m_mousePosInImageX / D2D1ScaleFactor));
    m_mousePosInImageY = int32_t((centerPointY * D2D1ScaleFactor) - (centerPointY) + (m_mousePosInImageY / D2D1ScaleFactor));
}
m_pJRenderTarget->DrawBitmap(m_pJVideoBitmap,
                             m_rcDstVideoRect,
                             1.0f,
                             D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR,
                             m_rcSrcVideoRect);
I need a way to "reflect" the changes that SetTransform() did in the mousePosInImage variables.
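One way to get that (a sketch, untested, reusing the same variables as the code above) is to rebuild the matrix handed to SetTransform(), invert it, and push the raw mouse position through the inverse before doing the fitted-rectangle mapping; D2D1::Matrix3x2F provides Invert() and TransformPoint() for this:
// Sketch: map a raw window-space mouse position back through the transform
// that was applied with SetTransform().
D2D1::Matrix3x2F screenToWorld =
    D2D1::Matrix3x2F::Translation(dragPosX, dragPosY) *
    D2D1::Matrix3x2F::Scale(D2D1::SizeF(D2D1ScaleFactor, D2D1ScaleFactor),
                            D2D1::Point2F(centerPointX, centerPointY));
if (screenToWorld.Invert()) // false if the matrix is singular
{
    D2D1_POINT_2F p = screenToWorld.TransformPoint(
        D2D1::Point2F((float)rawMousePosX, (float)rawMousePosY));
    // 'p' is now in the same space as m_rcDstVideoRect, so the original
    // fitted-rectangle mapping applies unchanged:
    m_mousePosInImageX = int32_t(width  * (p.x - m_rcDstVideoRect.left) / (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
    m_mousePosInImageY = int32_t(height * (p.y - m_rcDstVideoRect.top)  / (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));
}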

How to zoom in on cursor point in Mandelbrot Set?

I'm currently trying to implement a zoom feature for the Mandelbrot Set code I've been working on. The idea is to zoom in/out where I left/right click. So far, whenever I click the screen, the fractal is indeed zoomed in. The issue is that the fractal is not rendered at the origin -- in other words, it's not zoomed in on the point I want. I was hoping to get both a code review and a conceptual understanding of how to zoom in on a point in general.
Here's how I transformed the pixel coordinate before I used escape algorithm:
MandelBrot.Frag
vec2 normalizedFragPos = (gl_FragCoord.xy/windowSize); //normalize fragment position
dvec2 scaledFragPos = normalizedFragPos*aspectRatio;
scaledFragPos -= aspectRatio/2; //Render the fractal at center of window
scaledFragPos /= scale; //Factor to zoom in or out coordinates.
scaledFragPos -= translation; //Translate coordinate
//Escape Algorithm Below
On my left-click handler, I thought I should convert the cursor position to the same coordinate range as the Mandelbrot set. So I basically did the same thing I did in the fragment shader:
Window.cpp
float x_coord{ float(GET_X_LPARAM(informaton_long))/size.x }; // normalized mouse x-coordinate
float y_coord{ float(GET_Y_LPARAM(informaton_long))/size.y }; // normalized mouse y-coordinate
x_coord *= aspectRatio[0]; //move point based on its relative position to the window width.
y_coord *= aspectRatio[1]; //move point based on its relative position to the window height.
x_coord /= scale; //Scale point to match previous zoom factor
y_coord /= scale; //Scale point to match previous zoom factor
translation[0] = x_coord;
translation[1] = y_coord;
//increment scale
scale += .15f;
Let's apply some algebra. Your shader does the following transformation:
mandelbrotCoord = aspectRatio * (gl_FragCoord / windowSize - 0.5) / scale - translation
When we zoom in on mouseCoord, we want to change the scale and adjust the translation such that the mandelbrotCoord under the mouse stays the same. To do that, we first calculate the mandelbrotCoord under the mouse using the old scale:
mandelbrotCoord = aspectRatio * (mouseCoord / windowSize - 0.5) / scale - translation
Then change the scale (which, by the way, should be changed exponentially):
scale *= 1.1;
Then solve for the new translation:
translation = aspectRatio * (mouseCoord / windowSize - 0.5) / scale - mandelbrotCoord
Also notice that your system probably reports the mouse coordinate with the y coordinate increasing downwards, whereas OpenGL has its window y coordinate increasing upwards (unless you override it with glClipControl). Therefore you're likely to need to flip the y coordinate of the mouseCoord too.
mouseCoord[1] = windowSize[1] - mouseCoord[1];
For best result I would also adjust the mouse coordinates to be in the middle of the pixel (+0.5, +0.5).
Putting it all together:
float mouseCoord[] = {
    GET_X_LPARAM(informaton_long) + 0.5,
    GET_Y_LPARAM(informaton_long) + 0.5
};
mouseCoord[1] = size[1] - mouseCoord[1];

float anchor[] = {
    aspectRatio[0] * (mouseCoord[0] / size[0] - 0.5) / scale - translation[0],
    aspectRatio[1] * (mouseCoord[1] / size[1] - 0.5) / scale - translation[1]
};

scale *= 1.1;

translation[0] = aspectRatio[0] * (mouseCoord[0] / size[0] - 0.5) / scale - anchor[0];
translation[1] = aspectRatio[1] * (mouseCoord[1] / size[1] - 0.5) / scale - anchor[1];
Note: some of the math above may cancel out. However, if you want to implement proper pan & zoom functionality (where you can zoom with the mouse wheel while you are panning), then you'll need to store the initial mandelbrotCoord of where the panning started and reuse it on subsequent motion and wheel events until the mouse is released. A surprisingly large number of image viewers get this part wrong!
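A minimal sketch of that idea (hypothetical handler names; same aspectRatio, scale, translation and size variables as above, mouse coordinates already flipped and pixel-centred; needs <cmath> for std::pow):
// Sketch: anchor-based pan & zoom. 'panAnchor' is the Mandelbrot-space point
// grabbed when the button went down; it stays fixed until release.
double panAnchor[2];
bool   panning = false;

void onButtonDown(float mouseX, float mouseY) {
    panAnchor[0] = aspectRatio[0] * (mouseX / size[0] - 0.5) / scale - translation[0];
    panAnchor[1] = aspectRatio[1] * (mouseY / size[1] - 0.5) / scale - translation[1];
    panning = true;
}

void onMotionOrWheel(float mouseX, float mouseY, float wheelSteps) {
    if (!panning) return;
    scale *= std::pow(1.1f, wheelSteps);   // exponential zoom; 0 steps = pure pan
    translation[0] = aspectRatio[0] * (mouseX / size[0] - 0.5) / scale - panAnchor[0];
    translation[1] = aspectRatio[1] * (mouseY / size[1] - 0.5) / scale - panAnchor[1];
}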

Isometric Collision - 'Diamond' shape detection

My project uses an isometric perspective; for the time being I am showing the coordinates in grid format above the tiles for debugging. However, when it comes to collision/grid-locking of the player, I have an issue.
Due to the nature of sprite drawing, my maths is creating some issues with the "triangular" empty corner areas of the textures. I think the issue is something like the image below (blue is how I think my tiles are being detected, whereas red is how they ideally should be detected for accurate roaming movement on the tiles):
As you can see, the boolean that checks the tile I am standing on (which takes the pixel central to the player's feet; the player will later be a car and take a pixel based on the direction of movement) is returning false and denying movement in several scenarios, as well as letting the player move in some places that shouldn't be allowed.
I think it's because the cut-off areas of each texture are (I think) being considered part of the grid area, so when the player is in one of these corner areas it is not truly checking the correct tile, and so returns the wrong results.
The code I'm using for creating the grid is this:
int VisualComponent::TileConversion(Tile* tileToConvert, bool xOrY)
{
    int X = (tileToConvert->x - tileToConvert->y) * 64; //change 64 to TILE_WIDTH_HALF
    int Y = (tileToConvert->x + tileToConvert->y) * 25;
    /*int X = (tileToConvert->x * 128 / 2) + (tileToConvert->y * 128 / 2) + 100;
    int Y = (tileToConvert->y * 50 / 2) - (tileToConvert->x * 50 / 2) + 100;*/
    if (xOrY)
    {
        return X;
    }
    else
    {
        return Y;
    }
}
and the code for checking the player's movement is:
bool Clsentity::CheckMovementTile(int xpos, int ypos, ClsMapData* mapData) //check if the movement will end on a legitimate road tile UNOPTIMISED AS RUNS EVERY FRAME FOR EVERY TILE
{
    int x = xpos + 7; //get the center bottom pixel as this is more suitable than the first on an iso grid (more realistic 'foot' placement)
    int y = ypos + 45;
    int mapX = (x / 64 + y / 25) / 2; //64 is TILE-WIDTH HALF and 25 is TILE HEIGHT
    int mapY = (y / 25 - (x / 64)) / 2;
    for (int i = 0; i < mapData->tilesList.size(); i++) //for each tile of the map
    {
        if (mapData->tilesList[i]->x == mapX && mapData->tilesList[i]->y == mapY) //if there is an existing tile that will be entered
        {
            if (mapData->tilesList[i]->movementTile)
            {
                HAPI->DebugText(std::to_string(mapX) + " is the x and the y is " + std::to_string(mapY));
                return true;
            }
        }
    }
    return false;
}
I'm a bit stuck on making progress with the game loop until this is fixed. If anyone recognizes the issue or can help, I'd really appreciate it. For reference, my tile textures are 128x64 pixels, and the math behind drawing them to screen treats them as 128x50 (so they link together cleanly).
Rather than writing specific routines for rendering and click mapping, seriously consider thinking of these as two views on the data, which can be transformed in terms of matrix transformations of a coordinate space. You can have two coordinate spaces - one is a nice rectangular grid that you use for positioning and logic. The other is the isometric view that you use for display and input.
If you're not familiar with linear algebra, it'll take a little bit to wrap your head around it, but once you do, it makes everything trivial.
So, how does that work? Your isometric view is merely a rotation of a bog-standard grid view, right? Well, close: an isometric view also changes the dimensions if you're starting with a square grid. Anyhow, can we just do a simple coordinate transformation?
Logical coordinate system -> display system (e.g. for rendering)
Texture point => Rotate 45 degrees => Scale by sqrt(2) because a 45 degree rotation changes the dimension of the block by sqrt(1 * 1 + 1 * 1)
Display system -> logical coordinate system (e.g. for mapping clicks into logical space)
Click point => descale by sqrt(2) to unsquish => unrotate by 45 degrees
Why?
If you can do coordinate transformations, then you'd be dealing with a pretty bog-standard rectangular grid for everything else you write, which will make all your other logic MUCH simpler. Your calculations there won't involve computing angles or slopes. E.g. your "can I move 'down'" logic is now much simpler.
Let's say you have 64 x 64 tiles, for simplicity. Now transforming a screen space click to a logical tile is simply:
(int, int) whichTile(clickX, clickY) {
    logicalX, logicalY = transform(clickX, clickY)
    return (logicalX / 64, logicalY / 64)
}
You can do checks like seeing whether (x0, y0) and (x1, y1) are on the same tile in logical space with something as simple as:
bool isSameTile(x0, y0, x1, y1) {
    return floor(x0/64) == floor(x1/64) && floor(y0/64) == floor(y1/64)
}
Everything gets much simpler once you define the transforms and work in the logical space.
http://en.wikipedia.org/wiki/Rotation_matrix
http://en.wikipedia.org/wiki/Scaling_%28geometry%29#Matrix_representation
http://www.alcove-games.com/advanced-tutorials/isometric-tile-picking/
If you don't want to deal with a matrix library, you can do the equivalent math pretty straightforwardly, but if you separate the concerns of logic management from display/input through these transformations, I suspect you'll have a much easier time of it.
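As a concrete (untested) sketch using the tile metrics from the question (half-width 64, half-height 25): the two transforms are just inverses of each other, and keeping the intermediate math in floats avoids the precision loss that the integer division in CheckMovementTile causes:
// Sketch: forward (tile -> screen) and inverse (screen -> tile) transforms
// for a 2:1-style isometric grid; needs <cmath> for std::floor.
const float TILE_HALF_W = 64.0f;
const float TILE_HALF_H = 25.0f;

void tileToScreen(float tileX, float tileY, float& screenX, float& screenY) {
    screenX = (tileX - tileY) * TILE_HALF_W;
    screenY = (tileX + tileY) * TILE_HALF_H;
}

void screenToTile(float screenX, float screenY, int& tileX, int& tileY) {
    float tx = (screenX / TILE_HALF_W + screenY / TILE_HALF_H) * 0.5f;
    float ty = (screenY / TILE_HALF_H - screenX / TILE_HALF_W) * 0.5f;
    tileX = (int) std::floor(tx);   // floor, not truncation, so coordinates left/above the origin work too
    tileY = (int) std::floor(ty);
}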

C++ Zoom into the centre of the screen in 2D coordinates

I'm having difficulty working out the correct calculations to zoom into the centre of the screen in 2D coordinates whilst keeping everything at the correct scale.
I have a vector which I use to handle moving around my map editor as follows:
scroll = sf::Vector2<float>(-640.0f, -360.0f);
It's set to -640.0f, -360.0f to make (0,0) the centre of the screen on initialisation (based on my window being 1280x720).
My zoom value ranges from 0.1f to 2.0f, and it's increased or decreased in 0.05 increments:
zoomScale = zoomScale + 0.05;
When drawing elements on to the screen they are drawn using the following code:
sf::Rect<float> dRect;
dRect.left = (mapSeg[i]->position.x - scroll.x) * (layerScales[l] * zoomScale);
dRect.top = (mapSeg[i]->position.y - scroll.y) * (layerScales[l] * zoomScale);
dRect.width = (float)segDef[mapSeg[i]->segmentIndex]->width;
dRect.height = (float)segDef[mapSeg[i]->segmentIndex]->height;
sf::Sprite segSprite;
segSprite.setTexture(segDef[mapSeg[i]->segmentIndex]->tex);
segSprite.setPosition(dRect.left, dRect.top);
segSprite.setScale((layerScales[l] * zoomScale), (layerScales[l] * zoomScale));
segSprite.setOrigin(segDef[mapSeg[i]->segmentIndex]->width / 2, segDef[mapSeg[i]->segmentIndex]->height / 2);
segSprite.setRotation(mapSeg[i]->rotation);
Window.draw(segSprite);
layerScales is a value used to scale up layers of segments for parallax scrolling.
This seems to work fine when zooming in and out, but the centre point seems to shift (an element that I know should always be at (0,0) ends up at different coordinates as soon as I zoom). To test this, I use the following to calculate the position under the mouse:
mosPosX = ((float)input.mousePos.x + scroll.x) / zoomScale;
mosPosY = ((float)input.mousePos.y + scroll.y) / zoomScale;
I'm sure there's a calculation I should be doing to the 'scroll' vector to take this zoom into account, but I can't seem to get it right.
I tried implementing something like the code below, but it didn't produce the correct results:
scroll.x = (scroll.x - (SCREEN_WIDTH / 2)) * zoomScale - (scroll.x - (SCREEN_WIDTH / 2));
scroll.y = (scroll.y - (SCREEN_HEIGHT / 2)) * zoomScale - (scroll.y - (SCREEN_HEIGHT / 2));
Any ideas what I'm doing wrong?
I will do this the easy way (not the most efficient, but it works fine) and only for a single axis (the second is the same).
It is better to have the offset unscaled:
scaledpos = (unscaledpos * zoomscale) + scrolloffset
Now, the center point should not move after a scale change (0 means before, 1 means after):
scaledpos0 == scaledpos1
So do this:
scaledpos0 = (midpointpos * zoomscale0) + scrolloffset0; // old scale
scaledpos1 = (midpointpos * zoomscale1) + scrolloffset0; // change zoom only
scrolloffset1 += scaledpos0 - scaledpos1; // correct the offset so the midpoint stays where it is ... I usually use the mouse coordinate instead of the midpoint, so I zoom where the mouse is
If you cannot change the scaling equation, then just do the same with yours:
scaledpos0 = (midpointpos + scrolloffset0) * zoomscale0;
scaledpos1 = (midpointpos + scrolloffset0) * zoomscale1;
scrolloffset1 += (scaledpos0 - scaledpos1) / zoomscale1;
Hope I made no silly errors in there (writing from memory). For more info see:
Zooming graphics based on current mouse position
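Translated into the variables from the question (a sketch, assuming the draw equation stays screen = (position - scroll) * zoomScale with layerScale = 1, and that the zoom target is the window centre), the offset correction becomes:
// Sketch: keep the world point under the screen centre fixed across a zoom change.
float oldZoom = zoomScale;
float newZoom = zoomScale + 0.05f;          // or -0.05f when zooming out
float centreX = SCREEN_WIDTH  / 2.0f;       // 640 for a 1280x720 window
float centreY = SCREEN_HEIGHT / 2.0f;       // 360

scroll.x += centreX / oldZoom - centreX / newZoom;
scroll.y += centreY / oldZoom - centreY / newZoom;
zoomScale = newZoom;
Note that, to stay consistent with that draw equation, the mouse-to-world conversion would be world = screen / zoomScale + scroll rather than (screen + scroll) / zoomScale.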

Can someone explain how I can use Quanternions to use the mouse to look around like a FPS?

Yesterday I asked: How could simply calling Pitch and Yaw cause the camera to roll?
Basically, I found out that because of "gimbal lock", if you pitch + yaw you will inevitably produce a rolling effect. For more information you can read that question.
I'm trying to stop this from happening. When you look around in a normal FPS shooter you don't have your camera rolling all over the place!
Here is my current passive mouse func:
int windowWidth = 640;
int windowHeight = 480;
int oldMouseX = -1;
int oldMouseY = -1;

void mousePassiveHandler(int x, int y)
{
    int snapThreshold = 50;
    if (oldMouseX != -1 && oldMouseY != -1)
    {
        cam.yaw((x - oldMouseX) / 10.0);
        cam.pitch((y - oldMouseY) / 10.0);
        oldMouseX = x;
        oldMouseY = y;
        if ((fabs(x - (windowWidth / 2)) > snapThreshold) || (fabs(y - (windowHeight / 2)) > snapThreshold))
        {
            oldMouseX = windowWidth / 2;
            oldMouseY = windowHeight / 2;
            glutWarpPointer(windowWidth / 2, windowHeight / 2);
        }
    }
    else
    {
        oldMouseX = windowWidth / 2;
        oldMouseY = windowHeight / 2;
        glutWarpPointer(windowWidth / 2, windowHeight / 2);
    }
    glutPostRedisplay();
}
This causes the camera to pitch/yaw based on the mouse movement (while keeping the cursor in the center). I've also posted my original camera class here.
Someone in that thread suggested I use quaternions to prevent this effect from happening, but after reading the Wikipedia page on them I simply don't grok them.
How could I create a quaternion in my OpenGL/GLUT app so I can properly make my "camera" look around without unwanted roll?
A Simple Quaternion-Based Camera, designed to be used with gluLookAt.
http://www.gamedev.net/reference/articles/article1997.asp
Keep your delta changes small to avoid that (i.e. < 45 degrees).
Just calculate a small "delta" matrix with the rotations for each frame and fold it into the camera matrix each frame (by fold I mean: cam = cam * delta).
If you're running for a long time, you might get some numerical errors, so you'll need to re-orthogonalize the matrix (look it up if that seems to happen).
That's the easiest way to avoid gimbal lock when just playing around with things. Once you get more proficient, you'll understand the rest.
As for quaternions, just find a good library for them that can convert them to rotation matrices, then use the same technique (compute the delta quat, multiply it into the main quat).
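For the quaternion route, here's a minimal hand-rolled sketch of that "delta quat multiplied into the main quat" idea (my own helper names, not from any particular library). Yaw is applied about the world up axis and pitch about the camera's local right axis, which gives the usual FPS behaviour where roll never accumulates:
// Sketch: quaternion FPS look. 'cameraOrientation' rotates camera-local
// vectors into world space; yaw pre-multiplies (world frame), pitch
// post-multiplies (local frame).
#include <cmath>

struct Quat { float w, x, y, z; };

Quat quatFromAxisAngle(float ax, float ay, float az, float radians) {
    float s = std::sin(radians * 0.5f);
    return { std::cos(radians * 0.5f), ax * s, ay * s, az * s };
}

Quat quatMul(const Quat& a, const Quat& b) {
    return {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

Quat cameraOrientation = { 1.0f, 0.0f, 0.0f, 0.0f };   // identity

void mouseLook(float yawRadians, float pitchRadians) {
    Quat yaw   = quatFromAxisAngle(0.0f, 1.0f, 0.0f, yawRadians);    // world up
    Quat pitch = quatFromAxisAngle(1.0f, 0.0f, 0.0f, pitchRadians);  // local right
    cameraOrientation = quatMul(quatMul(yaw, cameraOrientation), pitch);
    // Re-normalize occasionally and convert to a matrix before handing it to OpenGL.
}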
I would represent everything in polar coordinates. The wikipedia page should get you started.
You don't really need quaternions for this simple case; what you need is to feed your heading and pitch into a 3-dimensional matrix calculation:
Use your heading value with a rotation about the Y axis to calculate MY
Use your pitch value with a rotation about the X axis to calculate MX
For each point P, calculate R = MX * MY * P
The calculation can be done in two ways:
T = MY * P, then R = MX * T
T = MX * MY, then R = T * P
The first way is slower but easier to code at first; the second is faster, but you will need to code a matrix-matrix multiplication function.
P.S. See http://en.wikipedia.org/wiki/Rotation_matrix#Dimension_three for the matrices.
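A sketch of the first way (hand-rolled row-major math, angles in radians; nothing library-specific):
// Sketch: rotate point P by heading (about Y) then pitch (about X), i.e. R = MX * MY * P.
#include <cmath>

void rotateHeadingPitch(float heading, float pitch, const float P[3], float R[3]) {
    float cy = std::cos(heading), sy = std::sin(heading);
    float cx = std::cos(pitch),   sx = std::sin(pitch);

    // T = MY * P (rotation about the Y axis)
    float T[3] = {  cy * P[0] + sy * P[2],
                    P[1],
                   -sy * P[0] + cy * P[2] };

    // R = MX * T (rotation about the X axis)
    R[0] = T[0];
    R[1] = cx * T[1] - sx * T[2];
    R[2] = sx * T[1] + cx * T[2];
}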