Getting absolute rectangle coordinates after Direct2D translation and scale - C++

I'm using direct2d to draw a bitmap (play a video) in a window, and I want to get the absolute coordinates for any position in the playing space, whether transformations are applied or not. So if the resolution is 1280x720, then by hovering the cursor over the image, I should get values like x = 0 ... 1280, y = 0 ... 720.
The positions of the total video area are in the variable m_rcLiveWindowPos, while the variable m_rcDstVideoRect contains the positions of the actual video after adjusting for the aspect ratio. Finally, m_rcSrcVideoRect is just the video resolution (ex: left=0, top=0, right=1280, bottom=720).
Below, I applied a translation and then a scale to the renderTarget. The rawScaleFactor is a number representing the amount to scale the video: if rawScaleFactor=1, then the video should be played at 100%. If 2, then at 200%.
This all works great -- the video zooms in properly and I can click and drag it around. The problem is that I want the absolute x and y coordinates within the video resolution while my cursor hovers over the video. The first calculation of mousePosInImage below works for a video with no zoom/panning, with m_rcDstVideoRect sitting in a "fitted" position, but the values are incorrect once the video is zoomed in.
if (rawScaleFactor != 0)
{
    // Make the dragging more precise based on the scaling factor.
    float dragPosX = (float)m_rawScaleOffsetX / (rawScaleFactor * 2.0f);
    float dragPosY = (float)m_rawScaleOffsetY / (rawScaleFactor * 2.0f);
    D2D1_MATRIX_3X2_F translation = D2D1::Matrix3x2F::Translation(dragPosX, dragPosY);

    // Get the center point of the current image.
    float centerPointX = float(m_rcLiveWindowPos.Width()) / 2;
    float centerPointY = float(m_rcLiveWindowPos.Height()) / 2;

    // Calculate the amount that the image must be scaled by.
    D2D1ScaleFactor = ((float)m_videoResolution.width / (float)(m_rcDstVideoRect.right - m_rcDstVideoRect.left)) * (float)rawScaleFactor;
    D2D1_MATRIX_3X2_F scale = D2D1::Matrix3x2F::Scale(D2D1::Size(D2D1ScaleFactor, D2D1ScaleFactor),
                                                      D2D1::Point2F(centerPointX, centerPointY));

    // First translate the image, then scale it.
    m_pJRenderTarget->SetTransform(translation * scale);

    int32_t width = ((int32_t)m_videoResolution.width);
    int32_t height = ((int32_t)m_videoResolution.height);

    // This works for non-zoomed-in video:
    m_mousePosInImageX = int32_t(width * (rawMousePosX - m_rcDstVideoRect.left) / (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
    m_mousePosInImageY = int32_t(height * (rawMousePosY - m_rcDstVideoRect.top) / (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));

    // Does not work for all cases...
    m_mousePosInImageX = int32_t((centerPointX * D2D1ScaleFactor) - (centerPointX) + (m_mousePosInImageX / D2D1ScaleFactor));
    m_mousePosInImageY = int32_t((centerPointY * D2D1ScaleFactor) - (centerPointY) + (m_mousePosInImageY / D2D1ScaleFactor));
}
m_pJRenderTarget->DrawBitmap(m_pJVideoBitmap,
                             m_rcDstVideoRect,
                             1.0f,
                             D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR,
                             m_rcSrcVideoRect);
I need a way to "reflect" the changes that SetTransform() did in the mousePosInImage variables.
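One way to do that (a sketch, assuming the raw mouse position is in the render target's pixel space and reusing the names from the snippet above) is to invert the very matrix that was passed to SetTransform() and push the mouse point through the inverse before doing the destination-rectangle mapping:
// Sketch: map the raw mouse position back through the inverse of the
// render-target transform, then into the source resolution as before.
D2D1::Matrix3x2F toImage = translation * scale;   // the matrix given to SetTransform()
if (toImage.Invert())                             // returns false only for a singular matrix
{
    D2D1_POINT_2F p = toImage.TransformPoint(
        D2D1::Point2F((float)rawMousePosX, (float)rawMousePosY));

    // p is now in the same space as m_rcDstVideoRect, so the non-zoomed
    // mapping applies again.
    m_mousePosInImageX = int32_t(width  * (p.x - m_rcDstVideoRect.left) /
                                 (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
    m_mousePosInImageY = int32_t(height * (p.y - m_rcDstVideoRect.top) /
                                 (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));
}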

Related

Rotating an image using Borland C++ Builder and Windows API functions

I built this example to quickly rotate images 90 degrees, but part of the image always gets cut off at the sides. After many tests, I unfortunately still don't understand the cause of the problem.
void rotate()
{
    Graphics::TBitmap *SrcBitmap = new Graphics::TBitmap;
    Graphics::TBitmap *DestBitmap = new Graphics::TBitmap;
    SrcBitmap->LoadFromFile("Crayon.bmp");
    DestBitmap->Width = SrcBitmap->Width;
    DestBitmap->Height = SrcBitmap->Height;
    SetGraphicsMode(DestBitmap->Canvas->Handle, GM_ADVANCED);

    double myangle = (double)(90.0 / 180.0) * 3.1415926; // 90 degrees in radians
    int x0 = SrcBitmap->Width / 2;
    int y0 = SrcBitmap->Height / 2;
    double cx = x0 - cos(myangle) * x0 + sin(myangle) * y0;
    double cy = y0 - cos(myangle) * y0 - sin(myangle) * x0;

    XFORM xForm;
    xForm.eM11 = (FLOAT) cos(myangle);
    xForm.eM12 = (FLOAT) sin(myangle);
    xForm.eM21 = (FLOAT) -sin(myangle);
    xForm.eM22 = (FLOAT) cos(myangle);
    xForm.eDx  = (FLOAT) cx;
    xForm.eDy  = (FLOAT) cy;
    SetWorldTransform(DestBitmap->Canvas->Handle, &xForm);

    BitBlt(DestBitmap->Canvas->Handle,
           0,
           0,
           SrcBitmap->Width,
           SrcBitmap->Height,
           SrcBitmap->Canvas->Handle,
           0,
           0,
           SRCCOPY);

    DestBitmap->SaveToFile("Crayon2.bmp");
    delete DestBitmap;
    delete SrcBitmap;
}
If you're rotating the whole image by 90 degrees, the width and height of the destination image should be swapped:
DestBitmap->Width = SrcBitmap->Height;
DestBitmap->Height = SrcBitmap->Width;
The transform was centering the image based on the original width/height, so the x/y position passed to BitBlt needs an offset to push the starting point back to the top/left:
int offset = (SrcBitmap->Width - SrcBitmap->Height) / 2;
BitBlt(DestBitmap->Canvas->Handle, offset, offset, SrcBitmap->Width, SrcBitmap->Height,
SrcBitmap->Canvas->Handle, 0, 0, SRCCOPY);
Once I had a similar problem.
I wanted to rotate two images around a common rotation point, but I couldn't do it with the standard function, because it doesn't allow a rotation point other than the center.
Nevertheless, I made some notes on the standard function at the time. Maybe they help you.
I remember that it was important that the size of the target image is correct! If a portrait image becomes a landscape image, the image becomes wider, so the BitBlt call must also use the size of the target image.
Here is my note on the standard function: filling in the xForm parameters was not quite the same for me as in your code snippet.
This was the function I then used to rotate around an arbitrary center.
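The note and function referred to above aren't reproduced here, but as a rough sketch of what rotating around an arbitrary pivot looks like with GDI's world transform (illustrative names, not the answerer's original code; requires <windows.h> and <math.h>):
// Rotate subsequent GDI output by 'angle' (radians) around the pivot (px, py).
// GDI applies x' = x*eM11 + y*eM21 + eDx and y' = x*eM12 + y*eM22 + eDy,
// so the translation terms carry the pivot back to its original position.
void SetRotationAboutPoint(HDC dc, double angle, double px, double py)
{
    XFORM xf;
    xf.eM11 = (FLOAT) cos(angle);
    xf.eM12 = (FLOAT) sin(angle);
    xf.eM21 = (FLOAT)-sin(angle);
    xf.eM22 = (FLOAT) cos(angle);
    xf.eDx  = (FLOAT)(px - px * cos(angle) + py * sin(angle));
    xf.eDy  = (FLOAT)(py - px * sin(angle) - py * cos(angle));
    SetGraphicsMode(dc, GM_ADVANCED);
    SetWorldTransform(dc, &xf);
}
With px = x0 and py = y0 this reduces to the cx/cy expressions used in the question.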

How to zoom in on cursor point in Mandelbrot Set?

I'm currently trying to implement a zoom feature for the Mandelbrot Set code I've been working on. The idea is to zoom in/out where I left/right click. So far, whenever I click the screen, the fractal does get zoomed in. The issue is that the fractal is not rendered at the origin -- in other words, it's not zoomed in on the point I want. I was hoping to get both a code review and a conceptual understanding of how to zoom in on a point in general.
Here's how I transformed the pixel coordinate before I used escape algorithm:
MandelBrot.Frag
vec2 normalizedFragPos = (gl_FragCoord.xy/windowSize); //normalize fragment position
dvec2 scaledFragPos = normalizedFragPos*aspectRatio;
scaledFragPos -= aspectRatio/2; //Render the fractal at center of window
scaledFragPos /= scale; //Factor to zoom in or out coordinates.
scaledFragPos -= translation; //Translate coordinate
//Escape Algorithm Below
On my left-click handler, I thought I should convert the cursor position to the same coordinate range as the Mandelbrot range. So I basically did the same thing I did in the fragment shader:
Window.cpp
float x_coord{ float(GET_X_LPARAM(informaton_long))/size.x }; // normalized mouse x-coordinate
float y_coord{ float(GET_Y_LPARAM(informaton_long))/size.y }; // normalized mouse y-coordinate
x_coord *= aspectRatio[0]; // scale the point relative to the window width.
y_coord *= aspectRatio[1]; // scale the point relative to the window height.
x_coord /= scale; //Scale point to match previous zoom factor
y_coord /= scale; //Scale point to match previous zoom factor
translation[0] = x_coord;
translation[1] = y_coord;
//increment scale
scale += .15f;
Let's apply some algebra. Your shader does the following transformation:
mandelbrotCoord = aspectRatio * (gl_FragCoord / windowSize - 0.5) / scale - translation
When we zoom in on mouseCoord, we want to change the scale and adjust the translation such that the mandelbrotCoord under the mouse stays the same. To do that, we first calculate the mandelbrotCoord under the mouse using the old scale:
mandelbrotCoord = aspectRatio * (mouseCoord / windowSize - 0.5) / scale - translation
Then change the scale (which should be changed exponentially BTW):
scale *= 1.1;
Then solve for the new translation:
translation = aspectRatio * (mouseCoord / windowSize - 0.5) / scale - mandelbrotCoord
Also notice that your system probably reports the mouse coordinate with the y coordinate increasing downwards, whereas OpenGL has its window y coordinate increasing upwards (unless you override it with glClipControl). Therefore you're likely to need to flip the y coordinate of the mouseCoord too.
mouseCoord[1] = windowSize[1] - mouseCoord[1];
For best results, I would also adjust the mouse coordinates to be in the middle of the pixel (+0.5, +0.5).
Putting it all together:
float mouseCoord[] = {
    GET_X_LPARAM(informaton_long) + 0.5,
    GET_Y_LPARAM(informaton_long) + 0.5
};
mouseCoord[1] = size[1] - mouseCoord[1];
float anchor[] = {
    aspectRatio[0] * (mouseCoord[0] / size[0] - 0.5) / scale - translation[0],
    aspectRatio[1] * (mouseCoord[1] / size[1] - 0.5) / scale - translation[1]
};
scale *= 1.1;
translation[0] = aspectRatio[0] * (mouseCoord[0] / size[0] - 0.5) / scale - anchor[0];
translation[1] = aspectRatio[1] * (mouseCoord[1] / size[1] - 0.5) / scale - anchor[1];
Note: some of the math above might cancel out. However, if you want to implement proper pan & zoom functionality (where you can zoom with the mouse wheel while panning), you'll need to store the initial mandelbrotCoord of where the panning started, and then reuse it on subsequent motion and wheel events until the mouse is released. A surprisingly large number of image viewers get this part wrong!
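A minimal sketch of that bookkeeping, reusing the names from the snippet above (panAnchor, the persistent state, and the button/motion/wheel event wiring are assumed to exist elsewhere):
// On button-down: remember the Mandelbrot coordinate under the cursor.
void onDragStart(const float mouseCoord[2])
{
    for (int i = 0; i < 2; ++i)
        panAnchor[i] = aspectRatio[i] * (mouseCoord[i] / size[i] - 0.5f) / scale - translation[i];
}

// On every motion or wheel event while dragging: apply the zoom change first
// (factor 1.0 for pure panning), then re-solve the translation so the anchor
// stays under the cursor.
void onDragOrZoom(const float mouseCoord[2], float zoomFactor)
{
    scale *= zoomFactor;
    for (int i = 0; i < 2; ++i)
        translation[i] = aspectRatio[i] * (mouseCoord[i] / size[i] - 0.5f) / scale - panAnchor[i];
}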

Sprite rotation offset doesn't stay where it belongs. (SDL)

So, here is the code for my 2D point class to rotate:
float nx = (x * cos(angle)) - (y * sin(angle));
float ny = (y * cos(angle)) + (x * sin(angle));
x = nx;
y = ny;
x and y are local variables in the point class.
And here is the code for my sprite class's rotation:
//Make clip
SDL_Rect clip;
clip.w = width;
clip.h = height;
clip.x = (width * _frameX) + (sep * (_frameX) + osX);
clip.y = (height * _frameY) + (sep * (_frameY) + osY);
//Make a rotated image
col bgColor = image->format->colorkey;
//Surfaces
img *toEdit = newImage(clip.w, clip.h);
img *toDraw = 0;
//Copy the source into the workspace
drawRect(0, 0, toEdit->w, toEdit->h, toEdit, bgColor);
drawImage(0, 0, image, toEdit, &clip);
//Edit the image
toDraw = SPG_Transform(toEdit, bgColor, angle, xScale, yScale, SPG_NONE);
SDL_SetColorKey(toDraw, SDL_SRCCOLORKEY, bgColor);
//Find new origin and offset by pivot
xyVec *pivot = new xyVec(pvX, pvY);
pivot->rotate(angle);
//Draw and remove the finished image
drawImage(_x - pivot->x - (toDraw->w / 2), _y - pivot->y - (toDraw->h / 2), toDraw, _destination);
//Delete stuff
deleteImage(toEdit);
delete pivot;
deleteImage(toDraw);
The code uses the center of the sprite as the origin. It works fine if I leave the pivot at (0,0), but if I move it somewhere else, the character's shoulder for instance, it starts making the sprite dance around as it spins like a spirograph, instead of the pivot staying on the character's shoulder.
The image rotation function is from SPriG, a library for drawing primitives and transformed images in SDL. Since the pivot is coming from the center of the image, I figure the new size of the clipped surface produced by rotating shouldn't matter.
[EDIT]
I've messed with the code a bit. By slowing it down, I found that for some reason, the vector is rotating 60 times faster than the image, even though I'm not multiplying anything by 60. So, I tried to just divide the input by 60, only now, it's coming out all jerky and not rotating to anything between multiples of 60.
I found the vector rotation code on this very site, and people have repeatedly confirmed that it works, so why does it only rotate in increments of 60?
I haven't touched the source of SPriG in a long time, but I can give you some info.
If SPriG has problems with rotating off of center, it would probably be faster and easier for you to migrate to SDL_gpu (and I suggest SDL 2.0). That way you get a similar API but the performance is much better (it uses the graphics card).
I can guess that the vector does not rotate 60 times faster than the image, but rather more like 57 times faster! This is because you are rotating the vector with sin() and cos(), which take values in radians, while the image is being rotated by an angle in degrees. The conversion factor from radians to degrees is 180/pi, which is about 57. SPriG can use either degrees or radians, but uses degrees by default; use SPG_EnableRadians(1) to switch that behavior. Alternatively, you can stick to degree measure in your angle variable by multiplying the argument to sin() and cos() by pi/180.
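A minimal sketch of that degree-based fix for the point rotation, assuming the rest of the code keeps working in degrees:
#include <cmath>

// Rotate (x, y) about the origin by an angle given in degrees,
// converting to radians before calling sin()/cos().
void rotateDegrees(float& x, float& y, float angleDegrees)
{
    const float angle = angleDegrees * 3.14159265f / 180.0f; // degrees -> radians
    const float nx = x * std::cos(angle) - y * std::sin(angle);
    const float ny = y * std::cos(angle) + x * std::sin(angle);
    x = nx;
    y = ny;
}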

C++ Zoom into the centre of the screen in 2D coordinates

I'm having difficulty working out the correct calculations in order to zoom into the centre of the screen in 2D coordinates whilst keeping everything in the correct scale.
I have a vector which I use to handle moving around my map editor as follows:
scroll = sf::Vector2<float>(-640.0f, -360.0f);
It's set at -640.0f, -360.0f to make 0,0 the centre of the screen on initialising (based on my window being 1280x720).
My zoom value ranges from 0.1f to 2.0f and it's increased or decreased in 0.05 increments:
zoomScale = zoomScale + 0.05;
When drawing elements on to the screen they are drawn using the following code:
sf::Rect<float> dRect;
dRect.left = (mapSeg[i]->position.x - scroll.x) * (layerScales[l] * zoomScale);
dRect.top = (mapSeg[i]->position.y - scroll.y) * (layerScales[l] * zoomScale);
dRect.width = (float)segDef[mapSeg[i]->segmentIndex]->width;
dRect.height = (float)segDef[mapSeg[i]->segmentIndex]->height;
sf::Sprite segSprite;
segSprite.setTexture(segDef[mapSeg[i]->segmentIndex]->tex);
segSprite.setPosition(dRect.left, dRect.top);
segSprite.setScale((layerScales[l] * zoomScale), (layerScales[l] * zoomScale));
segSprite.setOrigin(segDef[mapSeg[i]->segmentIndex]->width / 2, segDef[mapSeg[i]->segmentIndex]->height / 2);
segSprite.setRotation(mapSeg[i]->rotation);
Window.draw(segSprite);
layerScales is a value used to scale up layers of segments for parallax scrolling.
This seems to work fine when zooming in and out, but the centre point seems to shift (an element that I know should always be at 0,0 ends up at different coordinates as soon as I zoom). To test this, I calculate the position under the mouse as follows:
mosPosX = ((float)input.mousePos.x + scroll.x) / zoomScale;
mosPosY = ((float)input.mousePos.y + scroll.y) / zoomScale;
I'm sure there's a calculation I should be doing to the 'scroll' vector to take this zoom into account, but I can't seem to get it right.
I tried implementing something like below but it didn't produce the correct results:
scroll.x = (scroll.x - (SCREEN_WIDTH / 2)) * zoomScale - (scroll.x - (SCREEN_WIDTH / 2));
scroll.y = (scroll.y - (SCREEN_HEIGHT / 2)) * zoomScale - (scroll.y - (SCREEN_HEIGHT / 2));
Any ideas what I'm doing wrong?
I will do this the easy way (not the most efficient, but it works fine) and only for a single axis (the second one is the same).
It is better to have the offset unscaled:
scaledpos = (unscaledpos*zoomscale)+scrolloffset
Now, the center point should not move after the scale change (0 means before, 1 means after):
scaledpos0 == scaledpos1
So do this:
scaledpos0 = (midpointpos*zoomscale0)+scrolloffset0; // old scale
scaledpos1 = (midpointpos*zoomscale1)+scrolloffset0; // change zoom only
scrolloffset1 += scaledpos0-scaledpos1; // correct the offset so the midpoint stays where it is ... I usually use the mouse coordinate instead of the midpoint, so I zoom where the mouse is
If you cannot change the scaling equation, then just do the same with yours:
scaledpos0 = (midpointpos+scrolloffset0)*zoomscale0;
scaledpos1 = (midpointpos+scrolloffset0)*zoomscale1;
scrolloffset1 += (scaledpos0-scaledpos1)/zoomscale1;
Hope I made no silly error in there (writing from memory). For more info see:
Zooming graphics based on current mouse position
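Applied to the variables in the question, the correction might look something like this (a sketch that zooms towards the centre of the 1280x720 window, uses the question's convention screenPos = (worldPos - scroll) * zoomScale, and ignores layerScales):
#include <SFML/System/Vector2.hpp>

// Sketch: change zoomScale while keeping the world point that is currently at
// the centre of the window fixed on screen.
void zoomAtCentre(sf::Vector2<float>& scroll, float& zoomScale, float zoomDelta,
                  float screenW = 1280.0f, float screenH = 720.0f)
{
    const sf::Vector2<float> centre(screenW * 0.5f, screenH * 0.5f);

    // World point currently under the centre of the screen.
    const sf::Vector2<float> world(centre.x / zoomScale + scroll.x,
                                   centre.y / zoomScale + scroll.y);

    zoomScale += zoomDelta; // e.g. +/- 0.05f, clamped to [0.1f, 2.0f] elsewhere

    // Re-solve scroll so that 'world' projects back to the centre.
    scroll.x = world.x - centre.x / zoomScale;
    scroll.y = world.y - centre.y / zoomScale;
}
Replacing centre with the mouse position gives zoom-at-cursor instead of zoom-at-centre.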

Aspect ratios - how to go about them? (D3D viewport setup)

Alright - it seems my question was as cloudy as my head. Let's try again.
I have 3 properties to work with when configuring viewports for a D3D device:
- The resolution the device is running in (full-screen).
- The physical aspect ratio of the monitor (as a fraction and as float:1, so for example 4:3 & 1.33).
- The aspect ratio of the source resolution (the source resolution itself is somewhat moot and tells us little more than the aspect ratio the rendering wants and the kind of resolution that would be ideal to run in).
Then we run into this:
// -- figure out aspect ratio adjusted VPs --
m_nativeVP.Width = xRes;
m_nativeVP.Height = yRes;
m_nativeVP.X = 0;
m_nativeVP.Y = 0;
m_nativeVP.MaxZ = 1.f;
m_nativeVP.MinZ = 0.f;
FIX_ME // this does not cover all bases -- fix!
uint xResAdj, yResAdj;
if (g_displayAspectRatio.Get() < g_renderAspectRatio.Get())
{
    xResAdj = xRes;
    yResAdj = (uint) ((float) xRes / g_renderAspectRatio.Get());
}
else if (g_displayAspectRatio.Get() > g_renderAspectRatio.Get())
{
    xResAdj = (uint) ((float) yRes * g_renderAspectRatio.Get());
    yResAdj = yRes;
}
else // ==
{
    xResAdj = xRes;
    yResAdj = yRes;
}
m_fullVP.Width = xResAdj;
m_fullVP.Height = yResAdj;
m_fullVP.X = (xRes - xResAdj) >> 1;
m_fullVP.Y = (yRes - yResAdj) >> 1;
m_fullVP.MaxZ = 1.f;
m_fullVP.MinZ = 0.f;
Now as long as g_displayAspectRatio equals the ratio xRes/yRes (adapted from the device resolution), all is well and this code does what's expected of it. But as soon as those 2 values are no longer related (for example, someone runs a 4:3 resolution on a 16:10 screen, hardware-stretched), another step is required to compensate, and I'm having trouble figuring out exactly how.
(and p.s I use C-style casts on atomic types, live with it :-) )
I'm assuming what you want to achieve is a "square" projection, e.g. when you draw a circle you want it to look like a circle rather than an ellipse.
The only thing you should play with is your projection (camera) aspect ratio. In normal cases, monitors keep pixels square and all you have to do is set your camera aspect ratio equal to your viewport's aspect ratio:
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio;
In the stretched case you describe (4:3 image stretched on a 16:10 screen for example), pixels are not square anymore and you have to take that into account in your camera aspect ratio:
stretch_factor_x = screen_size_x / viewport_res_x;
stretch_factor_y = screen_size_y / viewport_res_y;
pixel_aspect_ratio = stretch_factor_x / stretch_factor_y;
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio * pixel_aspect_ratio;
Where screen_size_x and screen_size_y are proportional to the physical size of the monitor (e.g. 16:10).
However, you should simply assume square pixels (unless you have a specific reason not to), as the monitor may report incorrect physical size information to the system, or no information at all. Also, monitors don't always stretch; mine, for example, keeps a 1:1 pixel aspect ratio and adds black borders at lower resolutions.
Edit
If you want to adjust your viewport to some aspect ratio and fit it on an arbitrary resolution, then you could do it like this:
viewport_aspect_ratio = 16.0 / 10.0; // The aspect ratio you want your viewport to have
screen_aspect_ratio = screen_res_x / screen_res_y;
if (viewport_aspect_ratio > screen_aspect_ratio) {
    // Viewport is wider than screen, fit on X
    viewport_res_x = screen_res_x;
    viewport_res_y = viewport_res_x / viewport_aspect_ratio;
} else {
    // Screen is wider than viewport, fit on Y
    viewport_res_y = screen_res_y;
    viewport_res_x = viewport_res_y * viewport_aspect_ratio;
}
camera_aspect_ratio = viewport_res_x / viewport_res_y;
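Mapped back onto the question's viewport code, the fitted size from the if/else above then just needs to be centred the same way m_fullVP already is; a sketch with hypothetical names (requires <d3d9.h>):
// Sketch: place a viewport of (xResAdj, yResAdj) centred inside the full
// (xRes, yRes) back buffer; the fit logic above chooses xResAdj/yResAdj.
D3DVIEWPORT9 MakeCenteredViewport(UINT xRes, UINT yRes, UINT xResAdj, UINT yResAdj)
{
    D3DVIEWPORT9 vp;
    vp.X      = (xRes - xResAdj) / 2;
    vp.Y      = (yRes - yResAdj) / 2;
    vp.Width  = xResAdj;
    vp.Height = yResAdj;
    vp.MinZ   = 0.f;
    vp.MaxZ   = 1.f;
    return vp;
}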