Drawing an image in the centre while maintaining the aspect ratio - C++

Part 1: In one of my projects, I want to display an image in the center of a custom control using GDI+ while maintaining the aspect ratio of the image. Here is my code:
Gdiplus::Image image(imagePathWStr); // imagePathWStr is the Image Path
int originalWidth = image.GetWidth();
int originalHeight = image.GetHeight();
// This code calculates the aspect ratio in which I have to draw the image
int16 cntrlwidth = controlPosition.right - controlPosition.left; // controlPosition is the custom Control Rect
int16 cntrlheigth = controlPosition.bottom - controlPosition.top;
float percentWidth = (float)cntrlwidth / (float)originalWidth;
float percentHeight = (float)cntrlheigth / (float)originalHeight;
float percent = percentHeight < percentWidth ? percentHeight : percentWidth;
int newImageWidth = (int)(originalWidth * percent);
int newImageHeight = (int)(originalHeight * percent);
Gdiplus::RectF imageContainer;
imageContainer.X = controlPosition.left;
imageContainer.Y = controlPosition.top;
imageContainer.Width = controlPosition.right;
imageContainer.Height = controlPosition.bottom;
Gdiplus::Graphics gdiGraphics(hDC);
Gdiplus::Unit scrUnit = Gdiplus::UnitPixel;
gdiGraphics.DrawImage(&image, imageContainer, 0, 0, originalWidth, originalHeight, scrUnit);
However, when the control is resized vertically, the image moves to the right and does not always stay in the center; also, when the control is resized horizontally, it moves toward the bottom. I am not able to figure out why.
I am using C++ on Windows.
Part 2: Now I also have a rectangle drawn on top of this:
Gdiplus::RectF cropRect;
cropRect.X = 100;
cropRect.Y = 100;
cropRect.Width = 300;
cropRect.Height = 300;
Gdiplus::Pen* myPen = new Gdiplus::Pen(Gdiplus::Color::White);
myPen->SetWidth(3);
gdiGraphics.DrawRectangle(myPen, cropRect);
Now, when I resize the control, the image is resized correctly but this rectangle is not. I multiplied the width and height accordingly:
Gdiplus::RectF cropRectangle;
cropRectangle.X = cropRect.GetLeft();
cropRectangle.Y = cropRect.GetTop();
cropRectangle.Width = cropRect.Width * percent;
cropRectangle.Height = cropRect.Height * percent;
Part 1 has been solved after Adrian's answer; now I am stuck on Part 2 of my problem.
Thanks
-Pankaj

When the size of the control changes, a WM_SIZE message is sent to your window. You must recalculate your aspect ratio according to the new size.
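As a hedged sketch of that recalculation (not part of the original answer; it reuses the asker's variables controlPosition, cntrlwidth, cntrlheigth, newImageWidth, newImageHeight, image, originalWidth, originalHeight and gdiGraphics): one likely cause of the drift described in Part 1 is that imageContainer's Width/Height are set to the control's right/bottom coordinates instead of the scaled image size, and no centering offset is applied. A centered destination rectangle could look like this:
// Sketch only: destination rectangle that centers the scaled image in the control.
Gdiplus::RectF imageContainer;
imageContainer.X      = controlPosition.left + (cntrlwidth  - newImageWidth)  / 2.0f;
imageContainer.Y      = controlPosition.top  + (cntrlheigth - newImageHeight) / 2.0f;
imageContainer.Width  = (Gdiplus::REAL)newImageWidth;
imageContainer.Height = (Gdiplus::REAL)newImageHeight;
gdiGraphics.DrawImage(&image, imageContainer, 0, 0,
                      (Gdiplus::REAL)originalWidth, (Gdiplus::REAL)originalHeight,
                      Gdiplus::UnitPixel);
Presumably the same percent factor and centering offset would also have to be applied to the crop rectangle's X and Y in Part 2, not only to its Width and Height.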

Related

Rotating an image using Borland C++ Builder and Windows API functions

I built this example to quickly rotate images 90 degrees, but part of the image always gets cut off on the sides. After many tests, I unfortunately still don't understand the cause of the problem.
void rotate()
{
    Graphics::TBitmap *SrcBitmap = new Graphics::TBitmap;
    Graphics::TBitmap *DestBitmap = new Graphics::TBitmap;
    SrcBitmap->LoadFromFile("Crayon.bmp");
    DestBitmap->Width = SrcBitmap->Width;
    DestBitmap->Height = SrcBitmap->Height;
    SetGraphicsMode(DestBitmap->Canvas->Handle, GM_ADVANCED);
    double myangle = (double)(90.0 / 180.0) * 3.1415926;
    int x0 = SrcBitmap->Width / 2;
    int y0 = SrcBitmap->Height / 2;
    double cx = x0 - cos(myangle)*x0 + sin(myangle)*y0;
    double cy = y0 - cos(myangle)*y0 - sin(myangle)*x0;
    XFORM xForm; // world transform for the destination canvas (declared here so the snippet is self-contained)
    xForm.eM11 = (FLOAT) cos(myangle);
    xForm.eM12 = (FLOAT) sin(myangle);
    xForm.eM21 = (FLOAT) -sin(myangle);
    xForm.eM22 = (FLOAT) cos(myangle);
    xForm.eDx = (FLOAT) cx;
    xForm.eDy = (FLOAT) cy;
    SetWorldTransform(DestBitmap->Canvas->Handle, &xForm);
    BitBlt(DestBitmap->Canvas->Handle,
           0,
           0,
           SrcBitmap->Width,
           SrcBitmap->Height,
           SrcBitmap->Canvas->Handle,
           0,
           0,
           SRCCOPY);
    DestBitmap->SaveToFile("Crayon2.bmp");
    delete DestBitmap;
    delete SrcBitmap;
}
If you are rotating the whole image, the width and height of the destination image should be swapped:
DestBitmap->Width = SrcBitmap->Height;
DestBitmap->Height = SrcBitmap->Width;
The transform routine was centering the image based on the original width/height. We want to adjust the x/y position of the BitBlt so the starting point is pushed back to the top/left:
int offset = (SrcBitmap->Width - SrcBitmap->Height) / 2;
BitBlt(DestBitmap->Canvas->Handle, offset, offset, SrcBitmap->Width, SrcBitmap->Height,
SrcBitmap->Canvas->Handle, 0, 0, SRCCOPY);
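For reference, here is rotate() with both of those changes applied (an untested sketch; otherwise identical to the code in the question):
void rotate()
{
    Graphics::TBitmap *SrcBitmap = new Graphics::TBitmap;
    Graphics::TBitmap *DestBitmap = new Graphics::TBitmap;
    SrcBitmap->LoadFromFile("Crayon.bmp");
    // A 90-degree rotation swaps the output dimensions.
    DestBitmap->Width = SrcBitmap->Height;
    DestBitmap->Height = SrcBitmap->Width;
    SetGraphicsMode(DestBitmap->Canvas->Handle, GM_ADVANCED);
    double myangle = (90.0 / 180.0) * 3.1415926;
    int x0 = SrcBitmap->Width / 2;
    int y0 = SrcBitmap->Height / 2;
    XFORM xForm;
    xForm.eM11 = (FLOAT) cos(myangle);
    xForm.eM12 = (FLOAT) sin(myangle);
    xForm.eM21 = (FLOAT) -sin(myangle);
    xForm.eM22 = (FLOAT) cos(myangle);
    xForm.eDx = (FLOAT)(x0 - cos(myangle)*x0 + sin(myangle)*y0);
    xForm.eDy = (FLOAT)(y0 - cos(myangle)*y0 - sin(myangle)*x0);
    SetWorldTransform(DestBitmap->Canvas->Handle, &xForm);
    // Shift the logical origin so the rotated image lands at (0,0) in the destination.
    int offset = (SrcBitmap->Width - SrcBitmap->Height) / 2;
    BitBlt(DestBitmap->Canvas->Handle, offset, offset,
           SrcBitmap->Width, SrcBitmap->Height,
           SrcBitmap->Canvas->Handle, 0, 0, SRCCOPY);
    DestBitmap->SaveToFile("Crayon2.bmp");
    delete DestBitmap;
    delete SrcBitmap;
}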
Once I had a similar problem.
I wanted to rotate two images around a common rotation point, but I couldn't do it with the standard function, because it doesn't allow a rotation point other than the center.
Nevertheless, I made notes on the standard function at the time. Maybe they will help you.
I remember that it was important that the size of the target image is correct! If a portrait image becomes a landscape image, the image becomes wider; therefore the BitBlt call must also use the size of the target image.
Here is my note on the standard function.
Filling in the xForm parameters was not quite the same for me as in your code snippet.
This was the function I then used to rotate around an arbitrary center.
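As a general sketch of that technique (not the author's original function; it assumes <windows.h> and <math.h> are available), a world transform that rotates by angle radians around an arbitrary point (px, py) can be set up like this:
// Sketch: rotate subsequent drawing on 'hdc' by 'angle' around (px, py).
static void SetRotationAboutPoint(HDC hdc, double angle, double px, double py)
{
    XFORM xForm;
    xForm.eM11 = (FLOAT) cos(angle);
    xForm.eM12 = (FLOAT) sin(angle);
    xForm.eM21 = (FLOAT) -sin(angle);
    xForm.eM22 = (FLOAT) cos(angle);
    // Translation chosen so that (px, py) maps onto itself.
    xForm.eDx = (FLOAT)(px - cos(angle)*px + sin(angle)*py);
    xForm.eDy = (FLOAT)(py - sin(angle)*px - cos(angle)*py);
    SetGraphicsMode(hdc, GM_ADVANCED);
    SetWorldTransform(hdc, &xForm);
}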

Getting absolute rectangle coordinates after direct2d translation and scale

I'm using direct2d to draw a bitmap (play a video) in a window, and I want to get the absolute coordinates for any position in the playing space, whether transformations are applied or not. So if the resolution is 1280x720, then by hovering the cursor over the image, I should get values like x = 0 ... 1280, y = 0 ... 720.
The positions of the total video area are in the variable m_rcLiveWindowPos, while the variable m_rcDstVideoRect contains the positions of the actual video after adjusting for the aspect ratio. Finally, m_rcSrcVideoRect is just the video resolution (ex: left=0, top=0, right=1280, bottom=720).
Below, I applied a translation and then a scale to the renderTarget. The rawScaleFactor is a number representing the amount to scale the video: if rawScaleFactor=1, then the video should be played at 100%. If 2, then at 200%.
This all works great -- the video zooms in properly and I can click and drag the video around. The problem is that I want to get the absolute x and y coordinates of the video resolution while my cursor is hovering over the video. The first definitions of mousePosInImage work for videos with no zoom/panning with the m_rcDstVideoRect sitting in a "fitted" position, but the values are incorrect for a zoomed-in video.
if (rawScaleFactor != 0)
{
// Make the dragging more precise based on the scaling factor.
float dragPosX = (float)m_rawScaleOffsetX / (rawScaleFactor * 2.0f);
float dragPosY = (float)m_rawScaleOffsetY / (rawScaleFactor * 2.0f);
D2D1_MATRIX_3X2_F translation = D2D1::Matrix3x2F::Translation(dragPosX, dragPosY);
// Get the center point of the current image.
float centerPointX = float(m_rcLiveWindowPos.Width()) / 2;
float centerPointY = float(m_rcLiveWindowPos.Height()) / 2;
// Calculate the amount that the image must scaled by.
D2D1ScaleFactor = ((float)m_videoResolution.width / (float)(m_rcDstVideoRect.right - m_rcDstVideoRect.left)) * (float)rawScaleFactor;
D2D1_MATRIX_3X2_F scale = D2D1::Matrix3x2F::Scale(D2D1::Size(D2D1ScaleFactor, D2D1ScaleFactor),
D2D1::Point2F(centerPointX, centerPointY));
// First translate the image, then scale it.
m_pJRenderTarget->SetTransform(translation * scale);
int32_t width = ((int32_t)m_videoResolution.width);
int32_t height = ((int32_t)m_videoResolution.height);
// This works for non-zoomed in video:
m_mousePosInImageX = int32_t(width * (rawMousePosX - m_rcDstVideoRect.left) / (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
m_mousePosInImageY = int32_t(height * (rawMousePosY - m_rcDstVideoRect.top) / (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));
// Does not work for all cases...
m_mousePosInImageX = int32_t((centerPointX * D2D1ScaleFactor) - (centerPointX) + (m_mousePosInImageX / D2D1ScaleFactor));
m_mousePosInImageY = int32_t((centerPointY * D2D1ScaleFactor) - (centerPointY) + (m_mousePosInImageY / D2D1ScaleFactor));
}
m_pJRenderTarget->DrawBitmap(m_pJVideoBitmap,
m_rcDstVideoRect,
1.0f,
D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR,
m_rcSrcVideoRect);
I need a way to "reflect" the changes that SetTransform() did in the mousePosInImage variables.
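One common approach (a hedged sketch, not from the original post) is to invert the exact matrix that was passed to SetTransform() and run the raw mouse point through it, so the point lands back in the untransformed space of m_rcDstVideoRect, where the original "fitted" mapping applies unchanged:
// Sketch: map the window-space mouse position back into video pixels by
// inverting the render transform (same 'translation * scale' as above).
D2D1_MATRIX_3X2_F toScreen = translation * scale;
D2D1::Matrix3x2F inverse = *D2D1::Matrix3x2F::ReinterpretBaseType(&toScreen);
if (inverse.Invert())  // fails only if the matrix is singular
{
    D2D1_POINT_2F p = inverse.TransformPoint(
        D2D1::Point2F((FLOAT)rawMousePosX, (FLOAT)rawMousePosY));
    m_mousePosInImageX = int32_t(width * (p.x - m_rcDstVideoRect.left) /
                                 (m_rcDstVideoRect.right - m_rcDstVideoRect.left));
    m_mousePosInImageY = int32_t(height * (p.y - m_rcDstVideoRect.top) /
                                 (m_rcDstVideoRect.bottom - m_rcDstVideoRect.top));
}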

Converting mouse position way off

I've got a remote screen that takes the mouse position and then converts it into the captured screen's position, but the position is always off. I'm not the greatest at math, but I'm sure my conversion is correct. The image size I get, the screen size, and the window size are all correct. The only thing that ends up being off is my click point after conversion.
** Also I know I have no error checking. This is to help with readability.
EXAMPLE: I want to click on screen point 1102x, 999y. The remote window point is approx. 676, 584. After conversion the point gets translated to 1081.5, 935.3. I would understand being a few pixels off, because it's hard to click on that exact spot through the remote window, but it's getting translated 60 pixels up and 20 pixels left.
Let's say that my remote screen is 800 x 600 and the captured one is 1280 x 1024.
To get the difference I use (1280 / 800)*Clickpointx and (1024 / 600)*Clickpointy.
My problem is sWidth and sHeight are always off by at least 20 and sometimes are off by upwards of 80 depending on where the click is inside the window.
POINT p;
float sWidth;
float sHeight;
GetCursorPos(&p);
RECT rect;
GetWindowRect(hWnd, &rect); // Get Window Size
ScreenToClient(hWnd, &p); // convert points relative to screen to points relative to window
float wWidth = rect.right - rect.left;
float wHeight = rect.bottom - rect.top;
Gdiplus::Image* image = Gdiplus::Image::FromStream(sRemote_Handler.istream); //Get Captured image Size;
float iWidth = image->GetWidth();
float iHeight = image->GetHeight();
INPUT input;
input.type = INPUT_MOUSE;
input.mi.time = 0;
float pointx = p.x;
float pointy = p.y;
sWidth = (iWidth / wWidth)*pointx; // divide image width by screen width
sHeight = (iHeight / wHeight)*pointy; // divide image height by screen height
input.mi.dx = sWidth*(65536.0f / iWidth); //convert to pixels
input.mi.dy = sHeight*(65536.0f / iHeight); // ^
input.mi.dwFlags = MOUSEEVENTF_ABSOLUTE | MOUSEEVENTF_MOVE;
SendInput(1, &input, sizeof(input)); // move mouse
I've tried changing the conversion method to (Clickpointx*1280)/800 and (Clickpointy*1024)/600. Still the same results.
sWidth = (pointx * iWidth) / wWidth;
sHeight = (pointy * iHeight) / wHeight;
Changed the conversion method once more. Same Results
sWidth = (pointx / iWidth)*wWidth;
sHeight = (pointy / iHeight)*wHeight;
EDIT
I've come to the conclusion that it's the ScreenToClient Windows function.
When clicking in the bottom-right corner, which should be damn near 800,600, both values are around 50 less than what they should be. When I click in the top left, both values are where they should be; the numbers are reported as 0,1. It seems as though the further I get from the initial 0,0 point, the less accurate ScreenToClient gets.
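A hedged note, not part of the original post: GetWindowRect returns the full window rectangle including the border and title bar, while ScreenToClient produces coordinates relative to the client area, so dividing a client-relative point by the window size yields an error that grows toward the bottom-right, much like the discrepancy described above. A sketch that scales by the client rectangle instead (iWidth/iHeight as in the question):
// Sketch: use the client area, not the window rectangle, as the reference size.
POINT p;
GetCursorPos(&p);
ScreenToClient(hWnd, &p);       // p is now relative to the client area

RECT client;
GetClientRect(hWnd, &client);   // left/top are always 0 here
float wWidth = (float)(client.right - client.left);
float wHeight = (float)(client.bottom - client.top);

float sWidth = (iWidth / wWidth) * (float)p.x;
float sHeight = (iHeight / wHeight) * (float)p.y;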

How to solve performance issues with QPixmap (large drawingjobs)?

I am coding a small map editor (with rectangle tiles) and I need a way to draw a large amount of images OR one big image. The application is simple: You draw images on an empty screen with your mouse and when you are finished you can save it. A tile consists of a small image.
I tried out several solutions to display the tiles:
- Each tile has its own QGraphicsItem (this works until you have a 1000x1000 map).
- Each tile gets drawn on one big QPixmap (this means a very large image. Example: a map of 1000x1000 tiles where each tile is 32x32 pixels means the QPixmap has a size of 32000x32000. This is a problem for QPainter.)
- The current solution: iterate through the width & height of the TileLayer and draw each single tile with painter->drawPixmap(). The paint() method of my TileLayer looks like this:
void TileLayerGraphicsItem::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* /*widget*/)
{
    painter->setClipRect(option->exposedRect);
    int m_width = m_layer->getSize().width();
    int m_height = m_layer->getSize().height();
    for(int i = 0; i < m_width; i++)
    {
        for(int j = 0; j < m_height; j++)
        {
            Tile* thetile = m_layer->getTile(i, j);
            if(thetile == NULL) continue;
            const QRectF target(thetile->getLayerPos().x() * thetile->getSize().width(),
                                thetile->getLayerPos().y() * thetile->getSize().height(),
                                thetile->getSize().width(),
                                thetile->getSize().height());
            const QRectF source(0, 0, thetile->getSize().width(), thetile->getSize().height());
            painter->drawImage(target, *thetile->getImage(), source);
        }
    }
}
This works for small maps with 100x100 or even 1000x100 tiles, but not for 1000x1000. The whole application begins to lag, which is of course because the loop is extremely expensive. To make my tool useful I need to be able to handle at least 1000x1000 tile maps without lag. Does anyone have an idea what I can do? How should I represent the tiles?
Update:
I changed the following: only maps that exceed the window size of the minimap are drawn by plotting a single pixel per tile. This is my render function now:
void RectangleRenderer::renderMinimapImage(QPainter* painter, TileMap* map,QSize windowSize)
{
for(int i=0;i<map->getLayers().size();i++)
{
TileLayer* currLayer=map->getLayers().at(i);
//if the layer is small draw it completly
if(windowSize.width()>currLayer->getSize().width()&&windowSize.height()>currLayer->getSize().height())
{
...
}
else // This is the part where the map is so big that only some pixels are drawn!
{
painter->fillRect(0,0,windowSize.width(),windowSize.height(),QBrush(QColor(map->MapColor)));
for(float i=0;i<windowSize.width();i++)
for(float j=0;j<windowSize.height();j++)
{
float tX=i/windowSize.width();
float tY=j/windowSize.height();
float pX=lerp(i,currLayer->getSize().width(),tX);
float pY=lerp(j,currLayer->getSize().height(),tY);
Tile* thetile=currLayer->getTile((int)pX,(int)pY);
if(thetile==NULL)continue;
QRgb pixelcolor=thetile->getImage()->toImage().pixel(thetile->getSize().width()/2,thetile->getSize().height()/2);
QPen pen;
pen.setColor(QColor::fromRgb(pixelcolor));
painter->setPen(pen);
painter->drawPoint(i,j);
}
}
}
}
This does not work correctly; however, it is pretty fast. The problem is my lerp (linear interpolation) function used to pick the correct tile to take a pixel from.
Does anyone have a better solution for getting the correct tiles while I iterate through the minimap pixels? At the moment I use linear interpolation between 0 and the maximum size of the tilemap, and it does not work correctly.
UPDATE 2
//currLayer->getSize() returns how many tiles are in the map
// currLayer->getTileSize() returns how big each tile is (32 pixels width for example)
int raw_width = currLayer->getSize().width()*currLayer->getTileSize().width();
int raw_height = currLayer->getSize().height()*currLayer->getTileSize().height();
int desired_width = windowSize.width();
int desired_height = windowSize.height();
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width%2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width;
qreal ratio_y = (qreal)desired_height / raw_height;
qreal floating_factor = 1;
QPointF offset;
if(ratio_x < ratio_y)
{
floating_factor = ratio_x;
calculated_height = raw_height*ratio_x;
calculated_width = desired_width;
offset = QPointF(0, (qreal)(desired_height - calculated_height)/2);
}
else
{
floating_factor = ratio_y;
calculated_width = raw_width*ratio_y;
calculated_height = desired_height;
offset = QPointF((qreal)(desired_width - calculated_width)/2,0);
}
for (int r = 0; r < calculated_height; r++)
{
for (int c = 0; c < calculated_width; c++)
{
//trying to do the following: use your code to get the desired pixel. Then divide that number by the size of the tile to get the correct pixel
Tile* thetile=currLayer->getTile((int)((r * floating_factor)*raw_width)/currLayer->getTileSize().width(),(int)(((c * floating_factor)*raw_height)/currLayer->getTileSize().height()));
if(thetile==NULL)continue;
QRgb pixelcolor=thetile->getImage()->toImage().pixel(thetile->getSize().width()/2,thetile->getSize().height()/2);
QPen pen;
pen.setColor(QColor::fromRgb(pixelcolor));
painter->setPen(pen);
painter->drawPoint(r,c);
}
}
I am trying to reverse engineer the example code, but it still does not work correctly.
Update 3
I tried the linear interpolation from Update 1 again, and while looking at the code I saw the error:
float pX=lerp(i,currLayer->getSize().width(),tX);
float pY=lerp(j,currLayer->getSize().height(),tY);
should be:
float pX=lerp(0,currLayer->getSize().width(),tX);
float pY=lerp(0,currLayer->getSize().height(),tY);
That's it. Now it works.
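For reference, a sketch of the lerp() assumed above (the asker's implementation is not shown); with this signature, passing i or j instead of 0 as the first argument yields exactly the wrong tile indices described:
// Sketch: standard linear interpolation, t in [0, 1].
static float lerp(float a, float b, float t)
{
    return a + t * (b - a);
}

// Used as in Update 3 to map a minimap pixel (i, j) to a tile index:
// float tX = i / windowSize.width();
// float pX = lerp(0, currLayer->getSize().width(), tX);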
This shows how to do it properly. You use a level of detail (lod) variable to determine how to draw the elements that are currently visible on the screen, based on their zoom.
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-example.html
Also don't iterate through all the elements that could be visible, but only go through the ones that have changed, and of those, only the ones that are currently visible.
Your next option is some other form of manual caching, so you don't have to repeatedly iterate over all O(n^2) tiles.
If you can't optimize it for QGraphicsView/QGraphicsScene... then OpenGL is probably what you may want to look into. It can do a lot of the drawing and caching directly on the graphics card so you don't have to worry about it as much.
UPDATE:
Pushing changes to a QImage on a worker thread lets you build and update a cache while leaving the rest of your program responsive; you then use a queued connection to get back onto the GUI thread and draw the QImage as a QPixmap.
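A hedged sketch of that pattern (not from the original answer; CacheWorker, setCache() and the 4096x4096 size are illustrative only):
#include <QObject>
#include <QImage>
#include <QPainter>
#include <QThread>

// Renders the tile cache on a worker thread and hands the finished QImage back
// to the GUI thread through a queued signal.
class CacheWorker : public QObject
{
    Q_OBJECT
public slots:
    void render()                        // runs on the worker thread
    {
        QImage cache(4096, 4096, QImage::Format_ARGB32_Premultiplied);
        cache.fill(Qt::transparent);
        QPainter p(&cache);
        // ... draw the changed/visible tiles into 'cache' here ...
        emit ready(cache);               // QImage is safe to pass across threads
    }
signals:
    void ready(const QImage& image);
};

// Setup on the GUI thread ('view' is whatever widget owns the cached pixmap):
// QThread* thread = new QThread;
// CacheWorker* worker = new CacheWorker;
// worker->moveToThread(thread);
// QObject::connect(worker, &CacheWorker::ready, view, [view](const QImage& img) {
//     view->setCache(QPixmap::fromImage(img));  // back on the GUI thread
// }, Qt::QueuedConnection);
// thread->start();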
QGraphicsView will let you know which tiles are visible if you ask nicely:
http://qt-project.org/doc/qt-5/qgraphicsview.html#items-5
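As a minimal sketch of that query ('view' being a QGraphicsView*), the items currently intersecting the viewport can be fetched like this:
// Sketch: only these items need to be considered for redrawing.
QList<QGraphicsItem*> visibleItems = view->items(view->viewport()->rect());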
UPDATE 2:
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-chip-cpp.html
You may need to adjust the range of zooming out that is allowed on the project to test this feature...
Under where it has
const qreal lod = option->levelOfDetailFromTransform(painter->worldTransform());
if (lod < 0.2) {
if (lod < 0.125) {
painter->fillRect(QRectF(0, 0, 110, 70), fillColor);
return;
}
QBrush b = painter->brush();
painter->setBrush(fillColor);
painter->drawRect(13, 13, 97, 57);
painter->setBrush(b);
return;
}
Add in something like:
if(lod < 0.05)
{
// using some sort of row/col value to know which ones to not draw...
// This below would only draw 1/3 of the rows and 1/3 of the column
// speeding up the redraw by about 9x.
if(row%3 != 0 || col%3 != 0)
return;// don't do any painting, return
}
UPDATE 3:
Decimation Example:
// How to decimate an image to any size, properly
// aka fast scaling
int raw_width = 1000;
int raw_height = 1000;
int desired_width = 300;
int desired_height = 200;
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width%2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width;
qreal ratio_y = (qreal)desired_height / raw_height;
qreal floating_factor = 1;
QPointF offset;
if(ratio_x < ratio_y)
{
floating_factor = ratio_x;
calculated_height = raw_height*ratio_x;
calculated_width = desired_width;
offset = QPointF(0, (qreal)(desired_height - calculated_height)/2);
}
else
{
floating_factor = ratio_y;
calculated_width = raw_width*ratio_y;
calculated_height = desired_height;
offset = QPointF((qreal)(desired_width - calculated_width)/2, 0);
}
for (int r = 0; r < calculated_height; r++)
{
for (int c = 0; c < calculated_width; c++)
{
// 2D indexing; for a flat one-dimensional buffer use raw_pixel[row*raw_width + col]
pixel[r][c] = raw_pixel[(int)(r * floating_factor)][(int)(c * floating_factor)];
}
}
Hope that helps.

Aspect ratios - how to go about them? (D3D viewport setup)

Alright - it seems my question was as cloudy as my head. Let's try again.
I have 3 properties while configuring viewports for a D3D device:
- The resolution the device is running in (full-screen).
- The physical aspect ratio of the monitor (as fraction and float:1, so for ex. 4:3 & 1.33).
- The aspect ratio of the source resolution (source resolution itself is kind of moot and tells us little more than the aspect ratio the rendering wants and the kind of resolution that would be ideal to run in).
Then we run into this:
// -- figure out aspect ratio adjusted VPs --
m_nativeVP.Width = xRes;
m_nativeVP.Height = yRes;
m_nativeVP.X = 0;
m_nativeVP.Y = 0;
m_nativeVP.MaxZ = 1.f;
m_nativeVP.MinZ = 0.f;
FIX_ME // this does not cover all bases -- fix!
uint xResAdj, yResAdj;
if (g_displayAspectRatio.Get() < g_renderAspectRatio.Get())
{
xResAdj = xRes;
yResAdj = (uint) ((float) xRes / g_renderAspectRatio.Get());
}
else if (g_displayAspectRatio.Get() > g_renderAspectRatio.Get())
{
xResAdj = (uint) ((float) yRes * g_renderAspectRatio.Get());
yResAdj = yRes;
}
else // ==
{
xResAdj = xRes;
yResAdj = yRes;
}
m_fullVP.Width = xResAdj;
m_fullVP.Height = yResAdj;
m_fullVP.X = (xRes - xResAdj) >> 1;
m_fullVP.Y = (yRes - yResAdj) >> 1;
m_fullVP.MaxZ = 1.f;
m_fullVP.MinZ = 0.f;
Now as long as g_displayAspectRatio equals the ratio of xRes/yRes (= adapted from device resolution), all is well and this code will do what's expected of it. But as soon as those 2 values are no longer related (for example, someone runs a 4:3 resolution on a 16:10 screen, hardware-stretched) another step is required to compensate, and I've got trouble figuring out how exactly.
(and p.s I use C-style casts on atomic types, live with it :-) )
I'm assuming what you want to achieve is a "square" projection, e.g. when you draw a circle you want it to look like a circle rather than an ellipse.
The only thing you should play with is your projection (camera) aspect ratio. In normal cases, monitors keep pixels square and all you have to do is set your camera aspect ratio equal to your viewport's aspect ratio:
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio;
In the stretched case you describe (4:3 image stretched on a 16:10 screen for example), pixels are not square anymore and you have to take that into account in your camera aspect ratio:
stretch_factor_x = screen_size_x / viewport_res_x;
stretch_factor_y = screen_size_y / viewport_res_y;
pixel_aspect_ratio = stretch_factor_x / stretch_factor_y;
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio * pixel_aspect_ratio;
Where screen_size_x and screen_size_y are multiples of the real size of the monitor (e.g. 16:10).
However, you should simply assume square pixels (unless you have a specific reason not to), as the monitor may report incorrect physical size information to the system, or no information at all. Also, monitors don't always stretch; mine, for example, keeps a 1:1 pixel aspect ratio and adds black borders for lower resolutions.
Edit
If you want to adjust your viewport to some aspect ratio and fit it on an arbitrary resolution, then you could do it like this:
viewport_aspect_ratio = 16.0 / 10.0; // The aspect ratio you want your viewport to have
screen_aspect_ratio = screen_res_x / screen_res_y;
if (viewport_aspect_ratio > screen_aspect_ratio) {
// Viewport is wider than screen, fit on X
viewport_res_x = screen_res_x;
viewport_res_y = viewport_res_x / viewport_aspect_ratio;
} else {
// Screen is wider than viewport, fit on Y
viewport_res_y = screen_res_y;
viewport_res_x = viewport_res_y * viewport_aspect_ratio;
}
camera_aspect_ratio = viewport_res_x / viewport_res_y;
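A hedged sketch (not part of the original answer) of turning that result into a centered Direct3D 9 viewport plus a matching projection; xRes/yRes are the mode dimensions from the question, device is assumed to be the IDirect3DDevice9*, and the field of view and clip planes are placeholders:
// Sketch: fit a 16:10 viewport onto the current mode, center it, and match the camera.
float viewport_aspect_ratio = 16.0f / 10.0f;
float screen_aspect_ratio = (float)xRes / (float)yRes;

UINT vpWidth, vpHeight;
if (viewport_aspect_ratio > screen_aspect_ratio)
{
    vpWidth = xRes;
    vpHeight = (UINT)(xRes / viewport_aspect_ratio);
}
else
{
    vpHeight = yRes;
    vpWidth = (UINT)(yRes * viewport_aspect_ratio);
}

D3DVIEWPORT9 vp;
vp.X = (xRes - vpWidth) / 2;    // center horizontally
vp.Y = (yRes - vpHeight) / 2;   // center vertically
vp.Width = vpWidth;
vp.Height = vpHeight;
vp.MinZ = 0.f;
vp.MaxZ = 1.f;
device->SetViewport(&vp);

// The projection's aspect ratio must match the viewport (optionally multiplied
// by the pixel aspect ratio if you compensate for non-square pixels).
D3DXMATRIX proj;
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f,
                           (float)vpWidth / (float)vpHeight, 0.1f, 1000.f);
device->SetTransform(D3DTS_PROJECTION, &proj);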