Dialog Controls in Ultra High Resolutions - mfc

I am having trouble with the spacing and size of dialog controls when using my application in an ultra high resolution environment. I place the controls using the following code in a for loop:
GetClientRect(cRectDimen);
int iHalf = cRectDimen.right / 2;
int iY = cRectDimen.top;
int iX = cRectDimen.left + 5;        // hard-coded 5-pixel margin
int iVeryFarRight = cRectDimen.right - 5;
int iFarRight = iHalf - 10;
POINT ptTop, ptBottom;
cStat = new CStatic;
iY += 20;                            // hard-coded 20-pixel row advance
ptTop.x = iX + 10;
ptTop.y = iY;
ptBottom.x = iX + pDataField->m_csDesc.GetLength() * 10; // assumes 10 pixels per character
ptBottom.y = iY + 15;                // hard-coded 15-pixel control height
cStatRect.SetRect(ptTop, ptBottom);
Yet the ultra high resolution image appears as: [screenshot omitted]
And the high resolution image as: [screenshot omitted]

You need to take into account the size of the font.
CFont* pFont = GetFont();
LOGFONT lf;
pFont->GetLogFont(&lf);
int iFontHeight = lf.lfHeight; // use this + padding to space your controls vertically (note lfHeight may be negative, so take its absolute value)
If you want to get more detail on the font, you can use GetTextMetrics().
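For example, a minimal sketch of measuring the dialog font with GetTextMetrics() (run inside the dialog class; the variable names are just for illustration):
CClientDC dc(this);
CFont* pOldFont = dc.SelectObject(GetFont());
TEXTMETRIC tm;
dc.GetTextMetrics(&tm);
dc.SelectObject(pOldFont);
// tmHeight + tmExternalLeading is the natural line spacing for this font;
// tmAveCharWidth is a better width estimate than a fixed 10 pixels per character.
int iLineHeight = tm.tmHeight + tm.tmExternalLeading;
int iCharWidth = tm.tmAveCharWidth;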

Related

How to downsample a not-power-of-2 texture in UnrealEngine?

I am rendering the viewport with a resolution of something like 1920x1080 multiplied by an oversampling value like 4. Now I need to downsample from the rendered resolution of 7680x4320 back to 1920x1080.
Are there any functions in Unreal I could use for that? Or any library (Windows only) that handles this nicely?
Or what would be a proper way of writing this myself?
We tried to implement downsampling, but it only works if SnapshotScale is 2; when it's higher than 2 it doesn't seem to have any effect on image quality.
UTexture2D* AAVESnapShotManager::DownsampleTexture(UTexture2D* Texture)
{
    UTexture2D* Result = UTexture2D::CreateTransient(RenderSettings.imageWidth, RenderSettings.imageHeight, PF_B8G8R8A8);

    void* TextureDataVoid = Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY);
    void* ResultDataVoid = Result->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FColor* TextureData = (FColor*)TextureDataVoid;
    FColor* ResultData = (FColor*)ResultDataVoid;

    int32 WindowSize = RenderSettings.resolutionScale / 2;

    for (int x = 0; x < Result->GetSizeX(); ++x)
    {
        for (int y = 0; y < Result->GetSizeY(); ++y)
        {
            const uint32 ResultIndex = y * Result->GetSizeX() + x;
            uint32_t R = 0, G = 0, B = 0, A = 0;
            int32 Samples = 0;
            for (int32 dx = -WindowSize; dx < WindowSize; ++dx)
            {
                for (int32 dy = -WindowSize; dy < WindowSize; ++dy)
                {
                    int32 PosX = (x * RenderSettings.resolutionScale + dx);
                    int32 PosY = (y * RenderSettings.resolutionScale + dy);
                    if (PosX < 0 || PosX >= Texture->GetSizeX() || PosY < 0 || PosY >= Texture->GetSizeY())
                    {
                        continue;
                    }

                    size_t TextureIndex = PosY * Texture->GetSizeX() + PosX;
                    FColor& Color = TextureData[TextureIndex];
                    R += Color.R;
                    G += Color.G;
                    B += Color.B;
                    A += Color.A;
                    ++Samples;
                }
            }

            ResultData[ResultIndex] = FColor(R / Samples, G / Samples, B / Samples, A / Samples);
        }
    }

    Texture->PlatformData->Mips[0].BulkData.Unlock();
    Result->PlatformData->Mips[0].BulkData.Unlock();
    Result->UpdateResource();
    return Result;
}
I expect a high-quality oversampled texture output that works with any positive integer value of SnapshotScale.
I have a suggestion. It's not really direct, but it involves no hand-written image filtering and no imported libraries.
Make an unlit Material with nodes TextureObject->TextureSample-> connect to Emissive.
Use the texture you start with in your function to populate the Texture Object on a Material Instance Dynamic of the material.
Use the "Draw Material to Render Target" function to draw the Material Instance Dynamic to a Render Target that is pre-set with your target resolution.

Masking image and video in OpenCV C++

I am a beginner in image processing, especially in OpenCV C++, and I have a problem in my work. In C# with EmguCV it is possible to mask image and video files based on an ROI. My question is: is it possible to make masks the same way in OpenCV C++? I have tried to use an ROI in OpenCV C++, but the result only crops the image, unlike the example that I attached here. I also attached the pseudocode of the masking in C# with EmguCV, but I have not found a C++ version yet. I am looking forward to any answer. Thank you.
pixelSize, out long processingTime)
{
    int x = imageInput.Width / pixelSize;
    int y = imageInput.Height / pixelSize;
    Mat imageBlock = new Mat();
    Point darkestBlockPoint = new Point();
    int darkestBlockValue = 100000;
    //AppendLogTxt("", "y,x,value", "masking");
    for (int i = marginV; i < y - marginV; i++)
    {
        for (int j = marginH; j < x - marginH; j++)
        {
            imageBlock = new Mat(imageInput, new Rectangle(j * pixelSize, i * pixelSize, pixelSize, pixelSize));
            MCvScalar avg = CvInvoke.Mean(imageBlock);
            //AppendLogTxt("", i.ToString() + "," + j.ToString() + "," + avg.V0.ToString(), "masking");
            if ((int)avg.V0 < darkestBlockValue)
            {
                darkestBlockValue = (int)avg.V0;
                darkestBlockPoint.X = j;
                darkestBlockPoint.Y = i;
            }
        }
    }
    darkestBlockPoint.X = darkestBlockPoint.X * pixelSize + pixelSize / 2;
    darkestBlockPoint.Y = darkestBlockPoint.Y * pixelSize + pixelSize / 2;
    return darkestBlockPoint;
}
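For reference, a direct OpenCV C++ translation of that EmguCV pseudocode might look like the sketch below (the function name FindDarkestBlock is hypothetical, and the out parameter is dropped):
#include <opencv2/core.hpp>

// Hypothetical C++ counterpart of the EmguCV snippet above: scan
// pixelSize x pixelSize blocks and return the centre of the darkest one.
cv::Point FindDarkestBlock(const cv::Mat& imageInput, int marginV, int marginH, int pixelSize)
{
    const int x = imageInput.cols / pixelSize;
    const int y = imageInput.rows / pixelSize;
    cv::Point darkestBlockPoint;
    double darkestBlockValue = 100000.0;
    for (int i = marginV; i < y - marginV; i++)
    {
        for (int j = marginH; j < x - marginH; j++)
        {
            // A Rect-based Mat header references the block without copying
            cv::Mat imageBlock = imageInput(cv::Rect(j * pixelSize, i * pixelSize, pixelSize, pixelSize));
            const double avg = cv::mean(imageBlock)[0];
            if (avg < darkestBlockValue)
            {
                darkestBlockValue = avg;
                darkestBlockPoint = cv::Point(j, i);
            }
        }
    }
    darkestBlockPoint.x = darkestBlockPoint.x * pixelSize + pixelSize / 2;
    darkestBlockPoint.y = darkestBlockPoint.y * pixelSize + pixelSize / 2;
    return darkestBlockPoint;
}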

Getting information about nearest monitor to mouse cursor in X11

I need to get information such as the size and coordinates of the monitor nearest to the mouse cursor in multi-monitor systems. I've done it before on Windows and I want to know how to do it in Linux/X11.
Using the code below I can measure the combined size of all screens, but I cannot measure each monitor separately.
Screen *screen = DefaultScreenOfDisplay(DisplayHandle);
int xx = screen->width / 2 - Settings::WindowWidth / 2;
int yy = screen->height / 2 - Settings::WindowHeight / 2;
My previous code:
POINT mouse_position;
GetCursorPos(&mouse_position);
HMONITOR hMonitor = MonitorFromPoint(mouse_position, MONITOR_DEFAULTTOPRIMARY);
MONITORINFOEX monitor_info;
memset(&monitor_info, 0, sizeof(MONITORINFOEX));
monitor_info.cbSize = sizeof(MONITORINFOEX);
GetMonitorInfo(hMonitor, &monitor_info);
// CREATE WINDOW IN CENTER OF MONITOR //
int edge = GetSystemMetrics(SM_CXEDGE);
int fixed_frame = GetSystemMetrics(SM_CXFIXEDFRAME);
int monitor_width = monitor_info.rcMonitor.right - monitor_info.rcMonitor.left;
int monitor_height = monitor_info.rcMonitor.bottom - monitor_info.rcMonitor.top;
int xx = monitor_width / 2 - Settings::WindowWidth / 2;
int yy = monitor_height / 2 - Settings::WindowHeight / 2;
int win_x = xx - edge + monitor_info.rcMonitor.left;
int win_y = yy - fixed_frame + monitor_info.rcMonitor.top;
Thanks
If you have two monitors that form a single desktop, use the Xinerama extension. The code below picks the largest screen of the available monitors, but you'll get the idea.
#include <X11/extensions/Xinerama.h>

// By default go fullscreen
m_winWidth = DisplayWidth (m_display, m_screenNo);
m_winHeight = DisplayHeight (m_display, m_screenNo);

// But, with Xinerama, use the largest physical screen
if (XineramaIsActive (m_display))
{
    int m = 0;
    int pixels = 0;
    XineramaScreenInfo *xs = XineramaQueryScreens (m_display, &m);
    if (0 != xs && m > 0)
    {
        for (int i = 0; i < m; i++)
        {
            //printf ("%dx%d, [%d, %d] %d\n", xs[i].width, xs[i].height, xs[i].x_org, xs[i].y_org, xs[i].screen_number);
            if (xs[i].width * xs[i].height > pixels)
            {
                m_xineramaScreen = xs[i].screen_number; // pick screen
                pixels = xs[i].width * xs[i].height;
                m_winWidth = xs[i].width;
                m_winHeight = xs[i].height;
            }
        }
        XFree (xs);
    }
}
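Since the question asks for the monitor under the cursor rather than the largest one, the same Xinerama data can be combined with XQueryPointer. A sketch (untested, error handling omitted):
#include <X11/Xlib.h>
#include <X11/extensions/Xinerama.h>

// Return the Xinerama screen containing the mouse cursor, if any.
bool MonitorUnderCursor(Display* dpy, XineramaScreenInfo* out)
{
    Window root_ret, child_ret;
    int root_x, root_y, win_x, win_y;
    unsigned int mask;
    XQueryPointer(dpy, DefaultRootWindow(dpy), &root_ret, &child_ret,
                  &root_x, &root_y, &win_x, &win_y, &mask);

    bool found = false;
    int count = 0;
    XineramaScreenInfo* xs = XineramaIsActive(dpy) ? XineramaQueryScreens(dpy, &count) : 0;
    for (int i = 0; i < count; i++)
    {
        // The cursor is inside this monitor's rectangle
        if (root_x >= xs[i].x_org && root_x < xs[i].x_org + xs[i].width &&
            root_y >= xs[i].y_org && root_y < xs[i].y_org + xs[i].height)
        {
            *out = xs[i];
            found = true;
            break;
        }
    }
    if (xs)
        XFree(xs);
    return found;
}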

Cocos2d-x Touch Event coordinate system

I'm using cocos2d-x 3.7.1
I have a Node in my Scene, and I'm adding child Nodes to that Node (HexField is a subclass of Node).
int rhombusSizeX = 1;
int rhombusSizeY = 2;
for (int y = 0; y < rhombusSizeY; ++y){
    for (int x = 0; x < rhombusSizeX; ++x){
        HexField* field = HexField::create();
        field->setPosition(Vec2(x*30 + y*15, y*30));
        field->setName("HexField " + to_string(x) + "," + to_string(y));

        auto listener = EventListenerTouchOneByOne::create();
        listener->setSwallowTouches(true);
        listener->onTouchBegan = CC_CALLBACK_2(HexField::onTouchBegan, field);
        listener->onTouchMoved = CC_CALLBACK_2(HexField::onTouchMoved, field);
        listener->onTouchEnded = CC_CALLBACK_2(HexField::onTouchEnded, field);
        Director::getInstance()->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, field);

        this->addChild(field, 1);
    }
}
If there is only one HexField added
int rhombusSizeX = 1;
int rhombusSizeY = 1;
The touch->getLocation() in HexField::onTouchBegan is reported as expected in World Coordinates.
If there is more than one HexField added
int rhombusSizeX = 5;
int rhombusSizeY = 5;
touch->getLocation() returns coordinates relative to the "one before last" added HexField, which in this case will be HexField 3,4.
Why is that so? Is it a bug?
I have found the answer now.
It all happened because I didn't call:
Director::getInstance()->popMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
after:
Director::getInstance()->pushMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
Director::getInstance()->loadMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW, transform);
in my draw function.
It looks like that error propagated to other parts of my program, causing some strange behavior.
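For reference, the balanced pattern inside a custom draw routine looks like this (the method name onDraw is illustrative; the push/load/pop calls are the ones from the answer above):
void HexField::onDraw(const Mat4& transform, uint32_t flags)
{
    auto director = Director::getInstance();
    director->pushMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
    director->loadMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW, transform);

    // ... custom drawing ...

    // Every push must be balanced by a pop, otherwise the stale matrix
    // leaks into the nodes drawn afterwards and into the touch handling.
    director->popMatrix(MATRIX_STACK_TYPE::MATRIX_STACK_MODELVIEW);
}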

Quick code to resize DIB image and maintain good img quality

There are many algorithms for image resizing, e.g. Lanczos, bicubic, and bilinear. But most of them are fairly complex and therefore consume too much CPU.
What I need is fast relatively simple C++ code to resize images with acceptable quality.
Here is an example of what I'm currently doing:
for (int y = 0; y < height; y++)
{
    int srcY1Coord = int((double)(y * srcHeight) / height);
    int srcY2Coord = min(srcHeight - 1, max(srcY1Coord, int((double)((y + 1) * srcHeight) / height) - 1));
    for (int x = 0; x < width; x++)
    {
        int srcX1Coord = int((double)(x * srcWidth) / width);
        int srcX2Coord = min(srcWidth - 1, max(srcX1Coord, int((double)((x + 1) * srcWidth) / width) - 1));
        int srcPixelsCount = (srcX2Coord - srcX1Coord + 1) * (srcY2Coord - srcY1Coord + 1);
        RGB32 color32;
        UINT32 r(0), g(0), b(0), a(0);
        for (int xSrc = srcX1Coord; xSrc <= srcX2Coord; xSrc++)
            for (int ySrc = srcY1Coord; ySrc <= srcY2Coord; ySrc++)
            {
                RGB32 curSrcColor32 = pSrcDIB->GetDIBPixel(xSrc, ySrc);
                r += curSrcColor32.r; g += curSrcColor32.g; b += curSrcColor32.b; a += curSrcColor32.alpha;
            }
        color32.r = BYTE(r / srcPixelsCount); color32.g = BYTE(g / srcPixelsCount); color32.b = BYTE(b / srcPixelsCount); color32.alpha = BYTE(a / srcPixelsCount);
        SetDIBPixel(x, y, color32);
    }
}
The code above is fast enough, but the quality is not OK when scaling pictures up.
Does someone perhaps already have a fast and good C++ code sample for scaling DIBs?
Note: I was using StretchDIBits before. It was super-slow when I needed to downsize a 10000x10000 picture to 100x100; my code is much, much faster. I just want a bit higher quality.
P.S. I'm using my own SetPixel/GetPixel functions to work directly with the data array for speed; this is not a device context!
Why are you doing it on the CPU? Using GDI, there's a good chance of some hardware acceleration. Use StretchBlt and SetStretchBltMode.
In pseudocode:
create source dc and destination dc using CreateCompatibleDC
create source and destination bitmaps
SelectObject source bitmap into source DC and dest bitmap into dest DC
SetStretchBltMode
StretchBlt
release DCs
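Putting that pseudocode into concrete GDI calls might look like this (hSrcBitmap/hDstBitmap and the sizes are assumed to come from your DIB sections; error handling omitted):
HDC srcDC = CreateCompatibleDC(NULL);
HDC dstDC = CreateCompatibleDC(NULL);
HGDIOBJ oldSrc = SelectObject(srcDC, hSrcBitmap);
HGDIOBJ oldDst = SelectObject(dstDC, hDstBitmap);

// HALFTONE gives the best quality; the docs require SetBrushOrgEx after setting it
SetStretchBltMode(dstDC, HALFTONE);
SetBrushOrgEx(dstDC, 0, 0, NULL);
StretchBlt(dstDC, 0, 0, dstWidth, dstHeight, srcDC, 0, 0, srcWidth, srcHeight, SRCCOPY);

SelectObject(srcDC, oldSrc);
SelectObject(dstDC, oldDst);
DeleteDC(srcDC);
DeleteDC(dstDC);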
All right, here is the answer; I had to do it myself... It works perfectly well for scaling pictures up (for scaling down, my initial code works perfectly well too). Hope someone will find a good use for it; it's fast enough and produces very good picture quality.
for (int y = 0; y < height; y++)
{
    double srcY1Coord = (y * srcHeight) / (double)height;
    int srcY1CoordInt = (int)(srcY1Coord);
    double srcY2Coord = ((y + 1) * srcHeight) / (double)height - 0.00000000001;
    int srcY2CoordInt = min(maxSrcYcoord, (int)(srcY2Coord));
    double yMultiplierForFirstCoord = (0.5 * (1 - (srcY1Coord - srcY1CoordInt)));
    double yMultiplierForLastCoord = (0.5 * (srcY2Coord - srcY2CoordInt));
    for (int x = 0; x < width; x++)
    {
        double srcX1Coord = (x * srcWidth) / (double)width;
        int srcX1CoordInt = (int)(srcX1Coord);
        double srcX2Coord = ((x + 1) * srcWidth) / (double)width - 0.00000000001;
        int srcX2CoordInt = min(maxSrcXcoord, (int)(srcX2Coord));
        RGB32 color32;
        ASSERT(srcX1Coord < srcWidth && srcY1Coord < srcHeight);
        double r(0), g(0), b(0), a(0), multiplier(0);
        for (int xSrc = srcX1CoordInt; xSrc <= srcX2CoordInt; xSrc++)
            for (int ySrc = srcY1CoordInt; ySrc <= srcY2CoordInt; ySrc++)
            {
                RGB32 curSrcColor32 = pSrcDIB->GetDIBPixel(xSrc, ySrc);
                double xMultiplier = xSrc < srcX1Coord ? (0.5 * (1 - (srcX1Coord - srcX1CoordInt))) : (xSrc >= srcX2Coord ? (0.5 * (srcX2Coord - srcX2CoordInt)) : 0.5);
                double yMultiplier = ySrc < srcY1Coord ? yMultiplierForFirstCoord : (ySrc >= srcY2Coord ? yMultiplierForLastCoord : 0.5);
                double curPixelMultiplier = xMultiplier + yMultiplier;
                if (curPixelMultiplier > 0)
                {
                    r += (curSrcColor32.r * curPixelMultiplier); g += (curSrcColor32.g * curPixelMultiplier); b += (curSrcColor32.b * curPixelMultiplier); a += (curSrcColor32.alpha * curPixelMultiplier);
                    multiplier += curPixelMultiplier;
                }
            }
        color32.r = BYTE(r / multiplier); color32.g = BYTE(g / multiplier); color32.b = BYTE(b / multiplier); color32.alpha = BYTE(a / multiplier);
        SetDIBPixel(x, y, color32);
    }
}
P.S. Please don't ask why I'm not using StretchDIBits; leave comments for those who understand that a system API is not always available or acceptable.
Again, why do it on the CPU? Why not use OpenGL / DirectX and fragment shaders? In pseudocode:
upload source texture (cache it if it's to be reused)
create destination texture
use shader program
render quad
download output texture
where the shader program implements the filtering method you're using. The GPU is much better at processing pixels than CPU GetPixel/SetPixel loops.
You could probably find fragment shaders for lots of different filtering methods on the web - GPU Gems is a good place to start.
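For the "use shader program" step, even a plain texture fetch is enough to get hardware bilinear filtering when the quad is rendered at the destination resolution. A hypothetical minimal fragment shader, embedded as a C++ string (names and GLSL version are placeholders):
// Pass-through fragment shader: sampling alone resamples the image,
// because the texture unit applies bilinear filtering at the new size.
const char* kResampleFragSrc = R"(
    #version 110
    uniform sampler2D srcTex;
    varying vec2 uv;
    void main() { gl_FragColor = texture2D(srcTex, uv); }
)";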