SDL: in non-fullscreen mode finding maximum window size - sdl

I have an application that does not run in full-screen mode.
After SDL_Init I execute SDL_SetVideoMode(0, 0, 0, SDL_OPENGL | SDL_HWSURFACE | SDL_ASYNCBLIT). From what I have read, this should allocate a window of maximum size.
Unfortunately, it allocates a window of 1600x900, which is the physical size of the monitor but not the free space on it (some of that is taken up by the menu row and the window border).
Any ideas how I can find out how much space is actually available?

What I have in my program that runs fullscreen (hiding menus, docks, panels etc) is:
if ( SDL_Init( SDL_INIT_VIDEO | SDL_INIT_TIMER ) < 0 ) {
    throw SDL_GetError();
}
const SDL_VideoInfo* vidinfo = SDL_GetVideoInfo();
int max_w = vidinfo->current_w;
int max_h = vidinfo->current_h;
...
SDL_Surface *screen = SDL_SetVideoMode(max_w, max_h, 0, SDL_FULLSCREEN);
Be sure to call SDL_GetVideoInfo() before SDL_SetVideoMode().

Related

SetWindowPos() cross-process with multiple monitors and different display scalings

I have already asked a similar question here, but now the problems seems to be a bit different, so I figured I'd create a new question for it.
I am using SetWindowPos() to move/resize windows from another process. This works fine for as long as all screens are using the same display scaling, but under the following scenario it doesn't work as expected:
Primary Screen is at (0,0) with 3440x1440 and 150% scaling.
Secondary Screen is at (3440, 0) with 900x1440 and 100% scaling.
My application is PROCESS_PER_MONITOR_DPI_AWARE_V2 and the target application is PROCESS_DPI_UNAWARE (gets scaled by Windows).
Now I move the window so that its upper-left corner is on the primary screen while its center is still on the secondary screen, for example to (3300, 0):
SetWindowPos(hwnd, HWND_BOTTOM, 3300, 0, 0, 0, SWP_NOACTIVATE | SWP_NOZORDER | SWP_NOSIZE);
Then this is what happens:
The window is scaled according to the second screen's 100% display scaling.
The window is not moved to (3300, 0). Instead, the coordinates it receives in the WM_WINDOWPOSCHANGING message are (2200, 0). The coordinates seem to get scaled down to logical coordinates.
I am therefore unable to move a window to that location. I tried to use PhysicalToLogicalPointForPerMonitorDPI() on the coordinates I pass to SetWindowPos(), but without success (it doesn't even change the coordinates).
It now seems I simply can't move the window to any position where the upper-left corner is on the primary screen but the window's center is still on the secondary screen: Windows scales the coordinates down, and if I scale them up manually, the window already lands on the second screen and Windows no longer applies the scaling. Even if I could work around that, manually calculating the scaling is feasible with a two-screen setup but quickly becomes too complex with more screens. So, how do I get this to work?
EDIT1: I tried to use SetThreadDpiAwarenessContext() as suggested, but it still doesn't work. Now, when I move the window to (3000,0) it is moved to (4500,0) instead. It seems like I somehow need to scale the coordinates I pass to SetWindowPos(), but I have no idea how.
m_previousContext = SetThreadDpiAwarenessContext(GetWindowDpiAwarenessContext(hwnd));
if (m_previousContext == NULL)
    Log::out<Log::Level::Error>("Failed to set thread dpi awareness context.");
SetWindowPos(hwnd, HWND_BOTTOM, 3000, 0, 0, 0, SWP_NOACTIVATE | SWP_NOZORDER | SWP_NOSIZE);
Also, isn't this really inefficient if I regularly resize and move windows around?
EDIT2: I've attached a link to a minimal working binary. You can download it from Google Drive here. It requires Windows 10, version 1607 to run. When I run it on the setup described above with SetWindowPos.exe 002108B6 3000 0, the window is moved to (4500,0) instead.
Below is the code:
int main(int argc, char** argv)
{
    if (argc < 4)
    {
        std::cerr << "Usage: SetWindowPos.exe <HWND> <x> <y>" << std::endl;
        return 1;
    }
    HWND hwnd;
    int x, y;
    try
    {
        hwnd = (HWND)hexStrToInt(argv[1]); // I've omitted the implementation of hexStrToInt
        x = atoi(argv[2]);
        y = atoi(argv[3]);
    }
    catch (...)
    {
        std::cerr << "Invalid arguments." << std::endl;
        return 1;
    }
    if (IsWindow(hwnd) == FALSE)
    {
        std::cerr << "Invalid window handle " << argv[1] << "." << std::endl;
        return 1;
    }
    auto context = SetThreadDpiAwarenessContext(GetWindowDpiAwarenessContext(hwnd));
    SetWindowPos(hwnd, HWND_BOTTOM, x, y, 0, 0, SWP_NOACTIVATE | SWP_NOZORDER | SWP_NOSIZE);
    SetThreadDpiAwarenessContext(context);
    return 0;
}
Use SetThreadDpiAwarenessContext to temporarily set your thread awareness mode to the same value as the target application.
SetThreadDpiAwarenessContext
High-DPI Scaling Improvements for Desktop Applications in the Windows 10 Creators Update (1703)
High DPI Scaling Improvements for Desktop Applications and “Mixed Mode” DPI Scaling in the Windows 10 Anniversary Update (1607)

Move window event on X11 window

I currently have two X11 windows that I want to keep in sync. One overlays a transparent graphic while the other shows video. The graphic overlay is always on top of the video window, but I am having trouble making sure both windows stay in the same place when moved. I am looking for a window-move event in the X11 documentation, but I can't seem to find one.
In addition, in my event-handling loop I have tried to get the location of one of my windows and move the other window to it. This has failed: whenever XGetWindowAttributes is called, it returns x=0 and y=0 even after the window has been moved.
Window win;
int nxvisuals = 0;
XVisualInfo visual_template;
XVisualInfo *visual_list;
XVisualInfo vinfo;
Visual *visual;
int depth;
Atom wm_state;
(void)wm_state;
x_display = XOpenDisplay ( NULL ); // open the standard display (the primary screen)
visual_template.screen = DefaultScreen(x_display);
visual_list = XGetVisualInfo(x_display, VisualScreenMask, &visual_template, &nxvisuals);
XMatchVisualInfo(x_display, XDefaultScreen(x_display), 32, TrueColor, &vinfo);
Window parent = XDefaultRootWindow(x_display);
XSync(x_display, True);
visual = vinfo.visual;
depth = vinfo.depth;
XSetWindowAttributes swa;
swa.event_mask = ExposureMask | PointerMotionMask | KeyPressMask;
swa.colormap = XCreateColormap(x_display, XDefaultRootWindow(x_display), visual, AllocNone);
swa.background_pixel = 0;
swa.border_pixel = 0;
win = XCreateWindow ( // create a window with the provided parameters
x_display, parent,
0, 0, 1024, 576, 0,
depth, InputOutput,
visual, CWEventMask | CWBackPixel | CWColormap | CWBorderPixel,
&swa );
XSync(x_display, True);
XSetWindowAttributes xattr;
xattr.override_redirect = False;
XChangeWindowAttributes ( x_display, win, CWOverrideRedirect, &xattr );
XWMHints hints;
hints.input = True;
hints.flags = InputHint;
XSetWMHints(x_display, win, &hints);
XSizeHints *size_hints = XAllocSizeHints();
size_hints->flags = PMinSize | PMaxSize | PSize;
size_hints->min_width = 1024;
size_hints->max_width = 1024;
size_hints->min_height = 576;
size_hints->max_height = 576;
XSetNormalHints(x_display, win, size_hints);
XSetWMSizeHints(x_display,win , size_hints, PSize | PMinSize | PMaxSize);
XMapWindow ( x_display , win ); // make the window visible on the screen
XStoreName ( x_display , win , "OpenGL" ); // give the window a name
/* Second window starts here */
int cnxvisuals = 0;
XVisualInfo cvisual_template;
XVisualInfo *cvisual_list;
XVisualInfo cvinfo;
Visual *cvisual;
int cdepth;
cvisual_template.screen = DefaultScreen(x_display);
cvisual_list = XGetVisualInfo(x_display, VisualScreenMask, &cvisual_template, &cnxvisuals);
XMatchVisualInfo(x_display, XDefaultScreen(x_display), 24, TrueColor, &cvinfo);
Window child = XDefaultRootWindow(x_display);
XSync(x_display, True);
cvisual = cvinfo.visual;
cdepth = cvinfo.depth;
XSetWindowAttributes cswa;
cswa.event_mask = PointerMotionMask | KeyPressMask;
cswa.colormap = XCreateColormap(x_display, XDefaultRootWindow(x_display), cvisual, AllocNone);
cswa.background_pixel = 0;
cswa.border_pixel = 0;
child = XCreateWindow ( // create a window with the provided parameters
x_display, parent,
0, 0, 1024, 576, 0,
cdepth, InputOutput,
cvisual, CWEventMask | CWBackPixel | CWColormap | CWBorderPixel,
&cswa );
XSync(x_display, True);
XSetWindowAttributes xcattr;
xcattr.override_redirect = False;
XChangeWindowAttributes ( x_display, child, CWOverrideRedirect, &xcattr );
XWMHints chints;
chints.input = True;
chints.flags = InputHint;
XSetWMHints(x_display, child, &chints);
XSetNormalHints(x_display, child, size_hints);
XSetWMSizeHints(x_display,child , size_hints, PSize | PMinSize | PMaxSize);
XMapWindow ( x_display , child ); // make the window visible on the screen
XStoreName ( x_display , child , "video" ); // give the window a name
XSelectInput(x_display, child, ExposureMask | FocusChangeMask);
int id = pthread_create(&x11loop, NULL,x11_handle_events,this);
Here is my event-handling function:
void* x11_handle_events(void *void_ptr)
{
    Renderer* renderer = static_cast<Renderer*>(void_ptr);
    renderer->stop = false;
    XEvent event;
    XWindowAttributes opengl_attrs;
    while (!renderer->stop)
    {
        XNextEvent(renderer->x_display, &event);
        switch (event.type)
        {
        case Expose:
            if (event.xexpose.window == renderer->child)
            {
                XRaiseWindow(renderer->x_display, renderer->win);
            }
            break;
        case FocusIn:
            if (event.xfocus.window == renderer->child)
            {
                XRaiseWindow(renderer->x_display, renderer->win);
            }
            break;
        }
        // Make sure both windows are in the same location
        XGetWindowAttributes(renderer->x_display, renderer->child, &opengl_attrs);
        XMoveWindow(renderer->x_display, renderer->win, opengl_attrs.x, opengl_attrs.y);
    }
    pthread_exit(0);
    return NULL;
}
The event you're looking for is ConfigureNotify
http://tronche.com/gui/x/xlib/events/window-state-change/configure.html
The X server can report ConfigureNotify events to clients wanting information about actual changes to a window's state, such as size, position, border, and stacking order. The X server generates this event type whenever one of the following configure window requests made by a client application actually completes:
snip
A window is moved by calling XMoveWindow().
The x and y members are set to the coordinates relative to the parent window's origin and indicate the position of the upper-left outside corner of the window. The width and height members are set to the inside size of the window, not including the border. The border_width member is set to the width of the window's border, in pixels.
The event mask is, IIRC, StructureNotifyMask.
The window manager might disagree with your moving around though... but if it still doesn't work, leave a comment and we'll look deeper.

C++ SDL2 memory leak? SDL_RenderClear?

I'm using SDL2 on Windows with Code::Blocks.
I wrote this little program, but it causes a memory leak!
The code is very simple: it only clears and updates the screen.
#include <SDL.h>

const int SCREEN_WIDTH = 640;
const int SCREEN_HEIGHT = 480;

SDL_Window* window = NULL;
SDL_Renderer* renderer = NULL;
SDL_Event event;
bool quit = false;

void loadSDL();
void closeSDL();

int main( int argc, char* args[] )
{
    loadSDL();
    while (!quit)
    {
        while (SDL_PollEvent(&event) != 0)
        {
            if (event.type == SDL_QUIT)
            {
                quit = true;
            }
        }
        SDL_RenderClear( renderer );
        SDL_RenderPresent( renderer );
    }
    closeSDL();
    return 0;
}

void loadSDL()
{
    SDL_Init( SDL_INIT_VIDEO );
    window = SDL_CreateWindow( "Test1", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN );
    renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
    SDL_SetRenderDrawColor(renderer, 0x0, 0x0, 0x0, 0xFF);
}

void closeSDL()
{
    SDL_DestroyRenderer( renderer );
    SDL_DestroyWindow( window );
    window = NULL;
    renderer = NULL;
    SDL_Quit();
}
I don't know what is wrong...
If I comment this line out
SDL_RenderClear( renderer );
There is no memory leak!
Memory leaks are not the most obvious things to track down. To properly identify a leak, you'll need to use a profiling tool as mentioned in the comments.
The most common reason for what you are seeing is that the OS is free to assign memory to processes before they request it and to delay releasing unused memory. Sometimes this looks like a leak as your process's RAM usage grows in Task Manager. If you wait for a while, it will likely stabilize.
As for a leak specifically in SDL_RenderClear(), it helps to know which renderer you're using. They have different code paths. However, in this case they are quite similar. Here's the GL version from SDL_render_gl.c:
static int
GL_RenderClear(SDL_Renderer * renderer)
{
    GL_RenderData *data = (GL_RenderData *) renderer->driverdata;

    GL_ActivateRenderer(renderer);

    data->glClearColor((GLfloat) renderer->r * inv255f,
                       (GLfloat) renderer->g * inv255f,
                       (GLfloat) renderer->b * inv255f,
                       (GLfloat) renderer->a * inv255f);

    data->glClear(GL_COLOR_BUFFER_BIT);

    return 0;
}
The only indirect call here is GL_ActivateRenderer(), which does a simple comparison and set. The Direct3D RenderClear() is a little more complicated but does essentially the same thing. It is unlikely that your problem is here.

Should I make sure to destroy SDL 2.0 objects (renderer, window, textures, etc.) before exiting the program?

This tutorial on SDL 2.0 uses code that returns from main without first destroying any of the resource pointers:
int main(int argc, char** argv){
    if (SDL_Init(SDL_INIT_EVERYTHING) == -1){
        std::cout << SDL_GetError() << std::endl;
        return 1;
    }
    window = SDL_CreateWindow("Lesson 2", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
    if (window == nullptr){
        std::cout << SDL_GetError() << std::endl;
        return 2; //this
    }
    renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED
        | SDL_RENDERER_PRESENTVSYNC);
    if (renderer == nullptr){
        std::cout << SDL_GetError() << std::endl;
        return 3; //and this too
    }
Should I tell my terminate function to DestroyRenderer, DestroyWindow, DestroyTexture, etc. before exiting?
This is the same question as "should I free memory that I've allocated before quitting a program?". Yes: if there are no bugs in the SDL/X11/GL/etc. finalization code, everything will be freed anyway. But I see no reason why you wouldn't want to do it yourself.
Of course, if you crash rather than exit cleanly, there is a good chance some of that cleanup won't happen and, e.g., the display won't be returned to the native desktop resolution.
I've personally had problems with SDL_Texture causing a memory leak while the program was running: the display of pictures just stopped after the program had leaked about 2 GB of RAM, when it normally uses 37 MB.
SDL_DestroyTexture(texture);
Just call this after each time you display a different picture with the renderer, and the memory leak is gone.

Access Violation using SDL

I have a small program which is meant to get the user's screen resolution and assign it to a variable, but I get an Access Violation error and am not sure how to fix it (I'm quite new to this language), so I was hoping someone could show me how I should write it.
This is my setup:
//get player's screen info
const SDL_VideoInfo* myScreen = SDL_GetVideoInfo();
//SDL screen
SDL_Surface *screen;
int reso_x = myScreen->current_w; //resolution width (ERROR here)
int reso_y = myScreen->current_h; //resolution height
Uint8 video_bpp = 32;
Uint32 videoflags = SDL_SWSURFACE | SDL_DOUBLEBUF | SDL_ANYFORMAT;// | SDL_FULLSCREEN;
/* Initialize the SDL library */
if ( SDL_Init(videoflags) < 0 ) {
    fprintf(stderr, "Couldn't initialize SDL: %s\n",
            SDL_GetError());
    exit(1);
}
//setup Screen
screen = SDL_SetVideoMode(reso_x, reso_y, video_bpp, videoflags|SDL_FULLSCREEN);
Does anyone know the cause of my mistake?
You shouldn't make any SDL calls before SDL_Init. My guess is that SDL_GetVideoInfo() is returning NULL because SDL is not in a valid state at that point. Also, the flags you are passing to SDL_Init are wrong: it should be SDL_INIT_VIDEO, not the kind of video surface you want. Your video flags should go to the SDL_SetVideoMode() call.