OpenGL and Windows 10 per-monitor aware DPI scaling

What's the correct way to deal with DPI scaling in an OpenGL application when the application is per-monitor DPI aware, i.e.:
SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);
I'm finding different behaviour on different devices - probably driver related - where some machines need to set up the viewport like this:
glViewport(0, 0, prc->right, prc->bottom)
while others need something like this:
glViewport(0, 0, (int)(prc->right * 96 / dpi), (int)(prc->bottom * 96 / dpi));
(where prc is the client rect and dpi is the current DPI of the window).
I've put together a simple demo program that shows the problem. The problems happen after changing the system scaling, but before signing out/back in.
Problems include:
Rendering at the wrong scale (not sure which driver is wrong)
Vertical shifting by what looks like the difference in title bar height before vs. after changing the scaling (on a MacBook Pro under Boot Camp)
I've tried tearing down and recreating the WGL context on WM_DPICHANGED, but to no avail, and I'm not sure what else to try.
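For reference, here's a minimal sketch of what I understand the intended per-monitor flow to be (an assumption on my part, not a confirmed fix): move/resize to the suggested rect on WM_DPICHANGED and let the viewport track the physical-pixel client rect.

#include <windows.h>   // requires a Windows 8.1+ SDK for WM_DPICHANGED
#include <GL/gl.h>

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DPICHANGED:
    {
        // lParam carries the window rect Windows suggests for the new DPI
        const RECT* suggested = reinterpret_cast<const RECT*>(lParam);
        SetWindowPos(hWnd, nullptr,
                     suggested->left, suggested->top,
                     suggested->right - suggested->left,
                     suggested->bottom - suggested->top,
                     SWP_NOZORDER | SWP_NOACTIVATE);
        return 0;
    }
    case WM_SIZE:
    {
        // For a per-monitor-aware window the client rect is in physical pixels,
        // so no 96/dpi scaling should be needed here
        RECT rc;
        GetClientRect(hWnd, &rc);
        glViewport(0, 0, rc.right, rc.bottom);
        return 0;
    }
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}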
Update: I've updated the sample program repo to include screenshots of what I'm seeing, and I've included the .exe for the test program.
See here: https://bitbucket.org/toptensoftware/minimalopengl/overview
Update 2 - found a GPU that works as expected: a Radeon RX 460. Updated the screenshots in the repo to show the expected behaviour.
Update 3 - I'm now fairly confident this is caused by issues with the NVIDIA and Intel drivers and have logged bugs with both. I guess WHQL compliance doesn't cover OpenGL drivers.
Still... it'd be nice to have proper documentation or an example program from Microsoft showing how OpenGL and per-monitor DPI support are supposed to work together.

I have the same problem on a GTX 1050. I've noticed that it only appears when using the SetThreadDpiAwarenessContext function.
From MSDN, on the SetThreadDpiAwarenessContext function:
"Set the DPI awareness for the current thread to the provided value."
This means that if the driver creates one or more additional threads to render OpenGL, each thread can end up with a different DPI awareness level.
So there are three solutions:
Use the old SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE) function from Windows 8.1 and call EnableNonClientDpiScaling(hWnd) in the WM_NCCREATE message handler;
Use the new SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2) from the Creators Update;
Set the DPI awareness level in the application manifest (applied to the whole application).
The first two functions set the awareness level for the whole process; a rough sketch of them follows below.
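A sketch of options 1 and 2 (documented Win32 APIs; option 1 needs Shcore.lib and a Windows 8.1+ SDK, option 2 a Creators Update SDK; production code would probably resolve these dynamically to keep running on older Windows). Call this before any window is created.

#include <windows.h>
#include <shellscalingapi.h>   // SetProcessDpiAwareness (Windows 8.1+)

void SetWholeProcessDpiAwareness()
{
    // Option 2: Windows 10 Creators Update and later
    if (!SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2))
    {
        // Option 1: Windows 8.1 fallback
        SetProcessDpiAwareness(PROCESS_PER_MONITOR_DPI_AWARE);
    }
}

// With option 1, also call this from the WM_NCCREATE handler so the
// non-client area (title bar) scales as well:
//     case WM_NCCREATE:
//         EnableNonClientDpiScaling(hWnd);
//         return DefWindowProc(hWnd, msg, wParam, lParam);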

Related

C++ glfw3: one (of the two) windows in fullscreen mode is not really fullscreen (Mac Os)

In my app (C++14, Mac OS X 10.11) I use glfw3 to create two windows that should run in fullscreen mode on two monitors with different native resolutions. I'm creating the windows like this:
glfwCreateWindow(capture_monitor_width, capture_monitor_height, "Capture Window", capture_monitor, NULL);
//..
glfwCreateWindow(projection_monitor_width, projection_monitor_height, "Projection Window", projection_monitor, NULL);
(where projection_monitor_width, projection_monitor_height, capture_monitor_width, and capture_monitor_height have been retrieved from the appropriate GLFWvidmode* and have been verified to be correct in all cases)
The problem is that while I'm getting the fullscreen window correctly on my primary monitor, on my secondary one it is displaced upwards so that it only covers the upper 3/4 (more or less) of the screen. Note that by simply replacing projection_monitor with NULL in the snippet above I get a properly aligned window that does cover the entire screen (yet it has a title bar, which I don't need in my app).
Any ideas? Could this be some sort of bug? Any hacks around it?
With the latest 'devel' version the problem is no longer there. So apparently it's a bug that has already been fixed.
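For anyone who can't move to a fixed GLFW build, a commonly used workaround (my sketch, not from the original posters; names are illustrative) is a borderless "windowed fullscreen" window positioned over the target monitor rather than a true fullscreen window:

#include <GLFW/glfw3.h>

GLFWwindow* CreateBorderlessFullscreen(GLFWmonitor* monitor, const char* title)
{
    // Match the monitor's current video mode and position
    const GLFWvidmode* mode = glfwGetVideoMode(monitor);
    int mx, my;
    glfwGetMonitorPos(monitor, &mx, &my);

    glfwWindowHint(GLFW_DECORATED, GLFW_FALSE);   // no title bar (GLFW_FALSE needs GLFW 3.2+)
    GLFWwindow* window = glfwCreateWindow(mode->width, mode->height, title, NULL, NULL);
    glfwSetWindowPos(window, mx, my);
    return window;
}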

How to render child window with Direct2D in native desktop Windows application?

I have a desktop application where every window (HWND) renders itself with Direct2D 1.1. My question is: how do I do this correctly?
Should each window have its own Direct2D device context derived from one Direct2D device? In this case, I cannot render transparent content on a child window without additional tricks (I have to change the target on the parent window's context, render the parent window to a Direct2D bitmap, and then draw this bitmap on the child's target).
Maybe it is better to have one Direct2D device context into which all windows render? I believe DirectComposition works in a similar way. Unfortunately, I cannot use it because I target Windows 7.
You're asking a question whose answer will be very application specific. I recommend avoiding the whole problem of trying to get HWNDs to render with transparency amongst each other, especially if you're throwing Direct2D into the mix. There is just too much pain in that direction. Every version of Windows that you support will have different bugs that you'll be constantly bumping into and grasping at workarounds for.
Case in point: For the v4.0 release of Paint.NET, I converted all text rendering to DirectWrite, and almost all UI controls to use Direct2D. The image thumbnail control at the top of the window (the MDI selector) is using Direct2D for rendering but it also has to compose on top of what's behind it. And it has to play nice with glass on Win7 (it looks great though!). The code for this is awful, tricky, barely maintainable, and it seems to bump into a different rendering bug on every release of Windows: 7, 7 SP1, 8, 8.1, and 10 all behave slightly differently! It's really annoying to test, too; it's the only reason I have to set up and maintain VMs for every version of Windows that I support (other than the installer and updater). Windows 7 worked fine, then 7 SP1 added a bug which required some tuning to how I filled the alpha channel. Windows 8 has flickering when you resize the window unless I do a certain hack, but 8.1 works fine. 10 then has its own flickering bug if software rendering is used. Remote Desktop breaks things in its own way. Then you also have to worry about High Contrast, and whether DWM is enabled/disabled if you're supporting Windows 7. They all behave differently and it's really really painful.
Anyway. What you seem to really need is a UI system like WPF or XAML which doesn't use anything other than a top-level HWND container. At that point you're custom rendering everything and doing your own hit-testing and input routing (and accessibility and all sorts of other things), so it's not a small task.
Regarding the "how to do it more correctly" question and cardinality of device and device context: Have you thought about just using ID2D1Factory::CreateHWNDRenderTarget or ID2D1Factory::CreateDCRenderTarget ? They return ID2D1RenderTarget but you can call QueryInterface to cast them to ID2D1DeviceContext (this fact is missing from the docs but is also clearly intentional). This should simplify working with Direct2D and HWNDs quite a bit. This is what I do in Paint.NET: I still use an HWND for each control, but each control is using its own HWND or DC render target. If you're willing to poke around with Reflector or ILSpy, checkout Direct2DControl and Direct2DControlHandler in the Paint.NET DLLs.
Also, be careful about using more than one hardware-accelerated HWND render target. You don't want to get into a weird area where every Direct2D-based UI control is waiting on VSync. Using D2D1_PRESENT_OPTIONS_IMMEDIATELY when creating the HWND render target should help. DWM already handles VSync, so you should be fine to tell Direct2D to ignore it unless you're doing some rather specific stuff with animations and timers.
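To make that concrete, a hedged sketch of one HWND render target per control, created with D2D1_PRESENT_OPTIONS_IMMEDIATELY and then queried for ID2D1DeviceContext (error handling omitted; the QueryInterface step reportedly needs Windows 8, or Windows 7 with the Platform Update):

#include <d2d1_1.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID2D1DeviceContext> CreateControlTarget(ID2D1Factory* factory, HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);

    ComPtr<ID2D1HwndRenderTarget> rt;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(
            hwnd,
            D2D1::SizeU(static_cast<UINT32>(rc.right - rc.left),
                        static_cast<UINT32>(rc.bottom - rc.top)),
            D2D1_PRESENT_OPTIONS_IMMEDIATELY),   // don't block on VSync; DWM handles it
        &rt);

    // Cast the render target to a device context (undocumented but intentional,
    // per the answer above)
    ComPtr<ID2D1DeviceContext> dc;
    rt.As(&dc);
    return dc;
}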

Full screen Direct3D9 device only displays at native resolution when second monitor is plugged in

With a single monitor my program works in both windowed and fullscreen mode (using any resolution chosen from EnumAdapterModes). But when I plug in my second monitor (running the same code), I can create a fullscreen device at any resolution from EnumAdapterModes, yet only at the native resolution (1600 x 900) does it display the scene; otherwise the screen is just black, among other problems listed below.
What I've discovered so far:
This problem does not occur in windowed or multihead mode
I can still render to a texture (I had to switch modes to display it though)
All function calls return success codes (including TestCooperativeLevel)
If I try to draw to the back buffer using Clear or the DrawPrimitive functions, or call Present (which still leaves a black screen), then calls to GetRenderTargetData fail and attempting to lock a volume texture returns different slice pitches at sub-levels
Commercial games that use Direct3D9 (Portal) don't have any problem switching between resolutions with my second monitor plugged in so there must be a solution
The problem seems to be related to the back buffer created by the Direct3D9 runtime, but the only solution I can come up with is to force multihead mode on devices with multiple monitors. Any ideas?
Question that seems to have the same problem but lacks a solution: How do I render a fullscreen frame with a different resolution than my display?
Finally figured it out - it seems to be a driver bug on Windows Vista and later, and using Direct3D9Ex fixed the problem.
I didn't want to use Direct3D9Ex because it was only introduced in Windows Vista and I wanted to support Windows XP as a minimum, but MSDN has sample code showing how to support both, so it's all good.
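That MSDN pattern boils down to probing for Direct3DCreate9Ex at runtime and falling back to plain D3D9 on XP; a rough sketch (my wording, error handling trimmed):

#include <d3d9.h>

typedef HRESULT (WINAPI *PFN_Direct3DCreate9Ex)(UINT, IDirect3D9Ex**);

IDirect3D9* CreateD3D9(bool* isEx)
{
    *isEx = false;
    HMODULE d3d9 = LoadLibraryW(L"d3d9.dll");
    if (!d3d9)
        return nullptr;

    auto pCreate9Ex = reinterpret_cast<PFN_Direct3DCreate9Ex>(
        GetProcAddress(d3d9, "Direct3DCreate9Ex"));

    if (pCreate9Ex)   // exported on Windows Vista and later
    {
        IDirect3D9Ex* d3dEx = nullptr;
        if (SUCCEEDED(pCreate9Ex(D3D_SDK_VERSION, &d3dEx)))
        {
            *isEx = true;
            return d3dEx;   // IDirect3D9Ex derives from IDirect3D9
        }
    }

    // Windows XP fallback
    return Direct3DCreate9(D3D_SDK_VERSION);
}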

Why is Windows 8.1 and freeglut not fitting draw calls into window (window cutoff)?

I am experiencing some undesirable behavior while running these OpenGL examples (click the download to get access to the Visual Studio solution). Everything compiles correctly, but when running tutorial 3, the window cuts off the top and right side of the triangle (shown here).
I have run this demo on Ubuntu and Mac before with no problems. I have not tried this on Windows 7 or below. In no way have I modified the code or the project. Also, when I switch freeglut to fullscreen, the triangle is not cut off at all.
Am I missing something, or is this a new setback to using Windows 8 and freeglut? Has anyone else had this problem?
Most likely it is a bug. On Windows, when you specify the size of a window, the size includes the system borders. Therefore the effective client area (the black area in your screenshot) is smaller than the size requested, and this discrepancy propagates to glViewport. When the two do not match, you get those cuts.
There are two possible fixes: (i) adjust freeglut so that a requested window size is interpreted as the client-area size, or (ii) adjust glViewport to match the actual client area, as sketched below.
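A minimal sketch of fix (ii), assuming the standard freeglut tutorial structure: register a reshape callback and take the viewport size from what freeglut actually reports rather than from the requested window size.

#include <GL/freeglut.h>

static void OnReshape(int width, int height)
{
    // width/height are the actual client-area size in pixels
    glViewport(0, 0, width, height);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutInitWindowSize(1024, 768);     // may effectively include borders on Windows 8.1
    glutCreateWindow("Tutorial 03");
    glutReshapeFunc(OnReshape);
    // glutDisplayFunc(...), shader setup, etc. go here as in the tutorial
    glutMainLoop();
    return 0;
}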

Other monitors go black when switch one monitor to fullscreen with DXGI

When I switch one of my monitors to fullscreen mode, sometimes the other monitors just go black and won't show anything. Did I do something wrong, or is it just a bug?
I created a window and then created a swapchain bound to that window. I then called the swapchain's SetFullscreenState with the first parameter set to true and the second parameter set to the IDXGIOutput of the monitor I wanted to switch to fullscreen. Sometimes it works fine, but sometimes all the other monitors are lost (with only the fullscreen one showing anything).
My graphics card is Radeon HD6750, and driver version is 12.3.
I found that the MultiMon10 sample has the same problem, while some games don't. Or do Skyrim and Tales of Monkey Island use D3D or OpenGL...?
This question is two years old. I just came across it.
I had a similar issue with DX11, happening sometimes in the debug build and systematically in the release build.
In my setup, the primary monitor hosts a console and an optional push-button GUI. The secondary monitor (one of the available ones) is the fullscreen application window where professional 2D images are displayed and GPU-transformed using 1D and 3D lookup tables.
Having the primary monitor go blank was a showstopper. All the needed dialogs are children of the console window (and therefore open on the primary monitor). The secondary monitor is a digital motion picture projector... enough preamble.
So, my solution was to create the swapchain in windowed mode while the target window already covered the whole monitor.
Do not ask me why. It works for me. Here is a bit more:
First, my display window is set to fill the entire monitor surface (no border, no decorations at all).
Second, I create the swapchain for this window with Windowed = TRUE.
In fact, even though it looks fullscreen, it is windowed. With no border, it behaves the same as far as displaying/rendering 2D images is concerned. Writing directly to the backbuffer works too.
Then, and only then, you can switch the swapchain to real exclusive fullscreen. Since this operation is extremely jarring visually, I only do it when absolutely necessary. In effect, Win7 will reset the entire desktop (all monitors, all windows) and produce multiple flashes.
When going to real fullscreen after the backbuffer has been created this way, I have never experienced the annoyance of being stuck in the middle of a desktop reset (which is what the original question was about).
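A rough sketch of that sequence with D3D11/DXGI (names are illustrative, not from the original post; the monitor rect would come from GetMonitorInfo):

#include <d3d11.h>

HRESULT CreateBorderlessSwapChain(HWND hwnd, const RECT& monitorRect,
                                  ID3D11Device** device, IDXGISwapChain** swapChain)
{
    // 1. Make the window borderless and cover the whole monitor
    SetWindowLongPtr(hwnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);
    SetWindowPos(hwnd, HWND_TOP,
                 monitorRect.left, monitorRect.top,
                 monitorRect.right - monitorRect.left,
                 monitorRect.bottom - monitorRect.top,
                 SWP_FRAMECHANGED);

    // 2. Create the device and a *windowed* swapchain for that window
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 1;
    desc.BufferDesc.Width = monitorRect.right - monitorRect.left;
    desc.BufferDesc.Height = monitorRect.bottom - monitorRect.top;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hwnd;
    desc.SampleDesc.Count = 1;
    desc.Windowed = TRUE;   // looks fullscreen, but avoids the exclusive mode switch

    return D3D11CreateDeviceAndSwapChain(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION,
        &desc, swapChain, device, nullptr, nullptr);
}

// Later, only if exclusive mode is really required:
//     (*swapChain)->SetFullscreenState(TRUE, nullptr);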
To be complete, there is a difference between 'windowed fullscreen' and 'real fullscreen' that you may be able to use:
Windowed fullscreen: other windows/dialogs will overlap your 2D creation.
Real fullscreen: other windows/dialogs should stay underneath (not visible, but still there).
Toggling between the two modes on demand would be nice, except that the desktop reset is a heavy penalty to live with.