How to detect power saving mode of a graphics card in a Windows application? - c++

I have an application using an OpenGL window that works fine, but someone noticed that if the graphics performance preference is configured as power saving, the screen doesn't render anything; it only shows a black screen that could be interpreted as a UI bug.
I was wondering if there is a way to know whether my application is running in power saving mode, since that configuration implies using the machine's less powerful GPU, and I don't know if this can be detected through the WinAPI. For example, I have an Intel GPU and an Nvidia GPU, so power saving mode uses the Intel GPU.
I want to send a warning message or turn off the power save mode.
The WinAPI function GetSystemPowerStatus seems to be related to the battery, so it doesn't work for my purpose.
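For reference, here is a rough sketch of the two things I'm considering (my own assumption, untested): the exported symbols that the GPU vendors document for requesting the high-performance GPU, plus a GL_RENDERER check after context creation so the application could at least show a warning.

    #include <windows.h>
    #include <GL/gl.h>
    #include <cstring>
    #include <cstdio>

    // Vendor-documented exports: when present in the .exe, the Nvidia Optimus
    // and AMD PowerXpress drivers prefer the high-performance GPU over the
    // power-saving (integrated) one.
    extern "C" {
        __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
        __declspec(dllexport) int   AmdPowerXpressRequestHighPerformance = 1;
    }

    // Call after the OpenGL context is created and made current: GL_RENDERER
    // names the GPU that actually provided the context.
    void WarnIfIntegratedGpu()
    {
        const char* renderer =
            reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (renderer && std::strstr(renderer, "Intel") != nullptr)
            std::printf("Warning: running on the integrated GPU (%s)\n", renderer);
    }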
References
https://www.amd.com/en/support/kb/faq/gpu-110
https://asapguide.com/graphics-performance-preferences/

The problem was that I was trying to use texture2D, which the Intel GPU doesn't support. I just changed it to texture, as recommended in the comments.
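In case it helps anyone else, the change was essentially the one below (a reconstructed sketch; the uniform and varying names are placeholders, not my actual code):

    // Fragment shader used with a core-profile context. texture2D() is removed
    // from modern GLSL, so the sampler must be read with texture() instead.
    const char* fragmentShaderSrc = R"(
    #version 330 core
    uniform sampler2D uTex;
    in vec2 vUV;
    out vec4 fragColor;

    void main()
    {
        // was: gl_FragColor = texture2D(uTex, vUV);  // rejected by the Intel driver
        fragColor = texture(uTex, vUV);
    }
    )";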

Related

OpenGL - Display video a stream of the desktop on Windows

So I am trying to figure out how to get a video feed (or a screenshot feed if I must) of the desktop using OpenGL on Windows and display it in a 3D environment. I plan to integrate this with ARToolkit to make essentially a virtual screen. The only issue is that I have tried manually getting the pixels in OpenGL, but I have been unable to properly display them in a 3D environment.
I apologize in advance that I do not have minimal runnable code, but due to all the dependencies, getting ARToolkit code running would be far from minimal. How would I capture the desktop on Windows and display it in ARToolkit?
BONUS: If you can grab each desktop from the 'virtual' desktops in Windows 10, that would be an excellent bonus!
Alternative: If you know another AR library that renders differently, or allows me to achieve the same effect, I would be grateful.
There are 2 different problems here:
a) Make an augmentation that plays video
b) Stream the desktop to somewhere else
For playing video on an augmentation you basically need a texture that gets updated on each frame. I recall that ARToolkit for Unity has an example that plays video.
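If you want to go the do-it-yourself route on Windows, a rough sketch of one workable approach (my assumption, not ARToolkit-specific) is to grab the desktop with GDI once per frame and upload it into the texture you map onto the augmentation:

    #include <windows.h>
    #include <GL/gl.h>
    #include <vector>

    #ifndef GL_BGRA_EXT
    #define GL_BGRA_EXT 0x80E1
    #endif

    // Grabs the primary desktop into a BGRA buffer and uploads it into an
    // existing OpenGL texture. Assumes the texture was already allocated with
    // glTexImage2D at the screen size and that a GL context is current.
    void UpdateDesktopTexture(GLuint tex, std::vector<BYTE>& pixels)
    {
        const int w = GetSystemMetrics(SM_CXSCREEN);
        const int h = GetSystemMetrics(SM_CYSCREEN);

        HDC screen  = GetDC(nullptr);
        HDC mem     = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        HGDIOBJ old = SelectObject(mem, bmp);

        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = w;
        bmi.bmiHeader.biHeight      = -h;          // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        pixels.resize(static_cast<size_t>(w) * h * 4);
        GetDIBits(mem, bmp, 0, h, pixels.data(), &bmi, DIB_RGB_COLORS);

        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels.data());

        SelectObject(mem, old);
        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(nullptr, screen);
    }

This only covers the currently visible desktop; as far as I know there is no straightforward documented way to grab the contents of the other Windows 10 virtual desktops, so the bonus part may not be achievable this way.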
Streaming the desktop to the other device is a problem of its own. There are tools that do screen recording, but you probably don't want that.
It sounds to me that what you want to do is make a VLC viewer and put that into an augmentation. If I am correct, I suggest you start by looking at existing open-source VLC viewers.

Full screen Direct3D9 device only displays at native resolution when second monitor is plugged in

With a single monitor my program works in both windowed and full screen mode (using any resolution chosen from EnumAdapterModes), but when I plug in my second monitor (running the same code) I can create a full screen device at any resolution from EnumAdapterModes, but only at the native resolution (1600 x 900) does it display the scene, otherwise the screen is just black among other problems listed below.
What I've discovered so far:
This problem does not occur in windowed or multihead mode
I can still render to a texture (I had to switch modes to display it though)
All function calls return success codes (including TestCooperativeLevel)
If I try to draw to the back buffer using Clear or the DrawPrimitive functions, or call Present (which still leaves a black screen), then calls to GetRenderTargetData fail and attempting to lock a volume texture returns different slice pitches at sub-levels
Commercial games that use Direct3D9 (Portal) don't have any problem switching between resolutions with my second monitor plugged in, so there must be a solution
The problem seems to be related to the back buffer created by the Direct3D9 runtime, but the only solution I can come up with is to force multihead mode on devices with multiple monitors. Any ideas?
Question that seems to have the same problem but lacks a solution: How do I render a fullscreen frame with a different resolution than my display?
Finally figured it out; it seems to be a driver bug in Windows Vista and later, and using Direct3D9Ex fixed the problem.
I didn't want to use Direct3D9Ex because it was only introduced in Windows Vista and I wanted to support Windows XP as a minimum, but MSDN has some sample code on how to support both, so it's all good.
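Roughly, the pattern from the MSDN sample looks like this (a sketch from memory, not my exact code): load d3d9.dll dynamically and only use Direct3DCreate9Ex when it is actually exported, falling back to plain Direct3DCreate9 on XP.

    #include <windows.h>
    #include <d3d9.h>
    #pragma comment(lib, "d3d9.lib")

    typedef HRESULT (WINAPI *PFN_Direct3DCreate9Ex)(UINT, IDirect3D9Ex**);

    // Returns an IDirect3D9 interface; *outEx is non-null when the 9Ex path
    // was available (Vista and later), null on the XP fallback.
    IDirect3D9* CreateD3D9(IDirect3D9Ex** outEx)
    {
        *outEx = nullptr;

        HMODULE d3d9 = LoadLibraryW(L"d3d9.dll");
        if (d3d9)
        {
            PFN_Direct3DCreate9Ex create9Ex =
                reinterpret_cast<PFN_Direct3DCreate9Ex>(
                    GetProcAddress(d3d9, "Direct3DCreate9Ex"));

            if (create9Ex && SUCCEEDED(create9Ex(D3D_SDK_VERSION, outEx)))
                return *outEx;   // IDirect3D9Ex derives from IDirect3D9
        }

        return Direct3DCreate9(D3D_SDK_VERSION);   // Windows XP fallback
    }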

Draw Graphics w/o Desktop Environment C++?

Okay, this is a really strange question and I'm not sure how to phrase it, but I can't seem to find anything on it anywhere, most likely because I'm not using the correct terminology. Also, this may be operating-system specific; if it is, I'm using Debian.
Basically, when you boot an older computer or a modern server, or something along those lines, it boots to a terminal screen where all you do is type. And if you want to do anything graphical, you usually install a desktop environment.
But I'm wondering, how could I go about drawing graphics without a desktop environment?
I remember back on MS-DOS you could use QBASIC to change the screen mode and you could then draw colored lines onto the screen like that. It's probably much more complicated in C++, but I'd still like to be pointed in the right direction.
Sorry if this question is a bit unspecific, but I'd really like to be pointed in the right direction.
This is done by using a framebuffer console. Then you use a framework/library that can draw on it, for example DirectFB. There are also some small libraries floating around, like libFB. I think SDL can also use the framebuffer, though I've never tried it myself.
Then there are framebuffer versions of GUI toolkits like Gtk+ and Qt, if GUI widgets are what you want.
There's also SVGAlib, which talks to graphics cards directly, but it's outdated by now. Not recommended. In general, you're looking for "Linux framebuffer graphics". That should get you a few starting points.
To get a framebuffer console, you need to configure your kernel accordingly. Usually you enable a KMS driver for your graphics card, and also enable the KMS framebuffer. If there isn't a KMS driver for your card, you can use a generic VESA framebuffer console that works on most hardware (although, being plain VESA, it is slow and unaccelerated).
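Once you have a framebuffer console, the raw /dev/fb0 route looks roughly like this (a minimal sketch; it assumes a 32 bpp mode, so real code should check bits_per_pixel and handle other layouts):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) { perror("open /dev/fb0"); return 1; }

        // Query the current video mode and the framebuffer layout.
        fb_var_screeninfo vinfo{};
        fb_fix_screeninfo finfo{};
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo);

        size_t size = vinfo.yres_virtual * finfo.line_length;
        void* map = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
        uint8_t* fb = static_cast<uint8_t*>(map);

        // Fill the top 100 rows with a solid colour (assumes 32 bpp, XRGB order).
        for (unsigned y = 0; y < 100 && y < vinfo.yres; ++y)
            for (unsigned x = 0; x < vinfo.xres; ++x) {
                size_t off = (y + vinfo.yoffset) * finfo.line_length
                           + (x + vinfo.xoffset) * (vinfo.bits_per_pixel / 8);
                fb[off + 0] = 0xFF;  // blue
                fb[off + 1] = 0x80;  // green
                fb[off + 2] = 0x00;  // red
            }

        munmap(map, size);
        close(fd);
        return 0;
    }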
Commonly, a "desktop environment" (on Linux) is made of two parts: an X Window-style graphics "library" plus "window management" (Gnome, KDE, Xfce, ...). So, if I understand your question, you only have to set up an X Window system without a window manager.
On MS-DOS, you could write software which wrote to the screen, either by writing into a range of RAM which was shared by the video controller, or calling a BIOS API.
A newer O/S (i.e. Windows) will prevent you from doing either of those: instead you call an O/S API, which calls to an O/S-specific video device driver, which outputs to the hardware.
As I read it you're asking how to deal directly with the graphics hardware.
That depends on the hardware.
If you have an old PC at hand and want to experiment with it, then you need correspondingly old development software that can run on that hardware under the particular OS, i.e. some C compiler from those days running in MS-DOS. You may be able to do this in a "DOS box" in Windows (not a console window but an emulation of the old PC). 64-bit Windows 7 does not support DOS boxes, but there is a free alternative called DOSBox.
Then, if you go that route, you can search for "graphics adapter" graphics modes etc. on the net.
Basically, with the old PC architecture and a program running under DOS, you used a DOS service to switch the graphics mode, and then you accessed the graphics memory at a known memory address for the mode.
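For example, with an old Borland/Turbo C compiler under DOS, switching to mode 13h (320x200, 256 colors) and plotting pixels looked roughly like this (a sketch from memory; it only builds with a 16-bit DOS compiler, e.g. inside DOSBox):

    #include <dos.h>
    #include <conio.h>

    int main(void)
    {
        union REGS regs;
        unsigned char far *vga = (unsigned char far *)MK_FP(0xA000, 0x0000);
        int x, y;

        /* INT 10h, AH=00h: set video mode; AL=13h is 320x200 with 256 colors. */
        regs.h.ah = 0x00;
        regs.h.al = 0x13;
        int86(0x10, &regs, &regs);

        /* The mode-13h framebuffer is linear at segment A000h, one byte per pixel. */
        for (y = 0; y < 200; ++y)
            for (x = 0; x < 320; ++x)
                vga[y * 320 + x] = (unsigned char)(x ^ y);  /* simple pattern */

        /* Wait for a key, then restore 80x25 text mode (AL=03h). */
        getch();
        regs.h.ah = 0x00;
        regs.h.al = 0x03;
        int86(0x10, &regs, &regs);
        return 0;
    }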
The curses (or ncurses) library is the old way of doing it in Unix flavours, although these days there is probably something better...

Adapting OpenGL render context on remote desktop connection attempt

We have an application which uses an OpenGL render context in a subwindow to display a large bitmap. However, when a user remotely connects to a box running this app, the OpenGL display stops working, most likely due to the reduced texture resolution.
While we can detect the remote desktop connection starting/ending using WTS_REMOTE_CONNECT, the OpenGL context does not switch to the virtual driver when trying to determine the new max texture resolution.
Completely restarting the OpenGL subthread hangs on ChoosePixelFormat; this won't return until I am logged in locally again, otherwise this would be the "bad" solution.
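For reference, the session-change notification we react to looks roughly like this (a simplified sketch of our handler):

    #include <windows.h>
    #include <wtsapi32.h>
    #pragma comment(lib, "wtsapi32.lib")

    // After window creation:     WTSRegisterSessionNotification(hwnd, NOTIFY_FOR_THIS_SESSION);
    // Before window destruction: WTSUnRegisterSessionNotification(hwnd);

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_WTSSESSION_CHANGE)
        {
            switch (wParam)
            {
            case WTS_REMOTE_CONNECT:
                // Session is now remote: this is where we would like to
                // re-query limits such as GL_MAX_TEXTURE_SIZE and rebuild
                // oversized textures, but the context keeps reporting the
                // old values.
                break;
            case WTS_REMOTE_DISCONNECT:
            case WTS_CONSOLE_CONNECT:
                // Back on the local console.
                break;
            }
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }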
It seems that the application is badly written.
The code responsible for detecting context changes and reacting to them accordingly either does not exist or is buggy. Anyway, you cannot do much unless you have access to the source code. You can also report it as a bug to the vendor or the provider from whom you bought it.

GDI+ Dithering Problem

I have a C++ application that uses the Win32 API for Windows, and I'm having a problem with GDI+ dithering when I don't see why it should be dithering at all.
I have a custom control (custom window). When I receive the WM_PAINT message, I draw some Polygons using FillPolygon on a Graphics device. This Graphics device was created using the HDC from BeginPaint.
When the polygons appear on the screen, though, they are dithered instead of transparent, and they only seem to show a few colors (maybe 256?). When I do the same thing in C# using the .NET interface to GDI+, it works fine, which leaves me wondering what's going on.
I'm not doing anything special, this is a simple example that should work fine, as far as I know. Am I doing something wrong?
Edit: Never mind. It only happens over Remote Desktop, even though the C# example doesn't dither over Remote Desktop. Remote Desktop is set to 32-bit color, so I don't know what's up with that.
Hmm... The filling capabilities are determined by the target device. When working over remote desktop, AFAIK Windows substitutes the display driver, so that can change the supported features of the display.
When drawing on WM_PAINT, you actually draw directly on the screen surface, while .NET usually uses double buffering (it draws to an in-memory bitmap and then blits the entire bitmap).
There are also some settings in GDI+ that affect drawing quality. Maybe there are different defaults for on-screen, off-screen, and remote painting?
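If you want to rule out the direct-to-screen path, a per-paint double buffer with GDI+ would look roughly like this (a sketch, assuming GdiplusStartup was already called at start-up; the polygon and colors are placeholders):

    #include <windows.h>
    #include <gdiplus.h>
    #pragma comment(lib, "gdiplus.lib")

    // WM_PAINT handler: render the polygons into an off-screen GDI+ bitmap,
    // then blit the whole bitmap to the window in one DrawImage call.
    void OnPaint(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        RECT rc;
        GetClientRect(hwnd, &rc);
        const int w = rc.right - rc.left;
        const int h = rc.bottom - rc.top;

        Gdiplus::Bitmap backBuffer(w, h, PixelFormat32bppARGB);
        Gdiplus::Graphics mem(&backBuffer);
        mem.Clear(Gdiplus::Color(255, 255, 255, 255));             // opaque white background

        Gdiplus::SolidBrush brush(Gdiplus::Color(128, 255, 0, 0)); // semi-transparent red
        Gdiplus::Point pts[] = { Gdiplus::Point(10, 10),
                                 Gdiplus::Point(120, 40),
                                 Gdiplus::Point(60, 110) };
        mem.FillPolygon(&brush, pts, 3);

        Gdiplus::Graphics screen(hdc);
        screen.DrawImage(&backBuffer, 0, 0);

        EndPaint(hwnd, &ps);
    }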
It only happens over Remote Desktop
Many remoting applications will reduce colour depth in order to reduce bandwidth requirements. While I haven't used Remote Desktop, the same happens on certain VNC connections. I'd check your RD server and client settings.