I am trying to change my graphics card settings programmatically. I have an NVIDIA GTX 570 graphics card connected to two computer monitors and one LCD TV. The problem is that the GTX 570 can only drive two displays at a time, so I am constantly switching between my monitors and my TV from within the control panel. I want to write a program that automates this process.
I am planning on using Nvidia's api to change the graphics cards settings:
http://developer.download.nvidia.com/SDK/9.5/Samples/DEMOS/common/src/NvCpl/docs/NVControlPanel_API.pdf
The problem is that I have no idea how to do this in C++: how to import and call the DLL, which functions to use, etc.
Does anyone want to take a quick crack at it for me? Any help would be much appreciated.
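To make the question concrete, here is roughly what I imagine is needed, just the bare LoadLibrary/GetProcAddress pattern; the export name dtcfgex, its signature, and the command string are only my guesses from skimming the PDF and need to be checked against it:

    #include <windows.h>
    #include <iostream>

    // Guessed signature: the PDF documents NvCpl.dll display-configuration
    // entry points; verify the exact export name, parameters and calling
    // convention there before relying on this.
    typedef BOOL (*DtcfgexFn)(LPSTR lpszCmdLine);

    int main()
    {
        // NvCpl.dll ships with the NVIDIA driver.
        HMODULE hNvCpl = LoadLibraryA("NvCpl.dll");
        if (!hNvCpl)
        {
            std::cerr << "Could not load NvCpl.dll\n";
            return 1;
        }

        // Resolve the exported function by name.
        DtcfgexFn dtcfgex =
            reinterpret_cast<DtcfgexFn>(GetProcAddress(hNvCpl, "dtcfgex"));
        if (!dtcfgex)
        {
            std::cerr << "dtcfgex export not found\n";
            FreeLibrary(hNvCpl);
            return 1;
        }

        // Hypothetical command string: the real syntax for switching the
        // active displays is what the PDF should spell out.
        char cmd[] = "setview 1 clone";
        BOOL ok = dtcfgex(cmd);
        std::cout << (ok ? "command accepted\n" : "command failed\n");

        FreeLibrary(hNvCpl);
        return 0;
    }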
Related
I'm having a problem with a DirectX 11 game I'm developing on laptops with two video cards. The typical case I'm running into (and the one on my own laptop) is a weak Intel card and a powerful NVIDIA card. Obviously I want the NVIDIA one, and I've already got it enumerating the adapters and figuring out the correct one to create the device interface for.
The problem is that the NVIDIA adapter doesn't have an output: when you call EnumOutputs on its IDXGIAdapter interface, you don't find any. That makes sense, because the laptop has only one screen and it's attached to the Intel adapter (you can find it by calling EnumOutputs on the Intel IDXGIAdapter interface).
But this seemingly makes it impossible to create a fullscreen swap chain for that device: IDXGIFactory::CreateSwapChain fails when given the NVIDIA device and fullscreen settings, even when I'm certain the other mode parameters are valid.
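For reference, the enumeration I'm doing looks roughly like this (a trimmed sketch, not my exact code); on my laptop it shows the NVIDIA adapter with zero outputs:

    #include <dxgi.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    int main()
    {
        IDXGIFactory* factory = nullptr;
        if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
            return 1;

        IDXGIAdapter* adapter = nullptr;
        for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);

            // Count the outputs (monitors) wired to this adapter.
            UINT outputs = 0;
            IDXGIOutput* output = nullptr;
            while (adapter->EnumOutputs(outputs, &output) != DXGI_ERROR_NOT_FOUND)
            {
                output->Release();
                ++outputs;
            }

            // On my laptop the NVIDIA adapter reports 0 outputs here.
            wprintf(L"Adapter %u: %ls, %u output(s)\n", i, desc.Description, outputs);
            adapter->Release();
        }

        factory->Release();
        return 0;
    }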
Other games seem to have found a way around this. From my Steam list, for example, Half-Life 2 appears to run in true fullscreen mode, whereas Stardew Valley runs in borderless windowed mode, which I could do as well but which has its own issues.
I'm aware that it's possible to change the laptop's settings so that the NVIDIA card is the dominant one, but I need this to work on customers' laptops, where I can't expect them to deal with all that.
One potential solution might be to create a device for both adapters and then create the swap chain on the Intel one as a shared device resource (see https://learn.microsoft.com/en-us/windows/desktop/api/d3d11/nf-d3d11-id3d11device-opensharedresource). I'm not even sure that's possible, though; the docs are vague.
Before I go down a difficult, potentially dead-end path, I'm wondering if anyone knows the solution.
So I have two NVIDIA GPUs:
Card A: GeForce GTX 560 Ti - Wired to Monitor A (Dell P2210)
Card B: GeForce 9800 GTX+ - Wired to Monitor B (ViewSonic VP20)
Setup: an ASUS motherboard with an Intel Core i7 that supports SLI
In the NVIDIA Control Panel, I disabled Monitor A, so I only have Monitor B for all my display purposes.
I ran my program, which:
simulates 10,000 particles in OpenGL and renders them (properly shown on Monitor B), and
uses cudaSetDevice() to target Card A for the computationally intensive CUDA kernel.
The idea is simple: use Card B for all the OpenGL rendering work and Card A for all the CUDA kernel computation.
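The device selection in my code is roughly this (a trimmed sketch; the name matching is just how I pick the 560 Ti):

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstring>

    // Pick the CUDA device whose name contains the given substring
    // (e.g. "560" for the GTX 560 Ti); falls back to device 0.
    int selectCudaDeviceByName(const char* nameFragment)
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i)
        {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("CUDA device %d: %s\n", i, prop.name);
            if (strstr(prop.name, nameFragment))
            {
                cudaSetDevice(i);   // all following kernel launches go to this card
                return i;
            }
        }
        cudaSetDevice(0);
        return 0;
    }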
My Question is this:
After using GPU-Z to monitor both cards, I can see that:
Card A's GPU load increased immediately to over 60%, as expected.
However, Card B's GPU load increased only to about 2%. For 10,000 particles rendered in 3D in OpenGL, I am not sure whether that is what I should have expected.
So how can I find out whether the OpenGL rendering was indeed using Card B (whose connected Monitor B is the only one enabled) and had nothing to do with Card A?
An extension to the question is:
Is there a way to 'force' the OpenGL rendering logic to use a particular GPU Card?
You can tell which GPU an OpenGL context is using with glGetString(GL_RENDERER);
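For example (a minimal sketch that creates a context with GLUT first, since glGetString needs a current context):

    #include <GL/glut.h>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE);
        glutCreateWindow("renderer query");   // a context must exist before glGetString

        // Prints something like "GeForce 9800 GTX+/PCIe/SSE2": that is the
        // GPU actually driving this OpenGL context.
        printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
        return 0;
    }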
Is there a way to 'force' the OpenGL rendering logic to use a particular GPU Card?
Given the functions of the context creation APIs available at the moment: No.
I have an old piece of C++ code which displays animations. It expects a runtime environment with an ATI graphics card and uses the ATI Catalyst Control Center SDK to get information about the graphics card and the attached monitors. The rendering itself is all done with Direct3D.
I need to get this code working with an NVIDIA graphics card, so I need a way of finding out whether the graphics card uses VGA, DVI or HDMI output, and whether the monitor(s) support HDMI and what their maximum resolutions are.
The second one should be easy, but I don't know where to start with the first...
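For the monitor side (the part that should be easy), I'm assuming plain Win32 display-mode enumeration is enough to get the maximum resolutions; a rough sketch of what I have in mind, which says nothing about the connector type:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        DISPLAY_DEVICEA dd = {};
        dd.cb = sizeof(dd);
        for (DWORD dev = 0; EnumDisplayDevicesA(nullptr, dev, &dd, 0); ++dev)
        {
            if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP))
                continue;

            // Walk every mode reported for this display and keep the largest.
            DEVMODEA dm = {};
            dm.dmSize = sizeof(dm);
            DWORD bestW = 0, bestH = 0;
            for (DWORD mode = 0; EnumDisplaySettingsA(dd.DeviceName, mode, &dm); ++mode)
            {
                if (dm.dmPelsWidth * dm.dmPelsHeight > bestW * bestH)
                {
                    bestW = dm.dmPelsWidth;
                    bestH = dm.dmPelsHeight;
                }
            }
            printf("%s: max mode %lux%lu\n", dd.DeviceName, bestW, bestH);
        }
        return 0;
    }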
Thanks.
I am using the open-source haptics and 3D graphics library Chai3D on Windows 7. I have rewritten the library to do stereoscopic 3D with NVIDIA nvision. I am using OpenGL with GLUT, calling glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560M and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the screen resolution and everything else I can think of, but nothing seems to work.

I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode, so the only reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with stereo enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
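Is something like GLUT's game mode the right way to force fullscreen at startup? A rough sketch of what I mean (the mode string values are guesses for my 120 Hz monitor):

    #include <GL/glut.h>

    static void display()
    {
        glDrawBuffer(GL_BACK_LEFT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... render left-eye view ...

        glDrawBuffer(GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... render right-eye view ...

        glutSwapBuffers();
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);

        // Ask for an exclusive fullscreen "game mode" instead of a window.
        // Resolution/refresh are guesses for a 120 Hz 3D Vision monitor.
        glutGameModeString("1920x1080:32@120");
        if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
            glutEnterGameMode();
        else
            glutCreateWindow("fallback window");   // stereo may still be refused here

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }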
What you are experiencing has no technical reason; it is simply NVIDIA product policy. Quadbuffer stereo is considered a professional feature, so NVIDIA offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was like this already back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI IDs (either by patching the driver or by soldering an additional resistor onto the graphics card PCB).
The stereoscopic 3D hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands, applying a translation to the modelview matrix and a skew to the projection matrix. These days it is implemented with a shader and multi-rendertarget trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quadbuffer stereo: instead of GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quadbuffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
With as little as 50 lines of management code you can build a program that works seamlessly on both NVision3D and quadbuffer stereo. What NVidia does is pointless; they should just stop it and properly support quadbuffer stereo pixel formats on consumer GPUs as well.
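A minimal sketch of that FBO-pair idea (the final present step is only a comment, since I'm not reproducing the NVision3D submission call here):

    #include <GL/glew.h>   // any OpenGL extension loader will do

    struct EyeTarget { GLuint fbo, color, depth; };

    // One offscreen render target per eye, standing in for
    // GL_BACK_LEFT / GL_BACK_RIGHT on hardware without quadbuffer stereo.
    static EyeTarget createEyeTarget(int w, int h)
    {
        EyeTarget t;
        glGenTextures(1, &t.color);
        glBindTexture(GL_TEXTURE_2D, t.color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenRenderbuffers(1, &t.depth);
        glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

        glGenFramebuffers(1, &t.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, t.color, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, t.depth);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return t;
    }

    static void renderStereoFrame(const EyeTarget& left, const EyeTarget& right)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, left.fbo);    // stands in for GL_BACK_LEFT
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene with the left-eye camera ...

        glBindFramebuffer(GL_FRAMEBUFFER, right.fbo);   // stands in for GL_BACK_RIGHT
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene with the right-eye camera ...

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        // Hand left.color / right.color to the presentation layer here:
        // a quadbuffer blit on Quadro, or the NVision3D image submission on GeForce.
    }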
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. The driver has hacks that will force stereo onto applications via nVision and the control panel, but NVIDIA's GeForce drivers do not let you create stereoscopic framebuffers yourself.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but when I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.
Is it possible to tap into the VGA output of a (different) computer? The computer in question will be running a driving simulator at full screen. I would like to feed this video to another computer running a program I've written, which can detect motorway/freeway lanes and generate output to steer the vehicle in the driving simulator.
I did find this: http://www.synthenv.com/PixelPusher_usb_frame_grabber.aspx
It's a frame grabber that can take a VGA input and output it over USB. It's also compatible with OpenCV (which is what I'm using for computer vision). Any suggestions on how to go about this?
Have you looked at VGA2USB Frame Grabber?
Frame grabbers are definitely an option. You could also convert your VGA signal to S-Video and use any graphics card with a TV-in. Or, if you don't insist on running this on two computers, you could use a screen-grabbing camera driver like http://www.splitmedialabs.com/vh-video-sdk/vh-screen-capture
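If the grabber shows up as a standard capture device, which I'm assuming for the VGA2USB / PixelPusher type of hardware, reading it from OpenCV is then just:

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Device index 0 is a guess; the grabber may enumerate at another
        // index alongside any webcams.
        cv::VideoCapture cap(0);
        if (!cap.isOpened())
            return 1;

        cv::Mat frame;
        while (cap.read(frame))
        {
            // Run the lane-detection pipeline on 'frame' here.
            cv::imshow("captured VGA feed", frame);
            if (cv::waitKey(1) == 27)   // Esc quits
                break;
        }
        return 0;
    }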