How to set GPU in Visual Studio / OpenGL [duplicate] - C++

This question already has answers here:
Forcing NVIDIA GPU programmatically in Optimus laptops
(2 answers)
Closed 5 years ago.
I'm using a Surface Book 2 and Visual Studio. I'm writing an OpenGL application and I noticed that it defaults to the integrated Intel GPU rather than the discrete NVIDIA GPU that is also in the laptop.
I know that I can use the NVIDIA Control Panel to set the NVIDIA GPU as the default, but the default setting is to "let the application choose" (I understand that the purpose of this setting is to save battery when the more powerful GPU is not needed). I am trying to find a way to choose the GPU in my application without manually changing settings in the NVIDIA Control Panel.
I looked around and it sounds like OpenGL does not provide any method of choosing between different GPUs (which is very surprising to me). Is there any way to select which GPU I want without using a different API and without changing the settings in the NVIDIA Control Panel?

Find the executable generated by Visual Studio, and set your preferred GPU for that executable (for example in the NVIDIA Control Panel's per-program settings).
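Alternatively, the duplicate linked above ("Forcing NVIDIA GPU programmatically in Optimus laptops") covers doing it from code: the hybrid-graphics drivers look for well-known symbols exported from the executable. A minimal sketch, assuming an NVIDIA Optimus (and, for completeness, AMD switchable-graphics) driver that honours these exports:

```cpp
// Exporting these symbols from the .exe asks the hybrid-graphics drivers to
// prefer the discrete GPU for this process. They are driver-defined
// conventions (NVIDIA Optimus, AMD PowerXpress), not part of OpenGL itself.
#include <windows.h>

extern "C" {
    // Documented in NVIDIA's "Enabling High Performance Graphics Rendering
    // on Optimus Systems" application note.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
    // AMD's equivalent for PowerXpress/switchable graphics.
    __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
}

int main()
{
    // ... create the OpenGL context and render as usual; the exports above
    // influence which GPU the driver exposes to this process ...
    return 0;
}
```

The symbols must be exported from the executable itself (not from a DLL), so place them in one of the .exe's own translation units.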

Related

C++ MFC Windows - NVIDIA 3D Active Shutter API Alternative

I was handed an NVIDIA Active Shutter 3D program that needs to be converted so that it no longer depends on NVIDIA GPUs. I've never worked with graphics APIs and am having a hard time finding an alternative API that will work with what I have.
Can someone please point me in the right direction?
Basically I just need the existing code to work on a Samsung Active Shutter HDTV without using NVIDIA anything.
The existing program is a standalone C++ MFC Windows application; it uses NVAPI (a DX9, VS2008 project) and a completely custom engine that I didn't write.
Open to any and all reasonable suggestions. I'm not a coding veteran, so please try to keep it as beginner-friendly as possible. I normally do C#, so I'm a bit out of my element with this C++ stuff.
Thanks ahead of time for the help!
In Direct3D there was no vendor-independent way to enable active stereo until Direct3D 11.1. Prior to 11.1, you have no choice but to use the AMD- and NVIDIA-specific non-standard methods.
Also note that Direct3D 9 and Direct3D 11.1 are very different APIs, and Direct3D 11.1 stereoscopy requires Windows 8 or later. The porting effort may or may not be substantial.
If you're interested in Direct3D stereoscopy, you can start with this MSDN sample.
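If you do move to Direct3D 11.1, stereo support can be queried through DXGI 1.2 before you create a stereo swap chain. A minimal sketch, assuming a Windows 8+ SDK:

```cpp
// Query whether the OS/driver/display combination can present stereo in
// windowed mode (DXGI 1.2). If it can, you would create the swap chain with
// DXGI_SWAP_CHAIN_DESC1::Stereo = TRUE.
#include <dxgi1_2.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory2 *factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory2),
                                  reinterpret_cast<void **>(&factory)))) {
        std::printf("DXGI 1.2 not available (Windows 8 or later required)\n");
        return 1;
    }
    BOOL stereo = factory->IsWindowedStereoEnabled();
    std::printf("Windowed stereo supported: %s\n", stereo ? "yes" : "no");
    factory->Release();
    return 0;
}
```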

Running OpenGL on Windows Server 2012 R2

This should be straightforward, but for some reason I can't make it work.
I hired a SoftLayer Bare Metal Server that comes with an NVIDIA Tesla GPU.
I'm remotely executing a program (OpenSCAD) that needs OpenGL > 2.0 in order to properly export a PNG file.
When I invoke OpenSCAD and export a model, I get a 0 KB PNG file as output, a clear symptom that OpenGL > 2.0 support is not present.
In order to make sure that I was running OpenGL > 2.0, I connected to my server via Remote Desktop and ran GLview. To my surprise I saw that the server supported nothing but OpenGL 1.1.
After a little research I found out that the GPU is not used for standard Remote Desktop sessions, so it makes sense that I'm only seeing OpenGL 1.1.
The problem is that when I execute OpenSCAD remotely, it seems that the GPU is not used either.
What can I do to successfully use the GPU capabilities of my server when I invoke OpenSCAD remotely?
PS: I checked with SoftLayer support and they are not taking any responsibility.
Most (currently all) OpenGL implementations that use a GPU assume that there's a display system of some sort using that GPU; in the case of Windows that would be GDI. However, on a headless server Windows usually doesn't start GDI on the GPU but uses some plain framebuffer.
The NVIDIA Tesla GPUs are marketed as compute-only devices, and hence their driver does not support any graphics functionality (note that this is a marketing limitation implemented in software; the silicon is perfectly capable of doing graphics). In other words: if you can implement your graphics operations using CUDA or OpenCL, then you can use them to generate pictures. Otherwise (i.e. for OpenGL or Direct3D) the card is useless.
Note that NVIDIA markets its "GRID" products for remote/cloud rendering.
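As a quick sanity check of the "compute-only" point above, you can enumerate compute devices to confirm the Tesla is visible to CUDA/OpenCL even while the Remote Desktop session only reports a software OpenGL renderer. A sketch, assuming an OpenCL SDK/ICD is installed on the server:

```cpp
// List GPU compute devices seen by OpenCL. A Tesla will typically show up
// here even when OpenGL in the RDP session is stuck at the software renderer.
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_uint nplat = 0;
    clGetPlatformIDs(0, nullptr, &nplat);
    if (nplat == 0) { std::printf("No OpenCL platforms found\n"); return 1; }

    cl_platform_id plats[8];
    clGetPlatformIDs(nplat < 8 ? nplat : 8, plats, nullptr);

    for (cl_uint p = 0; p < nplat && p < 8; ++p) {
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, nullptr, &ndev) != CL_SUCCESS || ndev == 0)
            continue;   // no GPU devices on this platform

        cl_device_id devs[8];
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, ndev < 8 ? ndev : 8, devs, nullptr);
        for (cl_uint d = 0; d < ndev && d < 8; ++d) {
            char name[256] = {};
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("GPU compute device: %s\n", name);
        }
    }
    return 0;
}
```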
I'm replying because I faced a similar problem in the past, also trying to run an application that needed OpenGL 4 on a Windows server.
Windows Remote Desktop indeed doesn't use the OpenGL drivers. However, if you use TigerVNC instead and then start your OpenSCAD application, it may pick up your OpenGL drivers. At least this trick did it for me.
(When a program opens an OpenGL context, it scans for attached monitors/remote displays, I presume.)
Hope it helps.

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. On the MacBook Pro this seems to cause the graphics driver to switch the hybrid graphics from the low-performance Intel graphics to the high-performance AMD ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
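For a context created directly with CGL, QA1734 also mentions a pixel-format attribute that opts in to automatic graphics switching. A minimal sketch, assuming OS X 10.7+ and a CGL-based (rather than NSOpenGLView-based) setup; the NSSupportsAutomaticGraphicsSwitching Info.plist key from the quote above is still required:

```cpp
// Create a CGL pixel format that allows the context to live on the
// integrated GPU instead of forcing a switch to the discrete one.
#include <OpenGL/OpenGL.h>
#include <cstdio>

int main()
{
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFASupportsAutomaticGraphicsSwitching, // don't force the discrete GPU
        static_cast<CGLPixelFormatAttribute>(0)
    };

    CGLPixelFormatObj pix = nullptr;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || pix == nullptr) {
        std::fprintf(stderr, "No matching pixel format\n");
        return 1;
    }

    CGLContextObj ctx = nullptr;
    CGLCreateContext(pix, nullptr, &ctx);
    // ... do the occasional static rendering here ...
    CGLDestroyContext(ctx);
    CGLDestroyPixelFormat(pix);
    return 0;
}
```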

graphics libraries and GPUs [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How does OpenGL work at the lowest level?
When we make a program that uses the OpenGL library, for example on the Windows platform, and we have a graphics card that supports OpenGL, what happens is this:
1. We develop our program in a programming language, making the graphics calls through OpenGL (e.g. in Visual C++).
2. We compile and link the program for the target platform (e.g. Windows).
3. When we run the program, since we have a graphics card that supports OpenGL, the driver installed on Windows is responsible for handling the graphics: the CPU sends the required data to the chip on the graphics card (e.g. an NVIDIA GPU), which draws the result.
4. In this context we talk about graphics acceleration: the work of computing the final framebuffer of our graphical representation is offloaded from the CPU.
5. In this environment, when the GPU driver receives the data, how does it leverage the capabilities of the GPU to accelerate the drawing? Does it translate the received instructions and data into something like CUDA to exploit the hardware's parallelism? Or does it just copy the data received from the CPU into specific areas of device memory? I don't quite understand this part.
6. Finally, if I had a card that did not support OpenGL, would the driver installed on Windows detect the problem? Would I get an error, or would the CPU compute our framebuffer instead?
You'd do well to look around computer gaming sites. They frequently publish articles on how 3D graphics works and how "artefacts" show up when there are errors in games or drivers.
You can also read articles on the architecture of 3D libraries like Mesa or Gallium.
Overall, a driver has a set of methods for implementing each piece of functionality of Direct3D, OpenGL, or another standard API. When it loads, it checks the hardware. You can have a cheap video card or an expensive one, a recent one or one released three years ago... that is different hardware. So the driver tries to map each API feature to an implementation that can be used on the given computer: accelerated by the GPU, accelerated by the CPU (e.g. with SSE4), or even some more basic implementation.
Then the driver tries to estimate the GPU load. Sometimes a function could be accelerated, yet the GPU (especially a low-end one) is already overloaded by other tasks, in which case the driver may do the calculation on the CPU instead of waiting for a GPU time slot.
When you make a mistake, there are several possible outcomes, depending on the intelligence and quality of the driver.
Maybe the driver fixes the error for you, ignoring your commands and running its own set of them instead.
Maybe the driver returns an error code to your program.
Maybe the driver executes the command as-is. If you asked it to paint with red instead of green, that is an error, but the kind the driver cannot know about. Search for "3D artefacts" on PC gaming sites.
In the worst case, your error interacts with a bug in the driver and your computer crashes and reboots.
Of course, all those adaptive strategies are rather complex and non-deterministic, which is part of why 3D drivers are closed source and the know-how of their internals is closely guarded.
Search sites dedicated to 3D gaming, and perhaps also to 3D modelling: they rate video cards ("which is better to buy"), and when they review new chip families they sometimes write rather detailed essays about the technical internals of all this.
To question 5:
Some of the things a driver does: it compiles your GPU programs (vertex, fragment, etc. shaders) into the machine instructions of the specific card, uploads the compiled programs to the appropriate area of device memory, and arranges for the programs to be executed in parallel on the many, many graphics cores on the card.
It uploads the graphics data (vertex coordinates, textures, etc.) to the appropriate kind of graphics card memory, using various hints from the programmer, for example whether the data is updated frequently, infrequently, or not at all. A sketch of such a hint is shown below.
It may also use special units on the graphics card for transferring data to/from host memory; for example, some NVIDIA cards have a DMA unit (some Quadro cards may have two or more) that can upload, say, textures in parallel with the usual driver operations (other transfers, drawing, etc.).
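As an illustration of the "hints from the programmer" mentioned above, here is a sketch using the standard OpenGL buffer-usage hints; it assumes GLFW and a GLAD loader are available, which are not part of the original question:

```cpp
// The usage hint passed to glBufferData tells the driver how the data will be
// updated, so it can choose a suitable kind of device memory.
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);          // hidden window, we only need a context
    GLFWwindow *win = glfwCreateWindow(64, 64, "hints", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) return 1;

    const float vertices[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Uploaded once, drawn many times: the driver will likely place it in VRAM.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // For data rewritten every frame you would pass GL_STREAM_DRAW or
    // GL_DYNAMIC_DRAW instead, hinting at memory that is cheap for the CPU
    // (or a DMA engine) to update.

    std::printf("Buffer %u created with the GL_STATIC_DRAW hint\n", vbo);

    glDeleteBuffers(1, &vbo);
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```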

How to tell whether an OpenGL context is hardware accelerated?

I know that if the OpenGL implementation does not find a suitable driver, it happily falls back to rendering everything in software. That's fine for graphics applications, but it is not acceptable for computer games.
I know many users are on Windows XP, and if the user does not install the video card driver for their GPU then OpenGL won't be hardware accelerated (while DirectX either is, or throws errors if it isn't).
Is there a better (and ideally cross-platform) way to determine whether OpenGL is using hardware acceleration than measuring the FPS and notifying the user if it's too low?
I know that games like Quake3 can find it out somehow...
It seems that there is no direct way to query OpenGL for this but there are some methods that may help you to determine if hardware acceleration is present. See here for Windows ideas. In a UNIX environment glxinfo | grep "direct rendering" should work.
See also glGetString and 5.040 How do I know my program is using hardware acceleration on a Wintel card?
This previous answer, from "How to write an installer that checks for OpenGL support?", suggests that checking whether the user only has OpenGL 1.1 may be sufficient.
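If you just need a portable heuristic from inside the program, creating a throwaway context and inspecting the strings OpenGL reports works in practice. A sketch, assuming GLFW is available (not something the question mandates); on Windows the software fallback reports the "GDI Generic" renderer at version 1.1, and Mesa's software paths typically contain "llvmpipe" or "softpipe":

```cpp
// Create a hidden window/context and look at the renderer/version strings.
#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstring>

int main()
{
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);          // no visible window needed
    GLFWwindow *win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));
    const char *version  = reinterpret_cast<const char *>(glGetString(GL_VERSION));
    std::printf("Renderer: %s\nVersion:  %s\n", renderer, version);

    // Heuristic only: known software renderers on Windows and Mesa.
    bool software = renderer &&
        (std::strstr(renderer, "GDI Generic") ||
         std::strstr(renderer, "llvmpipe")    ||
         std::strstr(renderer, "softpipe"));
    std::printf("Hardware accelerated (heuristic): %s\n", software ? "no" : "probably yes");

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```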