OpenGL multi-GPU support

When we create an OpenGL context on a PC, is there any way to choose which physical device is used, or how many devices are used? Do the latest OpenGL (4.5) APIs support multi-GPU architectures? If I have two identical graphics cards (for example, two Nvidia GeForce cards), how do I properly program the OpenGL APIs in order to benefit from having two cards? How do I move an OpenGL program from a single-GPU version to a multi-GPU version with minimal effort?

OpenGL drivers expose multiple GPUs (in Crossfire/SLI configurations) as if they were a single GPU. Behind the scenes, the driver will (theoretically) figure out how to dispatch rendering calls efficiently between the two GPUs. There are several methods for doing so, and you have zero control over which mechanism a driver picks.
If you want more direct control over which GPU is associated with which GL context, you have to use vendor-specific extensions. AMD has WGL_AMD_gpu_association, while NVIDIA has WGL_NV_gpu_affinity.
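On the NVIDIA side, a rough outline of WGL_NV_gpu_affinity usage is sketched below. Treat it as an illustration of the extension rather than a drop-in implementation: a dummy GL context must already be current so that wglGetProcAddress can resolve the entry points, and the pixel-format setup on the affinity DC plus error handling are omitted.

```cpp
// Sketch (untested): restricting a new GL context to one NVIDIA GPU
// via WGL_NV_gpu_affinity.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   // HGPUNV and the PFNWGL*NV function-pointer typedefs

HGLRC createContextOnGpu(unsigned gpuIndex)
{
    auto enumGpus = (PFNWGLENUMGPUSNVPROC)
        wglGetProcAddress("wglEnumGpusNV");
    auto createAffinityDC = (PFNWGLCREATEAFFINITYDCNVPROC)
        wglGetProcAddress("wglCreateAffinityDCNV");
    if (!enumGpus || !createAffinityDC)
        return nullptr;                      // extension not available

    HGPUNV gpu = nullptr;
    if (!enumGpus(gpuIndex, &gpu))
        return nullptr;                      // fewer GPUs than gpuIndex

    HGPUNV gpuList[] = { gpu, nullptr };     // null-terminated GPU list
    HDC affinityDC = createAffinityDC(gpuList);

    // NOTE: a pixel format still has to be chosen and set on affinityDC
    // (ChoosePixelFormat / SetPixelFormat) before creating the context.
    HGLRC ctx = wglCreateContext(affinityDC);
    return ctx;   // rendering in this context targets only the selected GPU
}
```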

Related

Who runs OpenGL shaders if there is no video card

I am writing a very basic OpenGL C++ program (Linux, 64-bit).
In fact, I have 3 programs:
a main C++ program
a vertex shader
a fragment shader
The 2 shaders are compiled at runtime. I assume these programs run in parallel on the video card, i.e. on the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program in VirtualBox with "3D acceleration" disabled, and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include Mesa, an OpenGL implementation with a software rasterizer, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So the short answer is yes, your shaders can run on the CPU, but that may or may not happen, and it may or may not be automatic; it depends on what video drivers (or other OpenGL implementation) you have installed.
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVIDIA or AMD, you will probably have a so-called "on-board" (integrated) video chip from Intel or another hardware manufacturer. The good thing is that even today's on-board GPUs are pretty good (Intel has finally started doing a good job), and the chance is high that such hardware already supports a modern programmable OpenGL version. Maybe not the latest one, but from my personal experience most of Intel's on-board GPUs from 2-3 years ago support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, there is the Mesa library, which comes with a software (non-GPU-accelerated) implementation of the OpenGL API.
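A quick way to see which implementation a context actually ended up using is to query the renderer strings after context creation. A minimal sketch (the exact strings vary by driver; Mesa's software rasterizers typically report "llvmpipe" or "softpipe"):

```cpp
// Sketch: querying which OpenGL implementation the current context uses.
// Requires a current OpenGL context (created with GLFW, SDL, GLX, ...).
#include <cstdio>
#include <cstring>
#include <GL/gl.h>

void printRendererInfo()
{
    const char *vendor   = (const char *)glGetString(GL_VENDOR);
    const char *renderer = (const char *)glGetString(GL_RENDERER);
    const char *version  = (const char *)glGetString(GL_VERSION);

    std::printf("Vendor:   %s\nRenderer: %s\nVersion:  %s\n",
                vendor, renderer, version);

    // Software rasterizers show up in the renderer string; hardware
    // drivers report the GPU name instead.
    if (renderer && (std::strstr(renderer, "llvmpipe") ||
                     std::strstr(renderer, "softpipe")))
        std::printf("Rendering appears to be done in software on the CPU.\n");
}
```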

How to detect the default GPU at Runtime?

I have a slightly weird problem here, and I'm having a lot of difficulty finding the answer.
I have a C++ 3D engine, and I'm using OpenCL for optimizations and OpenGL interoperability.
In my machine I have two GPUs installed, a GTX 960 and an AMD R9 280X.
Everything is working fine, including the detection of the GPUs and the CPU, and the graphics interoperability runs really fast, as expected.
But a machine always has a default GPU on the system (on Windows this is determined by the order in which the drivers were installed).
So, when I start reading all the devices, detect the GPUs, and then try to create the interoperability contexts, I get a weird situation:
When I have the AMD as the default GPU:
for the NVIDIA device, OpenCL returns an error informing me that it is not possible to create the CL context (because it is not the default GPU), and when I create the OpenGL context for the AMD GPU, the context is created properly.
When I have the NVIDIA as the default GPU:
for the NVIDIA device the context is created properly, but when I try to create the AMD context, instead of returning an error, the system crashes!
So, my main problem is how to detect the default GPU at runtime so that I create interoperability contexts only for the default GPU, since the AMD side crashes instead of returning an error (with errors, I could set a flag marking the default GPU based on those results).
Does anyone have an idea of how I can detect the default GPU at runtime using C++?
Kind Regards.
One technique is to ask OpenGL for the device name and use that to choose the OpenCL device. Note: you may wish to reduce these to enumerations before comparing, because the strings won't match exactly (e.g. AMD vs. ATI).
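A minimal sketch of that idea follows, assuming a current GL context and an already-chosen OpenCL platform; the function names and the vendor normalization are illustrative, and error handling is omitted:

```cpp
// Sketch (untested): match the GL context's GPU to an OpenCL device by
// comparing normalized vendor strings.
#include <string>
#include <CL/cl.h>
#include <GL/gl.h>

static int vendorId(const std::string &s)
{
    // Map the various spellings onto a small enumeration.
    if (s.find("NVIDIA") != std::string::npos) return 1;
    if (s.find("AMD") != std::string::npos ||
        s.find("ATI") != std::string::npos)    return 2;
    if (s.find("Intel") != std::string::npos)  return 3;
    return 0;
}

cl_device_id findMatchingDevice(cl_platform_id platform, const char *glVendor)
{
    cl_device_id devices[8];
    cl_uint count = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &count);

    for (cl_uint i = 0; i < count; ++i) {
        char vendor[256] = {0};
        clGetDeviceInfo(devices[i], CL_DEVICE_VENDOR,
                        sizeof(vendor), vendor, nullptr);
        if (vendorId(vendor) != 0 && vendorId(vendor) == vendorId(glVendor))
            return devices[i];
    }
    return nullptr;
}

// Usage (with a current GL context):
//   const char *glVendor = (const char *)glGetString(GL_VENDOR);
//   cl_device_id dev = findMatchingDevice(somePlatform, glVendor);
```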
Most likely you are mixing objects from both GPUs, for example creating a context for the non-default GPU using a device from the default GPU. You can run into this sort of problem when using the Khronos C++ bindings for OpenCL: whatever you don't create explicitly and set as the default for the non-default GPU, the wrapper will create for you on the default GPU.
Other C++ wrappers may suffer from similar problems. It's hard to say more without seeing the source code.
Finally, after a lot of tests, it's working as expected, and really fast!
Basically I have two components:
01) OpenGL Component
02) OpenCL Component
In the OpenGL component I extract the GPU vendor from the graphics context that was created (since the GL context is the first thing created on the system, so that graphics can be rendered in a window).
After this initialization, I start the initialization of the OpenCL component, passing it the vendor collected by OpenGL, since that is the default GPU card registered on the system.
During device initialization I set a flag marking the default GPU for OpenGL interoperation, so for all the other devices a normal execution context is created, and for the default GPU device the interoperation context is created.
After that, when I request a kernel execution, I pass it the name of the component using it; if it is a normal component, the CPU and the second GPU are used for heterogeneous computing, and if the call comes from the 3D component, the GPU set up for OpenGL interoperation is used. A sketch of that interoperation context creation is shown below.
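The engine code itself isn't shown here, but the interop context creation described above might look roughly like this on Windows, assuming the cl_khr_gl_sharing extension is available and the GL context is current (error handling omitted):

```cpp
// Sketch (untested, Windows-specific): create an OpenCL context that
// shares resources with the current OpenGL context.
#include <windows.h>
#include <GL/gl.h>
#include <CL/cl.h>
#include <CL/cl_gl.h>

cl_context createInteropContext(cl_platform_id platform, cl_device_id device)
{
    cl_context_properties props[] = {
        CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
        CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
        CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
        0
    };
    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(props, 1, &device, nullptr, nullptr, &err);
    return (err == CL_SUCCESS) ? ctx : nullptr;
}
```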
Really cool!
I tested swapping the default GPU from NVIDIA to AMD and from AMD to NVIDIA, and it works beautifully!
I tested pointing my math and physics components at the second GPU and the 3D graphics component at the default GPU, and I got great results!
The software is running like a monster dragster now!
Thanks so much for your help!
Kind Regards.

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid GPU from the low-performance Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, and it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
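For reference, the Info.plist entry described in QA1734 would look something like this, placed inside the application's Info.plist dictionary:

```xml
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```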

hardware requirement for OpenGL

My knowledge of OpenGL is very limited. I was researching RTOSes for my project, and some sort of lightweight UI is also required. I came across OpenGL support in a UI package. My question is whether or not a separate GPU is required for OpenGL.
No separate GPU is required; all you need are OpenGL drivers. There are even software OpenGL implementations (Mesa) that will render OpenGL on just about anything.
Assuming it's a relatively recent RTOS, it may support OpenGL ES, which is a reduced subset designed for low-power/low-memory devices.

Where do the OpenGL commands run?

I'm writing a simple OpenGL program on a multi-core computer that has a GPU. The GPU is a simple GeForce with PhysX, CUDA and OpenGL 2.1 support. When I run this program, is it the host CPU that executes the OpenGL-specific commands, or are they transferred directly to the GPU?
Normally that's a function of the drivers you're using. If you're just using vanilla VGA drivers, then all of the OpenGL computations are done on your CPU. Normally, however, and with modern graphics cards and production drivers, calls to OpenGL routines that your graphics card's GPU can handle in hardware are performed there. Others that the GPU can't perform are handed off to the CPU.