Where are OpenGL commands run? - opengl

I'm writing a simple OpenGL program on a multi-core computer that has a GPU. The GPU is a basic GeForce with PhysX, CUDA and OpenGL 2.1 support. When I run this program, does the host CPU execute the OpenGL-specific commands, or are they transferred directly to the GPU?

Normally that's a function of the drivers you're using. If you're just using vanilla VGA drivers, all of the OpenGL computations are done on your CPU. With modern graphics cards and production drivers, however, calls to OpenGL routines that your graphics card's GPU can handle in hardware are performed there; anything the GPU can't do is handed off to the CPU.

Related

Who runs OpenGL shaders if there is no video card

I am writing a very basic OpenGL C++ program (64-bit Linux).
In fact, I have three programs:
a main C++ program
a vertex shader
a fragment shader
The two shaders are compiled at runtime. I suppose these programs run in parallel on the video card, executed by the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include a software implementation of OpenGL, called Mesa, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So, the short answer is yes, your shaders can run on the CPU, but whether that happens, and whether it happens automatically, depends on what video drivers (or other OpenGL implementation) you have installed.
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVIDIA or AMD, you will probably have a so-called "on-board" (integrated) video chip from Intel or another hardware manufacturer. The good news is that even today's on-board GPUs are pretty good (Intel has finally started doing a good job), and the chances are high that such hardware already supports a modern programmable OpenGL version. Maybe not the latest one, but in my experience most Intel on-board GPUs from the last two to three years support up to OpenGL 4.1/4.2. So as long as you are not running really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, you have the Mesa library, which provides a software (non-GPU-accelerated) implementation of the OpenGL API.
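A quick way to see which implementation ended up servicing your context is to query the GL strings right after creating it; Mesa's software rasterizers typically report renderer names such as "llvmpipe" or "softpipe". A minimal sketch (the exact strings are driver-dependent, so treat the check as a heuristic):

#include <GL/gl.h>
#include <cstdio>
#include <cstring>

// Call this after the OpenGL context has been created and made current.
// Prints which implementation is servicing GL calls and guesses whether
// it is a software rasterizer running on the CPU.
void reportRenderer()
{
    const char *vendor   = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
    const char *renderer = reinterpret_cast<const char *>(glGetString(GL_RENDERER));
    const char *version  = reinterpret_cast<const char *>(glGetString(GL_VERSION));

    std::printf("Vendor:   %s\nRenderer: %s\nVersion:  %s\n",
                vendor ? vendor : "?", renderer ? renderer : "?",
                version ? version : "?");

    if (renderer && (std::strstr(renderer, "llvmpipe") ||
                     std::strstr(renderer, "softpipe") ||
                     std::strstr(renderer, "Software")))
        std::printf("Looks like a software (CPU) OpenGL implementation.\n");
}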

OpenGL multi-GPU support

When we create an OpenGL context on a PC, is there any way to choose which physical device, or how many devices, are used? Do the latest OpenGL (4.5) APIs support multi-GPU architectures? If I have two identical graphics cards (for example, two Nvidia GeForce cards), how do I properly program the OpenGL APIs in order to benefit from the fact that I have two cards? How do I move an OpenGL program from a single-GPU version to a multi-GPU version with minimal effort?
OpenGL drivers expose multiple GPUs (in Crossfire/SLI configurations) as if they were a single GPU. Behind the scenes, the driver will (theoretically) figure out how to dispatch rendering calls efficiently between the two GPUs. There are several methods for doing so, and you have zero control over which mechanism a driver picks.
If you want more direct control over which GPU is associated with which GL context, you have to use vendor-specific extensions. AMD has WGL_AMD_gpu_association, while NVIDIA has WGL_NV_gpu_affinity.
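For illustration, here is a minimal sketch of the NVIDIA route. It assumes <GL/wglext.h> is available and that an ordinary (dummy) OpenGL context is already current, since wglGetProcAddress only returns valid pointers then; note that WGL_NV_gpu_affinity is typically exposed only by Quadro-class drivers:

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// Returns a device context tied to one specific NVIDIA GPU, or nullptr if
// the WGL_NV_gpu_affinity extension is not exposed by the driver.
HDC createAffinityDCForGpu(UINT gpuIndex)
{
    auto enumGpus = reinterpret_cast<PFNWGLENUMGPUSNVPROC>(
        wglGetProcAddress("wglEnumGpusNV"));
    auto createAffinityDC = reinterpret_cast<PFNWGLCREATEAFFINITYDCNVPROC>(
        wglGetProcAddress("wglCreateAffinityDCNV"));
    if (!enumGpus || !createAffinityDC)
        return nullptr;                       // extension not available

    HGPUNV gpu = nullptr;
    if (!enumGpus(gpuIndex, &gpu))            // enumerate the requested GPU
        return nullptr;

    HGPUNV gpuList[] = { gpu, nullptr };      // NULL-terminated GPU list
    return createAffinityDC(gpuList);         // contexts on this DC use only that GPU
}

// A GL context created on the returned DC (via wglCreateContext or
// wglCreateContextAttribsARB) is then restricted to the chosen GPU.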

How to detect the default GPU at Runtime?

I have a slightly weird problem here that I'm having a lot of difficulty finding an answer to.
I have a C++ 3D engine, and I'm using OpenCL for optimizations and OpenGL interoperability.
In my machine I have two GPUs installed, a GTX 960 and an AMD R9 280X.
Everything is working fine, including the detection of the GPUs and the CPU, and
the graphics interoperability runs really fast, as expected.
But a machine always has a default GPU on the system (on Windows this is determined by the order in which the drivers were installed).
So, when I start reading all the devices, detect the GPUs and try to create the interoperability contexts, I get a weird situation:
When I have the AMD as the default GPU:
for the NVIDIA device, OpenCL returns an error informing me that it's not possible to create the CL context (because it's not the default GPU), and when I create the OpenGL context for the AMD GPU, the context is created properly.
When I have the NVIDIA as the default GPU:
for the NVIDIA device the context is created properly, but when I try to create the AMD context, instead of returning an error, the system crashes!
So, my main problem is how to detect the default GPU at runtime, so I can create the interoperability context only for the default GPU, since the AMD driver crashes instead of returning an error... (with the errors I could set a flag marking the default GPU based on those results...).
Does anyone have an idea how I can detect the default GPU at runtime using C++?
Kind Regards.
One technique is to ask OpenGL for the device name and use that to choose the OpenCL device. Note: you may wish to reduce these to enumerations before comparing, because the strings won't match exactly (e.g. AMD vs. ATI).
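A minimal sketch of that idea using the OpenCL C API (the helper names are made up for the example, and the vendor keywords are only a heuristic):

#include <CL/cl.h>
#include <GL/gl.h>
#include <cctype>
#include <string>
#include <vector>

enum class Vendor { Nvidia, Amd, Intel, Unknown };

// Reduce the free-form vendor strings to an enum, since the GL and CL
// strings rarely match verbatim (e.g. "ATI Technologies Inc." vs.
// "Advanced Micro Devices, Inc.").
static Vendor vendorFromString(std::string s)
{
    for (char &c : s) c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
    if (s.find("nvidia") != std::string::npos)                 return Vendor::Nvidia;
    if (s.find("ati") != std::string::npos ||
        s.find("amd") != std::string::npos ||
        s.find("advanced micro") != std::string::npos)         return Vendor::Amd;
    if (s.find("intel") != std::string::npos)                  return Vendor::Intel;
    return Vendor::Unknown;
}

// Find the OpenCL GPU device whose vendor matches the vendor of the current
// OpenGL context (i.e. the GPU the window system picked as the default).
cl_device_id findDeviceMatchingCurrentGLContext()
{
    const char *glVendor = reinterpret_cast<const char *>(glGetString(GL_VENDOR));
    Vendor wanted = vendorFromString(glVendor ? glVendor : "");

    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
            continue;
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char buf[256] = {};
            clGetDeviceInfo(d, CL_DEVICE_VENDOR, sizeof(buf), buf, nullptr);
            if (vendorFromString(buf) == wanted)
                return d;
        }
    }
    return nullptr;   // no matching GPU device found
}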
Most likely you are mixing objects from both GPUs, for example a context created for the non-default GPU using a device from the default GPU. You can run into this sort of problem when using the Khronos C++ bindings for OpenCL: whatever is not created explicitly for the non-default GPU and set as the default will be created for you by the wrapper, using the default GPU.
Other C++ wrappers may suffer from similar problems. It's hard to say more without seeing the source code.
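If you are on the C++ bindings, the usual cure is to create everything explicitly, starting with the interop context itself. A minimal sketch, assuming the Khronos C++ bindings and a GLX-based GL context (on Windows the CL_GL_CONTEXT_KHR / CL_WGL_HDC_KHR pair is used instead):

#include <CL/cl2.hpp>    // or <CL/cl.hpp> for the older bindings
#include <CL/cl_gl.h>
#include <GL/glx.h>

// Build the CL-GL sharing context for one explicitly chosen device, so the
// wrapper never falls back to its implicit default device behind your back.
cl::Context makeInteropContext(const cl::Platform &platform, const cl::Device &device)
{
    cl_context_properties props[] = {
        CL_GL_CONTEXT_KHR,   (cl_context_properties)glXGetCurrentContext(),
        CL_GLX_DISPLAY_KHR,  (cl_context_properties)glXGetCurrentDisplay(),
        CL_CONTEXT_PLATFORM, (cl_context_properties)platform(),
        0
    };
    return cl::Context(device, props);
}

// Queues, programs and buffers should then be created from this context and
// this device as well, never from the wrapper's implicit defaults.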
Finally, after a lot of tests, it's working as expected and really fast!
Basically I have two components:
01) OpenGL Component
02) OpenCL Component
So, in the OpenGL component I extract the GPU vendor from the graphics context that was created (since the GL context is the first thing created on the system, to make it possible to render graphics in a window).
After this, I start the initialization of the OpenCL component, passing it the vendor collected from OpenGL, since that is the default GPU registered on the system.
During device initialization I set a flag marking the default GPU for OpenGL interoperation, so for all other devices a normal execution context is created, and for the default GPU device the interoperation context is created.
After that, when I request a kernel execution, I pass it the name of the component making the request: if it is a normal component, the CPU and second GPU are used for heterogeneous computing, and if the call comes from the 3D component, the GPU set up for OpenGL interoperation is used!
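For anyone trying to reproduce this, here is a rough sketch of the routing scheme described above (all names are made up for illustration):

#include <CL/cl.h>
#include <vector>

// One slot per OpenCL device: the device matching the GL vendor gets the
// interop context, every other device gets a plain execution context.
struct DeviceSlot {
    cl_device_id     device    = nullptr;
    cl_context       context   = nullptr;
    cl_command_queue queue     = nullptr;
    bool             glInterop = false;    // true only for the default GPU
};

enum class Component { Graphics3D, Physics, Math };

// Pick the slot a kernel should run on, based on which engine component asks.
DeviceSlot &slotFor(std::vector<DeviceSlot> &slots, Component c)
{
    if (c == Component::Graphics3D) {
        for (DeviceSlot &s : slots)
            if (s.glInterop) return s;      // default GPU, shares buffers with OpenGL
    }
    for (DeviceSlot &s : slots)
        if (!s.glInterop) return s;         // second GPU / CPU for heterogeneous work
    return slots.front();                   // fallback: whatever exists
}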
Reeaaallly Cool!!!
I tested swapping the default GPU from NVIDIA to AMD and from AMD to NVIDIA, and it works beautifully!
I tested pointing my math and physics components to the second GPU and the 3D graphics component to the default GPU, and I got great results!
The software is running like a monster dragster now!
Thanks so much for your help!
Kind Regards.

Low OpenGL performance on my computer

We began learning OpenGL at school and, in particular, implemented a .obj mesh loader. When I run my code at school with quite heavy meshes (4M up to 17M faces), I have to wait a few seconds for the mesh to load, but once it is done I can rotate and move the scene with perfect fluidity.
I compiled the same code at home, and I have very low performances when moving in a scene where heavy meshes are displayed.
I'm using the 3.0 Mesa 10.1.3 version of OpenGL (this is the output of cout << glGetString(GL_VERSION) << endl) and compiling with g++-4.9. I don't remember the version numbers at my school, but I'll update my message as soon as possible if needed. Finally, I'm on Ubuntu 14.04, my graphics card is an Nvidia GeForce 605, my CPU is an Intel(R) Core(TM) i5-2320 @ 3.00GHz, and I have 8 GB of RAM.
If you have any idea that could help me understand (and fix) why it runs so slowly on a reasonably good computer (certainly not a racehorse, but good enough for this), please tell me. Thanks in advance!
TL;DR: You're using the wrong driver. Install the proprietary, closed source binary drivers from NVidia and you'll get very good performance. Also with a GeForce 605 you should get some OpenGL-4.x support.
I'm using the 3.0 Mesa 10.1.3 version of OpenGL
(…)
my graphic card is a Nvidia Geforce 605
That's your problem right there. The open source "Nouveau" drivers for NVidia GPUs that are part of Mesa are a very long way from offering any kind of reasonable hardware acceleration. This is because NVidia doesn't publish openly available documentation on their GPUs' low-level programming interfaces.
So at the moment the only option for getting hardware-accelerated OpenGL on your GPU is to install NVidia's proprietary drivers. They are available on NVidia's website; however, since your GPU isn't "bleeding edge", I recommend you use the ones installable through the package manager; you'll have to add a "nonfree" package repository, though.
This is in stark contrast to AMD GPUs, which have full, openly accessible documentation coverage. Because of that the Mesa "radeon" drivers are quite mature: full OpenGL-3.3 core support, with performance good enough for most applications, in some cases even outperforming AMD's proprietary drivers. OpenGL-4 support is a work in progress for Mesa as a whole, and last time I checked the "radeon" drivers' development was actually moving at a faster pace than the Mesa OpenGL state tracker itself.

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid setup from the low-power Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
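For an app that builds its context through CGL directly, the matching pixel-format flag is kCGLPFAAllowOfflineRenderers (NSOpenGLPixelFormat has the equivalent NSOpenGLPFAAllowOfflineRenderers attribute). A minimal sketch, keeping in mind that the NSSupportsAutomaticGraphicsSwitching Info.plist key described above is still required:

#include <OpenGL/OpenGL.h>

// Create a context whose pixel format does not pin the app to one GPU, so
// the system is free to keep it on the integrated GPU when the Info.plist
// key NSSupportsAutomaticGraphicsSwitching is set to YES.
CGLContextObj createSwitchableContext()
{
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFAAllowOfflineRenderers,    // work with renderers on either GPU
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pixelFormat = nullptr;
    GLint numFormats = 0;
    if (CGLChoosePixelFormat(attrs, &pixelFormat, &numFormats) != kCGLNoError || !pixelFormat)
        return nullptr;

    CGLContextObj context = nullptr;
    CGLCreateContext(pixelFormat, nullptr, &context);
    CGLDestroyPixelFormat(pixelFormat);
    return context;
}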