Using OpenGL on lower-power side of Hybrid Graphics chip - c++

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid card from the low-performance Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver when putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?

The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add this key to your application's Info.plist with a Boolean value of true…
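Concretely, the entry to add to Info.plist looks like this (standard property-list XML):

    <key>NSSupportsAutomaticGraphicsSwitching</key>
    <true/>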
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
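One of the steps TN2229 describes is creating a pixel format that does not pin your context to a single renderer. Here is a minimal CGL sketch of that idea (macOS only; error handling omitted, and whether you go through CGL directly or through NSOpenGLPixelFormat is up to your app):

    #include <OpenGL/OpenGL.h>

    // Allow the context to migrate between renderers (integrated and
    // discrete GPU) instead of being pinned to whichever is active.
    CGLContextObj make_switchable_context(void) {
        CGLPixelFormatAttribute attribs[] = {
            kCGLPFAAllowOfflineRenderers,
            kCGLPFAAccelerated,
            (CGLPixelFormatAttribute)0
        };
        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        CGLChoosePixelFormat(attribs, &pix, &npix);
        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
        return ctx;
    }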

Related

Who runs OpenGL shaders if there is no video card

I am writing a very basic OpenGL C++ program (Linux 64 bits).
In fact, I have 3 programs:
a main C++ program
a vertex shader
a fragment shader
The 2 shaders are compiled at runtime. I suppose these programs run in parallel on the video card, executed by the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include a software implementation of OpenGL, called Mesa, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So, the short answer is yes your shaders can run on the CPU, but that may or may not happen, and it may or may not be automatic, it depends on what video drivers (or other OpenGL implementation) you have installed.
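If you want to check which implementation you actually got, the renderer string is the usual clue. A small sketch (assumes an OpenGL context is already current; "llvmpipe" and "softpipe" are the names Mesa's software rasterizers report):

    #include <GL/gl.h>
    #include <cstdio>
    #include <cstring>

    // Call with a current OpenGL context bound.
    void report_renderer() {
        const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (!vendor || !renderer) return;  // no current context
        std::printf("Vendor: %s, Renderer: %s\n", vendor, renderer);
        // Mesa's software rasterizers identify themselves like this:
        if (std::strstr(renderer, "llvmpipe") || std::strstr(renderer, "softpipe"))
            std::printf("Shaders are running on the CPU.\n");
    }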
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVIDIA or AMD, you will probably have a so-called "on-board", or integrated, video chip from Intel or another hardware manufacturer. The good thing is that even today's on-board GPUs are pretty good (Intel finally started doing a good job), and the chances are high that such hardware already supports a modern programmable OpenGL version. Maybe not the latest one, but from my personal experience most of Intel's on-board GPUs from 2-3 years ago support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, there is the Mesa library, which provides a software (non-GPU-accelerated) implementation of the OpenGL API.

How to detect the default GPU at Runtime?

I have a weird little problem here, and I'm having a lot of difficulty finding the answer.
I have a C++ 3D engine, and I'm using OpenCL for optimizations and OpenGL interoperability.
In my machine I have two GPUs installed, a GTX 960 and an AMD R9 280X.
Everything is working fine, including the detection of the GPUs and the CPU, and
the graphics interoperability is running really fast, as expected.
But a machine always has a default GPU on the system (on Windows this is determined by the order in which the drivers were installed).
So, when I start reading all the devices, detect the GPUs, and try to create the interoperability contexts, I hit a weird situation:
When I have AMD as the default GPU:
for the NVIDIA devices, OpenCL returns an error informing me that it's not possible to create the CL context (because it's not the default GPU), and when I create the OpenGL interop context for the AMD GPU, the context is created properly.
When I have NVIDIA as the default GPU:
for the NVIDIA devices the context is created properly, but when I try to create the AMD context, instead of returning an error, the system crashes!
So, my main problem is how to detect the default GPU at runtime so I can create the interoperability context only for the default GPU, since the AMD driver crashes instead of returning an error... (with the errors I could set a flag marking the default GPU based on the results...).
Does anyone have an idea of how I can detect the default GPU at runtime using C++?
Kind Regards.
One technique is to ask OpenGL for the device name and use that to choose the OpenCL device. Note: you may wish to reduce these to enumerations before comparing, because the strings won't match exactly (e.g. AMD vs. ATI).
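A sketch of that string-matching idea (assumes a current GL context; the classification is deliberately coarse because the GL and CL drivers report vendor strings differently):

    #include <CL/cl.h>
    #include <GL/gl.h>
    #include <algorithm>
    #include <cctype>
    #include <string>

    // Reduce a vendor string to a coarse enumeration before comparing.
    enum class Vendor { Nvidia, Amd, Intel, Unknown };

    static Vendor classify(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return std::tolower(c); });
        if (s.find("nvidia") != std::string::npos) return Vendor::Nvidia;
        if (s.find("amd") != std::string::npos ||
            s.find("ati") != std::string::npos ||
            s.find("advanced micro") != std::string::npos) return Vendor::Amd;
        if (s.find("intel") != std::string::npos) return Vendor::Intel;
        return Vendor::Unknown;
    }

    // True if an OpenCL device has the same vendor as the current GL context.
    bool matches_gl_vendor(cl_device_id dev) {
        char buf[256] = {};
        clGetDeviceInfo(dev, CL_DEVICE_VENDOR, sizeof(buf), buf, nullptr);
        const char* glv = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        return glv && classify(buf) == classify(glv);
    }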
Most likely you are mixing stuff from both GPUs; for example, a context is created for the non-default GPU using a device from the default GPU. You can run into this sort of problem when using the Khronos C++ bindings for OpenCL: whatever is not created explicitly and set as the default for the non-default GPU will be created for you by the wrapper using the default GPU.
Other C++ wrappers may suffer from similar problems. It's hard to say more without seeing the source code.
Finally, after a lot of tests, it's working as expected and really fast!!!
Basically I have two components:
01) OpenGL component
02) OpenCL component
So, in the OpenGL component I extract the GPU vendor from the graphics context that was created (since the GL context is the first thing created on the system, to make it possible to render graphics in a window).
After this initialization, I start the initialization of the OpenCL component, passing it the vendor collected by OpenGL, since that is the default GPU registered on the system.
During device initialization I set a flag marking the default GPU for OpenGL interoperation, so for all other devices a normal execution context is created, and for the default GPU device the interoperation context is created.
After that, when I request a kernel execution, I pass it the name of the component using it; if it is a normal component, the CPU and second GPU are used for heterogeneous computing, and if the call comes from the 3D component, the GPU set up for OpenGL interoperation is used!!!
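For concreteness, creating the interop context for the default GPU looks roughly like this (a sketch for the Windows/WGL path; the platform and device arguments are whatever your enumeration selected, and error handling is abbreviated):

    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <windows.h>

    // Only the device driving the current GL context will accept these
    // sharing properties; other devices fail (or, per the question, crash).
    cl_context make_interop_context(cl_platform_id platform, cl_device_id device) {
        cl_context_properties props[] = {
            CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
            CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
            CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
            0
        };
        cl_int err = CL_SUCCESS;
        cl_context ctx = clCreateContext(props, 1, &device, nullptr, nullptr, &err);
        return (err == CL_SUCCESS) ? ctx : nullptr;
    }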
Reeaaallly Cool!!!
I tested swapping the default GPU from NVIDIA to AMD and from AMD to NVIDIA, and it works lovely!!!
I tested pointing my math and physics components at the second GPU and the 3D graphics component at the default GPU, and I got great results!
The software is running like a monster dragster now!!!
Thanks so much for your help!
Kind Regards.

Can EGL application run in console mode?

I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or a pbuffer as a render target,
but the function eglGetDisplay worries me; it sounds like I still need an attached display to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: with the traditional Linux graphics driver model it is impossible to use the GPU without running an X server. If the GPU is supported by KMS+DRM+DRI, you can do it. (EDIT:) Also, in 2016 NVIDIA finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that, technically, GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this was not possible for a long time. The assumption back then (when graphics was first introduced to Linux) was: "the graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a glimmer of an idea.
Add to this that, until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed it running. And the first X server developers made the assumption that there would be a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVIDIA GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now also support for headless OpenGL contexts through EGL in the NVIDIA drivers.
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or producing any graphics output on the display framebuffer at all.
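For the NVIDIA headless-EGL route mentioned above, the usual pattern goes through the EGL_EXT_platform_device extension. A hedged sketch (extension availability and error handling are the caller's responsibility; the buffer sizes and the choice of the first device are arbitrary):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    bool create_headless_context(EGLDisplay* out_dpy, EGLContext* out_ctx) {
        // The device-platform entry points are extensions, fetched at runtime.
        auto eglQueryDevicesEXT = (PFNEGLQUERYDEVICESEXTPROC)
            eglGetProcAddress("eglQueryDevicesEXT");
        auto eglGetPlatformDisplayEXT = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
            eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (!eglQueryDevicesEXT || !eglGetPlatformDisplayEXT) return false;

        EGLDeviceEXT devices[8];
        EGLint num_devices = 0;
        eglQueryDevicesEXT(8, devices, &num_devices);
        if (num_devices == 0) return false;

        // Open a display on the first GPU device: no X server involved.
        EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                                  devices[0], nullptr);
        if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, nullptr, nullptr))
            return false;

        const EGLint cfg_attribs[] = {
            EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint n = 0;
        eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

        eglBindAPI(EGL_OPENGL_API);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        if (ctx == EGL_NO_CONTEXT) return false;

        // Render into a pbuffer (or skip the surface entirely and draw
        // into an FBO if EGL_KHR_surfaceless_context is available).
        const EGLint pb_attribs[] = { EGL_WIDTH, 640, EGL_HEIGHT, 480, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pb_attribs);
        eglMakeCurrent(dpy, surf, surf, ctx);

        *out_dpy = dpy;
        *out_ctx = ctx;
        return true;
    }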
As datenwolf said, you can create a framebuffer without using X on AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw to it. You can achieve this with the Mesa library configured to build without X.

graphics libraries and GPUs [duplicate]

Possible Duplicate:
How does OpenGL work at the lowest level?
When we write a program that uses the OpenGL library, for example for the Windows platform, and have a graphics card that supports OpenGL, what happens is this:
We develop our program in a programming language, linking the graphics calls against OpenGL (e.g. in Visual C++).
We compile and link the program for the target platform (e.g. Windows).
When we run the program, since we have a graphics card that supports OpenGL, the driver installed on Windows is responsible for managing the graphics: the CPU sends the required data to the chip on the graphics card (e.g. an NVIDIA GPU), which draws the results.
In this context we speak of graphics acceleration: the work of calculating the final framebuffer of our graphical representation is offloaded from the CPU.
In this environment, when the GPU driver receives the data, how does it leverage the capabilities of the GPU to accelerate the drawing? Does it translate the received instructions and data into something like CUDA to exploit the parallel capabilities of the hardware? Or does it just copy the data received from the CPU into specific areas of device memory? I don't quite understand this part.
Finally, if I had a card that did not support OpenGL, would the driver installed in Windows detect the problem? Would I get an error, or would the CPU calculate our framebuffer?
You'd better look at computer-gaming sites. They frequently publish articles on how 3D graphics works and how "artefacts" present themselves in the case of errors in games or drivers.
You can also read articles on the architecture of 3D libraries like Mesa or Gallium.
Overall, drivers have a set of methods for implementing each piece of functionality of Direct3D, OpenGL, or another standard API. When they load, they check the hardware. You can have a cheap video card or an expensive one, a recent one or one released 3 years ago: that is different hardware. So drivers try to map each API feature to an implementation that can be used on the given computer: accelerated by the GPU, accelerated by the CPU (e.g. via SSE4), or even some more basic implementation.
Then the driver tries to estimate the GPU load. Sometimes a function could be accelerated, yet the GPU (especially a low-end one) is already overloaded by other tasks; then the driver may calculate on the CPU instead of waiting for a GPU time slot.
When you make a mistake, there are always several possibilities, depending on the intelligence and quality of the driver.
Maybe the driver will fix the error for you, ignoring your commands and running its own set of them instead.
Maybe the driver will return some error code to your program.
Maybe the driver will execute the command as-is. If you issued painting with red colour instead of green, that is an error, but the kind the driver cannot know about. Search for "3D artefacts" on PC-gaming sites.
In the worst case, your error will interact with a bug in the driver and your computer will crash and reboot.
Of course, all those adaptive strategies are rather complex and indeterminate, which is part of why 3D drivers are closed-source and the know-how of their internals is closely guarded.
Search sites dedicated to 3D gaming, and perhaps also to 3D modelling: they rate video cards ("which are better to buy"), and sometimes, when they review new chip families, they publish rather detailed essays about the technical internals of all this.
To question 5.
Some of the things that a driver does: it compiles your GPU programs (vertex, fragment, etc. shaders) to the machine instructions of the specific card, uploads the compiled programs to the appropriate area of device memory, and arranges for the programs to be executed in parallel on the many, many graphics cores on the card.
It uploads the graphics data (vertex coordinates, textures, etc.) to the appropriate type of graphics card memory, using various hints from the programmer, for example whether the data is updated frequently, infrequently, or not at all.
It may utilize special units in the graphics card for transferring data to/from host memory; for example, some NVIDIA cards have a DMA unit (some Quadro cards may have two or more), which can upload, say, textures in parallel with the usual driver operation (other transfers, drawing, etc.).
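Those "hints from the programmer" are, in OpenGL terms, the usage argument to glBufferData. A small sketch (assumes a loader such as GLEW provides the GL 1.5+ entry points and a context is current):

    #include <GL/glew.h>

    void upload_vertices(GLuint vbo, const float* verts, GLsizeiptr bytes) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // GL_STATIC_DRAW: written once, drawn many times -- the driver is
        // free to keep the data in fast on-card memory.
        glBufferData(GL_ARRAY_BUFFER, bytes, verts, GL_STATIC_DRAW);
        // For data rewritten every frame one would pass GL_DYNAMIC_DRAW or
        // GL_STREAM_DRAW instead, hinting at host-visible placement.
    }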

How to tell whether an OpenGL context is hardware accelerated?

I know that if the OpenGL implementation does not find a suitable driver, it happily falls back to rendering everything in software mode. That's fine for graphics applications, but it is not acceptable for computer games.
I know many users are on Windows XP, and if the user does not install the video-card driver for their GPU, then OpenGL won't be hardware accelerated (while DirectX will be, or will throw errors if it isn't).
Is there a better (and ideally cross-platform) way to determine whether OpenGL is using hardware acceleration than measuring the FPS and notifying the user if it's too low?
I know that games like Quake3 can find it out somehow...
It seems that there is no direct way to query OpenGL for this, but there are some methods that may help you determine whether hardware acceleration is present. See here for Windows ideas. In a UNIX environment, glxinfo | grep "direct rendering" should work.
See also glGetString and FAQ 5.040, "How do I know my program is using hardware acceleration on a Wintel card?"
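Building on the glGetString suggestion, a minimal check might look like this (a sketch; the strings tested are the well-known ones reported by Microsoft's software implementation, but any such list is necessarily incomplete):

    #include <GL/gl.h>
    #include <cstring>

    // Call with a current OpenGL context.
    bool looks_hardware_accelerated() {
        const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        if (!vendor || !renderer) return false;
        // Windows' built-in software fallback reports vendor "Microsoft"
        // and renderer "GDI Generic" (and only OpenGL 1.1).
        if (std::strstr(vendor, "Microsoft") || std::strstr(renderer, "GDI Generic"))
            return false;
        return true;
    }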
This previous answer suggests that checking to see if the user only has OpenGL 1.1 may be sufficient.
How to write an installer that checks for openGL support?