Where are the shaders compiled into machine code in Nouveau driver? - opengl

I am learning about how the OpenGL pipeline works, in terms of both the OpenGL API and the driver. According to this answer: Link, one of the duties of the GPU driver is to
compile shaders into the machine code of the GPU
To get a better understanding, I started to examine the code of the Nouveau driver, as it is open source: Gitlab link. As far as I can understand, there should be a specific part in this repo where the shaders are compiled into machine code. So the driver indirectly implements the glCompileShader() call after it has been translated into driver-specific functions by the OpenGL client library (in this case: by Mesa).
I would like to know which part of the Nouveau driver is responsible for this.
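For reference, here is roughly how that call is used on the application side; everything from glCompileShader() downwards is handled by the driver (a minimal sketch, assuming a loader or GL_GLEXT_PROTOTYPES exposes the GL 2.0+ entry points):

    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <cstdio>

    // The application hands GLSL source to the driver; the driver back end
    // (for Nouveau: inside Mesa) is what turns it into GPU machine code.
    GLuint compileShader(GLenum type, const char* source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, nullptr);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
            std::fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return shader;
    }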

Related

Who runs OpenGL shaders if there is no video card

I am writing a very basic OpenGL C++ program (Linux 64 bits).
In fact, I have 3 programs:
a main C++ program
a vertex shader
a fragment shader
The 2 shaders are compiled at runtime. I suppose these programs are run in parallel on the video card by the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include a software implementation of OpenGL, called Mesa, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So the short answer is yes, your shaders can run on the CPU. But that may or may not happen, and it may or may not be automatic; it depends on what video drivers (or other OpenGL implementation) you have installed.
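If you want to check at runtime which implementation you actually ended up with, you can query the renderer string once a context is current. A minimal sketch (the "llvmpipe"/"softpipe" substrings are Mesa's software rasterizers; other setups may report different names):

    #include <cstdio>
    #include <cstring>
    #include <GL/gl.h>

    // Call with a current OpenGL context.
    bool isSoftwareRenderer() {
        const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
        std::printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);
        // Mesa's CPU rasterizers typically report "llvmpipe" or "softpipe".
        return renderer && (std::strstr(renderer, "llvmpipe") || std::strstr(renderer, "softpipe"));
    }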
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from vendors like NVidia or AMD, you will probably have a so-called "on-board" or integrated video chip by Intel or another hardware manufacturer. The good thing is that even the on-board GPUs today are pretty good (Intel finally started doing a good job), and the chance is high that such hardware on your PC already supports a modern programmable OpenGL version. Maybe not the latest one, but from my personal experience most of Intel's on-board GPUs from 2-3 years ago should support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise you have the Mesa library, which comes with a software (non-GPU-accelerated) implementation of the OpenGL API.

How to detect the default GPU at Runtime?

I have a slightly weird problem here that I'm having a lot of difficulty finding an answer to.
I have a C++ 3D engine, and I'm using OpenCL for optimizations and OpenGL interoperability.
In my machine I have two GPUs installed, a GTX 960 and an AMD R9 280X.
Everything is working fine, including the detection of the GPUs and the CPU, and
the graphics interoperability is running really fast, as expected.
But a machine always has a default GPU on the system (on Windows this is determined by the order in which the drivers were installed).
So, when I start reading all the devices, detect the GPUs, and try to create the interoperability contexts, I get a weird situation:
When I have AMD as the default GPU:
for the NVIDIA devices OpenCL returns an error telling me that it is not possible to create the CL context (because it is not the default GPU), and when I create the OpenGL context for the AMD GPU the context is created properly.
When I have NVIDIA as the default GPU:
for the NVIDIA devices the context is created properly, but when I try to create the AMD context, instead of returning an error, the system crashes!
So my main problem is how to detect the default GPU at runtime so I can create the interoperability context only for the default GPU, since the AMD driver crashes instead of returning an error... (With the errors I could set a flag identifying the default GPU based on the results...)
Does anyone have an idea of how I can detect the default GPU at runtime using C++?
Kind Regards.
One technique is to ask OpenGL for the device name and use that to choose the OpenCL device. Note: you may wish to reduce these to enumerations before comparing, because the strings won't match exactly (e.g. AMD vs. ATI).
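A minimal sketch of that idea, assuming the OpenCL C API and a current OpenGL context (the vendor matching below is illustrative, not exhaustive):

    #include <CL/cl.h>
    #include <GL/gl.h>
    #include <string>
    #include <vector>

    // Returns the first OpenCL GPU device whose vendor loosely matches the
    // vendor reported by the current OpenGL context, or nullptr if none match.
    cl_device_id pickDeviceMatchingGL() {
        const std::string glVendor =
            reinterpret_cast<const char*>(glGetString(GL_VENDOR));
        const bool glNvidia = glVendor.find("NVIDIA") != std::string::npos;
        const bool glAmd    = glVendor.find("ATI") != std::string::npos ||
                              glVendor.find("AMD") != std::string::npos;

        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);
        std::vector<cl_platform_id> platforms(numPlatforms);
        clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

        for (cl_platform_id platform : platforms) {
            cl_uint numDevices = 0;
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
                continue;
            std::vector<cl_device_id> devices(numDevices);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);

            for (cl_device_id device : devices) {
                char buf[256] = {};
                clGetDeviceInfo(device, CL_DEVICE_VENDOR, sizeof(buf), buf, nullptr);
                const std::string clVendor(buf);
                // Compare "reduced" vendor identities, not the raw strings.
                const bool clNvidia = clVendor.find("NVIDIA") != std::string::npos;
                const bool clAmd    = clVendor.find("ATI") != std::string::npos ||
                                      clVendor.find("AMD") != std::string::npos;
                if ((glNvidia && clNvidia) || (glAmd && clAmd))
                    return device;
            }
        }
        return nullptr;
    }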
Most likely you are mixing objects from both GPUs. For example, a context is created for the non-default GPU using a device from the default GPU. You can run into this sort of problem when using the Khronos C++ bindings for OpenCL: whatever is not created explicitly and set as the default for the non-default GPU will be created by the wrapper for you, using the default GPU.
Other C++ wrappers may suffer from similar problems. It's hard to say more without seeing the source code.
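To illustrate, a sketch of creating everything explicitly with the Khronos C++ bindings so nothing silently falls back to the default device (error handling omitted; the kernel source is a placeholder):

    #include <CL/cl.hpp>   // or <CL/cl2.hpp> on newer SDKs
    #include <string>
    #include <vector>

    void setupExplicitly() {
        std::vector<cl::Platform> platforms;
        cl::Platform::get(&platforms);

        std::vector<cl::Device> devices;
        platforms[0].getDevices(CL_DEVICE_TYPE_GPU, &devices);

        // Pick the device you actually want; don't rely on cl::Device::getDefault().
        cl::Device device = devices[0];

        // Pass the chosen device explicitly to every object that takes one.
        cl::Context context(device);
        cl::CommandQueue queue(context, device);

        const std::string source = "kernel void noop() {}";
        cl::Program program(context, source);
        program.build({device});
    }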
Finally, after a lot of tests, it's working as expected and really fast!!!
Basically I have two components:
01) OpenGL Component
02) OpenCL Component
So, in the OpenGL component I extract the GPU vendor from the graphics context that was created (since the GL context is the first thing created on the system, to make it possible to render graphics in a window).
After this initialization, I start the initialization of the OpenCL component, passing it the vendor collected by OpenGL, since that is the default GPU card registered on the system.
During device initialization I set a flag marking the default GPU for OpenGL interoperation, so for all other devices a normal execution context is created, and for the default GPU device the interoperation context is created.
After that, when I request a kernel execution, I pass along the name of the component using it; if it is a normal component, the CPU and second GPU devices are used for heterogeneous computing, and if the call comes from the 3D component, the GPU set up for OpenGL interoperation is used!!!
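For anyone doing the same, here is roughly what the interop context creation can look like on Windows (a sketch, assuming the cl_khr_gl_sharing extension; the GL context must be current on the calling thread, and error handling is omitted):

    #include <CL/cl.h>
    #include <CL/cl_gl.h>
    #include <windows.h>   // wglGetCurrentContext / wglGetCurrentDC

    // Create an OpenCL context that shares resources with the current OpenGL context.
    cl_context createInteropContext(cl_platform_id platform, cl_device_id device) {
        cl_context_properties props[] = {
            CL_GL_CONTEXT_KHR,   (cl_context_properties)wglGetCurrentContext(),
            CL_WGL_HDC_KHR,      (cl_context_properties)wglGetCurrentDC(),
            CL_CONTEXT_PLATFORM, (cl_context_properties)platform,
            0
        };
        cl_int err = CL_SUCCESS;
        cl_context ctx = clCreateContext(props, 1, &device, nullptr, nullptr, &err);
        return (err == CL_SUCCESS) ? ctx : nullptr;
    }

The same extension also offers clGetGLContextInfoKHR, which can report which CL device corresponds to the current GL context, if you prefer to query that directly instead of matching vendor strings.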
Reeaaallly Cool!!!
I tested swapping the default GPU from NVIDIA to AMD and from AMD to NVIDIA, and it works beautifully!!!
I tested pointing my math and physics component at the second GPU and the 3D graphics component at the default GPU, and I got great results!
The software is running like a monster dragster now!!!
Thanks so much for your help!
Kind Regards.

Running OpenGL on windows server 2012 R2

This should be straightforward, but for some reason I can't make it work.
I hired a SoftLayer bare metal server that comes with an Nvidia Tesla GPU.
I'm remotely executing a program (OpenSCAD) that needs OpenGL > 2.0 in order to properly export a PNG file.
When I invoke OpenSCAD and export a model, I get a 0 KB PNG file as output, a clear symptom that OpenGL > 2.0 support is not present.
In order to make sure that I was running OpenGL > 2.0, I connected to my server via Remote Desktop and ran GlView. To my surprise I saw that the server was supporting nothing but OpenGL 1.1.
After a little research I found out that for standard Remote Desktop sessions the GPU is not used, so it makes sense that I'm only seeing OpenGL 1.1.
The problem is that when I execute OpenSCAD remotely, it seems that the GPU is not used either.
What can I do to successfully make the GPU capabilities of my server work when I invoke OpenSCAD remotely?
PS: I checked with SoftLayer support and they are not taking any responsibility.
Most (currently all) OpenGL implementations that use a GPU assume that there's a display system of some sort using that GPU; in the case of Windows that would be GDI. However on a headless server Windows usually doesn't start the GDI on the GPU but uses some framebuffer.
The NVidia Tesla GPUs are marketed as compute-only-devices and hence their driver does not support any graphics functionality (note that this is a marketing limitation implemented in software, as the silicon is perfectly capable of doing graphics). Or in other words: If you can implement your graphics operations using CUDA or OpenCL, then you can use it to generate pictures. Otherwise (i.e. for OpenGL or Direct3D) it's useless.
Note that NVidia is marketing their "GRID" products for remote/cloud rendering.
I'm replying because I faced a similar problem in the past, also trying to run an application that needed OpenGL 4 on a Windows server.
Windows Remote Desktop indeed doesn't trigger OpenGL. However, if you use TigerVNC instead and then start your OpenSCAD application, it might recognize your OpenGL drivers. At least this trick did it for me.
(When opening an OpenGL context, a program scans for attached monitors/remote desktops, I presume.)
Hope it helps.

Can EGL application run in console mode?

I want to implement an OpenGL application which generates images that I then view via a webpage.
The application is intended to run on a Linux server which has no display and no X Windows, but does have a GPU.
I know that EGL can use a pixmap or pbuffer as a render target.
But the function eglGetDisplay worries me; it sounds like I still need an attached display to make it work?
Does EGL work without a display and without X Windows or Wayland?
This is a recurring question. TL;DR: With the current Linux graphics driver model it is impossible to use the GPU with traditional drivers without running an X server. If the GPU is supported by KMS+DRM+DRI you can do it. (EDIT:) Also, in 2016 Nvidia finally introduced truly headless OpenGL support in their drivers through EGL.
The long story is that technically GPUs are perfectly capable of rendering to an offscreen buffer without a display being attached or a graphics server running. However, due to the history of graphics driver and environment development, this has not been possible for a long time. The assumption back then (when graphics was first introduced to Linux) was: "The graphics device is there to deliver a picture to a screen." That a graphics card could be used as an accelerating coprocessor was not even a figment of an idea.
Add to this, that until a few years ago, the Linux kernel itself had no idea how to talk to graphics devices (other than a dumb framebuffer somewhere in the system's address space). The X server was what talked to GPUs, so you needed that to run. And the first X server developers made the assumption that there is a person between keyboard and chair.
So what are your options:
Short term, if you're using an NVidia GPU: just start an X server. You don't need a full-blown desktop environment; you can even save yourself the trouble of starting a window manager. Just have the X server claim the VT and be active. There is now support for headless OpenGL contexts through EGL in the Nvidia drivers.
If you're using an AMD or Intel GPU you can talk directly to it, either through EGL or using KMS (Google for something called kmscube; when trying it, make sure you switch away from your X server to a text VT first, otherwise you'll crash the X server). I've not tried it yet, but it should be possible to adjust the kmscube example so that it uses the GPU to render into an offscreen buffer, without switching the VT to graphics mode or having any graphics output on the display framebuffer at all.
As datenwolf said, you can create a framebuffer without using X with AMD and Intel GPUs. I am using an AMD graphics card with EGL, and I am able to create a framebuffer and draw to it. You can achieve this with the Mesa library, configured without X.
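For reference, a minimal sketch of getting a truly headless EGL display via the EGL_EXT_device_enumeration / EGL_EXT_platform_device path (assuming your driver exposes these extensions; error handling omitted):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // Create an EGL display without any windowing system.
    EGLDisplay createHeadlessDisplay() {
        auto eglQueryDevicesEXT = (PFNEGLQUERYDEVICESEXTPROC)
            eglGetProcAddress("eglQueryDevicesEXT");
        auto eglGetPlatformDisplayEXT = (PFNEGLGETPLATFORMDISPLAYEXTPROC)
            eglGetProcAddress("eglGetPlatformDisplayEXT");

        EGLDeviceEXT devices[8];
        EGLint numDevices = 0;
        eglQueryDevicesEXT(8, devices, &numDevices);

        // Pick the first device; a real application would inspect them all.
        EGLDisplay display = eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT,
                                                      devices[0], nullptr);
        EGLint major = 0, minor = 0;
        eglInitialize(display, &major, &minor);
        return display;
    }

After that you can eglBindAPI(EGL_OPENGL_API), create a context, and render into a pbuffer surface (or use EGL_KHR_surfaceless_context with a framebuffer object) without any display attached.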

What is OpenGL as a computer file

Ok, I know that online there are millions of answers to what OpenGL is, but I'm trying to figure out what it is in terms of a file on my computer. I've researched and found that OpenGL acts as a multi-platform translator for different computer graphics cards. So, then, is it a DLL?
I was wondering, if it's a DLL, couldn't I just download any version of it (preferably the latest) and then use it, knowing what it contains?
EDIT: OK, so if it's a Windows DLL and I make an OpenGL game that uses a recent version, what if it's not supported on someone else's computer with an earlier version? Am I allowed to ship the DLL with my game so that it's supported on other Windows computers? Or is the DLL set up to communicate with the graphics card strictly on specific computers?
OpenGL is constantly being updated (whatever it is). How can this be done if all it's doing is communicating with a graphics card on a bunch of different computers that have graphics cards that are never going to be updated since they're built in?
There are two "parts" to OpenGL - the specification that's updated by the Khronos Group once every few months, and the driver that's written by your graphics card manufacturer specifically for your graphics card model.
The OpenGL specification essentially details how everything about the OpenGL API should work - what the expected behavior should be, when something is considered unexpected behavior, when to throw which errors, etc. The specification lets the driver writers know exactly what they need to do and lets application writers know what to expect from a driver. This is what OpenGL really "is" - the glue that holds applications and drivers together. You can read all the specifications for each version here.
Then there's drivers that implement the OpenGL API and are considered compliant to the specification. The driver does exactly what you'd expect it to do - copy data to and from the graphics card's memory, write data to graphics card registers, keep track of state, process vertices, compile shaders, instruct hundreds of stream processors to simultaneously transform vertices and fill pixels, etc. Without OpenGL, each graphics card model would have a separate, slightly faster API that would only work for that one graphics card because of the way it was structured. With OpenGL, the drivers are all written against the same API and an application's code will run on all graphics cards.
Compliance to the OpenGL specification doesn't change with driver updates. Most driver updates will either fix minor bugs or do some internal optimizing.
I know at one point there was a small bug in the ATI driver where you had to call glEnable(GL_TEXTURE_2D); before you could generate mipmaps the OpenGL 3 way (glGenerateMipmap()), despite GL_TEXTURE_2D being deprecated as a possible value for glEnable(). I'm not sure if it's fixed now, but it's certainly the type of edge case that can easily be overlooked by driver writers.
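For reference, the workaround looked roughly like this (a sketch; tex is a placeholder texture handle):

    // Workaround for the old ATI driver bug described above.
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_TEXTURE_2D);          // not required by the GL 3+ spec, but some drivers wanted it
    glGenerateMipmap(GL_TEXTURE_2D);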
As for optimizations, there's a lot to optimize. Maybe there's another way to optimize shaders when they're being compiled, maybe there's a more efficient way to distribute work between the stream processors, I don't know.
OpenGL is a cross-platform API for graphics programming. In terms of compiled code, it will be available as an OS-specific library - e.g. a DLL (opengl32.dll) in Windows, or an SO in Linux.
You can get the SDK and binary redistributables from OpenGL.org
Depending on which language you're using, there may be OpenGL wrappers available. These are classes or API libraries designed to specifically work with your language. For example, there are .NET OpenGL wrappers available for C# / VB.NET developers. A quick Google search on the subject should give you some results.
The OpenGL API occasionally has new versions released, but these updates are backwards-compatible in nature. Moreover, new features are generally added as extensions, and it's possible to detect which extensions are present and only use those which are locally available and supported... so you can build your software to take advantage of new features when they're available but still be able to run when they aren't.
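For example, a minimal sketch of checking the version and extensions at runtime before using a feature (core-profile style; the loader header and extension name are just examples):

    #include <cstring>
    #include <GL/glew.h>   // any loader that exposes glGetStringi works

    // Call with a current OpenGL context.
    bool hasExtension(const char* name) {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
            if (ext && std::strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }

    // Usage: query the version and fall back gracefully when a feature is missing.
    //   const char* version = (const char*)glGetString(GL_VERSION);
    //   bool hasDSA = hasExtension("GL_ARB_direct_state_access");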
The API has nothing to do with individual drivers -- drivers can be updated without changing the API, so the fact that drivers are constantly updated does not matter for purposes of compatibility with your software. On that account, you can choose an API version to develop against, and as long as your target operating system ships with a version of the OpenGL library compatible with that API, you don't need to worry about driver updates breaking your software's ability to be dynamically linked against the locally available libraries.