I am using binary shaders in an OpenGL program.
I compile them once on a single machine (Linux or Windows), then use them on other machines to run the app. Until now this worked fine on GeForce 6x, QUADRO, and KQUADRO. Now, trying it on NVIDIA Tesla or GRID, it throws:
Program binary could not be loaded. Binary is not compatible with current driver/hardware combination. Error: 1005
Tesla GPUs are pretty old, so I could assume the hardware lacks this feature. But how is it that NVIDIA GRID GPUs don't support it? I have the latest drivers (319.72) installed, with OpenGL 4.3.
The Tesla and GRID machines run Ubuntu 12.04.
This is not a hardware feature per se. The compiled binary is architecture- and driver-version-specific. Generally, you are not supposed to use binary shaders for anything more than caching shaders locally. Even on the same machine, you are not guaranteed that a compiled binary will still work from one driver version to the next.
The extension that you linked to even mentions this:
Loading a program binary may also fail if the implementation determines
that there has been a change in hardware or software configuration from
when the program binary was produced such as having been compiled with
an incompatible or outdated version of the compiler. In this case the
application should fall back to providing the original OpenGL Shading
Language source shaders, and perhaps again retrieve the program binary
for future use.
You are free to ship your software with pre-compiled binary shaders, but be aware that, since OpenGL does not define a standard binary format, they will likely only work with a specific GPU architecture and driver version. This works well for some fixed-spec systems, but for portability you generally need to provide a fallback in the form of the original shader source code. This relegates binary GLSL programs mostly to caching, not portable distribution.
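In code, the cache-and-fallback pattern looks roughly like this. A minimal sketch: compileFromSource() is a hypothetical helper that builds the program from the original GLSL sources, and the cached* arguments are assumed to have been read from your on-disk cache:

    #include <vector>
    #include <GL/glew.h>

    GLuint compileFromSource();  // hypothetical: compiles/links from GLSL sources

    GLuint loadProgram(GLenum cachedFormat, const void* cachedData, GLsizei cachedLength)
    {
        // Try the cached binary first.
        GLuint program = glCreateProgram();
        glProgramBinary(program, cachedFormat, cachedData, cachedLength);

        GLint linked = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (linked != GL_TRUE) {
            // Rejected (e.g. new driver version, different GPU): fall back to source.
            glDeleteProgram(program);
            program = compileFromSource();

            // Retrieve the fresh binary so the cache can be refreshed.
            GLint length = 0;
            glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
            std::vector<char> binary(length);
            GLenum format = 0;
            glGetProgramBinary(program, length, nullptr, &format, binary.data());
            // ... write 'format' and 'binary' back to disk ...
        }
        return program;
    }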
HLSL shaders can be distributed in binary form because Microsoft defines a standard bytecode format, which drivers translate into the GPU's native instruction set. Unfortunately, OpenGL has no such equivalent. Individual vendors are allowed to extend the binary program system and define multiple binary formats (see the implementation-defined limit GL_NUM_PROGRAM_BINARY_FORMATS), some of which may be more portable than others, but binary GLSL programs are not required to work on any hardware/software configuration other than the one they were compiled and linked on.
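For completeness, enumerating those formats looks like this; what comes back is a list of opaque, vendor-defined GLenum values:

    #include <vector>
    #include <GL/glew.h>

    // Query the driver's supported binary program formats.
    std::vector<GLint> getBinaryFormats()
    {
        GLint n = 0;
        glGetIntegerv(GL_NUM_PROGRAM_BINARY_FORMATS, &n);
        std::vector<GLint> formats(n);
        if (n > 0)
            glGetIntegerv(GL_PROGRAM_BINARY_FORMATS, formats.data());
        return formats;
    }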
Apologies for the slightly jokey title, but I couldn't find another way to concisely describe the question. I work in a team that uses predominantly OpenCL code with a CPU fallback. For the most part this works fine, except when it comes to NVIDIA and their refusal to support SPIR-V for OpenCL.
I recently found and have been looking into SYCL, but the ecosystem surrounding it is more than a little bit confusing, and in one case I found one implementation referring to using another implementation.
So my question is: is there a single SYCL implementation that can produce a single binary with runtime support for NVIDIA, AMD, and Intel (preferred, but not required) on either x64 or Arm64 (we would create a second binary for the other), without having to do what we do now, which is to select a set of GPUs from the various vendors, build the kernels for each one separately, and then ship them all?
Thanks
As of December 2022, for Linux and x86_64:
The open-source version of DPC++ can compile code for all three GPU vendors. In my experience, a single binary for all three vendors works.
hipSYCL has official support for NVIDIA and AMD devices, and experimental support for Intel GPUs (via the above-mentioned DPC++).
without having to do what we do now, which is to select a set of GPUs from the various vendors, build the kernels for each one separately, and then ship them all.
Note: Under the hood, both hipSYCL and DPC++ work this way. The kernels are compiled to PTX, GCN, and/or SPIR-V. They are bundled into a single binary, though, so, in this respect, the distribution can be simpler (or not: you will likely have to also ship the SYCL runtime libraries with your application).
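To make the single-binary point concrete, here is a minimal SYCL sketch. The same executable picks whichever device the runtime finds at startup (CUDA, HIP, or Level Zero/OpenCL), provided kernels for that target were bundled at build time:

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main()
    {
        // Selects whatever device/backend is present on this machine.
        sycl::queue q{sycl::default_selector_v};
        std::cout << "Running on: "
                  << q.get_device().get_info<sycl::info::device::name>() << "\n";

        int data[4] = {1, 2, 3, 4};
        {
            sycl::buffer<int, 1> buf{data, sycl::range<1>{4}};
            q.submit([&](sycl::handler& h) {
                sycl::accessor acc{buf, h, sycl::read_write};
                h.parallel_for(sycl::range<1>{4}, [=](sycl::id<1> i) { acc[i] *= 2; });
            });
        } // buffer destructor waits and copies results back to 'data'
        std::cout << "data[0] = " << data[0] << "\n"; // prints 2
    }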
I have an OpenGL 4.5 capable GPU and I wish to test if my application runs on an OpenGL 4.3 capable system. Can I set my GPU to use OpenGL 4.3?
Can you forcibly limit your OpenGL implementation to a specific version? No. Implementations are permitted to give you any version which is 100% compatible with the one you asked for. And 4.5 is compatible with 4.3.
However, using the right OpenGL loading library, you can forcibly limit your header. Several libraries allow you to generate version-specific headers, which will provide APIs and enums for just that version and nothing else. And any extensions you wish to use.
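For what it's worth, the closest runtime approximation is to request a 4.3 core context at creation time. A sketch using GLFW (my choice purely for illustration; any context-creation toolkit works), with the caveat from above that the driver may still hand back any compatible higher version:

    #include <GLFW/glfw3.h>
    #include <cstdio>

    int main()
    {
        if (!glfwInit()) return 1;
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

        GLFWwindow* win = glfwCreateWindow(640, 480, "GL 4.3 request", nullptr, nullptr);
        if (!win) return 1;
        glfwMakeContextCurrent(win);

        // The driver is free to report 4.5 or 4.6 here; that still counts
        // as "compatible with 4.3".
        std::printf("Got: %s\n", reinterpret_cast<const char*>(glGetString(GL_VERSION)));

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }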
OpenGL Stenciling, separating ref from value written?
In the answer to this question, a vendor-specific extension, GL_REPLACE_VALUE_AMD, does exactly what I'm struggling to do in OpenGL, but I'm worried it will limit which computers and platforms my program can run on, and I've had no luck researching where it would and wouldn't be available.
My goal is for the program to run on any computer that supports OpenGL 2.0, without any functional differences between them. Should I compile a program that uses this extension, what computers/platforms in this set would no longer be able to run the program without problems, if any?
The fact that it's a vendor extension should be an immediate clue that there's a good chance that you'd be limiting yourself to that vendor's hardware. It's not a 100% guarantee; NV_texture_barrier has been implemented for years on pretty much anything that can run GL 3.3 or better.
Further research indicates that the publication date for that extension is from 2012. That suggests that the extension would likely be implemented by more recent, GL 4.x-capable hardware.
If you want more accurate information, there are databases of extension usage that give a clearer picture. From this, we see that the extension is only implemented on AMD hardware. While it is available on AMD's GL 3.x-class hardware, it is not available on any of AMD's 2.x class hardware.
So if your goal is to support GL 2.0 (why not 2.1?) as a baseline, then you can't use that extension.
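If you do use it opportunistically, the usual pattern is a runtime check plus a fallback path. A sketch for a GL 2.x context, where the extension string is a single space-separated list:

    #include <cstring>
    #include <GL/gl.h>

    // True if the current GL 2.x context advertises the named extension.
    // (Naive substring match; adequate here, since no other extension name
    // contains this one.)
    bool hasExtension(const char* name)
    {
        const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return exts && std::strstr(exts, name) != nullptr;
    }

    // GL_REPLACE_VALUE_AMD is defined by GL_AMD_stencil_operation_extended:
    //   if (hasExtension("GL_AMD_stencil_operation_extended")) { /* use it */ }
    //   else { /* fall back to an alternate stencil approach */ }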
I understand that AMD created an alternative implementation of OpenCL that runs on x86 CPUs. This is very useful from the standpoint of simplified debugging. Unfortunately, OpenCL isn't an option for me.
Are there any OpenGL x86 implementations in existence? This would greatly ease my development process, at the cost of some CPU time, of course. I would then run the same code on a GPU later, with no changes necessary.
Mesa might be an option for you.
From their website:
Mesa is the OpenGL implementation for several types of hardware made by Intel, AMD and NVIDIA, plus the VMware virtual GPU. There's also several software-based renderers: swrast (the legacy Mesa rasterizer), softpipe (a gallium reference driver) and llvmpipe (LLVM/JIT-based high-speed rasterizer).
When using Mesa you can set the LIBGL_ALWAYS_SOFTWARE environment variable, which will cause Mesa to "always use software rendering".
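A minimal sketch of opting into the software path from inside the program itself, using POSIX setenv (this must run before any GL context is created):

    #include <cstdlib>

    int main(int argc, char** argv)
    {
        // Equivalent to launching with LIBGL_ALWAYS_SOFTWARE=1 in the shell.
        setenv("LIBGL_ALWAYS_SOFTWARE", "1", /*overwrite=*/1); // POSIX-only

        // ... create the GL context and run the application as usual ...
        return 0;
    }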
OpenGL is not an instruction set, nor is it a library. It's a drawing API for interfacing with GPUs (yes, there are software-based rasterizers like Mesa's softpipe). Most computers you can find these days support OpenGL.
When you use the OpenGL API, your calls don't get "translated" into a special GPU instruction set that then becomes part of your program. OpenGL operations just create calls that eventually end up in a device driver, just like reading or writing to a file.
I've been thinking of making an additional wrapper for my project to use OpenGL rather than Allegro. I'm not sure which OpenGL version to go for, since I know some computers cannot run recent versions such as 4.4. I also need a version that compiles without problems on Linux, Windows, and Mac.
You'll want to look at what kinds of graphics cards will be available on your target systems and bear some details in mind:
OpenGL up to 1.5 can be completely emulated in software in real time on most systems. You don't necessarily need hardware support for good performance.
OpenGL 1.4 has universal support. Virtually all hardware supports it.
Mac OS X only supports up to OpenGL 2.0 or 2.1, depending on OS version and hardware. Systems using the GMA950 have only OpenGL 1.4 support. Mac OS X Lion 10.7 supports the OpenGL 3.2 Core profile on supported hardware.
On Linux, it's not unusual for users to specifically prefer open source drivers over the alternative "binary blobs," so bear in mind that the version of Mesa that most people have supports only up to about OpenGL 2.1 compatibility. Upcoming versions have support for OpenGL 3.x. Closed-source "binary blobs" will generally support the highest OpenGL version for the hardware, including up to OpenGL 4.2 Core.
When considering what hardware is available to your users, the Steam Hardware Survey may help. Note that most users have DirectX 9-compatible hardware, which is roughly feature-equivalent to OpenGL 2.0. Wikipedia's OpenGL article also specifies which hardware came with initial support for each version.
If you use a library like GLEW or GLee, or any toolkit that depends on them or offers similar functionality (like SFML, or even Allegro since 4.3), then you won't need to concern yourself with whether your code will compile. These toolkits take care of the details of enabling extensions and providing all of the symbols you need.
Given all of this, I'd suggest targeting OpenGL 2.1 to get the widest audience possible with the best feature support.
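As a sketch of what that looks like with GLEW: one init call after context creation, then version and extension flags to branch on:

    #include <GL/glew.h>

    // Call once, immediately after creating and binding a GL context.
    bool initGraphics()
    {
        if (glewInit() != GLEW_OK)
            return false;

        if (!GLEW_VERSION_2_1)  // is the full 2.1 feature set present?
            return false;

        if (GLEW_ARB_framebuffer_object) {
            // optional extension available; its entry points are ready to call
        }
        return true;
    }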
Your safe bet is OpenGL 2.1, though it needs to be supported by the driver on your target system. OpenGL ES, used on several mobile platforms, is basically a simplified OpenGL 2, so even porting to those would be fairly easy. I highly recommend using GLEW, as VJo said.
It's less about operating systems, and more about video card drivers.
I think 1.4 is the highest version that enjoys support across all consumer graphics systems: ATI (AMD), NVIDIA, and Intel IGPs. Intel is definitely the limiting factor here: even when ATI or NVIDIA hardware lacks support for a feature, they release OpenGL 4.1 drivers that use software to emulate the missing features. Not so with Intel.
OpenGL is not a library you usually compile and ship yourself (unless you're a Linux distributor packaging X.Org/Mesa). Your program just dynamically links against libGL.so (Linux/BSD), opengl32.dll (Windows; on 64-bit systems it is still called opengl32.dll, but it is in fact a 64-bit DLL), or the OpenGL framework (Mac OS X). This gives your program access to the system's OpenGL installation. The version/profile you want to use has no influence on the library you link against!
Then, after your program has been initialized, you can test which OpenGL version is available. If you want to use OpenGL 3 or 4, you'll have to jump through a few additional hoops on Windows to make full use of it, but normally some kind of wrapper helps you with context creation anyway, boiling it down to only a few lines.
Then, in the program, you can implement multiple code paths for the various versions. Usually, lower-version OpenGL code paths share a large subset with higher-version code paths. I recommend writing new code against the highest version available, then adding additional code paths (often just substitutions that can be done with C preprocessor macros or similar) for lower versions, until you reach the lowest common denominator of features you really need.
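A sketch of the runtime version check that gates those code paths; GL_MAJOR_VERSION/GL_MINOR_VERSION only exist from OpenGL 3.0 on, so older contexts fall back to parsing the version string:

    #include <cstdio>
    #include <GL/glew.h>

    void selectCodePath()
    {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (glGetError() != GL_NO_ERROR) {
            // Pre-3.0 context: parse the "major.minor ..." string instead.
            std::sscanf(reinterpret_cast<const char*>(glGetString(GL_VERSION)),
                        "%d.%d", &major, &minor);
        }

        if (major >= 3) {
            // modern code path: VAOs, GLSL 1.30+, ...
        } else {
            // legacy code path shared with lower versions
        }
    }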
Then you need to use OpenGL 1.1 and load the functions you need (and that the driver supports) through wglGetProcAddress (on Windows) or glXGetProcAddress (on Linux).
Instead of using those two functions directly, you can use the GLEW library, which does that for you and is cross-platform.
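For illustration, the manual loading looks roughly like this on Windows (GLEW does the same thing for every entry point; glext.h is the Khronos header providing the PFNGL... function-pointer typedefs):

    #include <windows.h>
    #include <GL/gl.h>
    #include <GL/glext.h>  // Khronos header with the PFNGL... typedefs

    // Must run with a current GL context; returns 0 if the entry point is missing.
    GLuint createVertexShader()
    {
        PFNGLCREATESHADERPROC pglCreateShader =
            reinterpret_cast<PFNGLCREATESHADERPROC>(wglGetProcAddress("glCreateShader"));
        return pglCreateShader ? pglCreateShader(GL_VERTEX_SHADER) : 0;
    }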