Using the AMD C++ binding and the most recent SDK, an OpenCL program that gets a platform, selects a GPU, and then compiles 4 kernels fails with the above error on startup. It works fine on my computer, whose GPU only supports up to OpenCL 1.1, but other computers seem to hit the above error. Is this a problem with the compilation (as in, I have to define some macros), a missing driver, the C++ binding, or something else? I don't explicitly call clRetainDevice in my own code; is it part of the binding somewhere?
It happens when you use the C++ bindings header file together with the OpenCL 1.2 headers, for instance when you run an application compiled with the AMD SDK (OpenCL 1.2) on an NVIDIA platform (OpenCL 1.1 only).
As a quick and dirty workaround, you can just edit the AMD SDK's cl.h header and undef the "CL_VERSION_1_2" preprocessor symbol. If you are not interested in 1.2 features, that should fix your problem.
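If you would rather not edit the SDK headers, a sketch of the same idea done in your own source is shown below. It assumes the classic cl.hpp bindings, which gate their 1.2-only code paths (such as clRetainDevice) on CL_VERSION_1_2, so the details may vary with your SDK version.

// Sketch: pin the C++ bindings to OpenCL 1.1 without editing cl.h.
#define CL_USE_DEPRECATED_OPENCL_1_1_APIS   // keep deprecated 1.1 entry points visible
#include <CL/cl.h>

#ifdef CL_VERSION_1_2
#undef CL_VERSION_1_2    // hide the 1.2-only paths (clRetainDevice etc.) from cl.hpp
#endif

#include <CL/cl.hpp>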
Related
Using clang, I am able to compile OpenCL-C++ kernels (using clang -c). I am trying to load these compiled kernels into my OpenCL application, but am at a loss as to how to achieve that. I am using Ubuntu 22.04, with an Intel CPU and an Nvidia GPU. The GPU unfortunately does not support SPIR-V injection via clCreateProgramWithIL; if it did, I would happily take that route. I also cannot use clCreateProgramWithSource, because that unfortunately does not support C++ features inside the kernels.
Is there any way I can compile OpenCL-C++ kernels using clang and then load them into my OpenCL application? Or is there a way I can still use clCreateProgramWithSource with C++ features inside the kernels, maybe? Either way would work well! (There has been a similar question here but focusing on macOS, which has its own OpenCL implementation and compiler, as far as I know.)
I'm a bit confused about when exactly I need to use an OpenGL function loader like GLEW. In general, it seems like you first obtain a window and valid OpenGL context and then attempt to load functions.
Sometimes these functions are referred to as extensions, and sometimes they are called core functions. It seems like what gets loaded and classified as 'core' or 'extension' is platform dependent. Are the loaded functions in addition to some base set?
Do you need to load functions in the same way on OpenGL ES platforms as well? Taking a quick look at GLEW, I don't see any explicit support for OpenGL ES. Other GL function loader libs do explicitly mention support for ES, however (like https://github.com/Dav1dde/glad)
OpenGL functions (core or extension) must be loaded at runtime, dynamically, whenever the function in question is not part of the platform's original OpenGL ABI (application binary interface).
For Windows the ABI covers OpenGL-1.1
For Linux the ABI covers OpenGL-1.2 (there's no official OpenGL ABI for other *nixes, but they usually require OpenGL-1.2 as well)
For MacOS X the OpenGL version available, and with it the ABI, is defined by the OS version.
This leads to the following rules:
In Windows you're going to need a function loader for pretty much everything except single-textured, shaderless, fixed-function drawing; it may be possible to load further functionality, but this is not a given.
In Linux you're going to need a function loader for pretty much everything except basic multitextured (with just the basic texenv modes), shaderless, fixed-function drawing; it may be possible to load further functionality, but this is not a given.
In MacOS X you don't need a function loader at all, but the OpenGL features you can use are strictly determined by the OS version: either you have a feature or you don't.
The difference between core OpenGL functions and extensions is that core functions are part of the OpenGL specification for the version in question, while extensions are functionality that may or may not be available in addition to what the available OpenGL version provides.
Both extensions and newer version core functions are loaded through the same mechanism.
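To make that concrete, here is a minimal sketch for Windows. It assumes glext.h from the Khronos registry is on your include path and that a current OpenGL context is already bound to the calling thread.

// Loading a post-1.1 core function exactly like an extension function.
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>    // provides the PFNGL... function pointer typedefs

PFNGLGENBUFFERSPROC pglGenBuffers = nullptr;

bool loadEntryPoints()
{
    pglGenBuffers = reinterpret_cast<PFNGLGENBUFFERSPROC>(
        wglGetProcAddress("glGenBuffers"));
    // nullptr means the driver does not expose this name (version too old).
    return pglGenBuffers != nullptr;
}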
datenwolf's answer is great, but I wanted to clarify something you said in the first bullet point of your question.
Core and extension status is not platform-dependent or even mutually exclusive.
Core means that some feature was introduced in a certain version of OpenGL. There are core functions, which are things that are guaranteed to exist in version X.Y and there are even core extensions, which are extensions that were introduced alongside version X.Y. Core extensions provide the same functions, types, enums, etc. as the core feature only in an extension form that does not require a specific version.
For example, Framebuffer Objects went core in OpenGL 3.0, and are slightly less restrictive than the EXT extension (GL_EXT_framebuffer_object) that predates OpenGL 3.0. However, it is not necessary to have an OpenGL 3.0 implementation to have access to the core version of FBOs; an OpenGL 2.1 implementation might offer the core functionality through the GL_ARB_framebuffer_object core extension.
In the extension specification for GL_ARB_framebuffer_object, you will find:
Issues
(8) Why don't the new tokens and entry points in this extension have
"ARB" suffixes like other ARB extensions?
RESOLVED: Unlike most ARB extensions, this is a strict subset of
functionality already approved in OpenGL 3.0. This extension
exists only to support that functionality on older hardware that
cannot implement a full OpenGL 3.0 driver. Since there are no
possible behavior changes between the ARB extension and core
features, source code compatibility is improved by not using
suffixes on the extension.
That is the first mention of a core extension that I can recall, but it is not the last. Since then many ARB extensions have been created that "backport" (if you will) core functionality from a higher version.
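At runtime you can check whether an implementation advertises such a core extension. A minimal sketch for a pre-3.0 context, where the classic extension string is still the way to query this:

// Detect the GL_ARB_framebuffer_object core extension on a pre-3.0 context.
// (Core 3.0+ contexts would query per-index with glGetStringi instead.)
#include <cstring>
#include <GL/gl.h>

bool hasCoreFboExtension()
{
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    return ext != nullptr &&
           std::strstr(ext, "GL_ARB_framebuffer_object") != nullptr;
}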
Here is some sample output gathered by parsing gl.xml for another core extension:
>> Command: void glBufferStorage (GLenum target, GLsizeiptr size, const void *data, GLbitfield flags)
 * Provided by GL_ARB_buffer_storage (gl|glcore)
 * Core in GL_VERSION_4_4 (gl 4.4)
It is core in 4.4 (guaranteed to exist in a 4.4 implementation), but because the extension that provides it is flagged glcore, the same function may also be available in older implementations that expose the core extension.
The simple piece of software I wrote to parse gl.xml for this information can be found here if you are interested.
Function loaders are only needed on Windows and Linux. Here's a quick overview of how you build for various OpenGL versions on different platforms.
Windows
The Windows development tools only contain headers for OpenGL 1.1. The conspiracy theorists would probably claim that Microsoft is not interested in making the use of OpenGL easy because it wants developers to use a proprietary API instead.
For anything beyond 1.1, you need to load the entry points dynamically by calling wglGetProcAddress(). Libraries like GLEW provide header files for higher OpenGL versions, and encapsulate the logic for loading the entry points.
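For illustration, a typical GLEW setup is just a couple of lines, called once right after the window and context have been created and made current (a sketch, not tied to any particular windowing library):

#include <GL/glew.h>

bool initGlFunctions()
{
    glewExperimental = GL_TRUE;       // helps older GLEW releases on core profiles
    return glewInit() == GLEW_OK;     // resolves every entry point the driver exposes
}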
Linux
I haven't done OpenGL programming on Linux. From what I hear, it requires function loading similar to Windows. I'll defer to @datenwolf's answer for the details.
Mac OS
Mac OS supports two main OpenGL feature sets:
OpenGL 2.1 with legacy features. This is used by including <OpenGL/gl.h>.
OpenGL 3.x and higher, Core Profile only. Used by including <OpenGL/gl3.h>.
In both cases, you don't need any dynamic function loading. The header files contain all the declarations/definitions for the maximum version that can be supported, and the framework you link against (using -framework OpenGL) resolves the function names.
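A small sketch of what that looks like in practice (Core Profile header, no loader anywhere):

#include <OpenGL/gl3.h>   // link with: -framework OpenGL

GLuint createVao()
{
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);   // resolved by the OpenGL framework, no wgl/glX-style lookup
    return vao;
}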
The maximum version you can use at build time is determined by the platform SDK you build against. By default, this is the platform SDK that matches the OS of your build machine, but you can change it by using the -isysroot build option.
At runtime, the machine has to run at least the OS matching the platform SDK used at build time, and you can only use features up to the version supported by the GPU. You can get an overview of what version is supported on which hardware on:
https://developer.apple.com/opengl/capabilities/
http://support.apple.com/en-us/HT202823
Android, NDK
With native code on Android, you choose the OpenGL version while setting up the context and surface. Your code then includes the desired header (like <GLES2/gl2.h> or <GLES3/gl3.h>) and links against the matching libraries. There is no dynamic function loading needed.
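As a small illustration, assuming an ES 3.0 context and surface are already current on the calling thread:

#include <GLES3/gl3.h>    // link with: -lGLESv3

void clearFrame()
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);   // called directly, no loader involved
    glClear(GL_COLOR_BUFFER_BIT);
}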
If the target device does not support the version you are trying to use, the context creation will fail. You can have an entry in the manifest that prevents the app from being installed on devices that will not support the required ES version.
Android, Java
This is very similar to the NDK case. The desired version is specified during setup, e.g. while creating a GLSurfaceView.
The GLES20 class contains the definitions for ES 2.0. GLES30 derives from GLES20, and adds the additional definitions for ES 3.0.
iOS
Not surprisingly, this is very similar to Mac OS. You include the header file that matches the desired OpenGL ES version (e.g. <OpenGLES/ES3/gl.h>), link against the framework, and you're all done.
Also matching Mac OS, the maximum version you can build against is determined by the platform SDK version you choose. Devices you want to run on then have to use at least the OS version that matches this platform SDK version, and support the OpenGL ES version you are using.
One main difference is obviously that you cross compile the app on a Mac. iOS uses a different set of platform SDKs with different headers and frameworks, but the overall process is pretty much the same as building for Mac OS.
I have a computer that has an Intel CPU and an NVIDIA GPU, running Windows 7. I have a software module that is written in NVIDIA CUDA, and another module written in OpenCL. I would like to run the OpenCL module on the CPU, using the Intel implementation of OpenCL, and at the same time, use the CUDA module.
In my system I installed first the CUDA SDK, and then the SDK from Intel.
I've compiled the program in Visual Studio 2012, instructing the linker to use Intel's library (and I compiled against the OpenCL headers provided by Intel).
However, when I run a simple program to query the hardware, I'm only able to see the NVIDIA card.
I've tried modifying the Windows Registry and the PATH variable, with no luck. When I query the dependencies with Dependency Walker, I see that the program depends on a DLL located in c:\windows\system32, which is not the folder where the Intel DLL is. I've tried deleting this DLL, but I still see the dependency, and I'm only able to access the GPU.
Any idea about what could be happening?
On Windows, "OpenCL.dll" is the ICD (Installable Client Driver) loader, provided by Khronos and redistributed by AMD, NVIDIA, and Intel.
The actual drivers are referenced by the Registry, and the ICD enumerates them all.
When you query the OpenCL platforms, you'll see one for each installed driver (AMD, NVIDIA, Intel).
Within each platform there will be one or more devices; for example, under the NVIDIA platform you'll find your NVIDIA GPU, and under the Intel platform you'll find your CPU.
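A minimal sketch of that enumeration, using only the standard OpenCL C API (nothing vendor specific); if the Intel runtime is registered correctly, its platform should appear in the output:

#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main()
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    if (numPlatforms == 0) { std::puts("No OpenCL platforms found"); return 1; }

    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);

        cl_uint numDevices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        std::printf("Platform: %s (%u device(s))\n", name, numDevices);
    }
    return 0;
}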
Don't replace OpenCL.dll
Run clinfo or GPU-Z to see what OpenCL platforms and devices it sees.
Re-install the Intel CPU driver (a new one was just posted 2 days ago) to make sure their driver is installed.
Note: Your CPU needs to have SSE 4.2 for the Intel CPU driver to work.
You could try the Installable Client Driver (ICD) Loader. However, I have no experience with whether it works on Windows.
Or:
Since you don't want to use the GPU with OpenCL, you can simply copy the Intel OpenCL.dll into your working directory. The working directory is visited first when DLLs are loaded, so even if the NVIDIA OpenCL.dll is installed in your windows/system32 directory, the Intel library is found first and therefore loaded. There may be better solutions, such as loading the DLL on demand as discussed in Dynamically load a function from a DLL, but as a quick solution it should work.
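If you go the on-demand route instead, a rough sketch looks like the following. The DLL path is hypothetical and would have to point at your actual Intel installation, and only clGetPlatformIDs is bound here for brevity:

#include <windows.h>
#include <CL/cl.h>
#include <cstdio>

// Function pointer type matching clGetPlatformIDs
// (CL_API_CALL is the OpenCL calling-convention macro from cl_platform.h).
typedef cl_int (CL_API_CALL *clGetPlatformIDs_fn)(cl_uint, cl_platform_id *, cl_uint *);

int main()
{
    HMODULE lib = LoadLibraryA("C:\\IntelOpenCL\\OpenCL.dll");   // hypothetical path
    if (!lib) return 1;

    auto getPlatformIDs = reinterpret_cast<clGetPlatformIDs_fn>(
        GetProcAddress(lib, "clGetPlatformIDs"));
    if (!getPlatformIDs) { FreeLibrary(lib); return 1; }

    cl_uint count = 0;
    getPlatformIDs(0, nullptr, &count);
    std::printf("Platforms seen through this DLL: %u\n", count);

    FreeLibrary(lib);
    return 0;
}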
I am trying to get a program that will run on both ATI and NVidia, and as such, I want to avoid using either SDK. Is it possible to do this without an SDK, using only VS2010 and Windows (XP or 7)?
If so, how can I go about configuring VS2010 Linker so that it will work?
Strictly speaking, no SDK is needed. In fact, no SDK is desired, as both the NVIDIA and AMD/ATI SDKs tie the code to their environments, and, by extension, their hardware. What you do need is:
1) A GPU that will run OpenCL code. See this Question: List of OpenCl Compliant CPU/GPU
2) The OpenCL library (libOpenCL.so on Linux, OpenCL.dll on Windows); this is usually included and installed with the graphics driver, which may be downloaded from AMD or NVIDIA.
3) The OpenCL header files. These may be obtained from Khronos.org, but are included with all OpenCL SDKs that I am aware of. On a Linux system these typically go in the directory /usr/include/CL
The NVIDIA and AMD SDKs provide a number of utilities and wrappers that make using the OpenCL API easier, but they are not required for writing OpenCL code or for making API calls. These wrappers and utilities are not portable. If you're interested in writing portable code, stick to the OpenCL spec, also available from Khronos.org.
To write code, all that you need to do is include opencl.h in your host program, and then make the API calls that are necessary to set up the OpenCL environment and run your OpenCL program. Also, don't forget to link against the OpenCL library (give gcc the -lOpenCL flag under Linux).
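As a sanity check that everything links without any vendor SDK, a minimal host program can be as small as this (the Linux build line is an assumption; on Windows you would link against the driver's OpenCL.lib/OpenCL.dll instead):

// Build sketch (Linux): g++ host.cpp -lOpenCL
#include <CL/opencl.h>
#include <cstdio>

int main()
{
    cl_uint numPlatforms = 0;
    if (clGetPlatformIDs(0, nullptr, &numPlatforms) != CL_SUCCESS || numPlatforms == 0) {
        std::fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    std::printf("Found %u OpenCL platform(s)\n", numPlatforms);
    return 0;
}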
OpenCL is a standard; it only defines conventions. To use it, you need a driver for your graphics card. NVidia, AMD (ATI) and Apple all provide such drivers. You definitely need an SDK.
@virtuallinux alludes to the right answer: if you're worried about accidentally using some vendor-specific extensions, get the Khronos SDK.