I know how to get the OpenGL version on Linux using glxinfo. What I need is the OpenGL version number from its headers.
Inside GL/gl.h I have these defines. How do I get a numeric version number from these?
#define GL_VENDOR 0x1F00
#define GL_RENDERER 0x1F01
#define GL_VERSION 0x1F02
#define GL_EXTENSIONS 0x1F03
This is valid for OpenGL 3.0 and newer contexts (including core profiles):
int major = 0;
int minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major); // available since OpenGL 3.0
glGetIntegerv(GL_MINOR_VERSION, &minor);
In "old" 1.1 OpenGL you can only get the version string with
glGetString(GL_VERSION)
But this is a string and you'll need to manually parse it.
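A minimal parsing sketch (the spec guarantees the string begins with "<major>.<minor>"; anything after that, such as release number and vendor info, is ignored here):
// Parse the leading "<major>.<minor>" out of the version string.
const char *version = (const char *)glGetString(GL_VERSION);
int major = 0, minor = 0;
if (version != NULL)
    sscanf(version, "%d.%d", &major, &minor);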
I know how to get the OpenGL version on Linux using glxinfo. What I need is the OpenGL version number from its headers.
It doesn't work that way. The OpenGL version available is a runtime property and is not known at compile time. You have to query it at runtime using glGet…
I tried compiling OpenCV 2.4.13.1 against OpenCL 1.1, with headers from https://github.com/KhronosGroup/OpenCL-Headers.
I had to change #ifdef CL_VERSION_1_2 to #ifdef CL_VERSION_1_1 in opencv/cmake/checks/opencl.cpp.
Also, http://docs.opencv.org/2.4.13/modules/ocl/doc/introduction.html suggests that it should work with OpenCL 1.1.
But I still get errors like cl_runtime_opencl.hpp:294:61: error: 'cl_device_partition_property' does not name a type when building.
Do I have to go back to an older version to get OpenCL 1.1 working, or have I missed something?
Edit:
I don't mind an answer for OpenCV 3.0
I want to make sure that my application conforms to OpenGL 2.1.
How can I check this?
Because my computer supports GL 4.4, a call like glGenVertexArrays() will succeed even though glGenVertexArrays() is only available with GL 3+.
So, I want to verify that my app only uses GL2.1 functionality.
One way is to run it on my old PC that supports only GL 2.1, but I'm looking for an easier way.
If you find an extension loader that supports generating version-specific headers, as described by @datenwolf, that's probably your easiest solution. There's another option you can try if necessary.
The official OpenGL headers, which you can find at https://www.opengl.org/registry, contain the definitions grouped by version and enclosed in preprocessor conditionals. The layout looks like this:
...
#ifndef GL_VERSION_2_1
#define GL_VERSION_2_1 1
// GL 2.1 definitions
#endif
#ifndef GL_VERSION_3_0
#define GL_VERSION_3_0 1
// GL 3.0 definitions
#endif
#ifndef GL_VERSION_3_1
#define GL_VERSION_3_1 1
// GL 3.1 definitions
#endif
...
You should be able to include the official header at least for a version test. If you disable the versions you do not want to use by defining the corresponding preprocessor symbols, you will get compile errors if you try to use features from those versions. For example, for GL 2.1:
// Pre-defining these guards makes glext.h skip the 3.0+ declarations,
// so any use of a newer function fails to compile.
#define GL_VERSION_3_0 1
#define GL_VERSION_3_1 1
#define GL_VERSION_3_2 1
#define GL_VERSION_3_3 1
#define GL_VERSION_4_0 1
#define GL_VERSION_4_1 1
#define GL_VERSION_4_2 1
#define GL_VERSION_4_3 1
#define GL_VERSION_4_4 1
#define GL_VERSION_4_5 1
#include <GL/glext.h>
// your code
Try https://github.com/cginternals/glbinding. It's an OpenGL wrapper library which supports exactly what you ask for:
Feature-Centered Header Design
The OpenGL API is iteratively developed and released in versions,
internally (for the API specification) named features. The latest
feature/version of OpenGL is 4.5. The previous versions are 1.0, 1.1,
1.2, 1.3, 1.4, 1.5, 2.0, 2.1, 3.0, 3.1, 3.2, 3.3, 4.0, 4.1, 4.2, 4.3, and 4.4. OpenGL uses a deprecation model for removing outdated parts
of its API which results in compatibility (with deprecated API) and
core (without deprecated API) usage that is manifested in the targeted
OpenGL context. On top of that, new API concepts are suggested as
extensions (often vendor specific) that might be integrated into
future versions. All this results in many possible specific
manifestations of the OpenGL API you can use in your program.
One tough task is to adhere to one agreed set of functions in your own
OpenGL program (e.g., OpenGL 3.2 Core if you want to develop for every
Windows, macOS, and Linux released in the last 4 years). glbinding
facilitates this by providing per-feature headers by means of
well-defined/generated subsets of the OpenGL API.
All-Features OpenGL Headers
If you do not use per-feature headers the OpenGL program can look like
this:
#include <glbinding/gl/gl.h>
// draw code
gl::glClear(gl::GL_COLOR_BUFFER_BIT | gl::GL_DEPTH_BUFFER_BIT);
gl::glUniform1i(u_numcubes, m_numcubes);
gl::glDrawElementsInstanced(gl::GL_TRIANGLES, 18, gl::GL_UNSIGNED_BYTE, 0, m_numcubes * m_numcubes);
Single-Feature OpenGL Headers
When developing your code on Windows with the latest drivers installed,
the code above is likely to compile and run. But if you want to port
it to systems with less mature driver support (e.g., macOS or Linux
using open source drivers), you may wonder if glDrawElementsInstanced
is available. In this case, just switch to per-feature headers of
glbinding and choose the OpenGL 3.2 Core headers (as you know that at
least this version is available on all target platforms):
#include <glbinding/gl32core/gl.h>
// draw code
gl32core::glClear(gl32core::GL_COLOR_BUFFER_BIT | gl32core::GL_DEPTH_BUFFER_BIT);
gl32core::glUniform1i(u_numcubes, m_numcubes);
gl32core::glDrawElementsInstanced(gl32core::GL_TRIANGLES, 18, gl32core::GL_UNSIGNED_BYTE, 0, m_numcubes * m_numcubes);
If the code compiles then you can be sure it is OpenGL 3.2 Core
compliant. Using functions that are not yet available or relying on
deprecated functionality is prevented.
You can compile it in an environment in which only the OpenGL-2.1 symbols are available. Depending on which extension wrapper / loader you use this can be easy or hard.
For example if you use the glloadgen OpenGL loader generator you can generate a header file and compilation unit that will cover only exactly the OpenGL-2.1 symbols and tokens. If you then compile your project using this, the compiler will error out on anything that's not covered.
I added several different versions of Eigen to the default include directory of Visual C++.
But I ran into a crash when using LDLT (Cholesky decomposition) on some of my numerical test examples.
So I want to determine which version is actually active when debugging the code.
Is there any function which indicates the currently active Eigen version number?
This answer is only a summary of the comments above:
At compile-time you have EIGEN_WORLD_VERSION, EIGEN_MAJOR_VERSION
and EIGEN_MINOR_VERSION, you can easily embed this information in
your application.
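For example, a minimal sketch that embeds the version in your application's output (only the documented macros are used):
#include <iostream>
#include <Eigen/Core> // pulls in Eigen/src/Core/util/Macros.h

int main() {
    // Prints e.g. "Eigen 3.3.7"
    std::cout << "Eigen " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "."
              << EIGEN_MINOR_VERSION << std::endl;
    return 0;
}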
3.1.91 sounds like a beta version of 3.2.
The version number macros are defined in Macros.h, located at Eigen/src/Core/util/.
In order to check the version number of the Eigen C++ template library, just type
dpkg -p libeigen3-dev
in the terminal.
Or just type
pkg-config --modversion eigen3
and you will get the Eigen version.
Although it is not the goal of the OP, people finding this question may be interested in checking whether the version is equal to or newer than a specific release, for compatibility with different versions of Eigen. This can be done more easily using the EIGEN_VERSION_AT_LEAST(x, y, z) macro, as follows:
#if EIGEN_VERSION_AT_LEAST(3,3,0)
// Implementation for Eigen 3.3.0 and newer
#else
// Implementation for older Eigen versions
#endif
This macro is also defined in Eigen/src/Core/util/Macros.h and uses EIGEN_WORLD_VERSION, EIGEN_MAJOR_VERSION and EIGEN_MINOR_VERSION internally.
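For reference, the definition in Macros.h reads roughly as follows (copied from a recent Eigen release; check your own copy if exactness matters):
// True if the compile-time Eigen version is at least x.y.z
#define EIGEN_VERSION_AT_LEAST(x,y,z) (EIGEN_WORLD_VERSION>x || (EIGEN_WORLD_VERSION>=x && \
                                       (EIGEN_MAJOR_VERSION>y || (EIGEN_MAJOR_VERSION>=y && \
                                        EIGEN_MINOR_VERSION>=z))))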
On Linux:
grep "#define EIGEN_[^_]*_VERSION" /usr/local/include/eigen3/Eigen/src/Core/util/Macros.h
You'll get something like:
#define EIGEN_WORLD_VERSION 3
#define EIGEN_MAJOR_VERSION 3
#define EIGEN_MINOR_VERSION 7
That means the installed version is 3.3.7.
I'm writing a small hello-world OpenCL program using the Khronos Group's cl.hpp for OpenCL 1.2 and NVIDIA's OpenCL libraries. The drivers and ICD I have support OpenCL 1.1. Since the NVIDIA side doesn't support 1.2 yet, I get errors on functions required by OpenCL 1.2.
On the other side, cl.hpp for OpenCL 1.2 has a flag, CL_VERSION_1_1 to be exact, to run the header in 1.1 mode, but it's not working. Does anybody have a similar experience or a solution?
Note: cl.hpp for version 1.1 works, but generates many warnings during compilation. This is why I'm trying to use the 1.2 version.
Unfortunately NVIDIA distributes an old version of the OpenCL ICD (the library that dispatches API calls to the appropriate driver). Your best options are to either
Get hold of a more up-to-date version of the ICD (if you're using Linux, this is libOpenCL.so, and you can find a newer copy in AMD's APP SDK). The downside is that if you distribute your compiled code, it will also require the 1.2 ICD.
Use the OpenCL 1.1 header files, except that you can use the latest cl.hpp. It should (in theory) detect that it is being combined with OpenCL 1.1 headers and disable all the OpenCL 1.2 code (that doesn't get tested much though). The advantage of using the latest cl.hpp is that there are a lot of bug fixes that don't get back-ported to the 1.1 version of cl.hpp.
You can do this:
#include <CL/cl.h>    // the C header defines the CL_VERSION_* macros
#undef CL_VERSION_1_2 // hide 1.2 so cl.hpp stays on its 1.1 code paths
#include <CL/cl.hpp>
I've just implemented that in my code and it seems to do the trick.
You can define the flag CL_USE_DEPRECATED_OPENCL_1_1_APIS, which makes the 1.2 hpp file 1.1-compatible. It has to be defined before the header is included:
#define CL_USE_DEPRECATED_OPENCL_1_1_APIS // must come before the cl.hpp include
#include <CL/cl.hpp>
This is what I have done on NVIDIA and AMD. Works like a charm.
I was fed up with downloading the multi-gigabyte OpenCL SDKs from Intel, Nvidia, and AMD, each with different issues:
Intel requires registration and has a temporary license.
The Nvidia SDK does not support OpenCL 2.0, and you have to download cl.hpp anyway.
AMD's cl.hpp file defines min and max macros which can conflict with MSVC's min and max macros (I spent too much time figuring out how to fix this with e.g. NOMINMAX). The header is not even the same as the one defined by Khronos (which does not have the min/max problem).
Therefore, I downloaded the source code and includes from Khronos as suggested by this SO answer and compiled the OpenCL.lib file myself. The includes and OpenCL.lib files are a couple of MB. That's a lot smaller than all the extra stuff in the Intel/Nvidia/AMD SDKs! I can include the OpenCL includes and OpenCL.lib files in my project and no longer have to tell others to download an SDK.
The includes for OpenCL 2.0 from the Khronos registry have a new C++ binding file, cl2.hpp. Looking at this file, I have determined that the correct way to support the deprecated functions with OpenCL 2.0 is something like this:
#define CL_HPP_MINIMUM_OPENCL_VERSION 110 // oldest OpenCL version the code must run on
#define CL_HPP_TARGET_OPENCL_VERSION 120  // newest OpenCL version the code may use
#define CL_HPP_CL_1_2_DEFAULT_BUILD       // default program builds target OpenCL 1.2
#include "CL/cl2.hpp"
This is because the cl2.hpp file has this code:
#if CL_HPP_MINIMUM_OPENCL_VERSION <= 100 && !defined(CL_USE_DEPRECATED_OPENCL_1_0_APIS)
# define CL_USE_DEPRECATED_OPENCL_1_0_APIS
#endif
#if CL_HPP_MINIMUM_OPENCL_VERSION <= 110 && !defined(CL_USE_DEPRECATED_OPENCL_1_1_APIS)
# define CL_USE_DEPRECATED_OPENCL_1_1_APIS
#endif
#if CL_HPP_MINIMUM_OPENCL_VERSION <= 120 && !defined(CL_USE_DEPRECATED_OPENCL_1_2_APIS)
# define CL_USE_DEPRECATED_OPENCL_1_2_APIS
#endif
#if CL_HPP_MINIMUM_OPENCL_VERSION <= 200 && !defined(CL_USE_DEPRECATED_OPENCL_2_0_APIS)
# define CL_USE_DEPRECATED_OPENCL_2_0_APIS
#endif
Notice that you no longer need to (and should not) include <CL/opencl.h>.
Lastly, after #include "CL/cl2.hpp", in order to get my code to work with Boost/Compute I had to add
#undef CL_VERSION_2_0
My own OpenCL code works without this, but Boost/Compute does not. It appears I'm not the only one having this issue. My GPU does not support OpenCL 2.0.
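Putting the steps above together, the include order would look something like this (assembled from the snippets in this answer; the final #undef is only needed for the Boost/Compute case):
#define CL_HPP_MINIMUM_OPENCL_VERSION 110
#define CL_HPP_TARGET_OPENCL_VERSION 120
#define CL_HPP_CL_1_2_DEFAULT_BUILD
#include "CL/cl2.hpp"
#undef CL_VERSION_2_0 // only if a consumer such as Boost/Compute chokes on 2.0
// Boost/Compute headers come after this point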
Looks like the only way is to use the OpenCL 1.1 headers when working with 1.1-capable devices.
You can set the options of clBuildProgram as follows:
const char options[] = "-cl-std=CL1.1";
clBuildProgram(program, 1, &devices, options, NULL, NULL);
This forces the kernel compiler to use OpenCL C 1.1, no matter which newer version your device supports.
I've looked around and been unable to find the solution to what I think is a relatively simple OpenCL-related question.
Thing is, I just started using double precision in my OpenCL kernels, as my current project requires that much precision. Furthermore, I'm trying to keep everything managed, so that all kernels share the same #defines.
Then I came to the extensions. For OpenCL I'll have to include
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
How do I include this in the build options for clBuildProgram?
You can check the extensions supported by a device from the host calling clGetDeviceInfo with CL_DEVICE_EXTENSIONS (section 4.2 of the OpenCL 1.1 spec). The returned string will contain 'cl_khr_fp64' if the extension is supported.
When compiling OpenCL code with clBuildProgram, the compiler defines 'cl_khr_fp64' if the extension is supported (section 9.1 of the OpenCL 1.1 spec).
To enable the extension in the OpenCL code, you then have to include the pragma line. You can control the use of the extension from the host code by passing an option to clBuildProgram, like -D USE_FP64=1, and then test it in the OpenCL code:
#if USE_FP64
#pragma OPENCL EXTENSION cl_khr_fp64 : enable
#endif
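A host-side sketch of that check (the variable names device and program are assumptions, and <string.h> is assumed to be included; error handling omitted):
// Query the device's extension string (CL_DEVICE_EXTENSIONS, section 4.2
// of the OpenCL 1.1 spec) and define USE_FP64 only if cl_khr_fp64 appears.
char extensions[4096];
clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(extensions), extensions, NULL);

const char *options = strstr(extensions, "cl_khr_fp64") ? "-D USE_FP64=1"
                                                        : "-D USE_FP64=0";
clBuildProgram(program, 1, &device, options, NULL, NULL);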