I'm using openFrameworks on Windows, which uses GLFW and GLEW, and I'm having issues regarding extension availability across GPU brands.
Basically, if I run my program on OpenGL 2, the extensions are available. But if I change to OpenGL 3.2 or higher, all the extensions become unavailable on Nvidia (tested on a GTX 1080) and Intel (UHD), but not on AMD (Vega Mobile GL/GH and RX 5700).
This means I can't use GL_ARB_texture_float, and therefore my compute shaders don't work as intended.
I'm using OpenGL 4.3, for compute shader support and to support the Intel GPU. All drivers are up to date, and all of these GPUs support GL_ARB_texture_float.
Also, enabling the extension in GLSL does nothing.
This is how openFrameworks makes the context:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, settings.glVersionMajor);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, settings.glVersionMinor);
if((settings.glVersionMajor==3 && settings.glVersionMinor>=2) || settings.glVersionMajor>=4)
{
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
}
if(settings.glVersionMajor>=3)
{
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
}
I'm not sure what is going on, or even how to search for an issue like this. Any pointers are welcome!
ARB_texture_float is an ancient extension that was incorporated into OpenGL proper in 3.0. That is, if you ask for 3.2, you can just use floating-point formats for textures. They're always available.
Furthermore, later GL versions add more floating-point formats.
Since there is no core profile of OpenGL less than version 3.2, I suspect that these implementations are simply not advertising extensions that have long since been part of core OpenGL if you ask for a core profile.
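For example, in a 4.3 core context you can allocate a floating-point texture directly, without consulting the extension string at all (a minimal sketch; GL_RGBA32F is just one of the core float formats):
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// GL_RGBA32F has been a core internal format since GL 3.0, so no
// GL_ARB_texture_float check is needed here.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 512, 512, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);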
Related
I'm developing an application that can use any OpenGL version from 4.6 down to 2.0 by gradually disabling some features and optimizations. This means that it can live with 2.0 but prefers the latest supported version to be able to use all the available features from OpenGL 3.x-4.x.
Also, it handles all the differences between core and compatibility contexts, so it should work with any profile.
It seems that on Windows there won't be a problem, because I can just omit the version and the profile and automatically get a compatibility context with the latest supported version.
But things work differently on macOS and with Mesa. There I have to request a core forward compatible context of some specific version, even though I don't want a specific version, I want the latest one.
How do I handle this problem? Do I have to try all the versions 4.6, 4.5, 4.4, 4.3, 4.2, 4.1, 4.0, 3.3, 3.2, 3.1, 3.0, 2.1, 2.0 in a loop until the context is successfully created? Or is there a better solution?
If there is no better general solution, I would like to know how it works in practice with different drivers on different platforms.
If you ask for OpenGL version X.Y, the system can give you any supported version of OpenGL which is backwards compatible with X.Y. That is, to ask for X.Y means "I have written my code against GL version X.Y, so don't give me something that would break my code."
However, the core profile of OpenGL 3.2+ is not backwards compatible with 2.0. Indeed, this is the entire point of the core/compatibility distinction: the compatibility profile provides access to the higher features of the API while being backwards compatible with existing code. The core profile does not. For example, 2.0 lacks vertex array objects, and core profile OpenGL cannot work without them.
Now, all versions of OpenGL within a given profile are backwards-compatible with all lower versions of the API for that profile. So the 4.6 core profile is backwards-compatible with 3.2 core and everything in between. And the compatibility profile is backwards-compatible with all prior versions of OpenGL.
But implementations are not required to support the compatibility profile of OpenGL, only the core profile. As such, if you ask for OpenGL version 2.0, the implementation has to give you the highest version of OpenGL that is compatible with GL 2.0. If the implementation doesn't support the compatibility profile, that will not be the highest core profile version it supports, but something lower (Mesa, for example, caps such requests at 3.1, as the tests below show).
If you want to support both core and any "compatibility" version of OpenGL, then you have to write specialized code for each pathway. You have to have a 2.0 version and a 3.2 core version of your code. And since you have two versions of your code, you'll have to check to see which version to use for that context.
Which means you don't need a way to do what you're asking to do. Just try to create a 3.2 core profile version, and if that doesn't work, create a 2.0 version.
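With GLFW, that fallback might look roughly like this (a minimal sketch; window size and title are placeholders):
// Try a 3.2 core profile first; fall back to 2.0 if it fails.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow* wnd = glfwCreateWindow(640, 480, "app", nullptr, nullptr);
const bool coreProfile = (wnd != nullptr);
if(!wnd)
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
    wnd = glfwCreateWindow(640, 480, "app", nullptr, nullptr);
}
// coreProfile now selects which of the two code paths the renderer uses.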
I did some quick tests.
On Windows, AMD Radeon (Ryzen 7):
Requesting any context version up to 2.1 results in a 4.6 Compatibility context. This is exactly what I want.
Requesting any context version above 2.1 results in a context of exactly the requested version.
I assume it works the same with Linux proprietary drivers.
It probably works the same on Intel and NVIDIA, but I can't test that right now.
On Mesa for Windows 20.3.2:
Requesting any context version up to 3.1 results in a 3.1 context.
Requesting any context version above 3.1 results in a 4.5 Core context. This is exactly what I want.
I assume it works the same on Linux open-source drivers.
Requesting any OpenGL ES version between 2.0 and 3.2 results in a 3.2 context. This is exactly what I want.
On Android (Adreno 640):
Requesting any OpenGL ES version between 2.0 and 3.2 results in a 3.2 context.
I assume that it works the same with other vendors on Android.
It seems like only the first context creation is slow. In both cases, an additional attempt to create a context adds about 4 ms to the application's startup time on my system, whereas the whole context + window creation is about 300 ms with a native driver or 70 ms with Mesa.
I don't have a Mac to test on, so I'm going to use a conservative approach: try forward-compatible 4.1, 3.3, 3.2, then 2.1. Most Macs support exactly 4.1 anyway, so for them the context will be created on the first attempt.
This is what the documentation for OpenGL ES on iOS recommends:
To support multiple versions of OpenGL ES as rendering options in your app, you should first attempt to initialize a rendering context of the newest version you want to target. If the returned object is nil, initialize a context of an older version instead.
So in GLFW pseudocode, my strategy for OpenGL looks like this:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
GLFWwindow* wnd;
#ifdef __APPLE__ // macOS
// Known macOS OpenGL versions, newest first.
const int versions[][2] = {{4, 1}, {3, 3}, {3, 2}, {2, 1}};
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, true);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
for(auto& ver: versions)
{
    if(ver[0] < 3)
    {
        // Forward compatibility and the core profile are only defined for
        // OpenGL 3.0+/3.2+; reset both for the legacy 2.1 attempt.
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, false);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
    }
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, ver[0]);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, ver[1]);
    wnd = glfwCreateWindow(...);
    if(wnd) break;
}
glfwMakeContextCurrent(wnd);
#else // Windows, Linux and other GLFW-supported OSes
glfwWindowHint(GLFW_VISIBLE, false);
// No version hints: native Windows drivers return the highest
// compatibility context.
wnd = glfwCreateWindow(...);
glfwMakeContextCurrent(wnd);
std::string_view versionStr = reinterpret_cast<const char*>(glGetString(GL_VERSION));
// Mesa caps unversioned requests at 3.1, so retry with 3.2 to get its
// highest Core context.
if(versionStr[0] < '4' && versionStr.find("Mesa") != std::string_view::npos)
{
    glfwDestroyWindow(wnd);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    wnd = glfwCreateWindow(...);
    glfwMakeContextCurrent(wnd);
}
glfwShowWindow(wnd);
#endif
The code for OpenGL ES would look similar but simpler. Mobile platforms will use a different library instead of GLFW (GLFM/SDL or native EGL). For iOS, I have to try ES 3.0 and then ES 2.0. For Mesa and Android, I just request a 2.0 context and get the latest one (3.2), though for Android I'm assuming that Mali and other vendors behave the same as Adreno.
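A minimal sketch of that desktop ES request (window size and title are placeholders):
// Request ES 2.0 and let the driver return the highest
// backwards-compatible version (3.2 on Mesa and the tested Adreno).
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* wnd = glfwCreateWindow(640, 480, "app", nullptr, nullptr);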
Please, let me know in the comments if you can test my assumptions to confirm or deny them.
I have a slightly modified version of the sample code found on the main LWJGL page. It works, but it uses legacy OpenGL 2.1. If I attempt to use the forward-compatible context described in the GLFW docs, the reported version is 4.1 (no matter what major/minor I hint), the window is created, but the program crashes on the first call to glPushMatrix().
Forward compatibility enabled like so:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Some info I print in the console:
LWJGL 3.1.6 build 14
OS name Mac OS X
OS version 10.13.4
OpenGL version 4.1 ATI-1.66.31
Logging:
[LWJGL] A function that is not available in the current context was called.
Problematic frame:
C [liblwjgl.dylib+0x1c494]
From here I don't know what to look for. Should this code be working, or am I missing some ceremony? Many resources are outdated, which makes it harder to figure things out.
glPushMatrix is not a Core Profile function; it belongs to the legacy API, i.e. OpenGL < 3.2 or the Compatibility Profile.
If you want to use it (and other pre-core features), you need a Compatibility context: not a forward-compatible one, and not a Core Profile one either.
The GLFW hints should be like these, without asking for a Core Profile.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
Most likely the driver will give you the highest available version, but with all of the old features too. Note, however, that macOS only exposes OpenGL 3.2+ through the core profile, so there the only way to keep glPushMatrix is to omit the version and profile hints and accept the legacy 2.1 context.
I am a student who has been learning C++ and OpenGL for 5 months now, and we have touched some advanced topics over the course of time, starting from basic OpenGL like glBegin/glEnd, then vertex arrays, then VBOs, then shaders, etc. Our professor has had us build up our graphics engine over time from the first class, and every now and then he asks us to stop using one or another deprecated feature and move on to the newer versions.
Now, as part of the current assignment, he asked us to get rid of everything prior to OpenGL ES 2.0. Our codebase is fairly large, and I was wondering if I could restrict OpenGL to 2.0 and above so that using those deprecated features would actually fail, letting me make sure they are all out of my engine.
When you initialize your OpenGL context, you can pass hints to the context to request a specific context version. For example, using the GLFW library:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* window = glfwCreateWindow(res_width, res_height, window_name, monitor, NULL);
if(window == NULL)
{
    // Context creation failed: the OpenGL library doesn't support ES 2.0.
}
This will fail (in the case of GLFW, it returns a NULL window) if the OpenGL library doesn't support ES 2.0. Your platform's native EGL (or WGL, GLX, AGL, etc.) functions offer this functionality.
In GLFW I'm setting OpenGL context version via:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
However when I print it to the console after glfwMakeContextCurrent(window); and glewInit(); via:
Log::brightWhite("OpenGL version:\t");
Log::white("%s\n", glGetString(GL_VERSION));
Log::brightWhite("GLSL version:\t");
Log::white("%s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
The console reports version 4.3. Why is it 4.3 and not 2.0?
Because the implementation is free to give you any version it likes, as long as it supports everything that is in core GL 2.0. You will typically get the highest compatibility profile version the implementation supports. There is nothing wrong with that.
Note that forward- and backward-compatible contexts and profiles were added in later versions, so when requesting a 1.x/2.x context, this is the behavior you should expect. Note also that on OSX, GL 3.x and above is only supported in the core profile, so there you will very likely end up with a 2.1 context.
I have many OpenGL shaders. We try to use as many different hardware configurations as possible to evaluate the portability of our product. One of our customers recently ran into some rendering issues: it seems that the target machine only provides shader model 2.0, while all our development/build/test machines (even the oldest ones) run version 4.0; everything else (OpenGL version, GLSL version, ...) seems identical.
I didn't find a way to downgrade the shader model version, since it's automatically provided by the graphics card driver.
Is there a way to manually install or select the OpenGL/GLSL/shader model version in use for development/test purposes?
NOTE: the main targets are Windows XP SP2 / 7 (32 & 64-bit) with both ATI and NVIDIA cards.
OpenGL does not have the concept of "shader models"; that's a Direct3D thing. It only has versions of GLSL: 1.10, 1.20, etc.
Every OpenGL version matches a specific GLSL version. GL 2.1 supports GLSL 1.20. GL 3.0 supports GLSL 1.30. For GL 3.3 and above, they stopped fooling around and just used the same version number, so GL 3.3 supports GLSL 3.30. So there's an odd version number gap between GLSL 1.50 (maps to GL 3.2) and GLSL 3.30.
Technically, OpenGL implementations are allowed to refuse to compile shader versions older than the ones matching the current version. As a practical matter, however, you can pretty much shove any GLSL shader into any OpenGL implementation, as long as the shader's version is less than or equal to the version the OpenGL implementation supports (I have not tested this on MacOSX Lion's implementation of GL 3.2 core).
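For instance, a shader pins its language version with a #version directive, and a newer implementation will generally still compile it (a hypothetical sketch; the shader body is a placeholder):
// A GLSL 1.20 shader (matching GL 2.1) embedded as a C++ string literal.
const char* vsSource = R"(
    #version 120
    attribute vec2 pos;
    void main() { gl_Position = vec4(pos, 0.0, 1.0); }
)";
// A GL 4.x compatibility context will generally accept it, since 1.20 is
// below the highest GLSL version the implementation supports.
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vsSource, nullptr);
glCompileShader(vs);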
There is one exception: core contexts. If you try to feed a shader through a core OpenGL context that uses functionality removed from the core, it will complain.
There is no way to force OpenGL to provide you with a particular OpenGL version. You can ask for one with wglCreateContextAttribsARB/glXCreateContextAttribsARB, but the implementation is allowed to give you any higher version, so long as that version is backwards compatible with what you asked for.
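On Windows, such a request looks roughly like this (a sketch assuming a dummy context is already current, wglCreateContextAttribsARB was loaded via wglGetProcAddress, the constants come from wglext.h, and hdc is the window's device context):
// Ask for a context compatible with GL 3.2; the driver may legally return
// 3.3, 4.6, or anything else backwards compatible with 3.2.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    0 // zero-terminated attribute list
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, nullptr, attribs);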