I'm developing an application that can use any OpenGL version from 4.6 down to 2.0 by gradually disabling some features and optimizations. This means that it can live with 2.0 but prefers the latest supported version to be able to use all the available features from OpenGL 3.x-4.x.
Also, it handles all the differences between core and compatibility contexts, so it should work with any profile.
It seems that on Windows there won't be a problem, because I can just omit the version and the profile and automatically get a compatibility context with the latest supported version.
But things work differently on macOS and with Mesa: there I have to request a core, forward-compatible context of some specific version, even though I don't want a specific version, I want the latest one.
How do I handle this problem? Do I have to try all the versions 4.6, 4.5, 4.4, 4.3, 4.2, 4.1, 4.0, 3.3, 3.2, 3.1, 3.0, 2.1, 2.0 in a loop until the context is successfully created? Or is there a better solution?
If there is no better general solution, I would like to know how it works in practice with different drivers on different platforms.
If you ask for OpenGL version X.Y, the system can give you any supported version of OpenGL which is backwards compatible with X.Y. That is, to ask for X.Y means "I have written my code against GL version X.Y, so don't give me something that would break my code."
However, the core profile of OpenGL 3.2+ is not backwards compatible with 2.0. Indeed, this is the entire point of the core/compatibility distinction: the compatibility profile provides access to the higher features of the API while being backwards compatible with existing code. The core profile does not. For example, 2.0 lacks vertex array objects, and core profile OpenGL cannot work without them.
Now, within each profile, every version of OpenGL is backwards-compatible with all lower versions of the API for that profile. So 4.6 core profile is backwards-compatible with 3.2 core, and everything in-between. And the compatibility profile is backwards-compatible with all prior versions of OpenGL.
But implementations are not required to support the compatibility profile of OpenGL, only the core profile. As such, if you ask for OpenGL version 2.0, then the implementation will have to give you the highest version of OpenGL that is compatible with GL 2.0. If the implementation doesn't support the compatibility profile, then this will not be the highest core profile version of OpenGL supported.
If you want to support both core and any "compatibility" version of OpenGL, then you have to write specialized code for each pathway. You have to have a 2.0 version and a 3.2 core version of your code. And since you have two versions of your code, you'll have to check to see which version to use for that context.
Which means you don't need a way to do what you're asking. Just try to create a 3.2 core profile context, and if that fails, create a 2.0 context.
I did some quick tests.
On Windows, AMD Radeon (Ryzen 7):
- Requesting any context version up to 2.1 results in a 4.6 Compatibility context. This is exactly what I want.
- Requesting any context version above 2.1 results in a context of the requested version.
- I assume it works the same with the Linux proprietary drivers.
- It probably works the same on Intel and NVidia, but I can't test that now.
On Mesa 20.3.2 for Windows:
- Requesting any context version up to 3.1 results in a 3.1 context.
- Requesting any context version above 3.1 results in a 4.5 Core context. This is exactly what I want.
- I assume it works the same with the Linux open-source drivers.
- Requesting any OpenGL ES version between 2.0 and 3.2 results in a 3.2 context. This is exactly what I want.
On Android (Adreno 640):
- Requesting any OpenGL ES version between 2.0 and 3.2 results in a 3.2 context.
- I assume it works the same with other vendors on Android.
It seems like only the first context creation is slow. In both cases, an additional attempt to create a context adds about 4 ms to the application's startup time on my system, whereas the whole context + window creation is about 300 ms with a native driver or 70 ms with Mesa.
I don't have a Mac to test on, so I'm going to take a conservative approach: try forward-compatible 4.1, 3.3, 3.2, then 2.1. In any case, most Macs support exactly 4.1, so for them the context will be created on the first attempt.
This is what the documentation for iOS OpenGL ES recommends to do:
To support multiple versions of OpenGL ES as rendering options in your app, you should first attempt to initialize a rendering context of the newest version you want to target. If the returned object is nil, initialize a context of an older version instead.
So in GLFW pseudocode, my strategy for OpenGL looks like this:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
GLFWwindow* wnd;
#ifdef __APPLE__ // macOS
const std::array<int, 2> versions[] = {{4, 1}, {3, 3}, {3, 2}, {2, 1}};
for(auto ver: versions)
{
    if(ver[0] >= 3)
    {
        // macOS supports 3.2+ only as forward-compatible core profile contexts
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, true);
    }
    else
    {
        // the forward-compatible flag is only valid for 3.0+
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, false);
    }
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, ver[0]);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, ver[1]);
    wnd = glfwCreateWindow(...);
    if(wnd) break;
}
glfwMakeContextCurrent(wnd);
#else // Windows, Linux and other GLFW-supported OSes
glfwWindowHint(GLFW_VISIBLE, false);
wnd = glfwCreateWindow(...);
glfwMakeContextCurrent(wnd);
// glGetString returns const GLubyte*, hence the cast
std::string_view versionStr = reinterpret_cast<const char*>(glGetString(GL_VERSION));
if(versionStr[0] < '4' && versionStr.find("Mesa") != std::string_view::npos)
{
    // Mesa handed us at most 3.1; re-request 3.2 to get the highest core version
    glfwDestroyWindow(wnd);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    wnd = glfwCreateWindow(...);
    glfwMakeContextCurrent(wnd);
}
glfwShowWindow(wnd);
#endif
The code for OpenGL ES would look similar but simpler. Mobile platforms will use a different library instead of GLFW (GLFM/SDL or native EGL). For iOS, I have to try ES 3.0 then ES 2.0. For Mesa and Android, I just request a 2.0 context and get the latest one (3.2). However, for Android, I assume that Mali and other vendors work the same.
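For a desktop build that uses GLFW for OpenGL ES, the equivalent sketch could look like this (pseudocode in the same style as above; on iOS and Android the hints would map to the corresponding EAGL/EGL attributes):

```cpp
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
// Request the lowest version we can live with; drivers that return the
// highest backwards-compatible version will then give us ES 3.2.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* wnd = glfwCreateWindow(...);
glfwMakeContextCurrent(wnd);
```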
Please let me know in the comments if you can test my assumptions and confirm or deny them.
Related
Someone was asking me for a test with OpenGL 2.x since they have hardware that supports only up to OpenGL 2.1.
I figured I'd try it out by setting the window hints in GLFW to use the major/minor version of 2 and 0.
Problem is, I'm still using #version 330 in my shaders, and it works. However, it would not let me use the GL version 2 hints while I had (by accident) left the Core profile hint on. This seems to indicate that my version choice is doing something, but not what I expect.
I want to restrict myself to 2.1 to see if my application would run, and if it doesn't, then see what I can change to make it work. Problem is, I don't have any 2.1 hardware, since my computers are all from 2015 or later.
Is there a way I can emulate 2.1 (on Windows) somehow and have it crash/die if I try using features it doesn't support? Apparently the hints I'm using are not helping.
As far as I know, the major/minor version hints don't set the exact version of your OpenGL context but the required feature set. So if you set them to 3.3, for example, you will usually get a 4.5 or 4.6 context, as those versions are typically the latest OpenGL versions your GPU supports that are compatible with OpenGL 3.3. Getting an OpenGL 2.1 Core context should be impossible, as the defining feature of the core profile is that it doesn't support some OpenGL 1.0-2.1 functionality. So this isn't really surprising.
I think your best option here is to use headers that only contain OpenGL 2.1 functions. GLAD, for example, allows you to specify which version you want to generate headers for.
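Another option, if you can run your program against Mesa's software rasterizer (llvmpipe builds of opengl32.dll exist for Windows, too), is Mesa's version-override environment variables. This is a hedged suggestion: `./myapp` is a placeholder for your binary, and exact behavior depends on the Mesa build, since the override mainly changes what version is advertised rather than strictly removing functions:

```sh
# Limit the advertised GL and GLSL versions (Mesa-specific variables);
# with GLSL capped at 120, a "#version 330" shader should fail to compile.
MESA_GL_VERSION_OVERRIDE=2.1 MESA_GLSL_VERSION_OVERRIDE=120 ./myapp
```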
I'm using openFrameworks on Windows, which uses GLFW and GLEW, and I'm having issues regarding extensions availability on different GPUs brands.
Basically, if I run my program on OpenGL 2, the extensions are available. But if I change to OpenGL 3.2 or higher, all the extensions become unavailable on Nvidia (tested on a GTX 1080) and Intel (UHD), but not on AMD (Vega Mobile GL/GH and RX 5700).
This translates to not being able to use GL_ARB_texture_float, and therefore my compute shaders don't work as intended.
I'm using OpenGL 4.3 for the compute shader support and for support of the Intel GPU. All drivers are up to date and all GPUs support GL_ARB_texture_float.
Also, enabling the extension in the GLSL does nothing.
This is how openFrameworks makes the context:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, settings.glVersionMajor);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, settings.glVersionMinor);
if((settings.glVersionMajor==3 && settings.glVersionMinor>=2) || settings.glVersionMajor>=4)
{
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
}
if(settings.glVersionMajor>=3)
{
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
}
Not sure what is going on, nor exactly how to search for an issue like this. Any pointers are welcome!
ARB_texture_float is an ancient extension that was incorporated into OpenGL proper in 3.0. That is, if you ask for 3.2, you can just use floating-point formats for textures. They're always available.
Furthermore, later GL versions add more floating-point formats.
Since there is no core profile of OpenGL less than version 3.2, I suspect that these implementations are simply not advertising extensions that have long since been part of core OpenGL if you ask for a core profile.
I'm trying to create an OpenGL 3.2 context on a netbook running Ubuntu 13. Since the hardware isn't capable of hardware-accelerated OpenGL 3.2, I'm wondering if the software rasterizer could provide such functionality.
I'm aware that software mode can be utterly slow, but I just need to test and practice some simple examples.
I couldn't find any definitive information on the Internet that would say it's possible or not, and my knowledge on Mesa is very limited. So my question is, is it possible to create a software-based OpenGL 3.2 context with Mesa or not?
Currently, it isn't. When using one of the software rasterizer backends (the old, deprecated swrast, or the more modern, gallium-based softpipe or llvmpipe drivers), only GL 2.1 will be advertised. The issue is that Mesa's software rasterizers do not yet support multisampling, which is a requirement of GL 3.x. There might also be some other minor features missing which are required for GL 3.x.
However, you can still use most of the GL 3.2 features via the extension mechanism, without having a 3.2 context. This also means that you won't be able to get a core profile context, but this shouldn't be a problem either - nothing forces you to actually use the deprecated functionality.
I have purchased a graphics card which supports OpenGL 4.2, but I want to develop an application which should support OpenGL 2.0.
Will my card support OpenGL 2.0 apps (backward compatibility)?
If so, how do I ensure backwards compatibility?
I plan to use the GLUT/GLFW C++ libraries.
https://developer.nvidia.com/opengl-driver - please read the part about compatibility: no 'old' functionality is removed from the drivers.
In general you can create your application in two modes:
Core: This is modern OpenGL, with no fixed-pipeline functionality. In freeGLUT you can use glutInitContextVersion(4, 2); and glutInitContextProfile(GLUT_CORE_PROFILE); to get a core OpenGL 4.2 context.
Compatibility: all functionality from OpenGL 1.1 up to 4.2 (in your case) is supported and can be used in your code. Apps get this profile by default, or you can request it explicitly via glutInitContextProfile(GLUT_COMPATIBILITY_PROFILE);
Your graphics card will be backward compatible with OpenGL 2.0 apps. You do not need to do anything special.
I started writing programs, in C (for now) using GLFW and OpenGL. The question I have is that, how do I know which version of OpenGL my program will use? My laptop says that my video card has OpenGL 3.3. Typing "glxinfo | grep -i opengl" returns:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce 9600M GT/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 285.05.09
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
So is OpenGL 3.3 automatically being used?
Just call glGetString(GL_VERSION) (once the context is initialized, of course) and print the result (which is actually the same thing glxinfo reports, I suppose):
printf("%s\n", glGetString(GL_VERSION));
Your program should automatically use the highest version your hardware and driver support, which in your case seems to be 3.3. But for creating a core-profile context for OpenGL 3+ (one where deprecated functionality has been completely removed), you have to take special measures; since version 2.7, GLFW provides the glfwOpenWindowHint function for this. If you don't want to explicitly disallow deprecated functionality, you can just use the context given to you by GLFW's default context creation functions, which, as said, will support the highest possible version for your hardware and driver.
Also keep in mind that for using OpenGL functionality beyond version 1.1 you need to retrieve the corresponding function pointers yourself or use a library that handles this for you, like GLEW.