LWJGL 3.1.6 OpenGL 4.1 crash on macOS High Sierra

I have a slightly modified version of the sample code found on the main LWJGL page. It works, but it uses the legacy OpenGL version 2.1. If I attempt to use the forward-compatible context described in the GLFW documentation, the reported version is 4.1 (no matter what major/minor I hint), the window is created, but the program crashes on the first call to glPushMatrix().
Forward compatibility is enabled like so:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
Some info I print in the console:
LWJGL 3.1.6 build 14
OS name Mac OS X
OS version 10.13.4
OpenGL version 4.1 ATI-1.66.31
Logging:
[LWJGL] A function that is not available in the current context was called.
Problematic frame:
C [liblwjgl.dylib+0x1c494]
From here I don't know what to look for. Should this code be working? Or am I missing some ceremony? Many resources are outdated, making it harder to figure things out.

glPushMatrix is not a function for a Core Profile context; it belongs to legacy OpenGL (< 3.2).
If you want to use it (and other pre-core features) you need a Compatibility context, not a forward-compatible one and not a Core Profile one either.
The GLFW hints should look like this, without asking for a Core Profile:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
The driver will likely give you the highest version it can in compatibility mode, but with all of the old features still available.
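For completeness, here is a minimal sketch of the same idea using the GLFW C API (the LWJGL bindings mirror these calls one-to-one); the window size and title are placeholders. Leaving the version and profile hints at their defaults also gives a legacy compatibility context, which on macOS tops out at 2.1 but keeps the matrix-stack functions available:
glfwDefaultWindowHints(); // compatibility/legacy behavior is the default
GLFWwindow* window = glfwCreateWindow(800, 600, "Legacy GL", NULL, NULL);
glfwMakeContextCurrent(window);
glMatrixMode(GL_MODELVIEW);
glPushMatrix(); // legal in a compatibility (non-core, non-forward-compatible) context
// ... fixed-function drawing ...
glPopMatrix();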

Related

OpenGL extensions availability and different GPU brands

I'm using openFrameworks on Windows, which uses GLFW and GLEW, and I'm having issues with extension availability on different GPU brands.
Basically, if I run my program on OpenGL 2, the extensions are available. But if I change to OpenGL 3.2 or higher, all the extensions become unavailable on Nvidia (tested on a GTX 1080) and Intel (UHD), but not on AMD (Vega Mobile GL/GH and RX 5700).
This translates to not being able to use GL_ARB_texture_float, and therefore my compute shaders don't work as intended.
I'm using OpenGL 4.3, for the compute shader support and to support the Intel GPU. All drivers are up to date and all GPUs support GL_ARB_texture_float.
Also, enabling the extension in the GLSL does nothing.
This is how openFrameworks creates the context:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, settings.glVersionMajor);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, settings.glVersionMinor);
if((settings.glVersionMajor==3 && settings.glVersionMinor>=2) || settings.glVersionMajor>=4)
{
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
}
if(settings.glVersionMajor>=3)
{
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
}
Not sure what is going on, nor exactly how to search for an issue like this. Any pointers are welcome!
ARB_texture_float is an ancient extension that was incorporated into OpenGL proper in 3.0. That is, if you ask for 3.2, you can just use floating-point formats for textures. They're always available.
Furthermore, later GL versions add more floating-point formats.
Since there is no core profile of OpenGL below version 3.2, I suspect that these implementations simply don't advertise extensions that have long since become part of core OpenGL when you ask for a core profile.
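As an illustrative sketch (the texture size and filtering are placeholders): in a 3.2+ core context you can allocate a floating-point texture directly, with no extension query at all, because the sized float formats have been core since 3.0.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// GL_RGBA32F is a core internal format since OpenGL 3.0; no GL_ARB_texture_float check is needed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 512, 512, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);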

Request the most recent version of OpenGL context

I'm developing an application that can use any OpenGL version from 4.6 down to 2.0 by gradually disabling some features and optimizations. This means that it can live with 2.0 but prefers the latest supported version to be able to use all the available features from OpenGL 3.x-4.x.
Also, it handles all the differences between core and compatibility contexts, so it should work with any profile.
It seems that on Windows there won't be a problem, because I can just omit the version and the profile and automatically get a compatibility context with the latest supported version.
But things work differently on macOS and with Mesa. There I have to request a core, forward-compatible context of some specific version, even though I don't want a specific version; I want the latest one.
How do I handle this problem? Do I have to try all the versions 4.6, 4.5, 4.4, 4.3, 4.2, 4.1, 4.0, 3.3, 3.2, 3.1, 3.0, 2.1, 2.0 in a loop until the context is successfully created? Or is there a better solution?
If there is no better general solution, I would like to know how it works in practice with different drivers on different platforms.
If you ask for OpenGL version X.Y, the system can give you any supported version of OpenGL which is backwards compatible with X.Y. That is, to ask for X.Y means "I have written my code against GL version X.Y, so don't give me something that would break my code."
However, the core profile of OpenGL 3.2+ is not backwards compatible with 2.0. Indeed, this is the entire point of the core/compatibility distinction: the compatibility profile provides access to the higher features of the API while being backwards compatible with existing code. The core profile does not. For example, 2.0 lacks vertex array objects, and core profile OpenGL cannot work without them.
Now, all versions of OpenGL within a profile are backwards compatible with all lower versions of the API for that profile. So asking for the 3.2 core profile can get you anything up to 4.6 core, and everything in between. And the compatibility profile is backwards compatible with all prior versions of OpenGL.
But implementations are not required to support the compatibility profile of OpenGL, only the core profile. As such, if you ask for OpenGL version 2.0, then the implementation will have to give you the highest version of OpenGL that is compatible with GL 2.0. If the implementation doesn't support the compatibility profile, then this will not be the highest core profile version of OpenGL supported.
If you want to support both core and any "compatibility" version of OpenGL, then you have to write specialized code for each pathway. You have to have a 2.0 version and a 3.2 core version of your code. And since you have two versions of your code, you'll have to check to see which version to use for that context.
Which means you don't need a way to do what you're asking to do. Just try to create a 3.2 core profile version, and if that doesn't work, create a 2.0 version.
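In GLFW terms, a sketch of that two-pathway approach could look like this (the window size and title are placeholders):
// Try a 3.2 core profile first (forward compatibility is required for core contexts on macOS).
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);
GLFWwindow* window = glfwCreateWindow(1280, 720, "App", NULL, NULL);
bool useCorePath = (window != NULL);
if(!window)
{
    // Core 3.2 is not available: fall back to a plain 2.0 request. The driver may
    // still return any higher version that is backwards compatible with 2.0.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_FALSE);
    window = glfwCreateWindow(1280, 720, "App", NULL, NULL);
}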
I did some quick tests.
On Windows AMD Radeon (Ryzen 7):
Requesting any context version up to 2.1 results in a 4.6 Compatibility context. This is exactly what I want.
Requesting any context version above 2.1 results in the context of the requested version.
I assume it works the same on Linux proprietary drivers.
It probably works the same on Intel and NVidia but I can't test it now.
On Mesa for Windows 20.3.2:
Requesting any context version up to 3.1 results in a 3.1 context.
Requesting any context version above 3.1 results in a 4.5 Core context. This is exactly what I want.
I assume it works the same on Linux open-source drivers.
Requesting any OpenGL ES version between 2.0-3.2 results in a 3.2 context. This is exactly what I want.
On Android (Adreno 640):
Requesting any OpenGL ES version between 2.0-3.2 results in a 3.2 context.
I assume that it works the same with other vendors on Android.
It seems like only the first context creation is slow. In both cases, an additional attempt to create a context adds about 4 ms to the application's startup time on my system, whereas the whole context + window creation is about 300 ms with a native driver or 70 ms with Mesa.
I don't have a Mac to test on, so I'm going to use a conservative approach and try forward-compatible 4.1, 3.3, 3.2, then 2.1. Anyway, most Macs support exactly 4.1, so for them the context will be created on the first attempt.
This is what the documentation for iOS OpenGL ES recommends:
To support multiple versions of OpenGL ES as rendering options in your app, you should first attempt to initialize a rendering context of the newest version you want to target. If the returned object is nil, initialize a context of an older version instead.
So in GLFW pseudocode, my strategy for OpenGL looks like this:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_API);
GLFWwindow* wnd;
#ifdef __APPLE__ // macOS
const std::array<char, 2> versions[] = {{4, 1}, {3, 3}, {3, 2}, {2, 1}};
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, true);
for(auto ver: versions)
{
if(ver[0] < 3) glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, false);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, ver[0]);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, ver[1]);
wnd = glfwCreateWindow(...);
if(wnd) break;
}
glfwMakeContextCurrent(wnd);
#else // Windows, Linux and other GLFW supported OSes
glfwWindowHint(GLFW_VISIBLE, false); // create the window hidden; it is shown once the final context is in place
wnd = glfwCreateWindow(...);
glfwMakeContextCurrent(wnd);
std::string_view versionStr = reinterpret_cast<const char*>(glGetString(GL_VERSION));
if(versionStr[0] < '4' && versionStr.find("Mesa") != std::string_view::npos)
{
glfwDestroyWindow(wnd);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
wnd = glfwCreateWindow(...);
glfwMakeContextCurrent(wnd);
}
glfwShowWindow(wnd);
#endif
The code for OpenGL ES would look similar but simpler. Mobile platforms will use a different library instead of GLFW (GLFM/SDL or native EGL). For iOS, I have to try ES 3.0 then ES 2.0. For Mesa and Android, I just request a 2.0 context and get the latest one (3.2). For Android, though, I'm assuming that Mali and other vendors behave the same as Adreno.
Please, let me know in the comments if you can test my assumptions to confirm or deny them.

Forcing OpenGL 2.0 and above in C++

I am a student who has been learning C++ and OpenGL for 5 months now, and we have touched on some advanced topics over that time, starting from basic OpenGL like glBegin/glEnd, to VAs, to VBOs, to shaders, etc. Our professor has had us build up our graphics engine over time from the first class, and every now and then he asks us to stop using one deprecated feature or another and move on to the newer versions.
Now, as part of the current assignment, he has asked us to get rid of everything prior to OpenGL ES 2.0. Our codebase is fairly large, and I was wondering if I could restrict OpenGL to 2.0 and above so that using those deprecated features would actually fail at compile time, letting me make sure all of them are out of my engine.
When you initialize your OpenGL context, you can pass hints to the context to request a specific context version. For example, using the GLFW library:
glfwWindowHint(GLFW_CLIENT_API, GLFW_OPENGL_ES_API);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
GLFWwindow* window = glfwCreateWindow(res_width, res_height, window_name, monitor, NULL);
This will fail (in the case of GLFW, it returns a NULL window) if the OpenGL library doesn't support ES 2.0. Your platform's native EGL (or WGL, GLX, AGL, etc.) functions offer this functionality.
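A short sketch of checking that result (this assumes GLFW 3.3+ for glfwGetError; older versions report errors through the error callback instead):
if(!window)
{
    const char* description = NULL;
    int code = glfwGetError(&description); // GLFW 3.3+
    fprintf(stderr, "OpenGL ES 2.0 context not available (error 0x%X): %s\n",
            code, description ? description : "no description");
    glfwTerminate();
    return -1;
}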

GLFW returning the wrong GL_VERSION

In GLFW I'm setting the OpenGL context version via:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
However when I print it to the console after glfwMakeContextCurrent(window); and glewInit(); via:
Log::brightWhite("OpenGL version:\t");
Log::white("%s\n", glGetString(GL_VERSION));
Log::brightWhite("GLSL version:\t");
Log::white("%s\n", glGetString(GL_SHADING_LANGUAGE_VERSION));
I get output reporting version 4.3.
Why is it 4.3 and not 2.0?
Because the implementation is free to give you any version it likes, as long as it supports everything that is in core GL 2.0. You will typically get the highest supported compatibility profile version of the implementation. There is nothing wrong with that.
Note that forward- and backward-compatible contexts and profiles were only added in later versions, so when requesting a 1.x/2.x context this is the behavior you should expect. Note also that on OSX, GL 3.x and above is only supported in the core profile, so there you will very likely end up with a 2.1 context.
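If you want to see exactly what the driver handed back, you can query it after context creation; a small sketch, assuming window is the GLFWwindow created above:
// Through GLFW's window attributes:
int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);
int profile = glfwGetWindowAttrib(window, GLFW_OPENGL_PROFILE);
// Or through GL itself (these queries exist in GL 3.0+):
GLint glMajor = 0, glMinor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &glMajor);
glGetIntegerv(GL_MINOR_VERSION, &glMinor);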

Cannot deploy GLFW 3.2

So this one is a doozy.
I've got a pretty large OpenGL solution, written against version 3.2 core with GLSL 1.50, on Windows 7. I am using GLEW and GLM as helper libraries. When I create a window, I am using the following lines:
// Initialize main window
glewExperimental = GL_TRUE;
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3); // Use OpenGL Core v3.2
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
if(!glfwOpenWindow(Game::WINDOW_X, Game::WINDOW_Y, 0, 0, 0, 0, 32, 0, GLFW_WINDOW))
{ ...
If I omit the three glfwOpenWindowHint calls, the application crashes my video drivers upon the call to glDrawArrays(GL_TRIANGLES, 0, m_numIndices);
But here's the kicker. When someone else in my group tries to update and run the solution, they get a blank window with no geometry. Commenting out the three lines makes the program run fine for them. There is a pretty even split between machines that work with the 3.2 core hint and those that work without it. I haven't been able to determine any difference between Nvidia, AMD, desktop, or laptop.
The best I could find was a suggestion to add glewExperimental = GL_TRUE; as GLEW is said to have problems with core profiles. It made no difference. The solution is too big to post in full, but I can put up shaders, rendering code, etc. as needed.
Thanks so much! This has been killing us for several days now.
Try asking for a forward-compatible GLFW window:
GLFW_OPENGL_FORWARD_COMPAT - Specify whether the OpenGL context should be forward-compatible (i.e. disallow legacy functionality). This should only be used when requesting OpenGL version 3.0 or above.
And try not setting the profile hint and let the system choose:
// Use OpenGL Core v3.2
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 2);
glfwOpenWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
Also, make sure that you actually get a version you want:
int major, minor, rev;
glfwGetGLVersion(&major, &minor, &rev);
fprintf(stderr, "OpenGL version received: %d.%d.%d\n", major, minor, rev);
Not sure whether you also run for Macs, but read this anyway:
A.4 OpenGL 3.0+ on Mac OS X
Support for OpenGL 3.0 and above was introduced with Mac OS X 10.7,
and even then only forward-compatible OpenGL 3.2 core profile contexts
are supported and there is no mechanism for requesting debug contexts.
Earlier versions of Mac OS X support at most OpenGL version 2.1.
Because of this, on Mac OS X 10.7, the GLFW_OPENGL_VERSION_MAJOR and
GLFW_OPENGL_VERSION_MINOR hints will fail if given a version above
3.2, the GLFW_OPENGL_DEBUG_CONTEXT and GLFW_FORWARD_COMPAT hints are ignored, and setting the GLFW_OPENGL_PROFILE hint to anything except
zero or GLFW_OPENGL_CORE_PROFILE will cause glfwOpenWindow to fail.
Also, on Mac OS X 10.6 and below, the GLFW_OPENGL_VERSION_MAJOR and
GLFW_OPENGL_VERSION_MINOR hints will fail if given a version above
2.1, the GLFW_OPENGL_DEBUG_CONTEXT hint will have no effect, and setting the GLFW_OPENGL_PROFILE or GLFW_FORWARD_COMPAT hints to a
non-zero value will cause glfwOpenWindow to fail.
I ran into the same issue. I had to create a VAO before my VBOs, and now it works on OS X.
GLuint vertex_array;
glGenVertexArrays(1, &vertex_array); // create a vertex array object
glBindVertexArray(vertex_array);     // bind it before setting up VBOs and vertex attributes
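For context, a sketch of where the VAO fits in a core-profile setup (the vertices array and the attribute layout are placeholders; m_numIndices is from the question above):
GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // must be bound before any attribute setup in a core profile

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);

// Later, when drawing:
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, m_numIndices);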