Is there a compiler flag or another way of forcing an OpenGL core profile only? I want to get an error when I use deprecated functions such as glRotatef and so on.
EDIT1: I am using Linux; however, I am also interested in knowing how to do this on Windows.
EDIT2: I would prefer to get an error at compile time, but a runtime error would be OK as well.
You could compile your code using gl3.h instead of gl.h.
http://www.opengl.org/registry/api/gl3.h
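A minimal sketch of that approach (assuming the header is installed as GL3/gl3.h; which macro enables the prototypes depends on the header revision, and on Windows you would still need to resolve the 1.2+ entry points at runtime):
#define GL3_PROTOTYPES 1        // older revisions of gl3.h guard prototypes with this
#define GL_GLEXT_PROTOTYPES 1   // newer glcorearb.h-style headers use this one instead
#include <GL3/gl3.h>
// Deprecated entry points such as glRotatef are simply not declared in this header,
// so using them becomes a compile-time error.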
Try wglCreateContextAttribsARB() with WGL_CONTEXT_CORE_PROFILE_BIT_ARB.
Or glXCreateContextAttribsARB with GLX_CONTEXT_CORE_PROFILE_BIT_ARB.
You might find this example useful as a testbed.
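For the GLX route, a minimal sketch could look like the following (error handling omitted; display and fbconfig are assumed to have been obtained the usual way beforehand):
// Assumes <GL/glx.h> is included and a Display* and GLXFBConfig are already set up.
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);

glXCreateContextAttribsARBProc glXCreateContextAttribsARB =
    (glXCreateContextAttribsARBProc)glXGetProcAddressARB((const GLubyte*)"glXCreateContextAttribsARB");

int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
GLXContext ctx = glXCreateContextAttribsARB(display, fbconfig, 0, True, attribs);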
Depends on what creates your OpenGL context.
If you're using GLFW (which I sincerely recommend for standalone OpenGL window apps), then you can do the following before you create the window:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// The last line shouldn't strictly be necessary: when you request a specific
// GL context version, at least my ATI driver then defaults to a core profile.
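The above is the GLFW 2.x API. If you are on GLFW 3, the equivalent hints would be roughly the following sketch; the forward-compatibility hint additionally removes the deprecated entry points, which is closest to what the question asks for:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // optional: also drop deprecated functions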
Note that if you request a pre-3.0 GL context on modern hardware/drivers, you're likely to receive the newest available context in compatibility mode instead. Check what your GPU returns from glGetString(GL_VERSION) to make sure.
If you use another API for creating the OpenGL context, check its reference manual for similar functions.
BTW:
I believe it's impossible to get an error at compile time: your compiler can't know what OpenGL context you will actually receive after your request (if any). The correct way of ensuring that you're not using out-of-version functionality is to test glGetError().
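For instance, a minimal runtime check might look like this (a sketch, assuming <stdio.h> is included; sprinkle it after suspect calls or wrap it in a macro):
GLenum err;
while ((err = glGetError()) != GL_NO_ERROR) {
    fprintf(stderr, "GL error: 0x%04x\n", err);  // e.g. GL_INVALID_OPERATION
}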
Also, I recommend using the gl3w extension wrapper if you compile for Windows.
I have found another way to do it using the Unofficial OpenGL Software Development Kit:
http://glsdk.sourceforge.net/docs/html/index.html
By using the 'GL Load' component it's possible to load a core profile of OpenGL and to remove the compatibility enums and functions for OpenGL versions 3.1 or greater. A short how-to can be found here:
https://www.opengl.org/wiki/OpenGL_Loading_Library#Unofficial_OpenGL_SDK
Related
Recently I realized my application is using an OpenGL call which is available only in OpenGL 4.5 - glCreateTextures - and I realized it only because I was trying to run the application on a macOS computer supporting only OpenGL 4.1, where it crashed.
The application requests an OpenGL 4.0 core profile, a 3.2 core profile and a 3.2 forward-compatible core profile (in this order), but in spite of obtaining a 4.0 profile, a call to glCreateTextures succeeds without any warning.
I would like my application to run on anything supporting 3.2, but I do not have regular access to hardware which does not actually support 4.5. There might be other issues like this lurking in both API and shader use which prevent compatibility with lower OpenGL versions, and I would like to know about them.
How can I test my application to make sure it works with a particular version of OpenGL (3.2, 4.0) without actually running it on hardware which does not support any newer version?
In my case the application runs on the JVM (written in Scala or Java) and uses the LWJGL + GLFW bindings, but even knowing how to do this for native C/C++ applications would be helpful; if there is a way, it should probably be possible to carry it over into the JVM world.
This is a general problem with OpenGL: there is no difference between loading a core function and an identically named extension function.
In the case of glCreateTextures, this function comes both from the ARB_direct_state_access extension and from OpenGL 4.5 (where it became core), under the same name.
What actually happens in the concrete case of LWJGL is that it checks at runtime whether the context supports OpenGL 4.5 (since OpenGL 3.0 there is a definitive way to query whether a context supports a particular core version). LWJGL will find that this check returns false when you request e.g. a 3.3 core profile.
However, LWJGL also checks for the existence of every advertised OpenGL extension and loads the function pointers of all functions exposed by those extensions.
So in your case, when you request e.g. a 3.3 core context but your driver still exposes ARB_direct_state_access (which is perfectly fine), LWJGL will happily load the function pointer for glCreateTextures.
And since LWJGL routes GL function calls the same way whether you call them through a core GL class or through an extension class (it's the same underlying GL function, after all), the call will succeed.
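For reference, the core-version query available since OpenGL 3.0 boils down to something like the following, shown here as plain C against the GL API (LWJGL wraps the same entry points):
GLint major = 0, minor = 0;
glGetIntegerv(GL_MAJOR_VERSION, &major);  // available since GL 3.0
glGetIntegerv(GL_MINOR_VERSION, &minor);
int supports45 = (major > 4) || (major == 4 && minor >= 5);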
Now to solve this problem, there is a GitHub issue to allow an LWJGL client to exclude certain extensions from being loaded: https://github.com/LWJGL/lwjgl3/issues/683
Once this has landed in an LWJGL 3 release, functions that are not core in your requested GL core version will not be visible/available when you also exclude the extension.
Another way to at least reduce the risk of using a GL function that comes from a higher GL version than the one your app targets is simply not to use any methods from an org.lwjgl.opengl.GLxyC class where x.y is higher than the GL version you are targeting.
So in effect, when you say:
Recently I realized my application is using an OpenGL call which is available only on OpenGL 4.5 - glCreateTextures, and I realized it only because I was trying to run the application on a MacOS computer supporting only OpenGL 4.1 and it crashed there.
then one can argue that this is on you, because it means you were deliberately using an OpenGL function from a GL45 class, so you were targeting OpenGL 4.5.
While abstracting my game/render engine I hit the issue that I need a way to know reliably which context I am operating on.
I am looking for a solution that works within the OpenGL specification. That is, standard OpenGL, nothing provided by wrapper library xyz.
I am looking for a solution that works within the OpenGL specification
Nope, gotta step up a layer and ask the OS's window system binding via wglGetCurrentContext()/glXGetCurrentContext()/aglGetCurrentContext()/etc.
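A sketch of what that looks like on the GLX side (the WGL and AGL variants are analogous); myMainContext and myLoaderContext are hypothetical handles your engine would have stored when it created its contexts:
GLXContext current = glXGetCurrentContext();
if (current == myMainContext) {
    // rendering context of the main window
} else if (current == myLoaderContext) {
    // background resource-loading context
}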
Short: No, there's no way to do this "within the OpenGL specification".
The OpenGL specification doesn't care HOW the context is created and managed.
The operating system and/or platform is responsible for that.
Long: If you control the application, then you should be able to detect when you are switching the OpenGL context. But if that is not an option, there's no other way... unless you set up different contexts with different settings (like the OpenGL version) or run them in separate threads. Again, this second one is not "OpenGL specification"... just tricks...
While I couldn't find anything for the OpenGL 3.3 core profile (the most common version, IIRC), there is something available from version 4.3 onwards.
One can set a custom user pointer with glDebugMessageCallback and retrieve that pointer again with glGetPointerv.
Thus one could abuse the user pointer to tag a context.
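A sketch of that trick (OpenGL 4.3+ or KHR_debug required; the tag struct and its names are made up for illustration):
static void APIENTRY debugCb(GLenum src, GLenum type, GLuint id, GLenum severity,
                             GLsizei length, const GLchar* msg, const void* user) {
    /* normal debug output handling */
}

struct ContextTag { int id; } tagA = { 1 };

glDebugMessageCallback(debugCb, &tagA);           // stash a per-context tag as the user pointer

void* user = NULL;
glGetPointerv(GL_DEBUG_CALLBACK_USER_PARAM, &user);
int contextId = ((struct ContextTag*)user)->id;   // recover the tag later to identify the context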
Running glewinfo gives a lot of information, but some of it is more confusing than helpful.
Here is my glewinfo from a laptop I have http://pastebin.com/K5p37w8a
It tells me my OpenGL version is 2.1, but when I continue reading, there are entries for GL_VERSION_3_0 up to GL_VERSION_4_0, and all of them say OK. Yet I can't call any of the functions listed there.
Other entries are tagged OK [MISSING], which is the most confusing of all, because either it is there or it is missing; it can't be both at the same time.
The glewinfo program shows you all of the entry points (functions) which are present; it doesn't tell you whether you can use the features, or whether those entry points work. A function can report as OK and your program can still crash if you call it! To figure out which features are available, you have to look at the extension strings and the version number. You can get this information from glxinfo; you do not need GLEW.
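If you want to query this from code rather than from glxinfo, the checks look roughly like this (a sketch; the extension enumeration shown here uses glGetStringi, which itself requires GL 3.0+):
const GLubyte* version = glGetString(GL_VERSION);   // e.g. "2.1 Mesa ..." or "3.3 (Core Profile) Mesa ..."

GLint numExt = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExt);          // core-profile way of listing extensions
for (GLint i = 0; i < numExt; ++i) {
    const GLubyte* ext = glGetStringi(GL_EXTENSIONS, i);
    /* compare against the extension name you need, e.g. "GL_ARB_separate_shader_objects" */
}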
In this case, you are using Mesa (an OpenGL implementation) with a compatibility profile (which is the default profile). In compatibility mode, Mesa is limited to OpenGL 2.1. However, if you request a core profile, Mesa will provide newer features and support a newer version of OpenGL. The same Mesa library is still used, which is why all of the OpenGL 4.0 entry points are available.
However, GLEW is somewhat broken when you use it with the core profile. The glewExperimental "fix" is a poor band-aid on a flawed implementation. For this reason, I do not recommend GLEW. glLoadGen is a good alternative.
I can't seem to find the proc address of these functions on my system. I'm using GLEW 1.9, which has support for everything. I am loading a 4.3 core profile for my context... My NVIDIA drivers are fully up to date. I downloaded a program called GPU Caps and it shows the extension as available. Any ideas?
Update: I had to enable glewExperimental to get it to work. I thought program separation had been core since 4.1. If there are no other insights I will mark this as solved.
I had to enable glewExperimental to get it to work.
Of course you did. GLEW is broken when it comes to loading OpenGL core contexts. You have to turn on that switch to make it work. It's a well-known workaround.
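The workaround, for reference (a sketch, assuming <stdio.h> is included; the glGetError call afterwards clears the spurious error GLEW tends to leave behind on core contexts):
glewExperimental = GL_TRUE;      // make GLEW load entry points in a core-profile-friendly way
GLenum status = glewInit();
if (status != GLEW_OK) {
    fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(status));
}
glGetError();                    // swallow the bogus GL_INVALID_ENUM GLEW may have generated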
Or you could just use an OpenGL loading system that isn't broken for core (which would be pretty much anything besides GLEW that is still in active development). FYI: I wrote a couple of those.
I have done some simple OpenGL (the old fixed pipeline, without shaders, etc.) and want to start some serious "modern" OpenGL programming. (It should compile on Windows and Linux.)
I have a few questions.
1) On Windows, gl.h doesn't have the OpenGL 2.0+ API calls declared (e.g. glShaderSource()). How can I access these API calls?
I don't want to install graphics-card-specific headers, since I want to compile this application on other machines.
2) On Linux, if I install the Mesa library, can I access the above OpenGL 2.0+ API functions?
There has been a long-held belief among some (due to the slowness of GL version updates in the late 90s/early 00s) that the way to get core OpenGL calls is to just include a header and link a library, and that loading function pointers manually is something you only do for extensions, for "graphics-card specific" functions. That isn't the case.
You should always use an extension loading library to get access to OpenGL functions, whether core or extension. GLEW is a pretty good one, and GL3W is decent if you can live with its limitations (3.0 core or better only).
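As a concrete sketch of the GLEW route (GLFW 3 is used here only to create a context; any context-creation API works), which builds the same way on Windows and Linux:
#include <GL/glew.h>     // must come before other GL headers
#include <GLFW/glfw3.h>

int main(void) {
    glfwInit();
    GLFWwindow* win = glfwCreateWindow(640, 480, "demo", NULL, NULL);
    glfwMakeContextCurrent(win);

    glewInit();                                         // resolves glShaderSource & co. at runtime

    GLuint shader = glCreateShader(GL_VERTEX_SHADER);   // a GL 2.0+ call, now available
    /* ... glShaderSource, glCompileShader, etc. ... */

    glfwTerminate();
    return 0;
}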