Moving from fixed-pipeline to modern OpenGL - C++

I have done some simple OpenGL (the old fixed pipeline, without shaders etc.) and want to start some serious "modern" OpenGL programming. (It should compile on Windows and Linux.)
I have a few questions.
1) On Windows, "gl.h" doesn't declare the OpenGL 2.0+ API calls (e.g. glShaderSource()). How can I access these API calls?
I don't want to install graphics-card-specific headers, since I want to compile this application on other machines.
2) On Linux, if I install the Mesa library, can I access the above OpenGL 2.0+ API functions?

There has been a long-held belief among some (due to the slowness of GL version updates in the late 90s/early 00s) that the way to get core OpenGL calls is to just include a header and link a library, and that loading function pointers manually is something you only do for extensions, for "graphics-card specific" functions. That isn't the case.
You should always use an extension loading library to get access to OpenGL functions, whether core or extension. GLEW is a pretty good one, and GL3W is decent if you can live with its limitations (3.0 core or better only).
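For illustration, the GLEW flow looks roughly like this (a sketch, not a definitive setup; I'm assuming GLFW 3 purely for context creation, and the shader source is a throwaway):

#include <GL/glew.h>    // must be included before other GL headers
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);       // GLEW needs a current context

    if (glewInit() != GLEW_OK) return 1;  // resolves core + extension pointers

    // Post-1.1 entry points such as glShaderSource are now callable:
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    const char* src = "void main() { gl_Position = vec4(0.0); }";
    glShaderSource(vs, 1, &src, nullptr);

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}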

Related

How to test if OpenGL application works against some particular OpenGL version?

Recently I realized my application is using an OpenGL call which is available only on OpenGL 4.5 - glCreateTextures, and I realized it only because I was trying to run the application on a MacOS computer supporting only OpenGL 4.1 and it crashed there.
The application requests OpenGL 4.0 core profile, 3.2 core profile and 3.2 forward compatible core profile (in this order), but in spite of obtaining a 4.0 profile a call to glCreateTextures succeeds without any warning.
I would like my application to run on anything supporting 3.2, but I do not have a regular access to hardware which does not actually support 4.5. There might be other issues like this both in API and shader use lurking around which prevent compatibility with lower OpenGL versions and I would like to know about them.
How can I test my application to make sure it works with some particular version of OpenGL (3.2, 4.0) without actually running it on hardware that supports nothing newer?
In my case the application runs on the JVM (written in Scala or Java) and uses the LWJGL + GLFW bindings, but even knowing how to do this for native C/C++ applications would be helpful; if there is a way, it should probably be possible to transfer it into the JVM world.
This is a general problem with OpenGL: there is no difference between loading a core function and an equally named extension function.
In the case of glCreateTextures, the function comes both from the ARB_direct_state_access extension and from OpenGL 4.5 (where it became core), under the same name.
What actually happens in the concrete case of LWJGL is that it checks at runtime whether the context supports OpenGL 4.5 (there has been a definitive way to do this since OpenGL 3.0, by querying GL_MAJOR_VERSION and GL_MINOR_VERSION), and LWJGL will find that this check returns false when you request e.g. a 3.3 core profile.
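In native C/C++ the same check looks roughly like this (a sketch; it assumes a loader such as GLEW has already been initialized and a context is current):

#include <GL/glew.h>  // any loader that declares glGetIntegerv and the enums
#include <utility>

// Returns the version of the context we actually received; the
// GL_MAJOR_VERSION / GL_MINOR_VERSION queries exist since OpenGL 3.0.
std::pair<int, int> contextVersion() {
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    return {major, minor};
}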
However, LWJGL also checks for the existence of every advertised OpenGL extension and loads the function pointers of all exposed functions from those extensions.
So in your case, when you request e.g. a 3.3 core context but your driver still exposes ARB_direct_state_access (which is perfectly fine), LWJGL will load the function pointer for glCreateTextures just fine.
And since LWJGL routes GL function calls the same way whether you call them through a core GL class or through an extension class (it is the same underlying GL function, after all), the call will succeed.
Now to solve this problem, there is a GitHub issue to allow an LWJGL client to exclude certain extensions from being loaded: https://github.com/LWJGL/lwjgl3/issues/683
Once this has landed in an LWJGL 3 release, functions that are not core in your requested GL core version will not be visible/available when you also exclude the extension.
Another way to at least reduce the risk of using a GL function that comes from a higher GL version than the one your app is targeting is to simply not use any methods from an org.lwjgl.opengl.GLxyC class where x.y is higher than the GL version your app is targeting.
So in effect, when you say:
Recently I realized my application is using an OpenGL call which is available only on OpenGL 4.5 - glCreateTextures, and I realized it only because I was trying to run the application on a MacOS computer supporting only OpenGL 4.1 and it crashed there.
then one can argue that this is on you, because it means that you were deliberately using an OpenGL function from a GL45 class, so you wanted to target OpenGL 4.5.

My OpenGL Version doesn't support Buffer Binding

Why is this? I'd like to know because in a LOT of articles and various tutorials I see on the Internet, it's hard NOT to come across something that deals with buffer binding. The only bind function I have is glBindTexture. Does this mean my drivers are significantly outdated?
Edit
Sorry for the lack of information. My OpenGL version is 3.1, from an Intel integrated GPU. Also, the reason why I thought that I lacked functions such as glBindBuffer is mainly that it wouldn't show up in Qt's auto-completion as a function I could use.
The only bind function I have is glBindTexture
Most likely you're fooled by the way OpenGL implementations export their functionality. opengl32.dll on Windows gives you only OpenGL 1.1 functionality (Windows Vista and 7 actually give you OpenGL 1.4), and libGL.so on most Linux/BSD systems gives you OpenGL 2.1. Anything beyond that (and buffer objects go beyond) must be loaded through the so-called extension system.
The easiest and most reliable way to do this:
Go to http://glew.sf.net and get the version matching your development environment.
Install GLEW in your development environment.
Replace all occurrences of #include <GL/gl.h> with #include <GL/glew.h>.
Call glewInit() in your code after an OpenGL context has been made current (see the sketch below).
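Following those steps, a minimal sketch of what becomes available (the buffer calls are standard OpenGL 1.5; the vertex data is made up for illustration):

#include <GL/glew.h>  // replaces <GL/gl.h>

// Call once after the context has been made current.
void initBuffers() {
    if (glewInit() != GLEW_OK) return;  // real code should report this

    // Buffer objects (OpenGL 1.5) now work even though the system gl.h
    // only declares 1.1 entry points:
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    const float verts[] = { 0.f, 0.f, 1.f, 0.f, 0.f, 1.f };
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}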

App using 3D & 3rd-party plugins - forward compatible OpenGL or Direct3D?

I'm writing an app that's going to use 3rd-party created plugins to render all kinds of 3D trickery.
My main application creates the context / render object and a render target / framebuffer object. The 3rd-party plugins are going to render their fancy stuff to that, so they need access to that context / render object to perform their 3D-render-related calls.
I can choose to implement this using either OpenGL or Direct3D.
My decision will most probably be based on my understanding of the following problem:
Obviously, new versions of OpenGL / Direct3D will be coming out, and it would be nice if newly created plugins could benefit from newer versions of DX/OGL than the main program was compiled with (if the computer running the application supports that newer version).
Using OpenGL (via OpenTK), I understood it's possible to create a forward-compatible context, as in "give me the most up-to-date version that is backward compatible with version X".
So when asked for a 3.2 context, if 4.0 is available it would return a 4.0 context.
For DirectX, I don't see anything like that, which would mean that if I create my main program with DirectX 11, for example, 3rd-party plugins would never be able to use newer versions when available?
Am I getting this right?
Will OpenGL enable 3rd-party plugin writers to create plugins for newer versions of OpenGL, while DirectX will not allow me to do something like that?
I'd be amazed if DirectX ever supported the sort of compatibility you're talking about within an application. Each version of the Direct3D APIs has basically been an independent (COM) object hierarchy with absolutely no acknowledgment that other generations of the system might exist, past or future. (Backwards compatibility at the platform level has generally been superb, of course, but you're after something quite different).
So either go with OpenGL (the support you mention at least sounds like it offers some hope), or maybe even consider a higher level API for plugins which somehow "compiles"/"adapts" to the actual target platform at runtime. That'd let you support OpenGL and Direct3D (although obviously shaders in particular would present severe difficulties; hence projects like AnySL).

Forcing OpenGL Core Profile Only

Is there a compiler flag or another way of forcing the OpenGL core profile only? I want to get an error when I use deprecated functions like glRotatef and so on.
EDIT1: I am using Linux; however, I am also interested in knowing how to do this on Windows.
EDIT2: I would prefer to get an error at compile time, but a runtime error would be OK as well.
You could compile your code using gl3.h instead of gl.h.
http://www.opengl.org/registry/api/gl3.h
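That also gives you the compile-time error asked for, since deprecated entry points are simply not declared in gl3.h. A sketch (GL3_PROTOTYPES is the macro the old gl3.h used to expose prototypes, and the include path depends on where you place the header; treat both as assumptions):

#define GL3_PROTOTYPES 1
#include <GL3/gl3.h>   // instead of <GL/gl.h>

// glRotatef(90.f, 0.f, 0.f, 1.f);  // no longer declared: compile error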
Try wglCreateContextAttribsARB() with WGL_CONTEXT_CORE_PROFILE_BIT_ARB.
Or glXCreateContextAttribsARB with GLX_CONTEXT_CORE_PROFILE_BIT_ARB.
You might find this example useful as a testbed.
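A sketch of the Windows variant (it assumes a dummy context is already current, because wglGetProcAddress only works with a current context; error handling is omitted):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  // WGL_CONTEXT_* tokens and the PFN typedef

HGLRC createCoreContext(HDC hdc) {
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0  // attribute list terminator
    };
    // Deprecated calls like glRotatef are absent from the resulting
    // context, so misuse shows up at run time instead of silently working.
    return wglCreateContextAttribsARB(hdc, nullptr, attribs);
}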
Depends on what creates your OpenGL context.
If you're using GLFW (which I sincerely recommend for standalone OGL window apps), then you can do this before you create the window:
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MAJOR, 3);
glfwOpenWindowHint(GLFW_OPENGL_VERSION_MINOR, 1);
glfwOpenWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
// The last line shouldn't be necessary when you request a specific
// GL context version; at least my ATI will then default to the core profile.
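(A side note, since the above uses the GLFW 2 API: in GLFW 3 the same hints were renamed as below, and GLFW 3 requires version 3.2 or higher for a core profile request.)

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);  // core profile needs >= 3.2
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);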
Note that if you request a pre-3.0 GL context on modern hardware/drivers, you're likely to receive the newest possible context in compatibility mode instead. Check what your GPU returns from glGetString(GL_VERSION) to make sure.
If you use another API for creation of OpenGL context, check its reference manual for similar functions.
BTW:
I believe it's impossible to get an error at compile time: your compiler can't know which OpenGL context you will receive after your request (if any). The correct way of ensuring that you're not using out-of-version functionality is testing with glGetError().
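For instance, a small helper along these lines (a sketch; the function name is my own):

#include <cstdio>
#include <GL/gl.h>

// Drain and report every pending GL error; call after suspect GL calls.
void checkGlErrors(const char* where) {
    for (GLenum err; (err = glGetError()) != GL_NO_ERROR; )
        fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}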
Also, I recommend using the gl3w extension wrapper if you compile for Windows.
I have found another way to do it using the Unofficial OpenGL Software Development Kit:
http://glsdk.sourceforge.net/docs/html/index.html
By using the 'GL Load' component it's possible to load a core profile of OpenGL and to remove compatibility enums and functions for versions of OpenGL 3.1 or greater. A short how-to can be found here:
https://www.opengl.org/wiki/OpenGL_Loading_Library#Unofficial_OpenGL_SDK

Which OpenGL version is installed by default along with MinGW?

I recently tried to lay my hands on OpenGL. Trying to grasp the API, I am using MinGW along with OpenGL. Now, I learned (or was given the advice) that I shouldn't use glBegin and glEnd anymore, since those are deprecated, but should start with OpenGL 3.1 instead. As I didn't know that the version used makes such a difference, I didn't pay much attention to which version I actually have installed on my computer. And, as far as I can see, there is no glVersion or similar call that I could use to determine that version.
Since I am using MinGW, I went to its include folder and found in c:\MinGW\include\GL\gl.h:
/*
* Mesa 3-D graphics library
* Version: 4.0
[more lines]
*/
[more lines]
#define GL_VERSION_1_1 1
#if !defined(__WIN32__)
#define GL_VERSION_1_2 1
#define GL_VERSION_1_3 1
#define GL_ARB_imaging 1
#endif
[more lines]
#define GL_VERSION 0x1F02
which, to me, indicates that the installed version is as low as 1.3. Is this the case, or how could I verify my suspicion? Also, where would I find a later version (that works fine along with MinGW) if I only have 1.3 (or whatever version it is)?
So, does someone know if my suspicion is right and MinGW comes with an outdated OpenGL version?
Edit: I realise that this question might be taken as a duplicate of "Which OpenGL version is installed by default along with MinGW?", yet I believe this question is specifically about MinGW and OpenGL, so this fact allows for a (perhaps) more specific answer.
So, does someone know if my suspicion is right and MinGW comes with an outdated OpenGL version?
MinGW comes without "an OpenGL"; your operating system (usually the graphics card driver) provides OpenGL.
MinGW provides a header file (gl.h) and a corresponding library (libopengl32.a), which is a wrapper for opengl32.dll, a dynamic library that comes with Windows and contains the entry points for OpenGL 1.0 and 1.1 functions... something around that, I am not sure about the exact version numbers.
Then:
Most operating systems allow you to use the whole OpenGL in a similar way.
On Windows, however, in order to access OpenGL functions newer than what's in the header and library wrapper (and you probably do have them available, depending on your GPU and driver), you have to use system calls to load the function pointers to the OpenGL calls.
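The raw mechanism looks roughly like this (a sketch; a loader library simply repeats it for every entry point so you don't have to):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>  // provides PFNGLBINDBUFFERPROC and friends

PFNGLBINDBUFFERPROC myGlBindBuffer = nullptr;

// Valid only while a context is current; the pointer stays NULL if
// the driver doesn't support the function.
void loadPointers() {
    myGlBindBuffer =
        (PFNGLBINDBUFFERPROC) wglGetProcAddress("glBindBuffer");
}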
There are libraries which do that for you and let you use OpenGL 3/4 functionality easily. I recommend gl3w or GLEW. That's the usual way of using OpenGL on Windows.
There's glGetString(GL_VERSION).
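A minimal sketch of using it (call only after a context has been created and made current):

#include <cstdio>
#include <GL/gl.h>

// Prints what the driver actually gives you, regardless of your headers.
void printGlVersion() {
    printf("GL_VERSION  : %s\n", glGetString(GL_VERSION));
    printf("GL_RENDERER : %s\n", glGetString(GL_RENDERER));
    printf("GL_VENDOR   : %s\n", glGetString(GL_VENDOR));
}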
Please note this question has been asked and answered before.
http://learningwebgl.com/blog/?p=11#troubleshooting has some good information (WebGL uses OpenGL). You can use http://www.realtech-vr.com/glview/ to verify your result.