How do I read OpenGL dependencies? - opengl

I have a question about OpenGL dependencies. For example, for ARB_shader_atomic_counters, the dependency section says this:
Dependencies
This extension is written against the OpenGL 4.1 (core) specification
and the GLSL 4.10.6 specification.
OpenGL 3.0 is required.
How do I read this information? Does the graphics card and/or driver need to support OpenGL 4.1 or OpenGL 3.0?
The official docs say:
Extensions may be written against a particular specification version, but it is possible to implement that extension on older OpenGL versions. Some extensions can be implemented even on OpenGL 1.1, while others have a higher base version. The minimum version that the extension can appear in is specified with the text "X is required".
Is this just a theoretical possibility, and few (if any) drivers will implement it? Or would some drivers provide this function on OpenGL 3 hardware? How can I find out whether it is implemented?

You really should not read too much into those if you can help it. That information is not particularly helpful to the average developer. You will likely never encounter a GL 3.0 implementation that implements this extension because it was designed around a Shader Model 5.0 (DX 11) hardware feature. In theory there is nothing preventing it from being implemented in 3.0, but in practice no hardware/driver combination that implements this extension is going to limit itself to 3.0.
If you were going to implement the extension or devise some alternative solution, then it would be tremendously helpful to know the absolute minimum API version necessary.
When it says it is written against a certain version of a specification, what that means is that anytime the extension specification says something to the effect of "Add the following language to section X, paragraph Y, ...", you will find the original unextended text in that specific spec. version. It also means that the extension makes certain assumptions about how things behave.
For example, if version X says that points are rasterized as hexagons and version Y says they are rasterized as circles, and an extension is written against version Y, then the extension is free to assume that points are rasterized as circles. If this assumption becomes a point of controversy, you will find a blurb about it in the "Issues" section.
As for determining whether the extension is implemented (which is the most important point from your perspective), that is what the GL_EXTENSIONS string is for. Be aware, however, that the way you query this string has changed over the years:
In a compatibility profile context, or in GL 3.0 or older:
// Returns a massive null terminated string containing every extension the
// implementation supports.
//
// ... do a string search to find your extension.
const GLubyte* exts = glGetString (GL_EXTENSIONS);
In a core profile context:
GLint num_exts;
glGetIntegerv (GL_NUM_EXTENSIONS, &num_exts);

for (int i = 0; i < num_exts; i++) {
    // Returns a null terminated string containing only one extension
    const GLubyte* ext = glGetStringi (GL_EXTENSIONS, i);
}
If you try to do the former in a core profile, GL will generate a GL_INVALID_ENUM error and do nothing else. This is the reason glewExperimental = GL_TRUE is necessary before calling glewInit (...) if you use GLEW in a core profile.
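For example, a minimal sketch of checking for the extension from the original question in a core profile context might look like this (has_extension is just an illustrative helper; error handling omitted):
#include <string.h>

// Returns GL_TRUE if the named extension (e.g. "GL_ARB_shader_atomic_counters")
// is advertised by the current core profile context.
GLboolean has_extension (const char* name)
{
    GLint num_exts = 0;
    glGetIntegerv (GL_NUM_EXTENSIONS, &num_exts);

    for (GLint i = 0; i < num_exts; i++) {
        const GLubyte* ext = glGetStringi (GL_EXTENSIONS, i);
        if (ext != NULL && strcmp ((const char*) ext, name) == 0)
            return GL_TRUE;
    }

    return GL_FALSE;
}
With GLEW, the equivalent check after glewInit (...) is simply testing the GLEW_ARB_shader_atomic_counters flag.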

Related

How do I use Legacy OpenGL calls with QT6?

I'm new to Qt and am trying to import some older C++ OpenGL code. I'm currently using Qt 6.4. I've subclassed my OpenGL-using class from QOpenGLFunctions.
Many of the glFoo calls "work", but the class also uses calls like glEnableClientState, glVertexPointer, glNormalPointer, glTexCoordPointer, glDisableClientState, glColor4fv, and glMaterialfv, which fail with errors like undefined reference to __imp_glTexCoordPointer. Looking at the documentation, these appear to no longer be supported by default, but it looks like they are supported through older versions of QOpenGLFunctions such as QOpenGLFunctions_1_4 (https://doc-snapshots.qt.io/qt6-dev/qopenglfunctions-1-4.html).
Trying to change my base class from QOpenGLFunctions to QOpenGLFunctions_1_4 produces complaints that only QOpenGLFunctions_1_4_CoreBackend and QOpenGLFunctions_1_4_DeprecatedBackend exist, but there appears to be no documentation on those, and if I subclass one of them I start seeing complaints about my constructor...
How do I actually access the functions from these older versions of the class?
This question was answered by Chris Kawa over at the Qt Forums and it worked for me! Here is his answer:
OpenGL 3.1 introduced profiles. Core profile does not support these old functions and Compatibility profile does.
So first you have to make sure you have a context whose version is either lower than 3.1 (versions that do not have profiles) or 3.1 and up, set up to use the Compatibility profile. AFAIK Qt does not support OpenGL < 3.1 anymore, so you only have the latter option.
If you're not sure what context you have, you can simply do qDebug() << your_context and it will print out all the parameters, or you can query individual fields of it, e.g. your_context->format().profile(). If your context is not set correctly, the simplest way is to set it up like this before any OpenGL is initialized in your app:
QSurfaceFormat fmt;
fmt.setVersion(3,1);
fmt.setProfile(QSurfaceFormat::CompatibilityProfile);
fmt.setOptions(QSurfaceFormat::DeprecatedFunctions);
QSurfaceFormat::setDefaultFormat(fmt);
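As a quick sanity check of the kind mentioned above, a small sketch (it assumes ctx is your current QOpenGLContext* and that <QDebug> is included):
if (ctx->format().profile() != QSurfaceFormat::CompatibilityProfile)
    qWarning() << "Not a compatibility profile context:" << ctx->format();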
When you have the correct context you can access the deprecated functions like this:
QOpenGLFunctions_1_4* funcs = QOpenGLVersionFunctionsFactory::get<QOpenGLFunctions_1_4>(context());
if (funcs)
{
    // do OpenGL 1.4 stuff, for example
    funcs->glEnableClientState(GL_VERTEX_ARRAY);
}
else
{
    // Not a valid context?
}
And for anyone as amateur as I am, context() comes from QOpenGLWidget::context() which returns a QOpenGLContext*
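Put together in a QOpenGLWidget subclass, that might look roughly like the sketch below (illustrative only; the widget name and drawing code are made up, and it assumes the compatibility-profile default format shown above was applied before the widget was created):
#include <QOpenGLWidget>
#include <QOpenGLVersionFunctionsFactory>
#include <QOpenGLFunctions_1_4>

class LegacyGLWidget : public QOpenGLWidget   // hypothetical widget name
{
protected:
    void initializeGL() override
    {
        // context() is current inside initializeGL()/paintGL()
        m_funcs = QOpenGLVersionFunctionsFactory::get<QOpenGLFunctions_1_4>(context());
        if (!m_funcs)
            qWarning("OpenGL 1.4 functions unavailable; is this a core profile context?");
    }

    void paintGL() override
    {
        if (!m_funcs)
            return;
        m_funcs->glEnableClientState(GL_VERTEX_ARRAY);
        // ... glVertexPointer()/glDrawArrays() and other legacy calls would go here ...
        m_funcs->glDisableClientState(GL_VERTEX_ARRAY);
    }

private:
    QOpenGLFunctions_1_4* m_funcs = nullptr;
};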

QOpenGLFunctions_4_3_Compatibility with QOpenGLContext::versionFunctions

I’m trying to compile the Tessellation example from http://www.kdab.com/opengl-in-qt-5-1-part-5/ (source included) with the Compatibility profile, and it simply fails to load this profile:
The minor (and only) modifications I’ve made:
// terraintessellationscene.h:
QOpenGLFunctions_4_3_Compatibility* m_funcs;

// terraintessellationscene.cpp:
m_funcs = m_context->versionFunctions<QOpenGLFunctions_4_3_Compatibility>();
if ( !m_funcs )
{
    qFatal("Requires OpenGL >= 4.0");
    exit( 1 );
}
m_funcs->initializeOpenGLFunctions();
It just crashes when I call versionFunctions(), and it should work. I can compile this source and several other projects of varying complexity using any desktop Core profile. I'm going to try GLEW later to see whether this problem is related to Qt, my usage of Qt, or my OpenGL driver.
I’m using the latest AMD 14.4 WHQL driver. I’ve tried the pre-built Qt 5.3 packages and my self-compiled static build of 5.3. I’ve also tried two older AMD drivers to see if this was a problem with the OpenGL package provided by AMD, and yet it still fails. I really want to use versionFunctions.
Update 2
It seems that QOpenGLContext::create(), even after the QSurfaceFormat has been set properly with QOpenGLContext::setFormat(format), is completely rejecting the compatibility setting (format.setProfile(QSurfaceFormat::CompatibilityProfile)) and creating a Core profile context instead, ignoring my request for a compatibility context while still returning true as if it had honored the requested QSurfaceFormat.
Update 3
Well, I just discovered the issue, and it is not related to my modifications; it's in the original code from KDAB:
// Create an OpenGL context
m_context = new QOpenGLContext;
m_context->setFormat( format );
m_context->create();
By creating the context before calling setFormat(), I was able to correctly obtain a proper Compatibility context; with the original order, Qt was apparently only honoring part of my settings and giving me the Core profile, as if I had called setProfile(QSurfaceFormat::CoreProfile).
Update 4: Final solution
"Creating the context before setFormat() is wrong and works by accident since what you are requesting then is a plain OpenGL 2.0 context and the driver most likely gives you 4.3 compatibility. On other drivers or platforms this may fail so be careful.
The behavior you are seeing is caused by the AMD driver: it refuses to create a proper compatibility profile context unless the forward compatibility set is not set. Qt sets the fwdcompat bit by default, it can be turned off by setting the DeprecatedFunctions option on the QSurfaceFormat. So requesting for Compatibility together with DeprecatedFunctions will give what you want. Other vendors’ drivers do not have this problem, there not setting DeprecatedFunctions (i.e. setting forward compatibility) is ignored for compatibility profiles, as it should be."
via agocs @ Qt DevNet
You are getting the Core profile (thanks to Qt) while you want the Compatibility profile.
You should request the Compatibility profile during initialization by setting the format as follows
(window.cpp, line 24):
format.setProfile( QSurfaceFormat::CompatibilityProfile);
and only call versionFunctions() after the context has been created.
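In other words, keep the original setFormat()-before-create() order and add the DeprecatedFunctions option. A minimal sketch of the intended sequence (member names follow the example code above; Qt 5.3 API):
QSurfaceFormat format;
format.setVersion( 4, 3 );
format.setProfile( QSurfaceFormat::CompatibilityProfile );
format.setOption( QSurfaceFormat::DeprecatedFunctions ); // clears the fwdcompat bit the AMD driver objects to

m_context = new QOpenGLContext;
m_context->setFormat( format );   // set the format *before* create()
if ( !m_context->create() )
    qFatal("Could not create an OpenGL context");

// Only after create(), with the context made current on a surface:
m_funcs = m_context->versionFunctions<QOpenGLFunctions_4_3_Compatibility>();
if ( !m_funcs )
    qFatal("Requires an OpenGL 4.3 compatibility context");
m_funcs->initializeOpenGLFunctions();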

OpenGL/GLEW: How to choose the correct/existing enum without provoking a compile time error

I am currently using GLEW to detect some GPU features of the bound OpenGL context.
Imagine a texture class where I want to use the OpenGL 3.0 enums if available and fall back to the extension if OpenGL 3.0 is not in place but the extension is, i.e.:
uint32 chooseGlInternalFormat(uint32 _pixelType, uint32 _pixelFormat)
{
    uint32 ret;
    //...
    if (GLEW_EXT_texture_integer || GLEW_VERSION_3_0)
    {
        bool bUseExt = !GLEW_VERSION_3_0; // if only the extension is available but not GL 3.0, fall back
        ret = bUseExt ? GL_LUMINANCE8UI_EXT : GL_R8UI;
    }
    //...
}
Obviously this causes a compile-time error, since GL_R8UI won't exist if OpenGL 3.0 is not supported. What is the common way to solve this?
Some larger applications take the newest enum specification and add their own enums based on it. If you only need it this one time you can just define your own enum for this single case.
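For example, a guarded define for the one missing enum above might look like this (a minimal sketch; 0x8232 is GL_R8UI's value in the core specification, but double-check against a current glcorearb.h/glext.h):
// Fallback in case the GL/GLEW headers in use are too old to define it.
#ifndef GL_R8UI
#define GL_R8UI 0x8232
#endif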

GLEW and openGL deprecation

I am using OpenGL and GLEW to check the extensions.
Say I wanted to check if I could use multitexture extension:
GLEW_ARB_multitexture will return true if I can use it.
BUT
ARB_multitexture was deprecated in OpenGL 3.0.
Will it still return true?
What about in 3.1, where it was removed?
Thanks.
It'll return true if your context exports the extension, which it will if it's not "forward-compatible" or "core profile", or possibly if it supports the "GL_ARB_compatibility" extension.
The bottom line is, if GLEW_ARB_multitexture is true, you can use the functionality. Any higher-level logic (like only using shaders if they're available) is up to you.
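A sketch of that runtime check (assuming glewInit (...) has already succeeded on a current context):
if (GLEW_ARB_multitexture) {
    // The extension is exported by this context; its entry points are safe to call.
    glActiveTextureARB(GL_TEXTURE0_ARB);
} else {
    // Fall back, e.g. to the core glActiveTexture() if the GL version is 1.3 or newer.
}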

Using OpenGL extensions On Windows

I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
Easy solution: Use GLEW. See how here.
Hard solution:
If you have a really strong reason not to use GLEW, here's how to achieve the same without it:
Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry.
Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are:
glGenFramebuffersEXT()
glBindFramebufferEXT()
glFramebufferTexture2DEXT()
glCheckFramebufferStatusEXT()
glDeleteFramebuffersEXT()
Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card.
Example: The graphics card in my PC is an NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it.
Your graphics card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or add its directory to your compiler's include directories list.
Add a #include <glext.h> line in your code to include the header file into your code.
Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration.
Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations:
typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers); alongside GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *);
All this means is that your header file has the API declaration in 2 forms. One is a wgl-like ugly function pointer declaration. The other is a sane looking function declaration.
For each extension API you wish to use, add to your code a declaration of a function pointer whose type is the corresponding ugly-looking typedef.
Example:
PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT;
PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT;
PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT;
PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT;
PFNGLDELETEFRAMEBUFFERSEXTPROC glDeleteFramebuffersEXT;
Though it looks ugly, all we're doing is declaring function pointers of the type corresponding to each extension API.
Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use wglGetProcAddress() function to do this.
Example:
glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC) wglGetProcAddress("glGenFramebuffersEXT");
glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC) wglGetProcAddress("glBindFramebufferEXT");
glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC) wglGetProcAddress("glFramebufferTexture2DEXT");
glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT");
glDeleteFramebuffersEXT = (PFNGLDELETEFRAMEBUFFERSEXTPROC) wglGetProcAddress("glDeleteFramebuffersEXT");
Don't forget to check the function pointers for NULL. If by chance wglGetProcAddress() couldn't find the extension function, it would've initialized the pointer with NULL.
Example:
if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT || NULL == glFramebufferTexture2DEXT ||
    NULL == glCheckFramebufferStatusEXT || NULL == glDeleteFramebuffersEXT)
{
    // Extension functions not loaded!
    exit(1);
}
That's it, we're done! You can now use these function pointers just as if the function calls existed.
Example:
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0);
Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle — The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
A 'very strong reason' not to use GLEW might be that the library is not supported by your compiler/IDE, e.g. Borland C++ Builder.
In that case, you might want to rebuild the library from source. If it works, great; otherwise, manual extension loading isn't as bad as it is made out to sound.
@Kronikarz: From the looks of it, GLEW seems to be the way of the future. NVIDIA already ships it along with its OpenGL SDK. And its latest release was in 2007, compared to GLee, whose last release was in 2006.
But the usage of both libraries looks almost the same to me. (GLEW has an init() function which needs to be called before anything else, though.) So you don't need to switch unless you find some extension that isn't supported under GLee.
GL3W is a public-domain script that generates a library which loads only the core functionality for OpenGL 3/4. It can be found on GitHub at:
https://github.com/skaslev/gl3w
GL3W requires Python 2.6 to generate the library and headers for OpenGL; it does not require Python after that.
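A minimal sketch of the typical GL3W flow (the exact return-code conventions can vary between gl3w revisions, so check the generated header):
#include <stdio.h>
#include <GL/gl3w.h>   // generated by the gl3w script

// ... create a GL context and make it current first (GLFW, SDL, wgl, ...), then:
if (gl3wInit()) {                 // non-zero indicates failure
    fprintf(stderr, "failed to initialize OpenGL\n");
    return -1;
}
if (!gl3wIsSupported(3, 2)) {     // check for the core version you need
    fprintf(stderr, "OpenGL 3.2 not supported\n");
    return -1;
}
// Core entry points are now loaded and can be called directly, e.g.:
GLuint fbo;
glGenFramebuffers(1, &fbo);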