The FreeGLUT API documentation does not include an entry for glutInitContextVersion, and when I google for it, all I find is a list of questions that don't directly address its usage or effects.
Is it documented anywhere?
glutInitContextVersion is not part of the official GLUT API (which is, by the way, completely outdated), but an unofficial extension added by freeglut. However, its usage is quite straightforward once one knows how OpenGL's context versions work, which is defined in the ARB_create_context family of extensions.
The function selects which OpenGL version is requested when the context is actually created. Note that this does not require the implementation to return a context with exactly the version you are requesting, but it should return one that is compatible with the requested version, so that all features of that version are present.
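To illustrate, here is a minimal sketch of how the function is typically used (the window title and display mode here are arbitrary choices of mine):

/* Sketch: request an OpenGL 3.3 core profile context with freeglut.
   The implementation may return any context compatible with 3.3. */
#include <GL/freeglut.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);               /* request GL >= 3.3 */
    glutInitContextProfile(GLUT_CORE_PROFILE);  /* no legacy features */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("context version demo");

    /* Check what the implementation actually gave us: */
    printf("GL_VERSION: %s\n", glGetString(GL_VERSION));
    return 0;
}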
There are a few things about freeglut's handling of this which seem to be undocumented. From looking into the code (for the current stable version 2.8.1), one will see that freeglut implements the following logic:
If the implementation cannot fulfill the version constraints but does support the ARB_create_context extension, it will generate an error and no context will be created. However, if a version is requested but the implementation does not even support the relevant extensions, a legacy GL context is created, effectively ignoring the version request completely. This seems a bit inconsistent to me. However, as this behavior is not documented and not part of the GLUT spec, it is unclear whether it will stay the same in the future.
If you don't need GLUT-specific features (which basically all rely on deprecated OpenGL anyway), you might want to look at more modern alternatives like GLFW.
For my application I need a renderer. The renderer uses the OpenGL 3.3 core profile as a baseline. Newer OpenGL versions offer some neat features which are also available via extensions. Where available, I want to use newer features based on the newest OpenGL version. Since it's a mess to test for available versions and adjust the loader accordingly, I decided to stay at core 3.3 and use extensions where available (that's what extensions are for, right?).
Are extensions as fast as the same functionality in newer OpenGL versions?
Let's take the GL_ARB_direct_state_access extension. It's available in core since 4.5 via Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extension versions? Or do drivers link to the same function anyway?
Edit: This is a question not about software design, but rather about how extensions are (mostly?) handled, and about performance.
Actually, the OpenGL API spec XML description has the nice property of aliases. A GL function aliasing another one is basically both syntactically and semantically identical to the function it aliases, and this feature is used a lot when extension functions are promoted to core functionality and their names change.
There are GL loaders out there which actually use that information. Two examples I know of are:
libepoxy
glad2 (glad2 is the current in-development branch of the glad loader generator, but I have been using it for two years without any issue). You'll have to explicitly enable the aliasing feature. There is also a web service which lets you generate the needed files (note the "aliasing" button at the bottom). Also have a look at the GLAD documentation.
With such a loader, you do not have to care whether a particular function comes from an extension or from the GL core functionality; you can just use it if it is available in either form.
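As a sketch of what that looks like in practice (assuming a glad2-generated header with the aliasing option enabled, and GLFW for context creation; both of these are my choices, not requirements):

#include <glad/gl.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;
    GLFWwindow *win = glfwCreateWindow(640, 480, "aliasing demo", NULL, NULL);
    if (!win)
        return 1;
    glfwMakeContextCurrent(win);

    /* glad2's C API: hand it the platform's GetProcAddress function. */
    if (!gladLoadGL((GLADloadfunc)glfwGetProcAddress))
        return 1;

    /* With aliasing enabled, this pointer is populated no matter whether
       it came from core GL 4.5 or from GL_ARB_direct_state_access: */
    if (glNamedBufferData) {
        GLuint buf = 0;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf); /* initialize the name */
        glNamedBufferData(buf, 16, NULL, GL_STATIC_DRAW);
    }

    glfwTerminate();
    return 0;
}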
Also note that with newer extensions, the OpenGL ARB has more often been creating ARB extensions without the ARB suffix on the function and enum names, which means the extension describes the exact same entities in either case. This is basically done for extensions which were created after the features were incorporated into the core standard: the extension exists so that a vendor who might not be able to fulfill some other requirement of the new standard version is still able to provide the isolated feature.
One of the first examples of this was the GL_ARB_sync extension, which itself relates to this fact in issue #13:
13) Why don't the entry points/enums have ARB appended?

This functionality is going directly into the OpenGL 3.2 core and also being defined as an extension for older platforms at the same time, so it does not use ARB suffixes, like other such new features going directly into the GL core.
You wrote:
Let's take the GL_ARB_direct_state_access extension. It's available in core since 4.5 via Direct State Access. Is the latter faster than the former? I.e., are the functions implemented in newer versions faster than the extension versions? Or do drivers link to the same function anyway?
GL_ARB_direct_state_access falls into the same category as GL_ARB_sync. The GL functions are identified by name, and two entities having the same name means that they are referencing the very same thing. (You can't export two different functions with the same name in a library, and *glGetProcAddress also takes only the name string as input, so it can't decide which version you wanted if there were more than one).
However, how this situation is dealt with will still depend on your GL loading mechanism, because it might not attempt to load functions which are not implied by the GL version you got. glad2, for example, will just work if you configure it to generate a >= 4.5 loader or to support the GL_ARB_direct_state_access extension.
Since it's a mess to test for available versions and adjust the loader, [...]
Well, that will greatly depend on the loader you are using. As I have shown, there are already options which basically just work, not only in the case of absolutely identical function names, but also with aliased functions.
Are extensions as fast as the same functionality in newer OpenGL versions?
An extension is sort of a preview of the feature. Many (most?) extensions are included in the standard when a new version arrives, so performance will be equal.
You should look at your target platform(s).
When I run OpenGL Extensions Viewer, it tells me that my HD3000 supports all features up to 3.1, 70% of 3.2 / 3.3, and 21% of 4.0.
So you can check in advance if the feature you need is implemented on your target platform(s) with the hardware and drivers you want to use. Most recent hardware will support 4.4 / 4.5 because it's been around for years. It's up to you how much you value backwards compatibility.
Intel graphics from Skylake onward support 4.4, and Skylake has been around since 08/2015. All AMD/NVIDIA hardware will also support 4.4 / 4.5. Note that the support level may vary between OS and driver versions.
I'm concerned about my code, which was developed with OpenGL 3.3 hardware-level support in mind. However, I would like to use certain extensions, and I want to make sure they all require OpenGL version 3.3 or lower before I use them. Let's take an example: ARB_draw_indirect. Its spec says that "OpenGL 3.1 is required"; however, this document seems to state that the extension is not available in 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including the features that require OpenGL 4.x. So how can I test 3.3 compatibility without having to purchase a 3.3-class graphics card? To put it simply, my dev hardware is too powerful compared to what I plan to support, so I'm not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Supposing that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: TextureParameteriEXT() vs TextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality might be exactly like the extension, sometimes it might get tweaked while being promoted to core functionality. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
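For illustration, here is a minimal sketch of that availability test in a core-profile context (GL >= 3.0), where the extension list must be queried entry by entry via glGetStringi; I am using GLEW here only for the declarations, and any loader that resolves glGetStringi will do:

#include <string.h>
#include <GL/glew.h> /* assumes the loader has been initialized already */

/* Returns 1 if the current context advertises the named extension. */
int has_extension(const char *name)
{
    GLint count = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* Usage, with a current context: has_extension("GL_ARB_draw_indirect") */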
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
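To make the difference concrete, here is what the two spellings from your example look like side by side (a sketch; tex stands for an already-created texture name, and you would pick one path, not both):

/* EXT_direct_state_access (usable on a 3.3 context if the extension
   is advertised); note the extra target parameter: */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Core OpenGL 4.5 DSA (no suffix, no target parameter): */
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);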
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Other than that, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.
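Whatever loader you end up with, one cheap runtime sanity check is to query the version the implementation actually handed you (a sketch; these queries exist since OpenGL 3.0):

#include <stdio.h>
#include <GL/glew.h> /* or whatever header your loader provides */

/* Call with a current context: the result may be higher than the
   version you requested at context creation. */
void print_context_version(void)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("Got an OpenGL %d.%d context\n", major, minor);
}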
According to this document, page 6 (released by AMD) (and this topic?), there are some ways to use templates with OpenCL.
However, the first document reports this could be done by using some options with clBuildProgramWithSource, which doesn't seem to exist... Anyway, assuming it is clBuildProgram rather than the former, I attempted to use the so-called "-x" option with "clc++", but it is still not recognized:
warning: ignoring build option: "-x"
In fact, according to the documentation stemming from Khronos, this option is not available!
This document may well be outdated somehow, but are there other ways to use templates inside OpenCL code?
The -x option is available only on the latest AMD OpenCL runtimes which support OpenCL 1.2 and the static C++ language extension. You won't find a word about it in the official Khronos docs because this is all an AMD initiative, and, ultimately, a vendor extension.
I assume you have the right runtime, so your kernel needs to be built with this option:
-x clc++
If you are able to build kernels with classes using this, you should then be able to use templates.
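For reference, here is a sketch of passing that option through clBuildProgram (assuming an AMD OpenCL 1.2 runtime; program and device are placeholders for objects you have already created):

#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

void build_with_static_cpp(cl_program program, cl_device_id device)
{
    cl_int err = clBuildProgram(program, 1, &device, "-x clc++", NULL, NULL);
    if (err != CL_SUCCESS) {
        /* Fetch the build log to see whether "-x" was recognized. */
        size_t log_size = 0;
        clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                              0, NULL, &log_size);
        char *log = malloc(log_size);
        clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                              log_size, log, NULL);
        fprintf(stderr, "build log:\n%s\n", log);
        free(log);
    }
}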
If this doesn't work, it means that either your runtime installation is botched, e.g. you're using the wrong compiler somehow, or it means you do not have the right runtime. If so, please give your platform info.
I've messed with the static C++ extension a while ago and I can testify that -x clc++ does work.
Also beware that using this extension will make your code non-portable and locked in to AMD-compliant devices, as it is unlikely that other vendors will introduce the exact same extension themselves (if ever).
Also, a note on the Khronos docs: the ones returned by Google are typically the OpenCL 1.0 versions, which can be irritating. If you use OpenCL a lot, I recommend downloading the 1.1 or 1.2 standard as well as getting a local copy of the relevant HTML documentation for quick access. It helps.
The new SYCL Khronos standard offers native support for template meta programming on top of OpenCL platforms, including AMD OpenCL ones.
I'm looking at the OpenGL wiki page, and I was curious about the following line:
For reasons that are ultimately irrelevant to this discussion, you must manually load functions via a platform-specific API call. This boilerplate work is done with various OpenGL loading libraries; these make this process smooth. You are strongly advised to use one. —OpenGL Wiki
Intuitively, you would think they would just provide a header for you to include. So, what are the historic reasons for this?
EDIT: Thanks for the answers. I see now that OpenGL supports multiple implementations of its functions, so there is no single DLL/SO that everyone links to. I also found these quotes helpful:
The OpenGL library supports multiple implementations of its functions. From MSDN
To accommodate this variety, we specify ideal behavior instead of actual behavior for certain GL operations. From GL Spec
When you run your program, opengl32.dll gets loaded, and it checks the Windows registry for a true GL driver. If there is one, it will load it. For example, ATI's GL driver name starts with atioglxx.dll and NVIDIA's GL driver is nvoglv32.dll. The actual names change between release versions. From the GL FAQ
I also found that Intel doesn't provide up-to-date OpenGL implementations, so even though I have an i7-2500, I only have OpenGL 3.0 :(.
It has nothing to do with history. That's just how it is.
OpenGL implementations, by and large, come from some form of shared library (DLL/SO). However, this is complicated by the fact that you don't own the OpenGL "library"; it's part of the system infrastructure of whatever platform your code is running on. So you don't even know what specific DLL you might be using on someone else's computer.
Therefore, in order to get OpenGL function pointers, you must use the existing infrastructure to load them.
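As a sketch of what that looks like on Windows (this is the usual pattern that loaders like GLEW and glad automate for every single entry point; the typedef name below is my own, not the official glext.h one):

#include <windows.h>
#include <GL/gl.h>

/* wglGetProcAddress returns NULL (or small sentinel values) on
   failure, and core 1.1 functions must instead be fetched from
   opengl32.dll itself. */
void *get_gl_proc(const char *name)
{
    void *p = (void *)wglGetProcAddress(name);
    if (p == NULL || p == (void *)1 || p == (void *)2 ||
        p == (void *)3 || p == (void *)-1) {
        HMODULE module = LoadLibraryA("opengl32.dll");
        p = (void *)GetProcAddress(module, name);
    }
    return p;
}

/* Usage (requires a current GL context):
   typedef void (APIENTRY *GenBuffersFn)(GLsizei n, GLuint *buffers);
   GenBuffersFn my_glGenBuffers = (GenBuffersFn)get_gl_proc("glGenBuffers"); */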
One of the main reasons is extensions: on every platform, OpenGL has always supported platform-specific extensions. These couldn't be part of the official headers, as they were usually added by vendors in between spec updates. In addition, those vendor-specific extensions may live in a completely different DLL/SO, for instance deep inside the driver. There is also no guarantee that the driver DLL exports them under their "canonical" names, so OpenGL relied very early on platform-specific mechanisms to load function pointers. This is what makes the whole extension mechanism feasible.
On all platforms, you usually do get some OpenGL without using extensions (OpenGL 1.4 or so), but as the extension method was successful and is easy to implement, everyone uses it now (similarly for OpenCL!).
On the Mac, you don't need to load function pointers, you just include a header and link against the OpenGL framework.
In relation to this question on Using OpenGL extensions, what's the purpose of these extension functions? Why would I want to use them? Further, are there any tradeoffs or gotchas associated with using them?
The OpenGL standard allows individual vendors to provide additional functionality through extensions as new technology is created. Extensions may introduce new functions and new constants, and may relax or remove restrictions on existing OpenGL functions.
Each vendor has an alphabetic abbreviation that is used in naming their new functions and constants. For example, NVIDIA's abbreviation (NV) is used in defining their proprietary function glCombinerParameterfvNV() and their constant GL_NORMAL_MAP_NV.
It may happen that more than one vendor agrees to implement the same extended functionality. In that case, the abbreviation EXT is used. It may further happen that the Architecture Review Board "blesses" the extension. It then becomes known as a standard extension, and the abbreviation ARB is used. The first ARB extension was GL_ARB_multitexture, introduced in version 1.2.1. Following the official extension promotion path, multitexturing is no longer an optionally implemented ARB extension, but has been a part of the OpenGL core API since version 1.3.
Before using an extension a program must first determine its availability, and then obtain pointers to any new functions the extension defines. The mechanism for doing this is platform-specific and libraries such as GLEW and GLEE exist to simplify the process.
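With GLEW, for example, the availability test boils down to one boolean per extension (a sketch, assuming a current GL context):

#include <GL/glew.h>
#include <stdio.h>

void check_extensions(void)
{
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "glewInit failed\n");
        return;
    }
    /* GLEW exposes one variable per extension: */
    if (GLEW_ARB_vertex_buffer_object)
        printf("ARB_vertex_buffer_object is available\n");
}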
Extensions are, in general, a way for graphics card vendors to add new functionality to OpenGL without having to wait until the next revision of the OpenGL spec. There are different types of extensions:
Vendor extension - only one vendor provides a certain type of functionality.
Example: NV_vertex_program
Multivendor extension - multiple vendors have gotten together and agreed on the functionality.
Example: EXT_vertex_program
ARB extension - the OpenGL Architecture Review Board has blessed the extension. You have a reasonable expectation that this type of extension will be around for a while.
Example: ARB_vertex_program
Extensions don't have to go through all of these steps. Sometimes an extension is only ever implemented by one vendor, before hardware designs go a different way and the extension is abandoned. Other times, an extension might make it as far as ARB status before everyone decides there's a better way. (The ARB_vertex_program approach, for instance, was set aside in favor of the high-level shading language approach of ARB_vertex_shader when it came time to roll shaders into the core OpenGL spec.) Even ARB extensions don't last forever; I wouldn't write something today requiring ARB_matrix_palette, for instance.
All of that having been said, it's a very good idea to keep up to date on extensions, in particular the latest ARB and EXT extensions. In the past it has been true that some of the 'fast paths' through the hardware were only accessible via extensions. Likewise, if you want to know what functionality a piece of hardware can really offer, there's no better place to look than its vendor-specific extensions.
If you're just getting started in OpenGL, I'd recommend investigating:
ARB_vertex_buffer_object (vertices)
ARB_vertex_shader / ARB_fragment_shader / ARB_shader_objects / GLSL spec (shaders)
More advanced:
ARB/EXT_framebuffer_object (off-screen rendering)
This is all functionality that's been rolled into core, but it can be good to see it in isolation so you can get a better feel for where its boundaries lie. (The core OpenGL spec seamlessly mixes the old with the new, so this can be pretty important if you want to stay on the fast path and avoid the legacy paths, which are sometimes implemented in software.)
Whatever you do, make sure you have appropriate checks for the extensions you decide to use, and fallbacks where necessary. Even though your card may have a given extension, there's no guarantee that the extension will be present on another vendor's card, or even on another operating system with the same card.
OpenGL Extensions are new features added to the OpenGL specification, they are added by the OpenGL standards body and by the various graphics card vendors. These are exposed to the programmer as new function calls or variables. Every new version of the OpenGL specification ships with newer functionality and (typically) includes all the previous functionality and extensions.
The real problem with OpenGL extensions exists only on Windows. Microsoft hasn't supported any extensions that have been released after OpenGL v1.1. The graphics card vendors overcome this by shipping their own version of this functionality through header files and libraries. However, using these can be a bit painful, as the question you linked to shows. But this problem has mostly gone away with the popularity of GLEW, which wraps all this into an easy-to-use package.
If you do use a very recent OpenGL extension, be aware that it may not be supported on older graphics hardware. Other than that, there's no real disadvantage to using these extensions. Most of the extensions that become standard are pretty darn useful, and there's very little reason not to use them.