In OpenGL what is the difference between glUseProgram() and glUseShaderProgram()?
It seems that in the glext.h provided by Mesa and Nvidia, and in GLEW, both are defined, and both seem to do basically the same thing. I can find documentation for glUseProgram() but not for glUseShaderProgram(). Are they truly interchangeable?
glUseShaderProgramEXT() is part of the EXT_separate_shader_objects extension.
This extension was changed significantly in the version that gained ARB status as ARB_separate_shader_objects. The idea is still the same, but the API looks quite different. The extension spec comments on that:
This extension builds on the proof-of-concept provided by EXT_separate_shader_objects which demonstrated that separate shader objects can work for GLSL.
This ARB version addresses several "loose ends" in the prior EXT extension.
The ARB version of the extension was then adopted as core functionality in OpenGL 4.1. If you're interested in using this functionality, using the core entry points in 4.1 is the preferred approach.
What all of this gives you is a way to avoid having to link the shaders for all the stages into a single program. Instead, you can create program objects that contain shaders for only a subset of the stages. You can then mix and match shaders from different programs without having to re-link them. To track which shaders from which programs are used, a new type of object called a "program pipeline" is introduced.
Explaining this in full detail is beyond the scope of this answer. You will use calls like glGenProgramPipelines(), glBindProgramPipeline(), and glUseProgramStages(). You can find more details and example code on the OpenGL wiki.
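To give a flavour of how the 4.1 core entry points fit together, here is a minimal sketch (vsSource and fsSource are placeholder GLSL source strings):

/* Create two separable programs, one per stage. glCreateShaderProgramv
   compiles and links a single-stage program marked GL_PROGRAM_SEPARABLE. */
GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vsSource);
GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fsSource);

/* Mix and match the stages in a program pipeline object. */
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProg);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);
glBindProgramPipeline(pipeline);   /* subsequent draws use this stage combination */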
Related
When should I load one or the other in my own loader code? On Xorg + Mesa-based systems there is no guarantee that a null pointer is returned when a feature is not supported by the current context, which can result in catastrophic function calls. Khronos recommends looking up the extension name in the result of glGetString(GL_EXTENSIONS).
Different sources associate both of those with "ARB_multitexture"
The way OpenGL code typically works with a loader is that it has some baseline version of OpenGL functionality. If the implementation cannot provide even that version of GL, then the program simply cannot execute and shuts down. If it has some optional features that it can work with, then it uses extensions to test if those are available.
So you would load core OpenGL functions up to and including that version, but you would then rely on extensions for everything else.
Since glActiveTexture has been core since OpenGL 1.3, if your minimum GL version is 2.1, then you would use glActiveTexture and not care about the extension version of the function. If the minimum version is 1.1, you could use glActiveTextureARB and ignore the core version even if the implementation supports it.
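A sketch of what that decision might look like in a hand-rolled loader; have_gl_version(), has_extension(), and loadProc() are hypothetical helpers (loadProc stands in for wglGetProcAddress/glXGetProcAddress):

/* Baseline GL 1.1: multitexture is only used if the context provides it,
   either as core 1.3+ or via ARB_multitexture. The PFNGLACTIVETEXTUREARBPROC
   typedef comes from glext.h; core and ARB variants share the same signature. */
PFNGLACTIVETEXTUREARBPROC myActiveTexture = NULL;

if (have_gl_version(1, 3)) {
    myActiveTexture = (PFNGLACTIVETEXTUREARBPROC)loadProc("glActiveTexture");
} else if (has_extension("GL_ARB_multitexture")) {
    myActiveTexture = (PFNGLACTIVETEXTUREARBPROC)loadProc("glActiveTextureARB");
}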
The problem you may eventually face is that some core functionality doesn't have an exact extension equivalent, or any extension equivalent at all. A good example is GLSL: the ARB extensions that provide access to it (ARB_shader_objects and the rest) have APIs that are very different from the core GL 2.0 functions, not just in the ARB suffix but even in the type of the shader objects. You can't transparently write code that works with both.
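Just to illustrate the mismatch (not a complete example):

GLhandleARB hs = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);  /* ARB_shader_objects */
GLuint      s  = glCreateShader(GL_VERTEX_SHADER);               /* core GL 2.0 */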
I have a work laptop that only supports OpenGL 2.1 and a desktop at home with OpenGL 4.4. I'm working on a project on my desktop, so I've made my program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Like this.
#ifdef MODERN_OPENGL
glGenBuffers(1, &vbo);
...
#else
glBegin(GL_TRIANGLES);
...
glEnd();
#endif
What you suggest is perfectly possible, however if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for your approach is to compile into shared libraries, one compiled for legacy and one for modern and load the right variant on demand. However when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move render path variants into their own compilation units.
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
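To make the function-pointer-table idea concrete, here is a small stand-alone sketch in which the actual GL calls are replaced by stubs and the version check is hard-coded; only the dispatch mechanism is the point:

#include <stdio.h>

/* A tiny function-pointer table ("vtable") for two render paths. */
typedef struct {
    void (*begin_frame)(void);
    void (*draw_triangles)(const float *verts, int count);
} RenderPath;

/* The real bodies would contain glBegin/client-array vs. VAO/VBO code;
   here they are stubs so the dispatch itself stays visible. */
static void legacy_begin(void) { puts("legacy: fixed-function setup"); }
static void legacy_draw(const float *v, int n) { (void)v; (void)n; puts("legacy: client-side arrays"); }
static void modern_begin(void) { puts("modern: bind shader program"); }
static void modern_draw(const float *v, int n) { (void)v; (void)n; puts("modern: VAO/VBO draw"); }

static const RenderPath legacy_path = { legacy_begin, legacy_draw };
static const RenderPath modern_path = { modern_begin, modern_draw };

int main(void)
{
    int gl_major = 3;   /* would come from glGetIntegerv(GL_MAJOR_VERSION, ...) */
    const RenderPath *rp = (gl_major >= 3) ? &modern_path : &legacy_path;

    rp->begin_frame();          /* the rest of the renderer is path-agnostic */
    rp->draw_triangles(NULL, 0);
    return 0;
}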
If you're worried about performance, keep in mind that literally every runtime environment that allows polymorphic function overriding has to go through such an indirection. Yes, it does amount to some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect-jump prediction circuitry has been optimized to deal with it.
EDIT: Important note about what "legacy" OpenGL is and what not
So here is something very important I forgot to write in the first place: Legacy OpenGL is not glBegin/glEnd. It's about having a fixed function pipeline by default and vertex arrays being client side.
Let me reiterate that: legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and filling of vertex arrays will work for all of OpenGL. The differences are in how vertex array data is actually submitted to OpenGL.
In legacy, fixed-function-pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for these attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or via ARB extension earlier) they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL. And in fact in OpenGL-2 you can still point them toward client side buffers.
OpenGL-3.3 core made the use of buffer objects mandatory. However buffer objects are also available for older OpenGL versions (core in OpenGL-1.5) or through an ARB extension; you can even use them for the non-programmable GPUs (which means effectively first generation Nvidia GeForce) of the past century.
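As a sketch of how small the difference is, here is the same attribute setup done both ways; 'verts' is assumed to be a local GLfloat array with 3 floats per vertex:

/* OpenGL 2.x: the pointer may reference client memory directly. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, verts);

/* OpenGL 3.3 core: the data must live in a buffer object (and a VAO must be
   bound); the last parameter becomes a byte offset into the bound buffer. */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);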
The bottom line is: you can perfectly fine write code for OpenGL that's compatible with a huge range of version profiles and requires only very little version-specific code to manage the legacy/modern transition.
I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/End blocks, but also no matrix pushing/popping/loading, no pushing/popping attrib state, no lighting. Do everything in vertex and fragment shaders with uniforms.
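As a sketch of what "uniforms instead of built-in state" means in practice, a GLSL #version 120 shader pair (written here as C string literals; the names are placeholders) can replace the matrix stack and fixed-function color like this:

/* Vertex and fragment shaders usable on OpenGL 2.1. */
static const char *vsSrc =
    "#version 120\n"
    "uniform mat4 u_mvp;\n"
    "attribute vec3 a_position;\n"
    "void main() { gl_Position = u_mvp * vec4(a_position, 1.0); }\n";

static const char *fsSrc =
    "#version 120\n"
    "uniform vec4 u_color;\n"
    "void main() { gl_FragColor = u_color; }\n";

The same shaders port to #version 330 core mostly by swapping attribute for in and gl_FragColor for a user-declared output variable.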
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.
Depending on which utility library / extension loader you're using, you can check at runtime which version is supported by the current context (e.g. by checking GLAD_GL_VERSION_X_X, or glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR)), and create the appropriate renderer.
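For example, with GLAD (version 1) and GLFW the check could look roughly like this; create_modern_renderer() and create_legacy_renderer() are hypothetical factory functions:

/* Assumes a GLFW window and its OpenGL context are already current. */
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
    /* loader failed, bail out */
}

if (GLAD_GL_VERSION_3_3) {
    renderer = create_modern_renderer();
} else if (GLAD_GL_VERSION_2_1) {
    renderer = create_legacy_renderer();
} else {
    /* context too old for either path */
}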
I'm working on an OpenGL project, and I'm looking for a triangulation/tessellation functionality. I see a lot of references to the GLUtessellator and related gluTess* functions (e.g., here).
I'm also using GLFW, which repeats over and over again in its guides that:
GLU has been deprecated and should not be used in new code, but some legacy code requires it.
Does this include the tessellation capability? Would it be wise to look into a different library to create complex polygons in OpenGL?
GLU is a library. While it makes OpenGL calls, it is not actually part of OpenGL. It is not defined by the OpenGL specification. So that specification cannot "deprecate" or remove it.
However, GLU does most of its work through OpenGL functions that were removed from core OpenGL. GLU should not be used if you are trying to use core OpenGL stuff.
As a small addition: besides the original gluTess* implementation there are also more modern alternatives that follow the original concept in terms of simplicity and universality.
A notable alternative is Libtess2, a refactored version of the original libtess.
https://github.com/memononen/libtess2
It uses a different API which loosely resembles the OpenGL vertex array API. However, libtess2 seems to outperform the original GLU reference implementation "by orders of magnitude". ;-)
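If I remember the libtess2 API correctly, a basic call sequence looks roughly like this; 'contour' and 'numPoints' are placeholders for your polygon data:

#include "tesselator.h"   /* from libtess2 */

/* Triangulate one 2D contour: odd winding rule, plain triangles (polySize = 3). */
TESStesselator *tess = tessNewTess(NULL);
tessAddContour(tess, 2, contour, sizeof(float) * 2, numPoints);

if (tessTesselate(tess, TESS_WINDING_ODD, TESS_POLYGONS, 3, 2, NULL)) {
    const TESSreal  *verts  = tessGetVertices(tess);
    const TESSindex *elems  = tessGetElements(tess);
    int              nelems = tessGetElementCount(tess);
    /* each element is 3 indices into 'verts' (x,y pairs); feed these to OpenGL */
}
tessDeleteTess(tess);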
The tessellation functionality that is core since OpenGL 4.0 requires GPU hardware with explicit support for it. (This is most likely the case for all DirectX 11 and newer compatible hardware.) More information regarding the current OpenGL tessellation concept can be found here:
https://www.khronos.org/opengl/wiki/Tessellation
This newer algorithm is better when mesh quality, robustness, Delaunay conformance and other optimisations are needed.
It generates the mesh and the outline in one pass for cases where gluTess needs several.
It supports many more modes, but is 100% compatible with the GLU modes.
It is programmed and optimised for C++ x86/x64 with an installable COM interface.
It can also be used from C# without COM registration.
The same implementation is also available as a C# version, which is about half as fast (on the current .NET Framework 4.7.2).
It computes with a Variant type that supports different formats: float, double and Rational for unlimited precision and robustness. In this mode the machine epsilon is 0 and no rounding errors can occur: absolute precision.
If you want to achieve similar quality with the gluTess algorithm, it takes about three times as long, including complex corrections to remove T-junctions and optimise the mesh.
The code is free at:
https://github.com/c-ohle/CSG-Project
I'm concerned about my code, which was developed with OpenGL 3.3 hardware-level support in mind. I would like to use certain extensions, and I want to make sure they all require OpenGL version 3.3 or lower before I use them. Let's take an example, ARB_draw_indirect. Its spec says that "OpenGL 3.1 is required", yet according to this document the extension also seems not to be available in 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including those features that require OpenGL 4.x. So I would like to know how I can test 3.3 compatibility without having to purchase a 3.3-class graphics card. To put it simply, my dev hardware is too powerful compared to what I plan to support, and thus I'm not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Supposing that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: glTextureParameteriEXT() vs. glTextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality might be exactly like the extension, sometimes it might get tweaked while being promoted to core functionality. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
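A sketch of what this looks like in code, picking the entry point based on what the context provides; context_version_at_least() and has_extension() are hypothetical helpers, and 'tex' is an existing texture object name. Note that the signatures also differ: the EXT variant still takes a texture target.

if (context_version_at_least(4, 5)) {
    glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);                   /* core 4.5 DSA */
} else if (has_extension("GL_EXT_direct_state_access")) {
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); /* EXT DSA */
} else {
    glBindTexture(GL_TEXTURE_2D, tex);                                            /* classic bind-to-edit */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}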
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Other than that, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.
I'm looking at the OpenGL wiki page, and I was curious about the following line:
For reasons that are ultimately irrelevant to this discussion, you must
manually load functions via a platform-specific API call. This boilerplate
work is done with various OpenGL loading libraries; these make this process
smooth. You are strongly advised to use one. —OpenGL Wiki
Intuitively, you would think they would just provide a header for you to include. So, what are the historic reasons for this?
EDIT :
Thanks for the answers. I see now that OpenGL supports multiple implementations of its functions, so there is no single DLL/SO that everyone links to. I also found these quotes helpful:
The OpenGL library supports multiple implementations of its functions. From MSDN
To accommodate this variety, we specify ideal behavior instead of actual behavior for certain GL operations. From GL Spec
When you run your program, opengl32.dll gets loaded and it checks in the Windows registry if there is a true GL driver. If there is, it will load it. For example, ATI's GL driver name starts with atioglxx.dll and NVIDIA's GL driver is nvoglv32.dll. The actual names can change between release versions. From the GL FAQ
I also found that Intel doesn't provide up-to-date implementation for OpenGL, so even though I have an i7-2500, I only have OpenGL 3.0 :(.
It has nothing to do with history. That's just how it is.
OpenGL implementations, by and large, come from some form of shared library (DLL/SO). However, this is complicated by the fact that you don't own the OpenGL "library"; it's part of the system infrastructure of whatever platform your code is running on. So you don't even know what specific DLL you might be using on someone else's computer.
Therefore, in order to get OpenGL function pointers, you must use the existing infrastructure to load them.
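A Windows-flavoured sketch of what a loading library does for every entry point (on X11 the analogous call is glXGetProcAddress); the typedef name and function here are illustrative, not from any particular library:

#include <windows.h>
#include <GL/gl.h>

/* Assumes an OpenGL context is already current on this thread. */
typedef GLuint (APIENTRY *CreateProgramFn)(void);

static CreateProgramFn pglCreateProgram;

void load_entry_points(void)
{
    /* Core-1.1 functions come from opengl32.dll; everything newer has to be
       fetched from the driver through wglGetProcAddress(). */
    pglCreateProgram = (CreateProgramFn)wglGetProcAddress("glCreateProgram");
    if (!pglCreateProgram) {
        /* the function is not available in this context (e.g. pre-2.0 driver) */
    }
}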
One of the main reasons is extensions: on every platform, OpenGL has always supported vendor- and platform-specific extensions. These couldn't be part of the official headers, as they were usually added by the vendors in between spec updates. In addition, those vendor-specific functions may live in a completely different DLL/SO, for instance deep inside the driver. There is also no guarantee that the driver DLL exports them under their "canonical" name, so OpenGL relied very early on a platform-specific mechanism for looking up function pointers. This is what makes the whole extension system feasible.
On all platforms you usually do get some OpenGL without using extensions (OpenGL 1.4 or so), but since the extension mechanism proved successful and is easy to implement, everyone uses it now (the same is true for OpenCL).
On the Mac you don't need to load function pointers; you just include a header and link against the OpenGL framework.