In OpenGL, a common pattern is to bind a named buffer object (or vertex array object, framebuffer, ...), invoke some operations on the currently bound object, and then bind some kind of "default" object again:
glBindBuffer(GL_ARRAY_BUFFER, bufferObjectName)
glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW)
glBindBuffer(GL_ARRAY_BUFFER, 0)
(LWJGL in Kotlin)
I just discovered that there are "named" versions of many functions. The example from above would become something like
glNamedBufferData(bufferObjectName, data, GL_STATIC_DRAW)
which is way more suitable in an object-oriented context.
So why use the approach from the first example? Every tutorial I came upon used it, so is there some performance loss or a similar downside to the "named" approach?
The named functions are part of the ARB_direct_state_access extension, whose functionality was promoted to core in OpenGL 4.5. Before that, only the bind-to-edit functions existed.
The main reason for using the older style is targeting pre-4.5 hardware. Most tutorials are also written against older versions of OpenGL.
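If you want the named functions where available while still running on older contexts, you can branch at runtime. A minimal sketch in C, assuming a GLAD-style loader that exposes the GLAD_GL_VERSION_4_5 and GLAD_GL_ARB_direct_state_access flags (the extension flag exists only if it was selected when the loader was generated):

#include <glad/glad.h>

/* Upload data into a buffer, using direct state access when the
   context supports it, otherwise the classic bind-to-edit path. */
static void upload_buffer(GLuint buffer, GLsizeiptr size, const void *data)
{
    if (GLAD_GL_VERSION_4_5 || GLAD_GL_ARB_direct_state_access) {
        glNamedBufferData(buffer, size, data, GL_STATIC_DRAW);
    } else {
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
}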
Related
I'm rewriting my rendering library, and I greatly enjoy how glTexStorage2D() works, as well as glBufferStorage(). They're wonderful: they allocate the space and define the object in an immutable way, which works very well with the interface I designed.
However, I just learned they are an OpenGL 4.2 feature. I'm targeting OpenGL 3.3 (I think?) but hoping for a way to avoid glTexImage2D(), whose mutable storage has proven difficult to use safely.
Also... I'm gravely unfamiliar with how OpenGL's version system works, or what it even means to "target" one version over the other...
Is it as simple as just, this functionality is unavailable to me below 4.2 and that's that? Or are there alternative functions I could use? Or if I attempt to just create a 4.5 OpenGL context (the latest version I know of), would the OpenGL driver be required to emulate any features it doesn't have access to on the GPU? Or is that not even remotely how any of that works? I have very little knowledge about it, so maybe this all sounds foolish.
I just really like some of the newer OpenGL 4.x features like immutable storage and tessellation shaders, but I don't want to exclude too many people from running my programs. Would I have to implement an alternative texture allocation path using glTexImage2D() on platforms that can't run OpenGL 4.2?
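(The fallback I have in mind would look roughly like this; a rough sketch in C, assuming a GLAD-style loader that exposes GLAD_GL_VERSION_4_2 and GLAD_GL_ARB_texture_storage, and a fixed RGBA8 format:)

/* Allocate a complete RGBA8 mip chain, preferring immutable storage. */
static void alloc_texture2d(GLuint tex, GLsizei levels, GLsizei w, GLsizei h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    if (GLAD_GL_VERSION_4_2 || GLAD_GL_ARB_texture_storage) {
        glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);
    } else {
        /* Mutable-storage path: allocate every mip level by hand. */
        for (GLsizei i = 0; i < levels; ++i) {
            glTexImage2D(GL_TEXTURE_2D, i, GL_RGBA8, w, h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            w = w > 1 ? w / 2 : 1;
            h = h > 1 ? h / 2 : 1;
        }
        /* Clamp the mip range so the texture is "complete",
           mimicking what glTexStorage2D guarantees. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, levels - 1);
    }
}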
I have a work laptop that only supports OpenGL 2.1, and I have a desktop at home with OpenGL 4.4. I'm working on the project on my desktop, so I made the program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Like this:
#ifdef MODERN_OPENGL
GLuint vbo;
glGenBuffers(1, &vbo);
...
#else
glBegin(GL_TRIANGLES);
...
glEnd();
#endif
What you suggest is perfectly possible; however, if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for that approach is to compile into shared libraries, one built for legacy and one for modern, and load the right variant on demand. But when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move the render path variants into their own compilation units.
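A rough sketch of the shared-library route (POSIX dlopen; the library and symbol names here are made up for illustration):

#include <dlfcn.h>

/* Load one of two render path libraries at startup and resolve
   its entry point. Error handling omitted for brevity. */
void load_render_path(int use_modern)
{
    void *lib = dlopen(use_modern ? "./librender_modern.so"
                                  : "./librender_legacy.so", RTLD_NOW);
    void (*render_init)(void) = (void (*)(void))dlsym(lib, "render_init");
    render_init();
}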
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
If you're worried about performance, keep in mind that literally every runtime environment that allows for polymorphic function overriding has to go through that bottleneck. Yes, it comes at some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect jump circuitry have been optimized to deal with it.
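To illustrate the runtime approach, a minimal function-pointer-table sketch in C (the Mesh type and the per-path draw functions are placeholders, and a GLAD-style version flag is assumed):

#include <glad/glad.h>

typedef struct Mesh Mesh;              /* opaque; defined elsewhere */

void draw_mesh_legacy(const Mesh *m);  /* client-side arrays path */
void draw_mesh_modern(const Mesh *m);  /* VBO/VAO + shaders path  */

/* One entry per operation that differs between render paths. */
struct RenderPath {
    void (*draw_mesh)(const Mesh *m);
};

static struct RenderPath g_path;

/* Call once, right after context creation. */
void select_render_path(void)
{
    g_path.draw_mesh = GLAD_GL_VERSION_3_3 ? draw_mesh_modern
                                           : draw_mesh_legacy;
}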
EDIT: An important note about what "legacy" OpenGL is and what it is not
So here is something very important I forgot to mention in the first place: legacy OpenGL is not glBegin/glEnd. It's about having a fixed-function pipeline by default and vertex arrays being client-side.
Let me reiterate that: legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and filling of vertex array content will work across all of OpenGL. The differences lie in how vertex array data is actually submitted to OpenGL.
In legacy, fixed-function-pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for those attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or earlier via ARB extension), they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL, and in fact in OpenGL-2 you can still point them toward client-side buffers.
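For example, the same client-side vertex array can be submitted through the fixed-function attributes or through generic attributes; only the pointer setup differs. A sketch (attribute locations 0 and 1 are assumed to match the shader):

/* Interleaved position + color, kept in client memory. */
typedef struct { float pos[3]; float rgb[3]; } Vertex;
Vertex verts[3];  /* filled elsewhere */

/* Legacy fixed-function pipeline: predefined attributes. */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].pos);
glColorPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].rgb);
glDrawArrays(GL_TRIANGLES, 0, 3);

/* Programmable pipeline: generic attributes, same layout code. */
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), verts[0].pos);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), verts[0].rgb);
glDrawArrays(GL_TRIANGLES, 0, 3);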
OpenGL-3.3 core made the use of buffer objects mandatory. However, buffer objects are also available for older OpenGL versions (core since OpenGL-1.5, or before that through an ARB extension); you can even use them on the non-programmable GPUs of the past century (effectively the first-generation Nvidia GeForce).
The bottom line is: you can perfectly well write OpenGL code that's compatible with a huge range of version profiles and requires only very little version-specific code to manage the legacy/modern transition.
I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/glEnd blocks, but also no matrix pushing/popping/loading, no pushing/popping of attrib state, and no built-in lighting. Do everything in vertex and fragment shaders with uniforms.
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience, switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL you target, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.
Depending on which utility library / extension loader you're using, you can check at runtime which version the current context supports (via GLAD_GL_VERSION_X_X, glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR), etc.) and create the appropriate renderer.
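With GLFW, for instance, that check could look like this (the renderer factory functions are made up for illustration):

/* Query the version of the context GLFW actually created. */
int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);

if (major > 3 || (major == 3 && minor >= 3))
    renderer = create_modern_renderer();   /* hypothetical factories */
else
    renderer = create_legacy_renderer();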
Is it OK to use functions from OpenGL 1.1 to render textured quads (for buttons) or lines (for paths)? Functions like glBegin, glVertex, or glEnd?
P.S. For 3D models I use VBOs from newer versions of OpenGL.
Supporting compatibility profiles in OpenGL-3.3 and later is optional, so don't expect the legacy functions to be available if your program also makes use of modern features. In general you should not use glBegin/glVertex/glEnd in new code. Even for pathologically simple shapes, using modern OpenGL primitives is simpler and easier to read. The only "downside" is that you also have to specify a shader, and shader setup may be a bit tedious if you're not abstracting it away.
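For comparison, a modern textured quad is just a small vertex buffer drawn as a triangle strip. A sketch (shader and texture setup omitted; attribute locations 0 and 1 for position and UV are assumed):

static const float quad[] = {
    /*  x     y     u     v  */
    0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 1.0f, 0.0f,
    0.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 1.0f, 1.0f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof quad, quad, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void *)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                      (void *)(2 * sizeof(float)));

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);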
In OpenGL what is the difference between glUseProgram() and glUseShaderProgram()?
It seems that in the glext.h provided by Mesa and Nvidia, and in GLEW, both are defined, and both seem to do basically the same thing. I can find documentation for glUseProgram() but not for glUseShaderProgram(). Are they truly interchangeable?
glUseShaderProgramEXT() is part of the EXT_separate_shader_objects extension.
This extension was changed significantly in the version that gained ARB status as ARB_separate_shader_objects. The idea is still the same, but the API looks quite different. The extension spec comments on that:
This extension builds on the proof-of-concept provided by EXT_separate_shader_objects which demonstrated that separate shader objects can work for GLSL.
This ARB version addresses several "loose ends" in the prior EXT extension.
The ARB version of the extension was then adopted as core functionality in OpenGL 4.1. If you're interested in using this functionality, using the core entry points in 4.1 is the preferred approach.
What all of this gives you is a way to avoid having to link the shaders for all the stages into a single program. Instead, you can create program objects that contain shaders for only a subset of the stages. You can then mix and match shaders from different programs without having to re-link them. To track which shaders from which programs are used, a new type of object called a "program pipeline" is introduced.
Explaining this in full detail is beyond the scope of this answer. You will use calls like glGenProgramPipelines(), glBindProgramPipeline(), and glUseProgramStages(). You can find more details and example code on the OpenGL wiki.
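To give a flavor of the 4.1 core API, a sketch (vs_src and fs_src are assumed to be const char * pointers to matching "#version 410" GLSL sources):

/* One separable program per stage. */
GLuint vsProg = glCreateShaderProgramv(GL_VERTEX_SHADER, 1, &vs_src);
GLuint fsProg = glCreateShaderProgramv(GL_FRAGMENT_SHADER, 1, &fs_src);

/* Mix and match the stages through a program pipeline. */
GLuint pipeline;
glGenProgramPipelines(1, &pipeline);
glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT, vsProg);
glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProg);

/* Instead of glUseProgram(), bind the pipeline. */
glBindProgramPipeline(pipeline);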
I am currently learning OpenGL via the 5th Superbible. It teaches you the core profile. But I am really confused.
I know that Khronos removed the fixed-function pipeline in 3.3 and declared some functions deprecated. But the Superbible now just replaces those deprecated functions with its own functions.
Why would Khronos remove something like glRotate or the matrix stack just so that I have to use third-party libraries (or my own) instead of the official ones?
Maybe the Superbible is flawed?
glRotate() etc. were removed because internally OpenGL just consumes matrices, so it is a cleaner design to have you supply the matrices directly.
Almost all OpenGL apps of any complexity are going to be doing a bunch of other matrix work anyway and will have their own matrix classes; it's easier for OpenGL to just take the result rather than insist on building it from a bunch of rotate/translate/scale calls.
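For instance, the fixed-function glRotatef(angle, 0, 0, 1) boils down to a 4×4 matrix you can build yourself and hand to a shader uniform. A sketch (the uniform name u_model is an assumption):

#include <math.h>

void set_rotation_z(GLuint program, float angle_rad)
{
    const float c = cosf(angle_rad), s = sinf(angle_rad);
    const float m[16] = {      /* column-major, as OpenGL expects */
         c,  s,  0,  0,
        -s,  c,  0,  0,
         0,  0,  1,  0,
         0,  0,  0,  1,
    };
    glUniformMatrix4fv(glGetUniformLocation(program, "u_model"),
                       1, GL_FALSE, m);
}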
They could have supplied their own matrix classes, but there are a lot of third-party libraries you can use. One of OpenGL's policies (failings) is that it relies on third-party libraries for anything outside the actual graphics. So beginner programs are a tricky mix of GLUT, GLEW, SDL, etc. just to get anything on the screen, while DirectX has everything out of the box.
Khronos removed these functions from the core profiles, but they are still available in the compatibility ones.
The main reason is one of performance:
In most applications nowadays, the amount of information that must be passed back and forth between the renderer and the application is orders of magnitude larger than it was ten years ago. So the ARB came up with buffers (vertex arrays and vertex buffer objects) to maximize the use of the bandwidth available between the main system and the rendering hardware. However, once you start using the VBO mechanism to transfer data, most of the legacy functions become useless.
That said, besides the need to support legacy applications, which is by itself a sufficient reason for a compatibility profile, I think this API is still useful for learning purposes.
As for your main question, the above is only valid for the full-fledged version of OpenGL, not for OpenGL ES, which doesn't support the old primitives; in that context an emulation layer is necessary.