Using Legacy OpenGL and Modern OpenGL in the same application - C++

I have a work laptop that only supports OpenGL 2.1 and a desktop at home with OpenGL 4.4. I'm working on a project on my desktop, so I made my program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Something like this:
#ifdef MODERN_OPENGL
some code..
glGenBuffers(1, &vbo);
...
#else
glBegin(GL_TRIANGLES);
...
glEnd();
#endif

What you suggest is perfectly possible, but if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for that approach is to build two shared libraries, one for legacy and one for modern OpenGL, and load the right variant on demand. However, when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move the render path variants into their own compilation units.
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
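A minimal sketch of such a function pointer table, with purely illustrative names (Mesh, draw_mesh_*, select_render_path are not from any real library):

#include <cstdio>

struct Mesh { /* vertex data, buffer handles, ... */ };

// In a real project the two variants live in separate compilation units;
// stubs here just to show the dispatch.
static void draw_mesh_legacy(const Mesh &) { std::puts("fixed function path"); }
static void draw_mesh_modern(const Mesh &) { std::puts("shader path"); }

// The "vtable": one function pointer per operation the renderer needs.
struct RenderPath {
    void (*draw_mesh)(const Mesh &);
};

// Chosen once, right after the OpenGL context has been created.
RenderPath select_render_path(int context_major_version) {
    return context_major_version >= 3 ? RenderPath{draw_mesh_modern}
                                      : RenderPath{draw_mesh_legacy};
}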
If you're worried about performance, keep in mind that literally every runtime environment that allows for polymorphic function overriding has to go through that bottleneck. Yes, it does amount to some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect jump circuitry have been optimized to deal with it.
EDIT: Important note about what "legacy" OpenGL is and what not
So here is something very important I forgot to write in the first place: Legacy OpenGL is not glBegin/glEnd. It's about having a fixed function pipeline by default and vertex arrays being client side.
Let me reiterate that: legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and filling of vertex arrays will work for all of OpenGL. The differences are in how vertex array data is actually submitted to OpenGL.
In legacy, fixed function pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for these attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or via ARB extension earlier) they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL. And in fact in OpenGL-2 you can still point them toward client side buffers.
OpenGL-3.3 core made the use of buffer objects mandatory. However buffer objects are also available for older OpenGL versions (core in OpenGL-1.5) or through an ARB extension; you can even use them for the non-programmable GPUs (which means effectively first generation Nvidia GeForce) of the past century.
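For illustration, a hedged sketch of the buffer object calls this whole version range shares (assuming your extension loader has already resolved the entry points; on pre-1.5 contexts the same functions carry an ARB suffix):

#include <GL/gl.h>  // entry points assumed resolved by your extension loader

static const GLfloat vertices[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f,   // one triangle, 2D positions
};

GLuint upload_vertices(void) {
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);                 // core since OpenGL 1.5
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof vertices, vertices, GL_STATIC_DRAW);
    return vbo;
}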
The bottom line is: you can perfectly well write code for OpenGL that's compatible with a huge range of version profiles, and you need only very little version specific code to manage the legacy/modern transition.

I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/End blocks, but also no matrix pushing/popping/loading, no pushing/popping attrib state, no lighting. Do everything in vertex and fragment shaders with uniforms.
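To make that concrete, here is a sketch of a GLSL 1.20 (OpenGL 2.1) shader pair that replaces the matrix stack and glColor with uniforms; the u_mvp/u_color/a_position names are just illustrative:

// vertex shader: the uniform replaces glMatrixMode/glRotatef/glTranslatef
static const char *vs_src =
    "#version 120\n"
    "uniform mat4 u_mvp;\n"
    "attribute vec3 a_position;\n"
    "void main() { gl_Position = u_mvp * vec4(a_position, 1.0); }\n";

// fragment shader: the uniform replaces glColor and the lighting state
static const char *fs_src =
    "#version 120\n"
    "uniform vec4 u_color;\n"
    "void main() { gl_FragColor = u_color; }\n";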
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.

Depending on which utility library / extension loader you're using, you can check at runtime which version is supported by the current context (by checking GLAD_GL_VERSION_X_X, glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR), etc.) and create the appropriate renderer.
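A sketch of that runtime decision, assuming GLAD as the loader and GLFW for the window (the LegacyRenderer/ModernRenderer classes are hypothetical):

#include <glad/glad.h>
#include <GLFW/glfw3.h>

struct Renderer { virtual ~Renderer() = default; };
struct LegacyRenderer : Renderer { /* GL 2.1 code path */ };
struct ModernRenderer : Renderer { /* GL 3.3+ code path */ };

Renderer *create_renderer(GLFWwindow *window) {
    // GLAD_GL_VERSION_3_3 is filled in by the glad loader once the context
    // exists; glfwGetWindowAttrib reports which context we actually got.
    if (GLAD_GL_VERSION_3_3 &&
        glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR) >= 3)
        return new ModernRenderer();
    return new LegacyRenderer();
}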

Related

Is it ok to use OpenGL 1.1?

Is it OK to use functions from OpenGL 1.1 to render quads with textures (for buttons) or lines (for paths)? Functions like glBegin, glVertex or glEnd?
P.S. For 3D models I use VBOs from newer versions of OpenGL.
Supporting compatibility profiles in OpenGL-3.3 and later is optional, so don't expect legacy functions to be available if your program also makes use of modern features. In general you should not use glBegin/glVertex/glEnd in new code. Even for pathologically simple shapes, using modern OpenGL primitives will be simpler and easier to read. The only "downside" is that you also have to specify a shader, and shader setup may be a bit tedious if you're not abstracting it away.
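For example, a hedged sketch of the glBegin(GL_QUADS) replacement for a textured button quad, drawn as a four-vertex triangle strip (attribute locations 0 and 1 are assumed to match your shader):

static const GLfloat quad[] = {
    // x,     y,    u,    v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof quad, quad, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);   // position
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (const void *)0);
glEnableVertexAttribArray(1);   // texture coordinate
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (const void *)(2 * sizeof(GLfloat)));

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);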

Deprecated OpenGL functions

I am currently learning OpenGL via the 5th Superbible. It teaches you the core profile. But I am really confused.
I know that Khronos removed the fixed function pipeline in 3.3 and declared some functions as deprecated. But the Superbible now just replaces those deprecated functions with their own functions.
Why should Khronos remove something like glRotate or the matrix stack just so that I have to use 3rd party libraries (or my own) instead of the official ones?
Maybe the Superbible is flawed?
glRotate() etc. were removed because internally OpenGL deals with the matrices, so it is a cleaner design to just have you supply the matrices directly.
Almost all OpenGL apps of any complexity are going to be doing a bunch of other matrix work anyway and will have their own matrix classes; it's easier for OpenGL to just take the result rather than insist on building it from a bunch of rotate/translate/scale calls.
They could have supplied their own matrix classes, but there are a lot of 3rd party libs you can use. One of OpenGL's policies (failings) is that it relies on 3rd party libs for anything outside the actual graphics. So beginner programs are a tricky mix of GLUT, GLEW, SDL, etc. to get anything on the screen, while DirectX has everything out of the box.
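To illustrate "just take the result", here is a sketch using GLM (one of those 3rd party libraries; the u_mvp uniform name is an assumption, and GLuint plus the gl* calls are assumed to come from your GL loader header):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void set_mvp(GLuint program, float angle_radians) {
    // what glRotatef/gluLookAt/gluPerspective and the matrix stack used to do:
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), angle_radians, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f), glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj  = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);
    glm::mat4 mvp   = proj * view * model;

    // OpenGL just takes the finished matrix as a uniform
    glUniformMatrix4fv(glGetUniformLocation(program, "u_mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
}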
Khronos removed these functions from the core profiles, but they are still available in the compatibility ones.
The main reason is one of performance:
In most applications nowadays, the amount of information which must be passed back and forth between the renderer and the application is orders of magnitude larger than ten years ago. So the ARB came up with the buffers (vertex arrays and vertex buffer objects) to maximize the use of the bandwidth available between the main system and the rendering hardware. However, once you start using the VBO mechanism to transfer data, most of the legacy functions become useless.
That said, besides the need to support legacy applications, which is a sufficient reason for a compatibility profile by itself, I think that this API is still useful for learning purposes.
As for your main question, the above is only valid for the full-fledged version of OpenGL, not the ES one, which doesn't support the old primitives; in that context an emulation layer is necessary.

What Are The Changes To OpenGL From 1.x, 2.x, 3.x And 4.x?

What has changed that makes OpenGL different? I've heard of people not liking OpenGL since OpenGL 3.x, but what happened? I want to learn OpenGL but I don't know which version. I want great graphics with the newer versions, but what's so bad about them?
Generally, every major version of OpenGL is roughly equivalent to a hardware generation. Which means that generally, if your card can run OpenGL 3.0, it can also run OpenGL 3.3 (if you have a sufficiently new driver).
OpenGL 2.x is the DX9-capable generation of hardware, OpenGL 3.x is the DX10, and OpenGL 4.x the DX11 generation of hardware. There is no 100% exact overlap, but this is the general thing.
OpenGL 1.x revolves around immediate mode, which is conceptually very easy to use, and a strictly fixed function pipeline. The entry barrier is very low, because there is hardly anything you have to learn, and hardly anything you can do wrong.
The downside is that you have considerably more library calls, and CPU-GPU parallelism is not optimal in this model. This does not matter so much on old hardware, but becomes more and more important to get the best performance out of newer hardware.
Beginning with OpenGL 1.5, and gradually more and more in 2.x, there is a slight paradigm shift away from immediate mode towards retained mode, i.e. using buffer objects, and a somewhat programmable pipeline. Vertex and fragment shaders are available, with varying feature sets and programmability.
Much of the functionality in these versions was implemented via (often vendor-specific) extensions, sometimes only half-way or in several distinct steps, and more than a few features had non-obvious restrictions or pitfalls for the casual programmer (e.g. register combiners, lack of branching, limits on instructions and dependent texture fetches, vertex texture fetch "support" that allowed zero fetches).
With OpenGL 3.0, fixed function was deprecated but still supported as a backwards-compatibility feature. Almost all of "modern OpenGL" is implemented as core functionality as of OpenGL 3.x, with clear requirements and guarantees, and with an (almost) fully programmable pipeline. The programming model is based entirely on using retained mode and shaders. Geometry shaders are available in addition to vertex and fragment shaders.
Version 3 has received a lot of negative critique, but in my opinion this is not entirely fair. The birth process was admittedly a PR fiasco, but what came out is not all bad. Compared with previous versions, OpenGL 3.x is bliss.
OpenGL 4.x has an additional tessellation shader stage which requires hardware features not present in OpenGL 3.x compatible hardware (although I daresay that's rather a marketing reason than a technical one). There is also support for a new texture compression format that older hardware cannot handle.
Lastly, OpenGL 4.x introduces some API improvements that are irrespective of the underlying hardware. Those are also available under OpenGL 3.x as 100% identical core extensions.
All in all, my recommendation for everyone beginning to learn OpenGL is to start with version 3.3 right away (or 3.2 if you use Apple).
OpenGL 3.x compatible hardware is nearly omnipresent nowadays. There is no sane reason to assume anything older, and you save yourself a lot of pain. From an economic point of view, it does not make sense to support anything older: entry level GL4 cards currently cost around $30, so someone who cannot afford a GL3 card will not be able to pay for your software either (and it is twice as much work to maintain two code paths).
Also, you will eventually have no other choice but to use modern OpenGL, so if you start with 1.x/2.x you will have to unlearn and learn anew later.
On the other hand, diving right into version 4.x is possible, but I advise against it for the time being. Whatever is not dependent on hardware in the API is also available in 3.x, and tessellation (or compute shaders) is something that is usually not strictly necessary at first, and something you can always add later.
For an exact list of changes I suggest you download the specification documents of the latest of each OpenGL major version. At the end of each of these there are several appendices documenting the changes between versions in detail.
Many laptops with Intel integrated graphics designed up to about a year ago do not do OpenGL 3. That includes some expensive business machines, e.g. the $1600 ThinkPad X201, still for sale on Amazon as of today (4/3/13), although Lenovo has stopped making them.
OpenGL 3.1 removed the "fixed function pipeline". That means that writing vertex and fragment shaders is no longer optional: If you want to display anything, you must write them. This makes it harder for the beginner to write "hello world" in OpenGL.
The OpenGL Superbible Rev 5 does a good job of teaching you to use modern OpenGL without falling back on the fixed function pipeline. That's where I would start if I were learning OpenGL from scratch.
Their rev 4 still covers the fixed function pipeline if you want to start with a more "historical" approach.

How do I support different OpenGL versions?

I have two different systems, one with OpenGL 1.4 and one with OpenGL 3. My program uses shaders, which are part of OpenGL 3 and only supported as ARB extensions in the 1.4 implementation.
Since I can't use the OpenGL 3 functions with OpenGL 1.4, is there a way to support both OpenGL versions without writing the same OpenGL code twice (ARB/EXT and v3)?
Unless you really have to support 10 year old graphics cards for some reason, I strongly recommend targeting OpenGL 2.0 instead of 1.4 (in fact, I'd even go as far as targeting version 2.1).
Since using "shaders that are core in 3.0" necessarily means that the graphics card must be capable of at least some version of GLSL, this rules out any hardware that is not capable of providing at least OpenGL 2.0. Which means that if someone has OpenGL 1.4 and can run your shaders, he is using 8-10 year old drivers. There is little to gain (apart from a support nightmare) from that.
Targeting OpenGL 2.1 is reasonable; there are hardly any systems nowadays which don't support it (even assuming a minimum of OpenGL 3.2 may be an entirely reasonable choice).
The market price for an entry level OpenGL 3.3 compatible card with roughly 1000x the processing power of a high end OpenGL 1.4 card was around $25 some two years ago. If you ever intend to sell your application, you have to ask yourself whether someone who cannot afford (or does not want to afford) this would be someone you'd reasonably expect to pay for your software.
Having said that, supporting OpenGL 2.x and OpenGL >3.1 at the same time is a nightmare, because there are non-trivial changes in the shading language which go far beyond #define in varying and which will bite you regularly.
Therefore, I have personally chosen to never again target anything lower than version 3.2 with instanced arrays and shader objects. This works with all hardware that can reasonably be expected to have the processing power to run a modern application, and it includes users who were too lazy to upgrade their driver to 3.3, providing the same features in a single code path. OpenGL 4.x features can be loaded as extensions if available, which is fine.
But, of course, everybody has to decide for himself/herself which shoe fits best.
Enough of my blah blah, back to the actual question:
About not duplicating code for extensions/core, you can in many cases use the same names, function pointers, and constants. However, be warned: As a blanket statement, this is illegal, undefined, and dangerous.
In practice, most (not all!) extensions are identical to the respective core functionality, and work just the same. But how to know which ones you can use and which ones will eat your cat? Look at gl.spec -- a function which has an alias entry is identical and indistinguishable from its alias. You can safely use these interchangeably.
Extensions which are problematic often have an explanatory comment somewhere as well (such as "This is not an alias of PrimitiveRestartIndexNV, since it sets server instead of client state."), but do not rely on these, rely on the alias field.
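For those aliased functions, the practical consequence is that one function pointer can serve both spellings. A sketch (Windows's wglGetProcAddress shown; substitute your platform's loader; PFNGLGENBUFFERSPROC comes from glext.h):

#include <GL/glext.h>   // for the PFNGL... typedefs

// glGenBuffers and glGenBuffersARB are listed as aliases in gl.spec,
// so the same pointer may be used for either entry point.
PFNGLGENBUFFERSPROC gl_gen_buffers =
    (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
if (!gl_gen_buffers)   // pre-1.5 context: fall back to the extension name
    gl_gen_buffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffersARB");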
Like @Nicol Bolas already told you, it's inevitable to create two codepaths for OpenGL-3 core and OpenGL-2, because OpenGL-3 core deliberately breaks compatibility. However, things are not as bad as they might seem: most of the time the code will differ only in nuances, and both codepaths can be written in a single source file using conditional compilation.
For example:
#ifdef OPENGL3_CORE
glVertexAttribPointer(Attribute::Index[Position], 3, GL_FLOAT, GL_FALSE, attribute.position.stride(), attribute.position.data());
glVertexAttribPointer(Attribute::Index[Normal], 3, GL_FLOAT, GL_FALSE, attribute.normal.stride(), attribute.normal.data());
#else
glVertexPointer(3, GL_FLOAT, attribute.position.stride(), attribute.position.data());
glNormalPointer(GL_FLOAT, attribute.normal.stride(), attribute.normal.data());
#endif
GLSL shaders can be reused similarly, using macros to rename occurrences of predefined but deprecated identifiers, or to map keywords introduced in later versions onto their legacy counterparts, e.g.
#ifdef USE_CORE
// core GLSL already provides the in/out keywords and gl_Position
#else
// legacy GLSL: interface variables were all called varying
// (for a vertex shader's inputs, map in to attribute instead)
#define in varying
#define out varying
#endif
Usually you will have a set of standard headers in your program's shader management code from which it builds the final source passed to OpenGL, again depending on the code path used.
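A sketch of that assembly step (compile_shader and the header strings are illustrative, and the GL loader header is assumed included; glShaderSource happily takes the pieces as separate strings and concatenates them in order):

#include <string>

void compile_shader(GLuint shader, bool core_profile, const std::string &body) {
    // #version must be the very first line of the final source
    const char *header = core_profile
        ? "#version 150 core\n#define USE_CORE\n"
        : "#version 120\n";
    const char *sources[2] = { header, body.c_str() };
    glShaderSource(shader, 2, sources, nullptr);
    glCompileShader(shader);
}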
It depends: do you want to use OpenGL 3.x functionality? Not merely use the API, but use the actual hardware features behind that API.
If not, then you can just write against GL 1.4 and rely on the compatibility profile. If you do, then you will need separate codepaths for the different levels of hardware you intend to support. This is standard practice for supporting different levels of hardware functionality.
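For instance, with GLFW (assuming that's your windowing library) the two cases differ only in the context hints you request:

// compatibility path: request nothing specific and take the default context,
// which on most desktop drivers exposes the legacy functions
GLFWwindow *legacy = glfwCreateWindow(800, 600, "legacy", nullptr, nullptr);

// modern path: explicitly request a core profile context
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow *modern = glfwCreateWindow(800, 600, "modern", nullptr, nullptr);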

GPU Usage in a non-GLSL OpenGL Application

I read on the OpenGL Wiki that current modern GPUs are only programmable using shaders:
Modern GPUs no longer support fixed function. Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function. It is recommended that all new modern programs use shaders. New users need not learn fixed function related operations of GL such as glLight, glMaterial, glTexEnv and many others.
Does that mean that if we don't use shaders/GLSL in OpenGL, we don't access the GPU at all and do all the computation on the CPU?
No. It means that all fixed function stuff is automatically converted to shaders by the drivers.
Everything is done with shaders. In order to preserve compatibility, the GL driver generates a shader that simulates the fixed function.
These shaders still run on the GPU (as all shaders do). They are just automatically made for you.
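Purely illustrative: roughly the kind of fragment shader a driver might generate for a single fixed-function directional diffuse light (real generated shaders are more involved; all names here are made up):

static const char *generated_equivalent_fs =
    "#version 120\n"
    "varying vec3 v_normal;\n"
    "uniform vec3 u_light_dir;   // stands in for glLightfv(GL_LIGHT0, ...)\n"
    "uniform vec4 u_diffuse;     // stands in for glMaterialfv(GL_FRONT, GL_DIFFUSE, ...)\n"
    "void main() {\n"
    "    float ndotl = max(dot(normalize(v_normal), normalize(u_light_dir)), 0.0);\n"
    "    gl_FragColor = u_diffuse * ndotl;\n"
    "}\n";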