Will I need to use glTexImage if OpenGL 4.2 is unavailable? - c++

I'm rewriting my rendering library, and greatly enjoy how glTexStorage2D() works, as well as glBufferStorage(). They're wonderful, allocating the space and defining the object in an immutable way, which works very well with the interface I designed.
However, I just learned it is an OpenGL 4.2 feature. I'm targeting OpenGL 3.3 (I think?), but hoping for a way to avoid glTexImage2D(), whose mutable storage has proven difficult to use safely.
Also... I'm gravely unfamiliar with how OpenGL's version system works, or what it even means to "target" one version over the other...
Is it simply that this functionality is unavailable to me below 4.2 and that's that? Or are there alternative functions I could use? Or if I attempt to create a 4.5 OpenGL context (the latest version I know of), would the OpenGL driver be required to emulate any features the GPU doesn't support? Or is that not even remotely how any of this works? I have very little knowledge about it, so maybe this all sounds foolish.
I just really like some of the newer OpenGL 4.x features like immutable storage and tessellation shaders, but don't want to exclude too many people from running my programs. Would I have to implement an alternative texture allocation method using glTexImage() if a platform isn't capable of running OpenGL 4.2?
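For concreteness, here's roughly the kind of fallback I have in mind; this is only a sketch, and width/height/levels plus the has_texture_storage flag are placeholders for whatever my loader actually provides:

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
if (has_texture_storage) {
    // GL 4.2+, or the ARB_texture_storage extension on older contexts
    glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, width, height);
} else {
    // allocate each mip level separately with mutable storage, no data yet
    int w = width, h = height;
    for (GLint level = 0; level < levels; ++level) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        w = (w > 1) ? w / 2 : 1;
        h = (h > 1) ? h / 2 : 1;
    }
    // clamp the mip range so the texture is complete with only 'levels' levels
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, levels - 1);
}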

Using Legacy OpenGL and Modern OpenGL in same application

I have a work laptop that only supports OpenGL 2.1 and a desktop at home with OpenGL 4.4. I'm working on a project on my desktop, so I've made my program compatible with modern OpenGL. But I also want to develop this project on my work laptop. My question is: can I make this project compatible with both legacy and modern OpenGL?
Like this:
#ifdef MODERN_OPENGL
    // modern path: buffer objects, shaders
    glGenBuffers(1, &vbo);
    ...
#else
    // legacy path: immediate mode
    glBegin(GL_TRIANGLES);
    ...
    glEnd();
#endif
What you suggest is perfectly possible; however, if you do it through preprocessor macros you're going to end up in conditional compilation hell. The best bet for your approach is to compile into shared libraries, one built for legacy and one for modern, and load the right variant on demand. However, when approaching it from that direction you can just as well ditch the preprocessor juggling and simply move the render path variants into their own compilation units.
Another approach is to decide on what render path to use at runtime. This is my preferred approach and I usually implement it through a function pointer table (vtable). For example the volume rasterizer library I offer has full support for OpenGL-2.x and modern core profiles and will dynamically adjust its code paths and the shaders' GLSL code to match the capabilities of the OpenGL context it's being used in.
If you're worried about performance, keep in mind that literally every runtime environment that allows for polymorphic function overriding has to go through that bottleneck. Yes, it does amount to some cost, but OTOH it's so common that modern CPUs' instruction prefetch and indirect jump circuitry has been optimized to deal with it.
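To sketch what such a function pointer table might look like (the names and the trivial draw functions here are placeholders, not code from my library):

// assumes a GL header from your extension loader is included and a context is current

// hypothetical per-path draw functions; real ones would live in their own
// compilation units
static void draw_frame_legacy() { /* glBegin/glEnd or client-side arrays */ }
static void draw_frame_modern() { /* VAOs, VBOs, shaders */ }

struct RenderPath {
    void (*draw_frame)();
};

static const RenderPath legacy_path = { draw_frame_legacy };
static const RenderPath modern_path = { draw_frame_modern };

// chosen once, right after context creation
static const RenderPath *select_render_path()
{
    GLint major = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);        // this query exists since GL 3.0
    if (glGetError() != GL_NO_ERROR || major < 3)   // older contexts reject it
        return &legacy_path;
    return &modern_path;
}

The rest of the code then only ever calls through the table, e.g. select_render_path()->draw_frame();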
EDIT: Important note about what "legacy" OpenGL is and what not
So here is something very important I forgot to write in the first place: Legacy OpenGL is not glBegin/glEnd. It's about having a fixed function pipeline by default and vertex arrays being client side.
Let me reiterate that: legacy OpenGL-1.1 and later does have vertex arrays! What this effectively means is that large amounts of code concerned with the layout and filling of vertex arrays will work for all of OpenGL. The differences are in how vertex array data is actually submitted to OpenGL.
In legacy, fixed-function-pipeline OpenGL you have a number of predefined attributes and functions which you use to point OpenGL toward the memory regions holding the data for those attributes before making the glDraw… call.
When shaders were introduced (OpenGL-2.x, or via ARB extension earlier) they came along with the very same glVertexAttribPointer functions that are still in use with modern OpenGL. And in fact in OpenGL-2 you can still point them toward client side buffers.
OpenGL-3.3 core made the use of buffer objects mandatory. However, buffer objects are also available in older OpenGL versions (core since OpenGL-1.5, earlier through an ARB extension); you can even use them on the non-programmable GPUs (which effectively means first-generation Nvidia GeForce) of the past century.
The bottom line is: you can perfectly well write code for OpenGL that's compatible with a huge range of version profiles, and it requires only very little version-specific code to manage the legacy/modern transition.
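To illustrate (just a sketch; the Vertex struct and function names are made up), both paths consume exactly the same in-memory layout, and only the submission calls differ:

#include <cstddef>   // offsetof

struct Vertex { float pos[3]; float uv[2]; };

// legacy / fixed-function submission (client-side arrays, available since GL 1.1)
void point_legacy(const Vertex *verts)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].pos);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), verts[0].uv);
}

// modern submission (buffer object + generic attributes, GL 2.0 and 3.3 core)
void point_modern(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, pos));
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, uv));
}

The code that builds and fills the Vertex array is identical in both cases; that is the part you get to share.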
I would start by writing your application using the "new" OpenGL 3/4 core API, but restrict yourself to the subset that is supported in OpenGL 2.1. As datenwolf points out above, you have vertex attribute pointers and buffers even in 2.1.
So no glBegin/End blocks, but also no matrix pushing/popping/loading, no pushing/popping attrib state, no lighting. Do everything in vertex and fragment shaders with uniforms.
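For example (only a sketch; the uniform and attribute names are arbitrary), a GLSL 1.20 vertex shader that takes the matrix as a uniform works the same way in 2.1 and, with minor syntax changes, in a 3.x/4.x core profile:

static const char *vertex_src =
    "#version 120\n"
    "uniform mat4 u_mvp;\n"            // built on the CPU with your own matrix code
    "attribute vec3 a_position;\n"
    "void main() {\n"
    "    gl_Position = u_mvp * vec4(a_position, 1.0);\n"
    "}\n";

// each frame, instead of glLoadMatrix/glMultMatrix:
// GLint loc = glGetUniformLocation(program, "u_mvp");
// glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);   // mvp: 16 floats, column-major

For a core profile you would change "attribute" to "in" and bump the #version, but the structure stays the same.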
Restricting yourself to 2.1 will be a bit more painful than using the cool new stuff in OpenGL 4, but not by much. In my experience switching away from the matrix stack and built-in lighting is the hardest part regardless of which version of OpenGL, and it's work you were going to have to do anyway.
At the end you'll have a single code version, and it will be easier to update if/when you decide to drop 2.1 support.
Depending on which utility library / extension loader you're using, you can check at runtime which version is supported by the current context (by checking GLAD_GL_VERSION_X_X, glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR/MINOR), etc.) and create the appropriate renderer.
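A sketch of such a check, assuming GLAD and GLFW; renderer and the two factory functions are placeholders for whatever your code actually has:

// after glfwMakeContextCurrent(window) and gladLoadGL()
if (GLAD_GL_VERSION_3_3) {
    renderer = make_gl3_renderer();      // hypothetical factory, core-profile path
} else if (GLAD_GL_VERSION_2_1) {
    renderer = make_gl2_renderer();      // hypothetical factory, legacy 2.1 path
} else {
    // context too old: report an error and bail out
}

// the same information via GLFW window attributes:
int major = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MAJOR);
int minor = glfwGetWindowAttrib(window, GLFW_CONTEXT_VERSION_MINOR);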

What Are The Changes To OpenGL From 1.x, 2.x, 3.x And 4.x?

What has changed that makes OpenGL different? I've heard of people not liking OpenGL since OpenGL 3.x, but what happened? I want to learn OpenGL, but I don't know which version. I want great graphics with the newer versions, but what's so bad?
Generally, every major version of OpenGL is roughly equivalent to a hardware generation, which means that if your card can run OpenGL 3.0, it can generally also run OpenGL 3.3 (given a sufficiently new driver).
OpenGL 2.x is the DX9-capable generation of hardware, OpenGL 3.x is the DX10, and OpenGL 4.x the DX11 generation of hardware. There is no 100% exact overlap, but this is the general thing.
OpenGL 1.x revolves around immediate mode, which is conceptually very easy to use, and a strictly fixed function pipeline. The entry barrier is very low, because there is hardly anything you have to learn, and hardly anything you can do wrong.
The downside is that you have considerably more library calls, and CPU-GPU parallelism is not optimal in this model. This does not matter so much on old hardware, but becomes more and more important to get the best performance out of newer hardware.
Beginning with OpenGL 1.5, and gradually more and more in 2.x, there is a slight paradigm shift away from immediate mode towards retained mode, i.e. using buffer objects, and a somewhat programmable pipeline. Vertex and fragment shaders are available, with varying feature sets and programmability.
Much of the functionality in these versions was implemented via (often vendor-specific) extensions, sometimes only half-way or in several distinct steps, and quite a few features had non-obvious restrictions or pitfalls for the casual programmer (e.g. register combiners, lack of branching, limits on instruction counts and dependent texture fetches, or vertex texture fetch "support" with a limit of zero fetches).
With OpenGL 3.0, fixed function was deprecated but still supported as a backwards-compatibility feature. Almost all of "modern OpenGL" is implemented as core functionality as of OpenGL 3.x, with clear requirements and guarantees, and with an (almost) fully programmable pipeline. The programming model is based entirely on using retained mode and shaders. Geometry shaders are available in addition to vertex and fragment shaders.
Version 3 has received a lot of negative critique, but in my opinion this is not entirely fair. The birth process was admittedly a PR fiasco, but what came out is not all bad. Compared with previous versions, OpenGL 3.x is bliss.
OpenGL 4.x has an additional tessellation shader stage which requires hardware features not present in OpenGL 3.x compatible hardware (although I daresay that's rather a marketing reason, not a technical one). There is also support for a new texture compression format that older hardware cannot handle.
Lastly, OpenGL 4.x introduces some API improvements that are irrespective of the underlying hardware. Those are also available under OpenGL 3.x as 100% identical core extensions.
All in all, my recommendation for everyone beginning to learn OpenGL is to start with version 3.3 right away (or 3.2 if you use Apple).
OpenGL 3.x compatible hardware is nearly omnipresent nowadays. There is no sane reason to assume anything older, and you save yourself a lot of pain. From an economic point of view, it does not make sense to support anything older: entry-level GL4 cards currently cost around $30, so someone who cannot afford a GL3 card will not be able to pay for your software either (and maintaining two code paths is twice as much work).
Also, you will eventually have no other choice but to use modern OpenGL, so if you start with 1.x/2.x you will have to unlearn and learn anew later.
On the other hand, diving right into version 4.x is possible, but I advise against it for the time being. Whatever is not dependent on hardware in the API is also available in 3.x, and tessellation (or compute shaders) is usually not something you strictly need at once; you can always add it later.
For an exact list of changes I suggest you download the specification documents of the latest of each OpenGL major version. At the end of each of these there are several appendices documenting the changes between versions in detail.
Many laptops with Intel integrated graphics designed up to about a year ago do not do OpenGL 3. That includes some expensive business machines, e.g. the $1600 Thinkpad X201, still for sale on Amazon as of today (4/3/13), although Lenovo has stopped making them.
OpenGL 3.1 removed the "fixed function pipeline". That means that writing vertex and fragment shaders is no longer optional: If you want to display anything, you must write them. This makes it harder for the beginner to write "hello world" in OpenGL.
The OpenGL Superbible Rev 5 does a good job of teaching you to use modern OpenGL without falling back on the fixed function pipeline. That's where I would start if I were learning OpenGL from scratch.
Their rev 4 still covers the fixed function pipeline if you want to start with a more "historical" approach.

Which version of OpenGL should I learn? [duplicate]

Possible Duplicate:
Which version of OpenGL to use?
I have been wanting to learn a 3d graphics language for some time now and I have finally decided to learn OpenGL.
However, I work on a Mac, and officially the highest version of OpenGL for Mac is 2.1, but from tests I have done it can support 3.3 unofficially.
I would like to develop applications that would work on multiple platforms, but which version would be the best to learn?
A good compromise between portability and still learning the "modern OpenGL way" is roughly "the OpenGL ES 2.0 subset of OpenGL 2.1". That gives you portability to
OSX, as you mention
Windows, obviously
Linux with open source drivers (for higher OpenGL versions and better performance you need the proprietary drivers, which you might prefer anyway, but some people like to avoid those)
Smartphone platforms like iOS and Android.
OpenGL 1.x is even more portable (e.g. older iOS and Android releases support only OpenGL ES 1.x) but the classical fixed-function programming model is somewhat different than the modern one based on buffer objects and shaders, and use of immediate mode easily leads to performance issues when rendering lots of vertices. So probably not worth it, IMHO.
My recommendation would be to learn no less than version 3.2. If 3.3 is supported (even unofficially), go for that.
OpenGL 3.3 is by now "last generation" rather than "bleeding edge". You have to search hard to find a card that does not support OpenGL 3.3, and you can get 4.x capable cards in the $30 range.
Under version 2.x, you must go through a lot of pain to ensure that even the most basic functionality that you use every day is available, and you end up writing two or three code paths depending on what extension you must use and on what some limit is.
Under version 3.3, most features that you want to use every day are core (guaranteed standard), and most limits have a guaranteed minimum value that is enough for most things anyway. The features that are not core in 3.3 are few (and you won't die if you don't have them), and you can pretty much just plug them in optionally if they're there, and forget about them if they aren't.
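Checking for such an optional feature is only a few lines (a sketch; GL_ARB_texture_storage is picked purely as an example of something not core in 3.3):

#include <cstring>

bool has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);            // core since GL 3.0
    for (GLint i = 0; i < count; ++i) {
        const char *ext =
            reinterpret_cast<const char *>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// if (has_extension("GL_ARB_texture_storage")) { /* use glTexStorage2D */ }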
There is a huge change in paradigms between 2.1 and 3.3 (which you will have to re-learn later if you start with 2.x first!), and there are notable changes in GLSL between 3.1 and 3.2 which make writing shader code that works for both an ordeal, or impossible.
Upwards of version 3.2, everything is smooth. New features are available or they aren't... use them or don't... but you can in principle write one piece of code to run on all versions.
If your goal is maximum interoperability, I would rather take a look at WebGL, or its close relative, OpenGL ES. The concepts of OpenGL ES (at least in the 2.0 version) are quite close to those of OpenGL 4 (buffer-based data transfer, universal shaders, etc.).
I think that by learning 2.1 you would learn some outdated concepts you will soon have to re-learn, like immediate mode, or rather the whole fixed-function pipeline, which was pruned in later versions.
You can safely start learning 3.x too, as you will learn the current concepts and features. Do not worry about the "officially supported" version.

Porting a project to OpenGL3

I'm working on a C++ cross-platform OpenGL application (Windows, Linux and MacOS) and I am wondering if some of you could share some advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is that I think we could benefit a lot from using the new "sync objects". Nvidia has supported such an extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+...
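For context, this is the sort of usage we're after (a sketch of the GL 3.2 / ARB_sync API as I understand it, not our actual code):

// issue a fence after submitting work we will later depend on
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

// ... later: wait, with a timeout of 16 ms (in nanoseconds), until the GPU
// has passed the fence
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 16 * 1000 * 1000);
if (r == GL_TIMEOUT_EXPIRED) {
    // GPU still busy; do other work instead of stalling
}
glDeleteSync(fence);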
Our code makes quite heavy use of glut/freeglut, glu functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you guys know if there is a GL3 glut equivalent? Also, looking at the CUDA-toolkit header files, it seems that GL-CUDA interoperability is only available when using older versions of OpenGL... (cuda_gl_interop.h includes gl.h...). Am I missing something?
Thanks a lot for your help.
The last update to glut was version 3.7, roughly 10 years ago. Taking that into account, I doubt that it'll ever support OpenGL 3.x (or 4.x).
The people working on OpenGlut seem to be considering the possibility of OpenGL 3.x support, but haven't done anything with it yet.
FLTK has a (partial) glut simulation, but it's partial enough that a program that "makes heavy use of glut" may not work with it in the first place. Since FLTK is in active development, I'd guess it'll eventually support OpenGL 3.x (or 4.x), but I don't believe it's provided yet, and it may be open to question how soon it will be either.
Edit: As far as CUDA goes, the obvious (though certainly non-trivial) answer would be to use OpenCL instead. This is considerably more compatible both with hardware (e.g., with ATI/AMD boards) and with newer versions of OpenGL.
That leaves glu. Frankly, I don't think there is a clear or obvious answer to this. OpenGL is moving away from supporting things like glu, and instead dropping support for even more of the vaguely glu-like functionality that used to be part of the core OpenGL spec (e.g., all the matrix manipulation primitives). Personally, I think this is a mistake, but whether it's good or bad, it's how things are. Unfortunately, glu is a bit like glut -- the last update to the spec was in 1998, and corresponds to OpenGL 1.2. That doesn't make an update seem at all likely. Unfortunately, I don't know of any really direct replacements for it either. There are clearly other graphics libraries that provide (at least some) similar capabilities, but all of them I can think of would require substantial rewriting.

OpenGL: What's the deal with deprecation?

OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of fixed function in shaders.
Can anyone explain what's really the deal with that?
Why do they find the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile which will also enable the deprecated features. So, look closely at what vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a more lean and mean driver if they wish to make such a thing)
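To make the profile distinction concrete, here is a sketch of requesting one or the other at context creation, using GLFW purely as an example (the underlying WGL/GLX attribute mechanisms offer the same choice):

glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);

// lean core profile: the deprecated features are removed
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// or, where the vendor ships it, a compatibility profile keeps them available:
// glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);

GLFWwindow *window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);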
Note that no current card really has an FF hardware section any more; they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
Why do they find the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
I suppose then Apple must be insane, because Mac OS X 10.7 supports only 3.2 core. No compatibility profile support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed-function shaders are quite easily replaced with standard GLSL shaders, so it's difficult to see why logically they shouldn't be deprecated.
I'm less certain than you that they won't be dropped from much hardware in the foreseeable future, as OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backward compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days is coming from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementation to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES rather than because it's removed from hardware implementing the full OpenGL.
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose any kind of access to deprecated or removed functions.
But if you want to ensure compatibility it is best to stick to the core stuff. You won't be guaranteed a compatibility profile (even if most hardware has one; at the current state it's more likely you will encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now a subset of OpenGL: it is possible to write an OpenGL ES 2.x/3.x program and have it run on OpenGL 4.3 with almost no changes.
Game consoles like the PlayStations and the Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
They were based on OpenGL but stripped down in a similar way to ES (I don't think ES 2.0 was out then). Those systems need their own graphics drivers and libraries, and asking a hardware vendor to write what is basically a whole load of legacy wrapper libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and it's likely that glBegin/glEnd would just be getting turned into a VBO automatically anyway).
I think it has also been important to ensure that developers are made aware of the current way they should be programming. For decades people have been taught the 'wrong' way to do things by default, and vertex buffer objects have been taught as an extra.