Porting a project to OpenGL 3 - C++

I'm working on a C++ cross-platform OpenGL application (Windows, Linux and MacOS) and I am wondering if some of you could share some advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is because I think we could benefit a lot from using the new "Sync objects". NVIDIA has supported a similar extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+...
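For reference, the cross-vendor replacement is the sync object API from ARB_sync (core in OpenGL 3.2); a minimal sketch of the fence-and-wait pattern we'd like to use:

```cpp
// Sketch of OpenGL 3.2 / ARB_sync sync objects -- the portable
// replacement for GL_NV_fence. Insert a fence after some GPU commands,
// then wait (with a timeout) until the GPU has passed it.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
// ... issue more GL work, or do CPU-side work here ...
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                 GLuint64(16) * 1000 * 1000); // ~16 ms, in ns
if (status == GL_TIMEOUT_EXPIRED) {
    // GPU hasn't reached the fence yet; retry later or keep working.
}
glDeleteSync(fence);
```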
Our code makes quite heavy use of glut/freeglut, glu functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you guys know if there is a GL3 glut equivalent? Also, looking at the CUDA-toolkit header files, it seems that GL-CUDA interoperability is only available when using older versions of OpenGL... (cuda_gl_interop.h includes gl.h...). Am I missing something?
Thanks a lot for your help.

The last update to glut was version 3.7, roughly 10 years ago. Taking that into account, I doubt that it'll ever support OpenGL 3.x (or 4.x).
The people working on OpenGlut seem to be considering the possibility of OpenGL 3.x support, but haven't done anything with it yet.
FLTK has a (partial) glut simulation, but it's partial enough that a program that "makes heavy use of glut" may not work with it in the first place. Since FLTK is in active development, I'd guess it'll eventually support OpenGL 3.x (or 4.x), but it doesn't provide that yet, and how soon it will is an open question.
Edit: As far as CUDA goes, the obvious (though certainly non-trivial) answer would be to use OpenCL instead. This is considerably more compatible both with hardware (e.g., with ATI/AMD boards) and with newer versions of OpenGL.
That leaves glu. Frankly, I don't think there is a clear or obvious answer here. OpenGL is moving away from things like glu; it is dropping support for even more of the vaguely glu-like functionality that used to be part of the core OpenGL spec (e.g., all the matrix manipulation primitives). Personally, I think this is a mistake, but good or bad, it's how things are. Unfortunately, glu is a bit like glut: the last update to the spec was in 1998, and it corresponds to OpenGL 1.2. That doesn't make an update seem at all likely. Unfortunately, I don't know of any really direct replacement for it either. There are certainly other graphics libraries that provide (at least some) similar capabilities, but all the ones I can think of would require substantial rewriting.
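For the matrix-manipulation side of glu specifically, a header-only math library can stand in. A minimal sketch, assuming GLM is available (w and h are your window dimensions), of replacing gluPerspective and gluLookAt:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Roughly equivalent to gluPerspective(60.0, w/h, 0.1, 100.0) followed by
// gluLookAt(...); the resulting matrix is handed to a shader as a uniform
// instead of being loaded onto the (removed) GL matrix stack.
glm::mat4 proj = glm::perspective(glm::radians(60.0f),
                                  float(w) / float(h), 0.1f, 100.0f);
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                             glm::vec3(0.0f),               // center
                             glm::vec3(0.0f, 1.0f, 0.0f));  // up
glm::mat4 viewProj = proj * view;
```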

Related

Will I need to use glTexImage if OpenGL 4.2 is unavailable?

I'm rewriting my rendering library, and greatly enjoy how glTexStorage2D() works, as well as glBufferStorage(). They're wonderful, allocating the space and defining the object in an immutable way, which works very well with the interface I designed.
However, I just learned it is an OpenGL 4.2 feature. I'm targeting OpenGL 3.3 (I think?) but hoping for a way to avoid glTexImage2D(), which uses mutable storage and has proven difficult to use safely.
Also... I'm gravely unfamiliar with how OpenGL's version system works, or what it even means to "target" one version over the other...
Is it as simple as this: the functionality is unavailable to me below 4.2, and that's that? Or are there alternative functions I could use? Or, if I just create a 4.5 OpenGL context (the latest version I know of), would the OpenGL driver be required to emulate any features the GPU doesn't support? Or is that not even remotely how any of that works? I have very little knowledge about it, so maybe this all sounds foolish.
I just really like some of the newer OpenGL 4.x features like immutable storage and tessellation shaders, but don't want to exclude too many people from running my programs. Would I have to implement an alternative texture allocation method using glTexImage() if a platform isn't capable of running OpenGL 4.2?
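For what it's worth, the fallback I'm imagining (assuming a 2D RGBA8 texture, on hardware where ARB_texture_storage is also unavailable) looks something like this hypothetical sketch:

```cpp
#include <algorithm>

// Hypothetical helper approximating glTexStorage2D(GL_TEXTURE_2D, levels,
// GL_RGBA8, width, height) on GL 3.3: pre-allocate every mip level with
// null data, then clamp the mip range so the texture is complete.
void texStorage2DFallback(GLint levels, GLsizei width, GLsizei height)
{
    for (GLint level = 0; level < levels; ++level) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        width  = std::max(1, width  / 2);
        height = std::max(1, height / 2);
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, levels - 1);
    // Later uploads then go through glTexSubImage2D, as with immutable storage.
}
```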

How to ensure backwards-compatibility of my Windows OpenGL application?

I have developed a program which makes use of many of OpenGL's aspects, ranging from rather new to deprecated functionality, and I want to ensure that it works correctly on the great majority of machines, especially ones with outdated graphics cards.
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
What is the best way to maximize the (backwards)compatibility of an OpenGL application?
Define "compatibility"? If you want an application to run on as much hardware as possible, then you basically have to give up on shaders entirely and stick to about GL 1.4. The main confounding issue here are Intel driver bugs; many pieces of older Intel hardware will claim support for GL 2.0 or 2.1, but they have innumerable failings in this support.
How can I test my program for compatibility with older hardware without actually having a test machine with older hardware?
You don't. Compatibility with old hardware is about more than just sticking to a standard. It's about making sure that your program doesn't encounter driver bugs. And the only way to do that is to actually test on the hardware of interest.
What ways are there to find the underlying causes of the issues which may be encountered during compatibility testing?
Test the same code on recent hardware. If it has the same failures, then the problem is likely in your code. If it works fine on recent hardware but fails on older stuff, then the problem is almost certainly a driver bug with old hardware drivers.
Develop a workaround.
Well, the best way to maximize backwards compatibility, and to get a powerful tool for tracking down a target machine's functionality (imho), is to use something like GLEW: The OpenGL Extension Wrangler Library. It will load OpenGL version-specific functions for you, and you can test whether they are supported by the user's system (or, more correctly, by its video drivers).
This library is very simple to use, it is well documented, and you can find plenty of examples online.
So if the target machine doesn't have some of the newer OpenGL functions, you load a module named "opengl_old.cpp" (for example); if it lacks functionality that is already deprecated (like glBegin(), glEnd()), you go with "opengl_new.cpp" instead.
Basically, the biggest changes came in OpenGL 3.0 (and further in 3.3), with shaders introduced as the only non-deprecated graphics pipeline, so you can put two OpenGL modules in your program: one for OpenGL 1 & 2 and one for OpenGL 3 & 4. At least that's how I solved this problem in my own code.
To test specific functionality, you can request a concrete version of the OpenGL API when creating the context.
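A minimal sketch of this approach (glewInit() must be called after the context exists; the renderer-loading functions are just illustrative placeholders):

```cpp
#include <GL/glew.h>

// Call once, after the OpenGL context has been created.
bool initRenderer()
{
    if (glewInit() != GLEW_OK)
        return false;

    if (GLEW_VERSION_3_3) {
        // loadModernRenderer();        // the "opengl_new.cpp" path: shaders, VAOs
    } else if (GLEW_VERSION_2_0 || GLEW_ARB_shader_objects) {
        // loadLegacyShaderRenderer();  // GL 2.x shaders, no 3.x features
    } else {
        // loadFixedFunctionRenderer(); // the "opengl_old.cpp" path
    }
    return true;
}
```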

Using a combination of FreeGlut with SDL

I'm currently in the process of writing a game engine which is about to go through a major rewrite. First off, I'm considering what library to use in conjunction with the engine. Obviously, I'm going with OpenGL here and am going to do what I can to make it forward as well as backward compatible.
The main issue, though, is that from most of my research, I've found that great libraries like SDL (except for 1.3 - which, I don't believe is stable? I may be wrong about this) only support up to OpenGL 3 and not 4.2. FreeGlut, however, does support the latest and greatest, and seems like a good way to go for the basics of an engine.
The only thing, however, is setting up things such as keyboard I/O and audio, along with everything else. Thus, I'm considering whether or not it's possible to use glut to initialize OpenGL and render with it, and then have SDL do window management along with keyboard I/O, sound, etc.
Of course, there's always the option of using Qt with OpenGL, but I'd like to definitely have control over my main loop if possible (is this possible with Qt and OpenGL?).
I've heard of SFML, too, but ultimately I'd like to stick with libraries written in C as I plan to write a C library to take care of most of the primitive rendering (for the sake of pure speed and memory management, procedurally).
Thus, I'm at a loss as to what to do here. Is Qt a good choice for this, or is there another C-like alternative (such as FreeGlut) which allows main-loop control (like SDL) and offers the necessary customization I'm looking for?
The main issue, though, is that from most of my research, I've found that great libraries like SDL (except for 1.3 - which, I don't believe is stable? I may be wrong about this) only support up to OpenGL 3 and not 4.2. FreeGlut, however, does support the latest and greatest, and seems like a good way to go for the basics of an engine.
Your research is lacking.
First, FreeGLUT should never be used for anything that you would call an "engine". Whatever you mean by that, FreeGLUT is not the tool for the job. It's designed for creating demos, which is why it owns the main loop. I understand that FreeGLUT does have a way to allow you some control over the main loop, but the standard way to use FreeGLUT doesn't do that.
Second, you are correct that SDL 1.2 cannot create an OpenGL 3.2+ core context. However, you don't need a core context to use GL 3.2+; compatibility contexts work just fine at those versions. The only platform without a compatibility context is Mac OS X, whose 3.2 support is core-profile only. So I wouldn't worry about it.
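For completeness: SDL 1.3 eventually stabilized as SDL 2.0, where requesting a specific context version and profile is straightforward. A minimal sketch with the SDL 2 API:

```cpp
#include <SDL.h>

// Sketch using the SDL 2 API (the stabilized "SDL 1.3"): explicitly request
// a 3.2 core-profile context before creating the window.
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
SDL_Window* win = SDL_CreateWindow("Engine", SDL_WINDOWPOS_CENTERED,
                                   SDL_WINDOWPOS_CENTERED, 800, 600,
                                   SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(win);
```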
You could try GLFW. It's sort of like FreeGLUT, only more game-centric. It gives you control of the render loop and so forth. It provides better input handling than FreeGLUT, as well as some light image loading functions (only TGA files). It even has a threading API (though I wouldn't suggest using those functions; GLFW 3.0 will drop them, since both C++11 and C11 have native threading APIs).
However, it has no systems in place for audio.
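To illustrate the "you own the loop" structure, a minimal sketch using the GLFW 3 API (window size and title are arbitrary):

```cpp
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "Engine", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    // Unlike classic GLUT, the application owns this loop.
    while (!glfwWindowShouldClose(window)) {
        // update(); render();  -- your engine code goes here
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```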
I've heard of SFML, too, but ultimately I'd like to stick with libraries written in C as I plan to write a C library to take care of most of the primitive rendering (for the sake of pure speed and memory management, procedurally).
I'm going to set aside the fallacy about C++ not having "pure speed and memory management"; that's a common canard. The important point is this: SFML, as far as your rendering code is concerned, exists solely to create and manage the window. Your rendering code doesn't even have to talk to it. You call some SFML functions, create a couple of SFML objects, and your "C library" OpenGL code won't even have to know those C++ objects are there.
However, if you absolutely cannot work in C++ at all, you can always use Allegro version 5. It has a C API, and it provides support for OpenGL core contexts, input, audio, and most of what SFML does. It also has pretty decent documentation, and is modular (though in a different way from SFML).

Which version of OpenGL should I learn? [duplicate]

Possible Duplicate:
Which version of OpenGL to use?
I have been wanting to learn a 3d graphics language for some time now and I have finally decided to learn OpenGL.
However, I work on a Mac, and officially the highest version of OpenGL for Mac is 2.1, though it can support 3.3 unofficially, according to tests that I have done.
I would like to develop applications that work on multiple platforms, but which version would be best to learn?
A good compromise between portability and still learning the "modern OpenGL way" is roughly "the OpenGL ES 2.0 subset of OpenGL 2.1". That gives you portability to:
OSX, as you mention
Windows, obviously
Linux with open source drivers (for higher OpenGL versions and better performance you need the proprietary drivers, which you might prefer anyway, but some people like to avoid those)
Smartphone platforms like iOS and Android.
OpenGL 1.x is even more portable (e.g. older iOS and Android releases support only OpenGL ES 1.x), but the classical fixed-function programming model is somewhat different from the modern one based on buffer objects and shaders, and use of immediate mode easily leads to performance issues when rendering lots of vertices. So it's probably not worth it, IMHO.
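To make the "ES 2.0 subset" concrete, here is a sketch of the buffer-object-plus-shader style that runs essentially unchanged on OpenGL 2.1 and OpenGL ES 2.0 alike; program is assumed to be an already-compiled-and-linked shader program with an a_position attribute (both names are illustrative):

```cpp
// One triangle, "modern style": vertex data lives in a buffer object and
// is fed to a shader attribute -- no immediate mode, no fixed function.
const GLfloat verts[] = { -0.5f, -0.5f, 0.0f,
                           0.5f, -0.5f, 0.0f,
                           0.0f,  0.5f, 0.0f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glUseProgram(program);
GLint pos = glGetAttribLocation(program, "a_position");
glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glDrawArrays(GL_TRIANGLES, 0, 3);
```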
My recommendation would be to learn no less than version 3.2. If 3.3 is supported (even unofficially), go for that.
OpenGL 3.3 is already "last generation" rather than "bleeding edge". You have to search hard to find a card that does not support OpenGL 3.3, and you get 4.x capable cards in the $30 range.
Under version 2.x, you must go through a lot of pain to ensure that even the most basic functionality that you use every day is available, and you end up writing two or three code paths depending on what extension you must use and on what some limit is.
Under version 3.3, most features that you want to use every day are core (guaranteed standard), and most limits have a guaranteed minimum value that is enough for most things anyway. The features that are not core in 3.3 are few (and you won't die if you don't have them), and you can pretty much just plug them in optionally if they're there, and forget about them if they aren't.
There is a huge change in paradigms between 2.1 and 3.3 (which you will have to re-learn later if you start with 2.x first!), and there are notable changes in GLSL between 3.1 and 3.2 which make writing shader code that works for both an ordeal, or impossible.
Upwards of version 3.2, everything is smooth. New features are available or they aren't... use them or don't... but you can in principle write one piece of code to run on all versions.
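As an example of "plug them in optionally": a sketch of the core-profile way (GL 3.0+) to probe for an optional extension, here anisotropic filtering:

```cpp
#include <cstring>

// In a core profile, glGetString(GL_EXTENSIONS) is gone; enumerate the
// extension list with glGetStringi instead.
bool hasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext =
            reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage: enable the feature if present, otherwise simply carry on.
// if (hasExtension("GL_EXT_texture_filter_anisotropic")) { ... }
```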
If your goal is maximum interoperability, I would rather take a look at WebGL, or its close relative, OpenGL ES. The concepts of OpenGL ES (at least in the 2.0 version) are quite close to those of OpenGL 4 (buffer-based data transfer, universal shaders etc.).
I think that by learning 2.1 you would learn some outdated concepts you will soon have to unlearn, like immediate mode, or rather the whole fixed-function pipeline, which was pruned in later versions.
You can safely start learning the 3.x too, as you will learn the current concepts and features. Do not worry about the "officially supported" version.

OpenGL: What's the deal with deprecation?

OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of the fixed-function pipeline instead of shaders.
Can anyone explain what's really the deal with that?
Why do they find the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile which will also enable the deprecated features. So, look closely at what vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a more lean and mean driver if they wish to make such a thing)
Note that no current card really has an FF hardware section any more -- they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
Why do they find the need to deprecate such useful feature that its obvious everybody uses and that no sane hardware company is going to remove support for?
I suppose then Apple must be insane, because MacOSX 10.7 supports only 3.2 core. No compatibility specification support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed function shaders are quite easily replaced with standard GLSL shaders so it's difficult to see why logically they shouldn't be deprecated.
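For instance, here is a minimal sketch of a GLSL pair (shown as C++ string literals, using the legacy GLSL 1.20 built-ins purely for illustration) that replicates the basic fixed-function transform with per-vertex color:

```cpp
// Vertex shader: what the driver's auto-generated "fixed function" shader
// boils down to in the simplest case -- transform by the modelview-projection
// matrix and pass the vertex color through.
const char* vertexSrc =
    "#version 120\n"
    "void main() {\n"
    "    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";

// Fragment shader: output the interpolated color.
const char* fragmentSrc =
    "#version 120\n"
    "void main() { gl_FragColor = gl_Color; }\n";
```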
I'm less certain than you that they won't be dropped from much hardware in the foreseeable future, as OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backward compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days comes from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementation to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES, rather than because it's removed from hardware implementing the full OpenGL.
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose access to deprecated or removed functions.
But if you want to ensure compatibility, it is best to stick to the core stuff. You aren't guaranteed a compatibility profile (even though most hardware has one, and at present you're more likely to encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now a subset of OpenGL; it is possible to write an OpenGL ES 2.x/3.x program and have it run on OpenGL 4.3 with almost no changes.
Game consoles like the PlayStations and the Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
They were based on OpenGL but stripped down in a similar way to ES (I don't think ES 2.0 was out then). Those systems need their own graphics drivers and libraries written, and asking a hardware vendor to write what is basically a whole load of legacy wrapper libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and it's likely that glBegin/glEnd would just be turned into a VBO automatically anyway).
I think it has also been important to ensure that developers are aware of the current way they should be programming. For decades, people have been taught the 'wrong' way to do things by default, with vertex buffer objects taught as an extra.