How to use OpenGL 2.0 rather than a later version? - c++

The goal is to make a game that is compatible with many graphics cards and cross-platform, so I decided to go with OpenGL 2.0 and GLUT.
I quickly came to realize, however, that there are no separate DLLs for OpenGL versions 1.0, 2.0, 2.1... This led me to wonder: how exactly do you choose which OpenGL version you need?
Also, I am aware that Visual Studio on Windows only comes with headers and libraries for OpenGL 1.1. That is why I decided to use GLUT, so that I could use functions from a later version of OpenGL such as 2.0.
The question remains: how do I use a certain version of OpenGL?

You simply don't use any of the new features introduced after 2.0.

Also, I am aware that Visual Studio on Windows only comes with headers and libraries for OpenGL 1.1. That is why I decided to use GLUT, so that I could use functions from a later version of OpenGL such as 2.0.
GLUT will not help in this respect (or, I'm tempted to say, in any other respect either). What you're looking for is GLEE or GLEW. Most implementations allow you to do this on your own as well; these libraries just make it easier -- but they do make it a lot easier.

In OpenGL 2.x and earlier you just create a context; that context must be backwards compatible with every earlier version of OpenGL you care to use. In OpenGL 3.0 and later, where strict API backwards compatibility was done away with, there is a new method of context creation (the ARB_create_context mechanism) that allows you to specify the OpenGL version you want as context attributes.
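For illustration, a minimal sketch of what that attribute-based request looks like on Windows. The enum values below are copied from the WGL_ARB_create_context and WGL_ARB_create_context_profile specs; note that the real wglCreateContextAttribsARB call (not shown, since it needs a live window) must itself be loaded at run time from a temporary legacy context:

```cpp
#include <cassert>
#include <vector>

// Attribute names and bit values from the WGL_ARB_create_context(_profile) specs.
constexpr int WGL_CONTEXT_MAJOR_VERSION_ARB    = 0x2091;
constexpr int WGL_CONTEXT_MINOR_VERSION_ARB    = 0x2092;
constexpr int WGL_CONTEXT_PROFILE_MASK_ARB     = 0x9126;
constexpr int WGL_CONTEXT_CORE_PROFILE_BIT_ARB = 0x0001;

// Builds the zero-terminated attribute list you would pass to
// wglCreateContextAttribsARB (or its GLX twin, glXCreateContextAttribsARB).
std::vector<int> makeContextAttribs(int major, int minor, bool core)
{
    std::vector<int> attribs = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, major,
        WGL_CONTEXT_MINOR_VERSION_ARB, minor,
    };
    if (core) {
        // Profile selection only exists for 3.2+ contexts.
        attribs.push_back(WGL_CONTEXT_PROFILE_MASK_ARB);
        attribs.push_back(WGL_CONTEXT_CORE_PROFILE_BIT_ARB);
    }
    attribs.push_back(0); // the attribute list is zero-terminated
    return attribs;
}
```

Remember that even here the version you pass is a minimum; the driver may hand you any context that is compatible with what you asked for.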

Related

Can I limit OpenGL version to 4.3?

I have an OpenGL 4.5 capable GPU and I wish to test if my application runs on an OpenGL 4.3 capable system. Can I set my GPU to use OpenGL 4.3?
Can you forcibly limit your OpenGL implementation to a specific version? No. Implementations are permitted to give you any version which is 100% compatible with the one you asked for. And 4.5 is compatible with 4.3.
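Because of that, the useful run-time check is not "did I get exactly 4.3" but "did I get at least 4.3". A small sketch, assuming the usual glGetString(GL_VERSION) format of "major.minor..." followed by vendor text (the sample strings in the test are made up):

```cpp
#include <cassert>
#include <cstdio>

// Parses a GL_VERSION-style string, e.g. "4.5.0 NVIDIA 445.87", and
// checks whether it satisfies a required minimum version.
bool meetsVersion(const char* versionString, int wantMajor, int wantMinor)
{
    int major = 0, minor = 0;
    if (std::sscanf(versionString, "%d.%d", &major, &minor) != 2)
        return false; // unrecognized format
    return major > wantMajor || (major == wantMajor && minor >= wantMinor);
}
```

On a 3.0+ context you could equally query glGetIntegerv with GL_MAJOR_VERSION / GL_MINOR_VERSION instead of parsing the string.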
However, using the right OpenGL loading library, you can forcibly limit your headers. Several libraries allow you to generate version-specific headers, which will provide APIs and enums for just that version and nothing else, plus any extensions you wish to use.

Testing OpenGL compatibility

I'm having concerns about my code, which was developed with OpenGL 3.3 hardware-level support in mind. However, I would like to use certain extensions, and I want to make sure they all require OpenGL version 3.3 or lower before I use them. Let's take an example: ARB_draw_indirect. Its specification says that "OpenGL 3.1 is required"; however, this document seems to state that the extension is not part of OpenGL 3.3. The problem here is that both of my dev machines have hardware support for 4.5, which means that everything works, including features that require OpenGL 4.x. So I would like to know: how can I test 3.3 compatibility without having to purchase a 3.3-class graphics card? To put it simply, my dev hardware is too powerful compared to what I plan to support, and thus I'm not sure how to test my code.
EDIT: On a related note, there is the EXT_direct_state_access extension, and DSA is in the OpenGL 4.5 core profile. The main difference is the EXT suffix in the function names. Supposing that I want to stick to OpenGL 3.3 support, should I use the function names with the EXT suffix? Example: TextureParameteriEXT() vs. TextureParameteri()
When looking at the versions, it sounds like you're mixing up two aspects. To make this clearer, there are typically two main steps in adding new functionality to OpenGL:
An extension is defined. It can be used on implementations that support the specific extension. When you use it, the entry points will have EXT, ARB, or a vendor name as part of their name.
For some extensions, it is later decided that they should become core functionality. Sometimes the new core functionality might be exactly like the extension, sometimes it might get tweaked while being promoted to core functionality. Once it's core functionality, the entry points will look like any other OpenGL entry point, without any special pre-/postfix.
Now, if you look at an extension specification, and you see:
OpenGL 3.1 is required
This refers to step 1 above. Somebody defined an extension, and says that the extension is based on OpenGL 3.1. So any OpenGL implementation that supports at least version 3.1 can provide support for this extension. It is still an extension, which means it's optional. To use it, you need to:
Have at least OpenGL 3.1.
Test that the extension is available.
Use the EXT/ARB/vendor prefix/postfix in the entry points.
It most definitely does not mean that the functionality is part of core OpenGL 3.1.
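The second step above (testing that the extension is available) can be sketched like this for a pre-3.0 context, where glGetString(GL_EXTENSIONS) returns one big space-separated string; on 3.0+ you would iterate glGetStringi(GL_EXTENSIONS, i) instead. Matching whole tokens matters: a naive substring search would report GL_EXT_texture as present when only GL_EXT_texture3D is listed.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Checks for an extension in a space-separated extension string such as
// the one returned by glGetString(GL_EXTENSIONS) on pre-3.0 contexts.
bool hasExtension(const std::string& extensions, const std::string& name)
{
    std::istringstream stream(extensions);
    std::string token;
    while (stream >> token)      // split on whitespace
        if (token == name)       // compare whole tokens, not substrings
            return true;
    return false;
}
```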
If somebody says:
DSA is in the OpenGL 4.5 core profile
This refers to step 2 above. The functionality originally defined in the DSA extension was promoted to core functionality in 4.5. To use this core functionality, you need to:
Have at least OpenGL 4.5.
Not use the EXT/ARB/vendor prefix/postfix in the entry points.
Once an extension is core functionality in the OpenGL version you are targeting, this is typically what you should do. So in this case, you do not use the prefix/postfix anymore.
Making sure that you're only using functionality from your targeted OpenGL version is not as simple as you'd think. Creating an OpenGL context of this version is often not sufficient. Implementations can, and often do, give you a context that supports a higher version. Your specified version is treated as a minimum, not as the exact version you get.
If you work on a platform that requires extension loaders (Windows, Linux), I believe there are some extension loaders that let you generate header files for a specific OpenGL version. That's probably your best option. Other than that, you need to check the specs to verify that the calls you want to use are available in the version you're targeting.

Confused about openGL version

I updated my graphics card driver to support OpenGL 4 so that deprecated functions like glBegin won't work. However, when I run a simple triangle program, glBegin still works like before. Is glBegin still supported by OpenGL 4, or did I miss some step in upgrading to OpenGL 4?
Simply using a driver that supports OpenGL 4.x does not mean that you will lose the functionality of earlier versions. Beginning with OpenGL 3.2, the concept of Core and Compatibility profiles was introduced, and this is where the separation between modern and deprecated actually comes into play.
In a Core profile, the things you mentioned such as glBegin are invalid. However, in a Compatibility profile, you can continue to mix-and-match deprecated parts of the API with new parts. The vast majority of new OpenGL features are not guaranteed to work in conjunction with the deprecated parts of the API, in large part because most new features are related to GLSL and the programmable pipeline in some way.
Now things get a little more complicated when you discuss a platform like Mac OS X. Beginning with OS X 10.7, Apple began supporting OpenGL 3.2. However, they designed their implementation in such a way that the ONLY way to access OpenGL 3.2 functionality is to request a Core profile. They continue to support a legacy OpenGL 2.1 implementation so that old software does not have to be rewritten, but in order to take advantage of any OpenGL 3.2+ features on OS X you have to forfeit all deprecated functionality.
In fact, platforms are generally designed so that you actually have to do extra work during context creation in order to get a Core profile. Unless you specifically request Core, you will get Compatibility (or in the case of OS X, an implementation of OpenGL 2.1). It is a way of making the whole deprecation model as painless as possible for existing software.
"Deprecated" doesn't necessarily mean "it will not work"; it means "you should not use it because the standard says so". The vendor is free to implement whatever it wants to ship with the hardware, and many brands still offer deprecated OpenGL contexts and functions in their drivers.

glGenerateMipmap or glGenerateMipmapEXT

I need the glGenerateMipmap* function, but I would like to know if there are any differences between glGenerateMipmap and glGenerateMipmapEXT.
I understand that EXT came before ARB, so the EXT version should work on older hardware? Are there any differences in behavior?
another question:
Can I use:
myGLGenerateMipmap = loadProcAddress("glGenerateMipmap")
or should I check support for GL_EXT_framebuffer_object first?
please note that I would not like to use GLEW/GLEE or any other libraries...
Up until OpenGL 3.0, this function was not part of the OpenGL spec proper. The version that is included in OpenGL 3.0 is actually derived from the GL_ARB_framebuffer_object specification.
If your driver lists the GL_ARB_framebuffer_object extension, or you know you have a legitimate OpenGL 3.0+ implementation, you are guaranteed to have this functionality through the proc address glGenerateMipmap. This is the procedure you should use in such a case.
glGenerateMipmapEXT comes from the awful EXT version of the FBO specification. I would avoid it like the plague, unless you have neither OpenGL 3.0 nor GL_ARB_framebuffer_object. You will not have this procedure either, however, if your driver does not report GL_EXT_framebuffer_object.
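Putting that preference order into code, a small sketch of the decision logic. The booleans stand in for the real run-time checks (parsing GL_VERSION and scanning the extension string), and the returned name is what you would then pass to your loader (the asker's loadProcAddress, i.e. wglGetProcAddress / glXGetProcAddress under the hood):

```cpp
#include <cassert>
#include <string>

// Picks which mipmap-generation entry point to load, preferring the
// core/ARB name and falling back to the EXT variant only when nothing
// better is available. Returns an empty string when neither exists.
std::string mipmapProcName(bool hasGL30, bool hasARBfbo, bool hasEXTfbo)
{
    if (hasGL30 || hasARBfbo)
        return "glGenerateMipmap";    // core 3.0 / GL_ARB_framebuffer_object
    if (hasEXTfbo)
        return "glGenerateMipmapEXT"; // legacy GL_EXT_framebuffer_object
    return "";                        // no mipmap generation available
}
```

So the answer to the second question is yes: check which specification you actually have before loading the address, because a driver that reports neither source of the function is not required to return a usable pointer for it.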
This is where extension wranglers make life easier... but I can see not wanting to add another dependency to your software. You are going to have to study up on the art of reading extension specifications and follow the version change history on the OpenGL® Registry.

Porting a project to OpenGL3

I'm working on a C++ cross-platform OpenGL application (Windows, Linux and MacOS) and I am wondering if some of you could share some advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is that I think we could benefit a lot from the new "sync objects". Nvidia has supported such an extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+...
Our code makes quite heavy use of glut/freeglut, glu functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you guys know if there is a GL3 glut equivalent? Also, looking at the CUDA toolkit header files, it seems that GL-CUDA interoperability is only available when using older versions of OpenGL (cuda_gl_interop.h includes gl.h). Am I missing something?
Thanks a lot for your help.
The last update to glut was version 3.7, roughly 10 years ago. Taking that into account, I doubt that it'll ever support OpenGL 3.x (or 4.x).
The people working on OpenGlut seem to be considering the possibility of OpenGL 3.x support, but haven't done anything with it yet.
FLTK has a (partial) glut simulation, but it's partial enough that a program that "makes heavy use of glut" may not work with it in the first place. Since FLTK is in active development, I'd guess it'll eventually support OpenGL 3.x (or 4.x), but I don't believe it's provided yet, and it may be open to question how soon it will be either.
Edit: As far as CUDA goes, the obvious (though certainly non-trivial) answer would be to use OpenCL instead. This is considerably more compatible both with hardware (e.g., with ATI/AMD boards) and with newer versions of OpenGL.
That leaves glu. Frankly, I don't think there is a clear or obvious answer to this. OpenGL is moving away from supporting things like glu, and instead dropping support for even more of the vaguely glu-like functionality that used to be part of the core OpenGL spec (e.g., all the matrix manipulation primitives). Personally, I think this is a mistake, but whether it's good or bad, it's how things are. Unfortunately, glu is a bit like glut -- the last update to the spec was in 1998, and corresponds to OpenGL 1.2. That doesn't make an update seem at all likely. Unfortunately, I don't know of any really direct replacements for it either. There are clearly other graphics libraries that provide (at least some) similar capabilities, but all of them I can think of would require substantial rewriting.