Same program, targeting both OpenGL and OpenGL ES 2.0 - opengl

Can one run the same program (unmodified) on both the "desktop" OpenGL and OpenGL ES 2.0 platforms, provided that this program only performs 2D accelerated rendering?
The program will run on an average Windows desktop PC and on a Raspberry Pi.
The GL context is obtained via the functions provided by the excellent SDL2 library, and the drawing routines will use a texture atlas.
It would be convenient if a program could be developed and debugged on a PC and then simply be recompiled to run on the Raspberry Pi. This would be no issue if both OpenGL platforms were mostly compatible.
Since I'm a beginner when it comes to OpenGL, I of course started experimenting with the "hello triangle" program first.
To my big surprise, this triangle program works on both the desktop and the Raspberry Pi (GLES2), save for some #include file differences.
Dazzled by all the different OpenGL headers and function-pointer gymnastics, I'm now not sure whether my desktop somehow provided GLES2 (which seems unlikely to me) or whether the "desktop" OpenGL version I have is simply "compatible enough" with GLES2.
It's especially unclear to me whether GLES2 is just a stripped-down version of OpenGL or something completely different.
Is avoiding "advanced" or missing extensions/features enough to ensure compatibility across these platforms? Or are there more things to take into account?

OpenGL ES is (more or less) a stripped-down version of OpenGL, which mainly removes the old legacy cruft and any features that might hurt battery life. There are a few very minor differences here and there, but that's largely the case.
On the Pi, you are limited to GLES. However, on the desktop both NVidia and ATI support OpenGL and OpenGL ES. I would imagine your SDL-based app is targeting an EGL context with GLES, and since SDL wraps all of the platform-specific windowing APIs, that should just work on the desktop.
If you really want to know the gory details, read on...
The way you initialise OpenGL (and GLES) is to query the graphics driver for a pointer to each function. So let's take a rubbish example:
// This will typically be in gl.h
// some magic to prevent C++ name mangling, and use
// C naming conventions.
#ifdef __cplusplus
# define EXTERN extern "C"
#else
# define EXTERN extern
#endif
// declare a function pointer type that matches void glFinish(void)
typedef void (*GL_FINISH_PTR_TYPE)(void);
// now declare a function pointer that will hold the address.
// The actual pointer will be buried away within OpenGL.lib,
// or GLES2.lib (i.e. within a C file somewhere)
EXTERN GL_FINISH_PTR_TYPE glFinish;
Basically that process will be repeated for the entire GL API.
Then within some source file, we'll declare the pointer...
#include "gl.h"
// here is our function pointer, which we will set to NULL initially
GL_FINISH_PTR_TYPE glFinish = NULL;
Now the following tends to be platform specific, but on each platform we want to pull the address of those GL functions from the driver. So on Windows it looks a bit like:
#ifdef _WIN32
#include <windows.h>   // wglGetProcAddress lives here
void initGL()
{
    // extract the pointer for glFinish from the driver
    // (requires a current GL context to be bound)
    glFinish = (GL_FINISH_PTR_TYPE)wglGetProcAddress("glFinish");
}
#endif
and on the Raspberry Pi it would look like:
#ifdef RASPBERRY_PI
#include <EGL/egl.h>   // eglGetProcAddress lives here
void initGL()
{
    // extract the pointer for glFinish from the driver
    glFinish = (GL_FINISH_PTR_TYPE)eglGetProcAddress("glFinish");
}
#endif
So the only things that differ between the platforms are:
The APIs needed to create the window
The APIs needed to create the OpenGL (or GLES context)
The function you need to call to extract the function pointers.
Handily all of the platform specific stuff is wrapped up nicely within SDL, so you really don't need to care about it. Just target SDL, and so long as you restrict yourself to the GLES API calls, it should just work on both platforms...
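For illustration, here is a minimal sketch of what that SDL2 setup might look like when explicitly requesting a GLES 2.0 context (the window title and size are arbitrary placeholders):
#include <SDL2/SDL.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_VIDEO);

    // Ask SDL for an OpenGL ES 2.0 context; on the desktop the driver
    // provides a GLES-compatible context, on the Pi it is the native API.
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);

    SDL_Window *win = SDL_CreateWindow("gles2 demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480,
                                       SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);

    // ... issue GLES2 calls here ...

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}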
caveat: You may encounter minor driver differences between NVidia/ATI/Intel hardware (e.g. the maximum supported texture size), and obviously different hardware will have differing specs for the amount of RAM, etc. If your app works on the lesser hardware platform, it will usually work on the better hardware as well...

Related

How can I make my compiler flag an error when trying to compile deprecated OpenGL functions?

I learnt to use legacy OpenGL (v1/v2) around a year back, but now I am trying to make something that is a bit more up to date (i.e. OpenGL 3.3 or newer).
I want to reuse a lot of my old code; however, I could really do with the compiler flagging an error when it tries to compile something legacy (e.g. glBegin() ... glEnd()).
It flagged such an error when I compiled on a Mac a while back, but now I'm using a Raspberry Pi running Raspbian.
Thanks for your help in advance!
Depending on your use-case, you might be able to use the OpenGL ES header instead of the standard OpenGL header. The OpenGL ES header doesn't contain the deprecated functions.
Another possibility would be to use a loader like gl3w which will also make your code more portable.
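For instance, a rough sketch of the gl3w route (assuming gl3w has been added to the project and a context already exists; the function name initCoreProfileLoader is just an example):
#include <GL/gl3w.h>
#include <stdio.h>

// Call once after the context has been created (e.g. by SDL or GLFW).
int initCoreProfileLoader(void)
{
    if (gl3wInit() != 0) {
        fprintf(stderr, "failed to initialise gl3w\n");
        return -1;
    }
    if (!gl3wIsSupported(3, 3)) {
        fprintf(stderr, "OpenGL 3.3 core is not available\n");
        return -1;
    }
    // Only core-profile entry points are declared by the gl3w header,
    // so legacy calls like glBegin()/glEnd() won't even compile.
    return 0;
}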
I'd recommend using the OpenGL loader generator glad to generate a loader for the core profile of the OpenGL version you want to target. The resulting headers will not contain any of the deprecated compatibility-profile functions and GLenum definitions.
However, be aware that this will not catch all deprecated GL usage at compile time. For example, a core profile mandates that a VAO != 0 is bound when rendering, that vertex arrays come from VBOs and not client-side memory, and that a shader program != 0 is used. Such issues can't really be detected at compile time. I recommend using the OpenGL Debug Output functionality to catch those remaining issues at runtime. Most GL implementations will produce very useful error or warning messages that way.
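As a sketch of that runtime side (assuming a GL 4.3+ or KHR_debug-capable context and a glad-generated core loader; the callback and function names are arbitrary):
#include <glad/glad.h>
#include <stdio.h>

// Print every message the driver reports through Debug Output.
static void APIENTRY debugCallback(GLenum source, GLenum type, GLuint id,
                                   GLenum severity, GLsizei length,
                                   const GLchar *message, const void *userParam)
{
    (void)source; (void)id; (void)length; (void)userParam;
    fprintf(stderr, "GL debug (type 0x%x, severity 0x%x): %s\n",
            type, severity, message);
}

// Call once after context creation and gladLoadGL().
void enableGLDebugOutput(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback(debugCallback, NULL);
}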

How do header files like OpenGL.h work

I understand that header files are processed prior to compilation of the rest of the source file in which they are included, to make developing code easier. I also know that they provide declarations. However, I don't see functions being used in the header file OpenGL.h the way I do in all the tutorials I have been researching. OpenGL.h is very obscure to me, with #define and extern; I don't know what is happening. For instance:
#define CGL_VERSION_1_0 1
#define CGL_VERSION_1_1 1
#define CGL_VERSION_1_2 1
#define CGL_VERSION_1_3 1
extern CGLError CGLQueryRendererInfo(GLuint display_mask,
                                     CGLRendererInfoObj *rend, GLint *nrend);
extern CGLError CGLDestroyRendererInfo(CGLRendererInfoObj rend);
extern CGLError CGLDescribeRenderer(CGLRendererInfoObj rend, GLint rend_num,
                                    CGLRendererProperty prop, GLint *value);
I have no idea what is happening here, and I have come across other C++ includes that share similar obscurity. I would like to write a library of my own, and I feel that proper header files are laid out in this manner.
To me it seems like all that is happening is that keywords or variables are being defined, or functions declared without a code block. I have taken two courses in C++ (an introduction and C++), but they did not cover this topic in much detail.
I am just trying to de-obfuscate what is happening.
Usually, library headers do not contain implementations (exceptions are, for example, header-only libraries, especially ones with lots of C++ template code). Headers just provide the information on how to call library functions, that is, data types and signatures. The implementation is usually contained in a static or shared library.
Strictly speaking, OpenGL is not even a library but a specification, while an OpenGL implementation is usually provided as a shared library. That is, the implementation of the OpenGL functions is stored as a bunch of binary data holding compiled code. If you really want the sources, you need to check which implementation of OpenGL you are using (it could be the NVidia drivers, for example, and I doubt that the real sources are available).
In order to understand how this compiled code gets linked with your code and how headers are involved in this process, I recommend reading more about the C++ compilation process and about static and dynamic linking.
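As a tiny, purely illustrative example of that split (the names mylib.h, mylib.c and add are made up), the header carries only the declaration while the compiled body lives in the library you link against:
/* mylib.h - what users of the library include: a declaration only */
int add(int a, int b);

/* mylib.c - compiled separately into libmylib.a / libmylib.so / mylib.dll */
int add(int a, int b)
{
    return a + b;
}

/* main.c - includes the header so the compiler knows how to call add(),
   then links against the library, e.g.:  gcc main.c -L. -lmylib        */
#include "mylib.h"

int main(void)
{
    return add(2, 3);
}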
Even though the name OpenGL.h might suggest otherwise, you're not looking at the OpenGL header file. This is the header for CGL, which is the window system interface for OpenGL on Mac OS.
The window system interface is a platform dependent layer that forms the "glue" between OpenGL and the window system on the platform. This API is used to configure and create contexts, drawing surfaces, etc. It corresponds to WGL on Windows, GLX on Linux, EGL on Android, EAGL on iOS, etc.
If you want to see the actual OpenGL headers, look for gl.h and gl3.h in the same directory. gl.h is for legacy OpenGL 2.1, gl3.h is for the core profile of OpenGL 3.x and later.
Those headers contain the declarations of the OpenGL API entry points, as well as definitions for enums. The functions need to be declared so that you can call them in your code. In C++, you cannot call undeclared functions.
The code for the functions is in the OpenGL framework, which you link against. A framework on Mac OS is a package that contains headers, libraries, and some related files. It's the libraries within the framework that contain the actual implementation of the API entry points.
In OpenGL, you have to retrieve a pointer to each OpenGL function before it can be used; they are thus loaded at run time. That pointer is then called as if it were the function itself, suitably typedef'd so that it looks like one. There are libraries that do this for you, like glew, glLoadGen, and glbinding, to name the most prominent ones. OpenGL.h would hold function pointers and maybe some contextual information on how to initialise OpenGL.
Headers normally contain only function prototypes; with OpenGL it's different, because you end up holding only a pointer to the function rather than the actual function itself.

Defining GL_GLEXT_PROTOTYPES vs getting function pointers

I am writing a program which depends on OpenGL 2.0 or above. Looking at the spec of GL 2.0, I see that the extension defined in ARB_shader_objects has been promoted, which I suppose means that the ARB suffix is no longer required for GL version 2.0 and above, and any implementation supporting GL 2.0 or above will have this as part of the core implementation.
Having said that, when I compile my program, gcc on Linux gives "warning: implicit declaration of function". One way to get these functions is to declare them in the program itself and then get the function pointers via the *GetProcAddress function.
The other way is to define GL_GLEXT_PROTOTYPES before including glext.h, which circumvents the problem of getting the function pointers for each of the functions that are by now part of core GL 2.0 or above. Could someone please suggest if that is a recommended and correct way? The bottom line is that my program requires OpenGL 2.0 or above, and I don't want to support anything less than GL 2.0.
Just in case someone suggests to use glee or glew, I don't want to use/ have option to use glee or glew libraries for achieving the same.
There are two issues here.
GL_ARB_shader_objects indeed was promoted to core in GL 2.0, but the API was slightly changed for the core version, so it is not just the same function names without the ARB suffix: e.g. there is glCreateShader() instead of glCreateShaderObjectARB(), the two functions glGetShaderInfoLog() and glGetProgramInfoLog() replace the single glGetInfoLogARB(), and there are some other minor differences of this sort.
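To make the renaming concrete, here is a small sketch contrasting the two log-query APIs (the function names logWithARB/logWithCore are made up; GL_GLEXT_PROTOTYPES is defined so that both sets of prototypes are visible, which ties into the second issue below):
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Old ARB_shader_objects API: one log query for every object type. */
void logWithARB(GLhandleARB object)
{
    GLcharARB log[1024];
    glGetInfoLogARB(object, sizeof log, NULL, log);
}

/* Core GL 2.0 API: separate log queries for shaders and programs. */
void logWithCore(GLuint shader, GLuint program)
{
    GLchar log[1024];
    glGetShaderInfoLog(shader, sizeof log, NULL, log);
    glGetProgramInfoLog(program, sizeof log, NULL, log);
}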
The second issue is the assumption that the GL library exports all the core functions. On Linux that is usually the case (not only for core functions, but basically for everything), but there is no standard guaranteeing it. The OpenGL ABI for Linux only requires:
3.4. The libraries must export all OpenGL 1.2, GLU 1.3, GLX 1.3, and ARB_multitexture entry points statically.
There are proposals for an update but I haven't heard anything about that recently.
Windows only exports the OpenGL 1.1 core, as opengl32.dll is part of the OS and the ICD lives in a separate DLL. You have to query the function pointers for virtually everything there.
So the most portable way is definitely to query the stuff, no matter whether you do it manually or use some library like glew.
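To make the two options concrete, a rough sketch (Linux/GLX flavoured; the wrapper name loadGL20EntryPoints is just an example):
/* Option A: rely on the library exporting the symbols directly
   (works with the usual Linux drivers, but the ABI only guarantees GL 1.2): */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* Option B: query the entry points at run time (portable): */
#include <GL/glx.h>

static PFNGLCREATESHADERPROC pglCreateShader;

void loadGL20EntryPoints(void)
{
    pglCreateShader = (PFNGLCREATESHADERPROC)
        glXGetProcAddress((const GLubyte *)"glCreateShader");
    /* ...repeat for each GL 2.0 function the program uses... */
}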

OpenGL: glGenBuffer vs glGenBuffersARB

What is the difference between the functions glGenBuffers()/glBufferData()/etc. and the functions with ARB appended to the name, glGenBuffersARB()/glBufferDataARB()/etc.? I tried searching around, but no one ever points out the difference; they just use one or the other.
Also, is it common for either function to be unavailable on some computers? What's the most common way of getting around that kind of situation without falling back to immediate mode?
glGenBuffers() is a core OpenGL function in OpenGL 1.5 and later; glGenBuffersARB() was an extension implementing the same functionality in earlier versions.
Unless you're developing for an ancient system, there's no longer any reason to use the ARB extension.
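If you ever do need to support such an ancient context, a common approach (sketched here for Windows, where the pointers must be queried anyway; the loader name loadGenBuffers is made up) is to try the core name first and fall back to the ARB one, since the two functions share the same signature:
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* for PFNGLGENBUFFERSPROC */

static PFNGLGENBUFFERSPROC pglGenBuffers;

void loadGenBuffers(void)
{
    /* prefer the OpenGL 1.5 core entry point... */
    pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");

    /* ...and fall back to the ARB_vertex_buffer_object name if needed */
    if (pglGenBuffers == NULL)
        pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffersARB");
}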

How to detect if an OpenGL debugger is being used?

Is there any way of detecting from my Windows OpenGL application if a debugger (such as gDEBugger) is being used to catch OpenGL calls? I'd like to be able to detect this and terminate my application if a debugger is found, to prevent shader code and textures from being ripped. The application is developed in C++ Builder 6.
Even if you could find a way to do this it would be a futile attempt because the shader source can be asked for by simply calling glGetShaderSource() at any moment.
An external process can inject a thread into your process using CreateRemoteThread() and then copy back the result with ReadProcessMemory(). This process can be made really simple (just a couple of lines) with the Detours library by Microsoft.
Alternatively, if creating a remote thread is too much of a hassle, a disassembler such as OllyDbg can be used to inject a piece of code into the normal execution path which simply saves the shader code into a file just before it is used.
Finally, the text of the shader needs to be somewhere in your executable before you activate it, and it can probably be extracted just by static inspection of the executable with a tool like IDA Pro. It really doesn't matter if you encrypt it or compress it or whatever: if it's there at some point and the program can extract it, then so can a determined enough cracker. You will never win.
Overall, there is no way to detect each and every such "debugger". A custom OpenGL32.dll (or the equivalent for the platform) can always be written; and if there is no other way, a virtual graphics card can be designed specifically for the purpose of ripping your textures.
However, Graphic Remedy does have some APIs for debugging, provided as custom OpenGL commands. They are exposed as OpenGL extensions, so if GetProcAddress() returns NULL for those functions, you can be reasonably sure it's not gDEBugger. However, there are already several debuggers out there and, as already mentioned, it's trivial to write one specifically designed for ripping out resources.
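For what it's worth, the gDEBugger-specific commands show up as GL_GREMEDY_* extensions (e.g. GL_GREMEDY_string_marker), so a crude and easily defeated check looks roughly like this:
#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if the GL_GREMEDY_* extensions are advertised.
   Their absence proves nothing about other debuggers or custom DLLs. */
int gremedyExtensionsPresent(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_GREMEDY_") != NULL;
}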
Perhaps the closest you can get is to load C:\windows\system32\opengl32.dll directly; however, that can break your game horribly on future releases of Windows, so I'd advise against it. (And this still wouldn't protect you against those enterprising enough to replace the system-wide opengl32.dll, or who are perhaps using other operating systems.)
I'm not 100% sure, but I believe that Graphic Remedy replaces the Windows opengl32.dll with their own opengl32.dll file to hook GL calls.
So if that is the case, you just have to check the DLL version and terminate if it's not what you expect.
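A hedged sketch of that kind of check (Win32, checking the path of the loaded module rather than its version; trivially bypassed, so treat it as a speed bump at best):
#include <windows.h>
#include <stdio.h>

/* Report where the opengl32.dll loaded by this process actually lives.
   If it is not under the system directory, something is interposing. */
void reportOpenGLModulePath(void)
{
    HMODULE mod = GetModuleHandleA("opengl32.dll");
    char path[MAX_PATH] = { 0 };

    if (mod != NULL && GetModuleFileNameA(mod, path, MAX_PATH) > 0)
        printf("opengl32.dll loaded from: %s\n", path);
}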