How do header files like OpenGL.h work - c++

I understand that header files are processed before the rest of the source file that includes them is compiled, and that they make developing code easier. I also know that they provide declarations that other code can work against. However, I don't see function definitions in the header file OpenGL.h like I do in all the tutorials I have been researching. OpenGL.h is very obscure to me, full of #define and extern, and I don't know what is happening. For instance:
#define CGL_VERSION_1_0 1
#define CGL_VERSION_1_1 1
#define CGL_VERSION_1_2 1
#define CGL_VERSION_1_3 1
extern CGLError CGLQueryRendererInfo(GLuint display_mask,
                                     CGLRendererInfoObj *rend, GLint *nrend);
extern CGLError CGLDestroyRendererInfo(CGLRendererInfoObj rend);
extern CGLError CGLDescribeRenderer(CGLRendererInfoObj rend, GLint rend_num,
                                    CGLRendererProperty prop, GLint *value);
I have no idea what is happening here, and I have come across other C++ includes that share a similar obscurity. I would like to write a library of my own, and I feel that proper header files are written in this manner.
To me it seems like all that is happening is that keywords or variables are being created, or functions declared without a code block. I have taken two C++ courses, an introduction and a follow-up, but they did not cover this topic in much detail.
I am just trying to deobfuscate what is happening.

Usually, library headers do not contain implementations (the exceptions are, for example, header-only libraries, especially those with lots of C++ template code). Headers just provide the information needed to call library functions, that is, data types and function signatures. The implementation is usually contained in a static or shared library.
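As a minimal sketch (with invented file and function names), the split looks like this: the header promises that a function exists, the library's source provides the body, and your program compiles against the header while linking against the compiled binary:

// mathlib.h - shipped with the library: declarations only
#ifndef MATHLIB_H
#define MATHLIB_H
int add(int a, int b);   // just the signature, no body
#endif

// mathlib.cpp - compiled by the library vendor into libmathlib.a / libmathlib.so
#include "mathlib.h"
int add(int a, int b) { return a + b; }

// main.cpp - your code: sees only the header, links against the binary
#include "mathlib.h"
int main() { return add(2, 3); }

Incidentally, the extern in the declarations you quoted changes nothing for functions, since function declarations have external linkage by default; extern CGLError CGLDestroyRendererInfo(...) is simply a declaration like the one above.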
Strictly speaking, OpenGL is not even a library but a specification; an implementation of OpenGL is usually provided as a shared library. That is, the implementation of the OpenGL functions is shipped as a bunch of binary data holding compiled code. If you really want the sources, you need to check which implementation of OpenGL you are using (it could be the NVIDIA driver, for example, and I doubt that its real sources are available).
To understand how this compiled code gets linked with your code, and how headers are involved in this process, I recommend reading more about the C++ compilation process and about static and dynamic linking.

Even though the name OpenGL.h might suggest otherwise, you're not looking at the OpenGL header file. This is the header for CGL, which is the window system interface for OpenGL on Mac OS.
The window system interface is a platform dependent layer that forms the "glue" between OpenGL and the window system on the platform. This API is used to configure and create contexts, drawing surfaces, etc. It corresponds to WGL on Windows, GLX on Linux, EGL on Android, EAGL on iOS, etc.
If you want to see the actual OpenGL headers, look for gl.h and gl3.h in the same directory. gl.h is for legacy OpenGL 2.1, gl3.h is for the core profile of OpenGL 3.x and later.
Those headers contain the declarations of the OpenGL API entry points, as well as definitions for enums. The functions need to be declared so that you can call them in your code. In C++, you cannot call undeclared functions.
The code for the functions is in the OpenGL framework, which you link against. A framework on Mac OS is a package that contains headers, libraries, and some related files. It's the libraries within the framework that contain the actual implementation of the API entry points.

In OpenGL, you have to retrieve a pointer to each OpenGL function before it can be used; the functions are thus loaded at run time. That pointer is then used as the function itself, typedef'd so that it can be called like a regular function. There are libraries that do this for you, such as GLEW, glLoadGen, and glbinding, to name the most prominent ones. OpenGL.h would hold function pointers and maybe some contextual information on how to initialise OpenGL.
Headers normally contain only function prototypes; with OpenGL it's different, because you hold only a pointer to each function rather than the actual function itself.
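As a rough sketch of what such loaders do internally on Windows (simplified; glext.h ships one pointer typedef per entry point, similar to the one written out below):

#include <windows.h>
#include <GL/gl.h>

// glext.h provides typedefs like this for every entry point
typedef void (APIENTRY *PFNGLCOMPILESHADERPROC)(GLuint shader);

PFNGLCOMPILESHADERPROC glCompileShader = nullptr;

void load_gl_functions()   // requires a current GL context
{
    glCompileShader = (PFNGLCOMPILESHADERPROC)wglGetProcAddress("glCompileShader");
}

After load_gl_functions() has run, glCompileShader(shader) calls through the pointer exactly as if it were an ordinary function.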

Related

Why should you use an external OpenGL loader function instead of GLAD's built in loader?

I've been using GLAD with SFML for some time, relying on GLAD's built-in function loader, gladLoadGL, which worked just fine for me. Now I'm looking at GLFW, and both its guide and the Khronos OpenGL wiki say that you should use gladLoadGLLoader((GLADloadproc) glfwGetProcAddress) instead. Is there any particular reason for it?
Is there any particular reason for it?
Using gladLoadGL in conjunction with, for example, GLFW results in two pieces of code in the same program which basically do the same thing, without any benefit.
For example, look at what GLFW does on Windows (it is similar on the other platforms):
_glfw.wgl.instance = LoadLibraryA("opengl32.dll");
It dynamically loads the GL library behind your back, and it provides an abstraction for querying OpenGL function pointers (the core ones, and the extension ones, using both wglGetProcAddress and raw GetProcAddress).
The GL loader glad generates does the same things:
libGL = LoadLibraryW(L"opengl32.dll");
Now one might argue that loading the same shared library twice isn't a big deal, since it should result in reusing the same internal handles and is handled by reference counting. Even so, it is simply unnecessary code, and it still consumes memory and some time during initialization.
So unless you have some very specific reason to need glad's loader code - maybe in a modified form to do something different (like using a different GL library than the one your system would use by default) - there is no use case for it, and it seems a reasonable recommendation not to include code which isn't needed.
As a side note: I often see projects using GLFW together with GL loaders like GLAD or GLEW while also linking opengl32.lib or libGL.so at link time. This is completely unnecessary as well: the code will always load the libraries manually at runtime, and there should not be any GL symbols left at link time which the linker could resolve from the GL lib anyway.
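For reference, the recommended setup then looks roughly like this (a sketch, with all error handling omitted):

#include <glad/glad.h>
#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    glfwMakeContextCurrent(window);   // the loader needs a current context

    // reuse GLFW's loader instead of glad's own LoadLibrary-based one
    gladLoadGLLoader((GLADloadproc) glfwGetProcAddress);

    // ... use OpenGL as usual ...

    glfwTerminate();
    return 0;
}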

Defining GL_GLEXT_PROTOTYPES vs getting function pointers

I am writing a program which depends on OpenGL 2.0 or above. Looking at the spec of GL 2.0, I see that the extension defined in ARB_shader_objects has been promoted, which I suppose means that the ARB suffix is no longer required on GL 2.0 and above, and that any implementation supporting GL 2.0 or higher will have this functionality as part of the core implementation.
That said, when I compile my program, gcc on Linux gives "warning: implicit declaration of function". One way to get at these functions is to declare them in the program itself and then fetch the function pointers via the *GetProcAddress functions.
The other way is to define GL_GLEXT_PROTOTYPES before including glext.h, which avoids having to fetch a function pointer for each of the functions that are now part of core GL 2.0 and above. Could someone please tell me whether that is a recommended and correct way? The baseline is that my program requires OpenGL 2.0 or above, and I don't want to support anything less than GL 2.0.
Just in case someone suggests GLee or GLEW: I don't want to use, or have the option to use, those libraries here.
There are two issues here.
GL_ARB_shader_objects was indeed promoted to core in GL 2.0, but the API was slightly changed for the core version, so it is not just the same function names without the ARB suffix. For example, there is glCreateShader() instead of glCreateShaderObjectARB(), and the two functions glGetShaderInfoLog() and glGetProgramInfoLog() replace glGetInfoLogARB(), plus some other minor differences of this sort.
The second issue is the assumption that the GL library exports all the core functions. On Linux that is usually the case (not only for core functions, but for basically everything), but there is no standard guaranteeing it. The OpenGL ABI for Linux only requires:
3.4. The libraries must export all OpenGL 1.2, GLU 1.3, GLX 1.3, and ARB_multitexture entry points statically.
There are proposals for an update but I haven't heard anything about that recently.
Windows exports only the OpenGL 1.1 core, since opengl32.dll is part of the OS and the ICD lives in a separate DLL. There, you have to query the function pointers for virtually everything.
So the most portable way is definitely to query the function pointers, no matter if you do it manually or use some library like GLEW.
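If you do end up querying manually on Linux, the pattern is roughly this (a sketch; the PFNGL...PROC typedefs come from glext.h, and the pointer is given a different name here to avoid clashing with the prototypes):

#include <GL/gl.h>
#include <GL/glx.h>
#include <GL/glext.h>

static PFNGLCREATESHADERPROC my_glCreateShader;

void load_entry_points()   // call once a GL context is current
{
    my_glCreateShader = (PFNGLCREATESHADERPROC)
        glXGetProcAddress((const GLubyte*)"glCreateShader");
}

On older systems you may only have glXGetProcAddressARB, which works the same way.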

How to define OpenGL extensions correctly?

I get OpenGL extensions using wglGetProcAddress. But on different machines it needs different name strings: e.g. for glDrawArrays I should call wglGetProcAddress with either "glDrawArrays" or "glDrawArraysEXT". How do I determine which one to use?
There are two pretty good OpenGL extension loading libraries out there - GLee and GLEW. GLEW is currently more up to date than GLee. Even if you don't want to use either of them, they're both open source, so you could do worse than taking a peek at how they do things.
You may also want to check http://www.opengl.org/sdk/ which is a decent collection of OpenGL documentation online.
"glDrawArrays" or "glDrawArraysEXT"
Both! Even though they're named similarly, and more often than not the procedure signatures and token values are identical, they are different extensions whose details may very well differ.
It's ultimately up to the programmer to decide which functions are used. And if a program uses the …EXT variant of a function, then that very function must be loaded, even if there is a …ARB or core function of the same name; they may differ in signature and/or in the tokens and state they use, so you can't mindlessly substitute one for the other.
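A sketch of what that decision can look like on Windows; gl_version_at_least() and has_extension() are hypothetical helpers you would implement by parsing glGetString(GL_VERSION) and glGetString(GL_EXTENSIONS):

#include <windows.h>
#include <GL/gl.h>

bool gl_version_at_least(int major, int minor);   // hypothetical helper
bool has_extension(const char* name);             // hypothetical helper

typedef void (APIENTRY *DRAWARRAYSPROC)(GLenum mode, GLint first, GLsizei count);

DRAWARRAYSPROC load_draw_arrays(HMODULE opengl32)
{
    if (gl_version_at_least(1, 1))
        // glDrawArrays is core since GL 1.1; core 1.1 entry points are
        // exported by opengl32.dll itself, not through wglGetProcAddress
        return (DRAWARRAYSPROC)GetProcAddress(opengl32, "glDrawArrays");
    if (has_extension("GL_EXT_vertex_array"))
        return (DRAWARRAYSPROC)wglGetProcAddress("glDrawArraysEXT");
    return nullptr;
}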

Are Preprocessor Definitions compiled into a library?

Are they (preprocessor definitions) compiled into a static/dynamic library? For example, the FBX SDK needs KFBX_DLLINFO. A library that makes use of the FBX SDK must include that. Yet the client application, as far as I can tell from my limited experimentation, does not need to declare the definition again.
I can't think of a more practical scenario, but what if the client application needs the definition to be excluded (for example, _CRT_SECURE_NO_WARNINGS compiled with a library, but what if I need those warnings)?
In short: no.
In long:
For the most part, you can think of preprocessor definitions as a textual substitution mechanism. They are processed before compilation occurs (pre-compilation), so they transform the source code just before the compiler translates it to machine code, intermediate files, or whatever its target is. By the time you have a binary lib/obj/dll/exe/so file, the preprocessor definitions are long gone.
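A tiny example of that substitution (you can inspect the result yourself with g++ -E file.cpp):

// what you write
#define BUFFER_SIZE 128
char buffer[BUFFER_SIZE];

// what the compiler proper actually sees after preprocessing
char buffer[128];

Nothing about BUFFER_SIZE survives into the object file, which is why a definition can never be looked up from a compiled library.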
If you include a header in your code that was packaged as part of the library (e.g. in order to reference methods, types, enums, etc. defined by the library), then you are including preprocessor definitions that the library defines in that header.
In your case, if you include an FBX header, you might also be pulling in the preprocessor definition of KFBX_DLLINFO. The FBX binary library you're linking against was almost certainly built with that same header, so you are building against the same definition. This is a common pattern with libraries written in C/C++: common, shared header files along with a static or dynamic lib to build against.
Preprocessor definitions only exist during the compilation. They don't exist anymore in the compiled binary, be it static or dynamic library, or executable.
As Chris explains, #defines are a textual substitution mechanism. The expansion was traditionally performed as a pre-compilation step, with the main C++-language compiler not having (or wanting) access to the pre-substitution text. For this reason, #defines can do things that aren't possible with the language-based constraints of C++, such as concatenate values to form new identifiers. These days, compilers tend to embed the macro processing functionality, and may include some information about pre-processor symbols in the debugging symbol tables compiled into executables. It's not very desirable or practical to access this debug information for some client usage, as debug formats and content can change between compiler versions, aren't very portable, may not be terribly well debugged :-/, and accessing them may be slow and clumsy.
If I understand you correctly, you're wondering whether #defines from some lower-level library that your library is using will be automatically available to an "application" programmer using your library. No, they won't. You need to either provide your own definitions for those values that your library's API exposes to the application programmer (mapping to the lower-level library values internally if they differ), or ship the lower-level library header as well.
For an example of remapping:
Your my_library.h:
#ifndef INCLUDED_MY_LIBRARY_H
#define INCLUDED_MY_LIBRARY_H

enum Time_Out
{
    Sensible,
    None
};

void do_stuff(Time_Out time_out);

#endif
Your my_library.cpp:
#include "my_library.h"
#include "lower_level_library.h"

void do_stuff(Time_Out time_out)
{
    Lower_Level_Lib::do_stuff(time_out == Sensible ? Lower_Level_Lib::Short_Timeout
                                                   : Lower_Level_Lib::No_Timeout);
    LOWER_LEVEL_LIB_MACRO("whatever");
}
As illustrated, usage of Lower_Level_Lib hasn't been exposed in my_library.h, so the app programmer doesn't need to know about or include lower_level_library.h. If you find you need or want to include lower_level_library.h in my_library.h in order to use its types, constants, variables, or functions, then you will need to provide the app programmer with that library header too.

Alternatives to preprocessor directives

I am developing a C++ mobile phone application for the Symbian platform. One of the requirements is that it has to work on all Symbian phones, right from 2nd edition phones up to 5th edition phones. Across editions there are differences in the Symbian SDKs, so I have to use preprocessor directives to conditionally compile the code relevant to the SDK the application is being built for, like below:
#ifdef S60_2nd_ED
Code
#elif defined(S60_3rd_ED)
Code
#else
Code
#endif
Since the application I am developing is not trivial, it will soon grow to tens of thousands of lines of code, and preprocessor directives like the above would be spread all over. I want to know whether there is any alternative to this, or perhaps a better way to use these preprocessor directives in this case.
Please help.
Well ... That depends on the exact nature of the differences. If it's possible to abstract them out and isolate them into particular classes, then you can go that route. This would mean having version-specific implementations of some classes, and switch entire implementations rather than just a few lines here and there.
You'd have
MyClass.h
MyClass_S60_2nd.cpp
MyClass_S60_3rd.cpp
and so on. You can select which .cpp file to compile either by wrapping each file's entire contents in #ifdefs as above, or by controlling at the build level (through makefiles or whatever) which files are included when building for the various targets.
Depending on the nature of the changes, this might be far cleaner.
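A minimal sketch of that layout (all names invented):

// MyClass.h - one interface for every edition
class MyClass
{
public:
    void DoNetworkRequest();
};

// MyClass_S60_2nd.cpp - compiled only into the 2nd edition build
#include "MyClass.h"
void MyClass::DoNetworkRequest()
{
    // 2nd edition specific implementation
}

// MyClass_S60_3rd.cpp - compiled only into the 3rd edition build
#include "MyClass.h"
void MyClass::DoNetworkRequest()
{
    // 3rd edition specific implementation
}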
I've been exactly where you are.
One trick is, even if you're going to have conditions in code, don't switch on Symbian versions. It makes it difficult to add support for new versions in future, or to customise for handsets which are unusual in some way. Instead, identify what the actual properties are that you're relying on, write the code around those, and then include a header file which does:
#if S60_3rd_ED
#define CAF_AGENT 1
#define HTTP_FILE_UPLOAD 1
#elif S60_2nd_ED
#define CAF_AGENT 0
#if S60_2nd_ED_FP2
#define HTTP_FILE_UPLOAD 1
#else
#define HTTP_FILE_UPLOAD 0
#endif
#endif
and so on. Obviously you can group the defines by feature rather than by version if you prefer, have completely different headers per configuration, or whatever scheme suits you.
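Code then tests the feature flags rather than the platform. A hypothetical use:

#if HTTP_FILE_UPLOAD
    UploadViaHttp(file);        // hypothetical upload helper
#else
    ShowNotSupportedNote();     // hypothetical fallback
#endif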
We had defines for the UI classes you inherit from, too, so that there was some UI code in common between S60 and UIQ. In fact because of what the product was, we didn't have much UI-related code, so a decent proportion of it was common.
As others say, though, it's even better to herd the variable behaviour into classes and functions where possible, and link different versions.
[Edit in response to comment:
We tried quite hard to avoid doing anything dependent on resolution - fortunately the particular app didn't make this too difficult, so our limited UI was pretty generic. The main thing where we switched on screen resolution was for splash/background images and the like. We had a script to preprocess the build files, which substituted the width and height into a file name, splash_240x320.bmp or whatever. We actually hand-generated the images, since there weren't all that many different sizes and the images didn't change often. The same script generated a .h file containing #defines of most of the values used in the build file generation.
This is for per-device builds: we also had more generic SIS files which just resized images on the fly, but we often had requirements on installed size (ROM was sometimes quite limited, which matters if your app is part of the base device image), and resizing images was one way to keep it down a bit. To support screen rotation on N92, Z8, etc, we still needed portrait and landscape versions of some images, since flipping aspect ratio doesn't give as good results as resizing to the same or similar ratio...]
In our company we write a lot of cross-platform code (game development for win32/ps3/xbox/etc.).
To avoid platform-related macros as much as possible, we generally use the following few tricks:
extract platform-related code into platform-abstraction libraries that have the same interface across the different platforms, but not the same implementation;
split code into different .cpp files for different platforms (e.g. "pipe.h", "pipe_common.cpp", "pipe_linux.cpp", "pipe_win32.cpp", ...; see the sketch after this list);
use macros and helper functions to unify platform-specific function calls (e.g. "#define usleep(X) Sleep((X)/1000u)");
use cross-platform third-party libraries.
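As a rough sketch of the file-splitting trick (simplified, with invented details): pipe.h declares one API, and the build system compiles only the matching platform file:

// pipe.h - identical interface on every platform
bool pipe_create(int fds[2]);

// pipe_linux.cpp - listed only in the Linux build
#include "pipe.h"
#include <unistd.h>
bool pipe_create(int fds[2]) { return ::pipe(fds) == 0; }

// pipe_win32.cpp - listed only in the Windows build
#include "pipe.h"
#include <io.h>
#include <fcntl.h>
bool pipe_create(int fds[2]) { return _pipe(fds, 4096, _O_BINARY) == 0; }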
You can try to define a common interface for all the platforms, if possible. Then, implement the interface for each platform.
Select the right implementation using preprocessor directives.
This way, you will have the platform selection directive in fewer places in your code (ideally, in one place, explicitly in the header file declaring the interface).
This means something like:
commoninterface.h /* declaring the common interface API. Platform identification preprocessor directives might be needed for things like common type definitions */
platform1.c /*specific implementation*/
platform2.c /*specific implementation*/
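Sketched out with invented names, that might be:

/* commoninterface.h */
#ifndef COMMON_INTERFACE_H
#define COMMON_INTERFACE_H
void beep(void);   /* one declaration, several implementations */
#endif

/* platform1.c */
#include "commoninterface.h"
void beep(void) { /* platform 1 specific beep */ }

/* platform2.c */
#include "commoninterface.h"
void beep(void) { /* platform 2 specific beep */ }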
Look at SQLite. It has the same problem. It moves the platform-dependent parts into separate files and effectively compiles only what is needed, using preprocessor directives that exclude the contents of an entire file. It's a widely used approach.
I have no idea about alternatives, but what you can do is include different files for different versions of the OS. For example:
#ifdef S60_2nd_ED
#include "graphics2"
#elif defined(S60_3rd_ED)
#include "graphics3"
#else
#include "graphics"
#endif
You could do something like what is done for the assembly definitions in the Linux kernel. Each architecture has its own directory (asm-x86, for instance). All these directories contain the same set of high-level header files presenting the same interface. When the kernel is configured, a link named asm is created targeting the appropriate asm-arch directory. This way, all the C files include the same <asm/...> headers.
There are several differences between S60 2nd edition and 3rd edition applications that are not limited to code: application resource files differ, graphics formats and the tools to pack them are different, and mmp files differ in many ways.
Based on my experience, don't try to automate it too much; have separate build scripts for 2nd edition and 3rd edition instead. At the code level, separate the differences into their own classes that share a common abstract API, and use flags only in rare cases.
You should try to avoid spreading #ifs through the code.
Rather, use the #if in header files to define alternative macros, and then use the single macro in the code.
This method allows you to keep the code slightly more readable.
Example:
Plop.h
======
#if V1
#define MAKE_CALL(X,Y) makeCallV1(X,Y)
#elif V2
#define MAKE_CALL(X,Y) makeCallV2("Plop",X,222,Y)
....
#endif
Plop.cpp
========
if (pushPlop)
{
    MAKE_CALL(911, "Help");
}
To help facilitate this, split version-specific code into its own functions, then use macros to activate those functions as shown above. You can also wrap the changing parts of the SDK in your own class to provide a consistent interface; all the differences are then managed within the wrapper class, leaving the code that does the real work tidier.
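For instance, a wrapper class in the spirit of the Plop example might look like this (a sketch; only one branch survives preprocessing in any given SDK build):

class SdkCalls   // all version differences live behind this interface
{
public:
    static void MakeCall(int number, const char* msg)
    {
#if V1
        makeCallV1(number, msg);
#elif V2
        makeCallV2("Plop", number, 222, msg);
#endif
    }
};

Compared with the MAKE_CALL macro, an inline wrapper like this gives you type checking and is easier to step through in a debugger.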