List OpenGL extensions USED at runtime - C++

How can I get a list of the OpenGL extensions that my program uses, at runtime, in C++? To clarify: I don't want the extensions available on my platform, and I don't want the extensions I can execute; I want a list of the extensions I am actually using in my code. This is to check whether those extensions are available before execution starts. I have been looking at GLEW, but GLEW is for
determining which OpenGL extensions are supported on the target platform.
What I want is a way to get the extensions I am using. If anyone knows a non-runtime way, please tell me, because that may be useful too.
Also, I want to know how to determine the minimum OpenGL version to use.

Actually, the standard way to handle extensions is in situ: test for availability before trying to call them.
if (GLEW_ARB_vertex_program)
{
    ... // Do something with ARB_vertex_program
}
More complex OpenGL programs will test the availability of several extensions and make decisions based on that:
if (GLEW_ARB_vertex_array_object /* && other stuff */)
{
    renderingTechnic = VERTEX;
}
else
{
    renderingTechnic = BEGIN_END;
}
// ... etc.
If you want the list of actually used extensions, it becomes tricky.
You have to detect cases like this:
if (GLEW_ARB_vertex_program && 0) // YOU SHALL NOT PASS
{
    ... // Do something with ARB_vertex_program
}
In this case you will never enter the then-branch, but your tool may have difficulty detecting that.
Code coverage tools exist to do this kind of job, but embedding one to compare the results against the available extensions at runtime is not an option here.
Looking at the symbol table of your output is not a solution either:
The one for your exe will contain none of the OpenGL functions.
The one for the OpenGL library will contain every possible function (whether available or used or not).
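If you settle for maintaining that list by hand, the start-up check itself is straightforward. A minimal sketch, assuming GLEW and an illustrative, hand-curated list of required extensions:
#include <GL/glew.h>
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Hand-maintained list of every extension the code relies on.
static const char *requiredExtensions[] = {
    "GL_ARB_vertex_program",       // illustrative entries only
    "GL_ARB_vertex_array_object",
};

// Call once after glewInit() has succeeded.
void CheckRequiredExtensions()
{
    for (size_t i = 0; i < sizeof(requiredExtensions) / sizeof(requiredExtensions[0]); ++i)
    {
        if (!glewIsSupported(requiredExtensions[i]))
        {
            std::fprintf(stderr, "Missing required extension: %s\n", requiredExtensions[i]);
            std::exit(EXIT_FAILURE);
        }
    }
}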

If your codebase is anything but humongous, you can simply do this with a bit of searching for "GL_EXT_", "GL_ARB_", etc. (or the corresponding GLEW_ macros), manual inspection, and a dash of discipline afterwards to document all the extensions you use.
For the minimum version: again, this is something pretty basic that you need to know already when writing the code. But here I think using GLEW can help you. If memory serves, GLEW uses preprocessor macros that you can define to set the version of the OpenGL standard you are expecting. You start by setting this to a low value (e.g. 1.1) and see if your code compiles and runs. If it does, that's probably the minimum version you need. If it doesn't, you increment the version and try again.

The usual approach is to decide which extensions you're going to use before you start the actual coding. And if you change that later on, you immediately put the availability tests somewhere close to initialization, so that you can either terminate with a runtime error message or fall back to an alternate code path.
Of course the preferred way is to not use extensions at all and stick to a plain OpenGL version profile. Yes, anything after OpenGL-1.2 is loaded through the extension mechanism, but that doesn't make it extensions. So if you know you absolutely need OpenGL-3.0 for your program to work, but nothing else, then you can just test for that and be done.
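A minimal sketch of that baseline test, again assuming GLEW (after glewInit(), GLEW exposes each core version as a runtime boolean):
#include <GL/glew.h>
#include <cstdio>
#include <cstdlib>

// Call once after glewInit(); terminates if the context is below OpenGL 3.0.
void RequireOpenGL30()
{
    if (!GLEW_VERSION_3_0)
    {
        std::fprintf(stderr, "OpenGL 3.0 or later is required (got: %s)\n",
                     (const char *) glGetString(GL_VERSION));
        std::exit(EXIT_FAILURE);
    }
}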

Related

Getting Windows Version from Preprocessor (C++ Win32)

I'm working on a Win32 application and am currently trying to implement multi-display features. For this, I want to use EnumDisplayMonitors, a function that is only present from Windows 2000 upwards. I've done my best to maintain backwards compatibility back to Windows 95 thus far.
I therefore want to ignore this section of code when the program runs on older versions of Windows. However, its mere presence anywhere in the code, even if it's not executed, crashes the program. I want to use #ifdef to ignore this section of code; however, I can't seem to figure out a way to get the current Windows version from the preprocessor. I've seen suggestions to use WINVER or _WIN32_WINNT, but both need to be set by me as far as I can understand, defeating the whole point. Any attempts to use them predictably did not work as intended.
Is there a way to get the current version of Windows from the preprocessor? If not, is there another way to disable this function completely depending on the OS? Again, regardless of whether it's used, its mere presence crashes the program on Windows 95.
The correct way to handle this is not with the preprocessor at all. You need to use the Win32 API GetProcAddress() function at runtime instead, eg:
typedef BOOL (WINAPI *LPFN_EDM)(HDC, LPCRECT, MONITORENUMPROC, LPARAM);

LPFN_EDM lpEnumDisplayMonitors = (LPFN_EDM) GetProcAddress(GetModuleHandle(TEXT("user32.dll")), "EnumDisplayMonitors");
if (lpEnumDisplayMonitors)
{
    // use lpEnumDisplayMonitors as needed...
}
else
{
    // use something else...
}
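For completeness, a sketch of actually calling it through that pointer; EnumDisplayMonitors expects a MONITORENUMPROC callback (the callback body here is illustrative):
// Invoked once per monitor; return TRUE to continue the enumeration.
BOOL CALLBACK MyMonitorEnumProc(HMONITOR hMonitor, HDC hdc, LPRECT lprcMonitor, LPARAM lParam)
{
    // e.g. record *lprcMonitor for each display
    return TRUE;
}

...
if (lpEnumDisplayMonitors)
{
    lpEnumDisplayMonitors(NULL, NULL, MyMonitorEnumProc, 0);
}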
This is even described in Microsoft's documentation:
Operating System Version
Identifying the current operating system is usually not the best way to determine whether a particular operating system feature is present. This is because the operating system may have had new features added in a redistributable DLL. Rather than using the Version API Helper functions to determine the operating system platform or version number, test for the presence of the feature itself.
To determine the best way to test for a feature, refer to the documentation for the feature of interest. The following list discusses some common techniques for feature detection:
You can test for the presence of the functions associated with a feature. To test for the presence of a function in a system DLL, call the LoadLibrary function to load the DLL. Then call the GetProcAddress function to determine whether the function of interest is present in the DLL. Use the pointer returned by GetProcAddress to call the function. Note that even if the function is present, it may be a stub that just returns an error code such as ERROR_CALL_NOT_IMPLEMENTED.
You can determine the presence of some features by using the GetSystemMetrics function. For example, you can detect multiple display monitors by calling GetSystemMetrics(SM_CMONITORS).
There are several versions of the redistributable DLLs that implement shell and common control features. For information about determining which versions are present on the system your application is running on, see the topic Shell and Common Controls Versions.
If your compiler toolchain supports delay loading, you can use that instead of calling GetProcAddress() manually. Set your project to delay-load user32.dll, and then call EnumDisplayMonitors() normally, after first validating that the OS supports what you need, such as checking whether Windows is Win2K or later via GetVersionEx(), or checking whether multiple monitors are installed, eg:
// Option 1: check the OS version
OSVERSIONINFO osvi = {0};
osvi.dwOSVersionInfoSize = sizeof(osvi);
GetVersionEx(&osvi);
if (osvi.dwMajorVersion >= 5) // Win2K = 5.0
{
    // use EnumDisplayMonitors as needed...
}
else
{
    // use something else...
}

// Option 2: check for the feature itself
if (GetSystemMetrics(SM_CMONITORS) > 1)
{
    // use EnumDisplayMonitors as needed...
}
else
{
    // use something else...
}
Compile targeting the lowest Windows version you support via WINVER/_WIN32_WINNT (whatever your SDK supports), then load whatever functions you need dynamically at runtime (LoadLibrary etc.) once you know where you are actually running.
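In practice, the compile-time side amounts to something like this before including <windows.h> (the exact values depend on your SDK; 0x0400 corresponds to the Windows 95/NT4 level):
// Target the oldest platform you support, in a common header or the compiler flags.
#define WINVER       0x0400
#define _WIN32_WINNT 0x0400
#include <windows.h>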

Use CPU fallback if OpenCV's Cuda extensions are not available

In my code I'm trying to capitalize on the power of a possibly present CUDA-capable GPU. While the code works well on computers that have CUDA available (and where OpenCV was compiled with CUDA support), I'm having trouble implementing a fallback to the CPU. Even building fails, since the includes I'm using
#include "opencv2/core/cuda.hpp"
#include "opencv2/cudaimgproc.hpp"
#include "opencv2/cudaarithm.hpp"
are not found. I'm quite a novice regarding C++ program architecture. How would I need to structure my code to support such fallback functionality?
If you are implementing a fallback, you probably want to switch to it at runtime. But the fact that you are getting compiler errors suggests that the CUDA-only code also has to be excluded at compile time when OpenCV was built without CUDA. In general, you probably want something like this:
if (HasCuda()) {
    RunCudaCode(...);
} else {
    RunCpuCode(...);
}
Alternatively, you could build two shared libraries, one with and one without CUDA, and load the one that you need based on HasCuda(). However, that approach only makes sense if your binary is huge and you're running into memory issues.
It might be necessary to have a similar block in your startup code that initializes CUDA.
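One way to make both paths build is to pair the runtime check with a compile-time guard, so the CUDA includes and calls are only compiled when OpenCV was built with them. A sketch, assuming OpenCV's generated opencv_modules.hpp header (it defines a HAVE_OPENCV_* macro per built module; the exact macro name here is an assumption):
#include <opencv2/core.hpp>
#include <opencv2/opencv_modules.hpp>  // defines HAVE_OPENCV_* for built modules
#ifdef HAVE_OPENCV_CUDAIMGPROC
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>
#endif

bool HasCuda()
{
#ifdef HAVE_OPENCV_CUDAIMGPROC
    // Built with the CUDA modules; still check for an actual device at runtime.
    return cv::cuda::getCudaEnabledDeviceCount() > 0;
#else
    return false;  // built without CUDA support
#endif
}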

I am looking for steps to update a game to SDL 2.0

I am working on a project with my friend where we have to update an existing version of a game from SDL 1.2 to the SDL 2.0 header files and functions.
I would like to know the standard procedure to follow when updating existing source code to newer libraries.
The code has 28 source files and 11 header files, and it makes extensive use of keyboard and mouse events as well as sounds.
All the source files use C++ and SDL. Most source files are around 200 lines of code.
I have about 3 months to make the changes. I would like to know how to write a basic summary of my schedule for that period on a week-by-week or two-week basis.
Can anyone provide the proper steps so I can make such a schedule?
I don't know of standard procedures, but here are my thoughts:
In general, I can think of two approaches to porting existing software to another language, another library, or any change of the sort: with minimal structural change, or with major structural change.
Minimal structural change means that you leave the structure of your program intact and start replacing every single bit of it with its equivalent in the other library. For example, when porting from GLUT to SDL, you replace the GLUT window creation with SDL's, the keyboard handling with SDL's, etc., all independently.
This method is faster, in the sense that all the changes are small and local. You are also less likely to introduce bugs. The result may not be as efficient, though, because the structure of the program was not originally designed for the new library.
Major structural change means that, based on the new library, you rewrite the large part of the program that depends on that library. For example, when switching from C++'s STL to C's standard library, rewriting string with an equivalent in C that keeps allocating/freeing/copying doesn't make sense, because your approach should be fundamentally different.
This approach requires a lot more work, but in the end it gives you much better quality.
Back to your particular case: I'm not sure how different SDL 2 is from SDL 1.2, but my guess is that it's not fundamentally different. Therefore, I believe the first method should work in your case. That is, you should work out the SDL 2 equivalents of the things you did in SDL 1.2 and replace them in your code accordingly, as in the sketch below.
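As a concrete example of the kind of local replacement involved (shown side by side for comparison, not as one compilable unit; this is one possible mapping, since SDL 2 offers several ways to get a drawable surface):
#include <SDL.h>

// SDL 1.2: a single call created both the window and the drawing surface.
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);

// SDL 2.0: the window and the renderer are separate objects.
SDL_Window *window = SDL_CreateWindow("My Game",
    SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
    640, 480, 0);
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);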

Using new Windows features with fallback

I've been using dynamic libraries and the GetProcAddress machinery for quite some time, but it always seems a tedious, IntelliSense-hostile, and ugly way to do things.
Does anyone know a clean way to import new features while staying compatible with older OSes?
Say I want to use an XML library which is part of Vista. I call LoadLibraryW, and then I can use the functions if the HANDLE is non-null.
But I really don't want to go the typedef void (*PFNFOOOBAR)(int, int, int); and PFNFOOOBAR foo = reinterpret_cast<PFNFOOOBAR>(GetProcAddress(GetModuleHandle(...), "somecoolfunction")); route, all of that times 50.
Is there a non-hackish solution with which I could avoid this mess?
I was thinking of adding coolxml.lib to the project settings, then including coolxml.dll in the delay-load DLL list, and maybe copying the few function signatures I will use into the file that needs them. Then checking that the LoadLibraryW return value is non-null, and if it is, branching to the Vista path as in regular program flow.
But I'm not sure whether LoadLibrary and delay-load can work together, and whether branch prediction won't mess things up in some cases.
Also, I'm not sure whether this approach will work, and whether it won't cause problems after upgrading to the next SDK.
IMO, LoadLibrary and GetProcAddress are the best way to do it.
(Make some wrapper objects which take care of that for you, so you don't pollute your main code with that logic and ugliness; e.g. the sketch below.)
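As a sketch of what such a wrapper might look like (the names here are illustrative, not a real library):
#include <windows.h>

// Resolves a function from a DLL once and caches the pointer.
template <typename FnPtr>
class DynamicProc
{
    FnPtr m_fn;
public:
    DynamicProc(const wchar_t *dll, const char *name) : m_fn(NULL)
    {
        if (HMODULE mod = LoadLibraryW(dll))
            m_fn = reinterpret_cast<FnPtr>(GetProcAddress(mod, name));
    }
    operator bool() const { return m_fn != NULL; }
    FnPtr get() const { return m_fn; }
};

// Usage, with the hypothetical function from the question:
typedef void (WINAPI *PFNFOOOBAR)(int, int, int);

void Example()
{
    DynamicProc<PFNFOOOBAR> fooobar(L"coolxml.dll", "somecoolfunction");
    if (fooobar)
        fooobar.get()(1, 2, 3);  // call through the resolved pointer
}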
DelayLoad brings with it security problems (see this OldNewThing post) (edit: though not if you ensure you never call those APIs on older versions of windows).
DelayLoad also makes it too easy to accidentally depend on an API which won't be available on all targets. Yes, you can use tools to check which APIs you call at runtime but it's better to deal with these things at compile time, IMO, and those tools can only check the code you actually exercise when running under them.
Also, avoid compiling some parts of your code with different Windows header versions, unless you are very careful to segregate code and the objects that are passed to/from it.
It's not absolutely wrong -- and it's completely normal with things like plug-in DLLs where two entirely different teams probably worked on the two modules without knowing what SDK version each other targeted -- but it can lead to difficult problems if you aren't careful, so it's best avoided in general.
If you mix header versions you can get very strange errors. For example, we had a static object which contained an OS structure which changed size in Vista. Most of our project was compiled for XP, but we added a new .cpp file whose name happened to start with A and which was set to use the Vista headers. That new file then (arbitrarily) became the one which triggered the static object to be allocated, using the Vista structure sizes, but the actual code for that object was built using the XP structures. The constructor thought the object's members were in different places to the code which allocated the object. Strange things resulted!
Once we got to the bottom of that we banned the practice entirely; everything in our project uses the XP headers, and if we need anything from the newer headers we manually copy it out, renaming the structures if needed.
It is very tedious to write all the typedef and GetProcAddress stuff, and to copy structures and defines out of headers (which seems wrong, but they're a binary interface, so they're not going to change) (don't forget to check for #pragma pack stuff, too :(), but IMO that is the best way if you want the best compile-time notification of issues.
I'm sure others will disagree!
PS: Somewhere I've got a little template I made to make the GetProcAddress stuff slightly less tedious... Trying to find it; will update this when/if I do. Found it, but it wasn't actually that useful. In fact, none of my code even used it. :)
Yes, use delay loading. That leaves the ugliness to the compiler. Of course you'll still have to ensure that you're not calling a Vista function on XP.
Delay loading is the best way to avoid using LoadLibrary() and GetProcAddress() directly. Regarding the security issues mentioned, about the only thing you can do is use the delay-load hooks to make sure (and optionally force) that the desired DLL is loaded from the correct system path, and not relative to your app folder, during the dliNotePreLoadLibrary notification. Using the callbacks will also allow you to substitute your own fallback implementations in the dliFailLoadLib/dliFailGetProc notifications when the desired API function(s) are not available. That way, the rest of your code does not have to worry about platform differences (or only very little).
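For the fallback-substitution part, MSVC's delay-load helper exposes those hooks through <delayimp.h>. A sketch, using the question's hypothetical function (note that the hook variable's declaration has varied across Visual Studio versions; it has been const since VS2015):
#include <windows.h>
#include <delayimp.h>
#include <cstring>

// Hypothetical fallback with the same signature as the delay-loaded function.
static void WINAPI SomeCoolFunctionFallback(int, int, int)
{
    // degraded pre-Vista behaviour goes here
}

// Called by the delay-load helper when a DLL or an import cannot be resolved.
static FARPROC WINAPI DliFailureHook(unsigned dliNotify, PDelayLoadInfo pdli)
{
    if (dliNotify == dliFailGetProc && pdli->dlp.fImportByName &&
        std::strcmp(pdli->dlp.szProcName, "somecoolfunction") == 0)
    {
        return reinterpret_cast<FARPROC>(&SomeCoolFunctionFallback);
    }
    return NULL;  // let the failure propagate normally
}

extern "C" const PfnDliHook __pfnDliFailureHook2 = DliFailureHook;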

How to detect if an OpenGL debugger is being used?

Is there any way of detecting, from my Windows OpenGL application, whether a debugger (such as gDEBugger) is being used to catch OpenGL calls? I'd like to be able to detect this and terminate my application if a debugger is found, to prevent shader code and textures from being ripped. The application is developed in C++ Builder 6.
Even if you could find a way to do this, it would be a futile attempt, because the shader source can be retrieved simply by calling glGetShaderSource() at any moment.
An external process can inject a thread into your process using CreateRemoteThread() and then copy back the result with ReadProcessMemory(). This can be made really simple (just a couple of lines) with the Detours library by Microsoft.
Alternatively, if creating a remote thread is too much of a hassle, a disassembler such as OllyDbg can be used to inject a piece of code into the normal execution path which simply saves the shader code into a file just before it is invoked.
Finally, the text of the shader needs to be somewhere in your executable before you activate it, and it can probably be extracted just by static inspection of the executable with a tool like IDA Pro. It really doesn't matter if you encrypt it or compress it or whatever; if it's there at some point and the program can extract it, then so can a determined enough cracker. You will never win.
Overall, there is no way to detect each and every such "debugger". A custom OpenGL32.dll (or the equivalent for the platform) can always be written; and if there is no other way, a virtual graphics card can be designed specifically for the purpose of ripping your textures.
However, Graphic Remedy does have some APIs for debugging, provided as custom OpenGL commands. They are exposed as OpenGL extensions; so, if GetProcAddress() returns NULL for those functions, you can be reasonably sure it's not gDEBugger. However, there are already several debuggers out there, and, as already mentioned, it's trivial to write one specifically designed for ripping out resources.
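If you go that route, the check itself is tiny. A sketch, assuming Windows and a current OpenGL context (glStringMarkerGREMEDY comes from the GL_GREMEDY_string_marker extension that Graphic Remedy's tools expose):
#include <windows.h>
#include <GL/gl.h>

// Requires a current OpenGL context. If the GREMEDY entry point resolves,
// a Graphic Remedy debugger such as gDEBugger is probably attached.
bool GremedyDebuggerPresent()
{
    return wglGetProcAddress("glStringMarkerGREMEDY") != NULL;
}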
Perhaps the closest you can get is to load C:\windows\system32\opengl32.dll directly; however, that can break your game horribly on future releases of Windows, so I'd advise against it. (And this still wouldn't protect you against those enterprising enough to replace the system-wide OpenGL32.dll, or who are perhaps using other operating systems.)
I'm not 100% sure, but I believe that Graphic Remedy replaces the Windows opengl32.dll with their own opengl32.dll file to hook GL calls.
So if that is the case, you just have to check the DLL version and terminate if it's not what you expect.