How to detect if an OpenGL debugger is being used?

Is there any way of detecting from my Windows OpenGL application if a debugger (such as gDEBugger) is being used to catch OpenGL calls? I'd like to be able to detect this and terminate my application if a debugger is found, to prevent shader code and textures from being ripped. The application is developed in C++ Builder 6.

Even if you could find a way to do this, it would be futile, because the shader source can be retrieved at any moment by simply calling glGetShaderSource().
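For illustration, a minimal sketch of what an injected snippet could do, assuming a valid shader handle found in the target process and GL 2.0 entry points loaded via GLEW:

#include <GL/glew.h>
#include <cstdio>
#include <vector>

// Sketch: dump the GLSL source of an existing shader object to a file.
// 'shader' is any valid shader handle obtained in the target process.
void DumpShaderSource(GLuint shader, const char* path)
{
    GLint len = 0;
    glGetShaderiv(shader, GL_SHADER_SOURCE_LENGTH, &len);
    if (len <= 1) return; // empty, or just the terminating NUL

    std::vector<GLchar> src(len);
    glGetShaderSource(shader, len, NULL, &src[0]);

    if (FILE* f = std::fopen(path, "w")) {
        std::fwrite(&src[0], 1, len - 1, f); // len counts the NUL
        std::fclose(f);
    }
}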
An external process can inject a thread into your process using CreateRemoteThread() and then copy back the result with ReadProcessMemory(). This can be made really simple (just a couple of lines) with the Detours library by Microsoft.
Alternatively, if creating a remote thread is too much of a hassle, a debugger such as OllyDbg can be used to inject a piece of code into the normal execution path which simply saves the shader code to a file just before it is used.
Finally, the text of the shader needs to be somewhere in your executable before you activate it, and it can probably be extracted just by static inspection of the executable with a tool like IDA Pro. It really doesn't matter if you encrypt it or compress it or whatever: if it's there at some point and the program can extract it, then so can a sufficiently determined cracker. You will never win.

Overall, there is no way to detect each and every such "debugger". A custom OpenGL32.dll (or equivalent for the platform) can always be written; and if there is no other way, a virtual graphics card can be designed specifically for purposes of ripping your textures.
However, Graphic Remedy does have some debugging APIs, exposed as custom OpenGL commands via OpenGL extensions; so, if GetProcAddress() returns NULL for those functions, you can be reasonably sure gDEBugger is not attached. That said, there are already several debuggers out there, and, as already mentioned, it's trivial to write one specifically designed for ripping out resources.
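As a sketch of that check (Windows-specific, heuristic only, and requiring a current GL context): gDEBugger's extensions expose entry points such as glStringMarkerGREMEDY, so a non-NULL pointer is a hint that it is in the chain:

#include <windows.h>

// Heuristic: the GREMEDY debug extensions are normally absent from plain
// drivers, so resolving their entry points suggests gDEBugger (or a tool
// imitating it) has intercepted the context.
bool LooksLikeGDebugger()
{
    return wglGetProcAddress("glStringMarkerGREMEDY") != NULL
        || wglGetProcAddress("glFrameTerminatorGREMEDY") != NULL;
}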
Perhaps the closest you can get is to load C:\windows\system32\opengl32.dll directly; however, that can break your game horribly on future releases of Windows, so I'd advise against it. (And this still wouldn't protect you against those enterprising enough to replace the system-wide opengl32.dll, or who are perhaps using other operating systems.)
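For completeness, and with the same caveat that it is advised against, a minimal sketch of loading the system copy directly:

#include <windows.h>

// Sketch: load opengl32.dll explicitly from the system directory to bypass
// a locally planted copy. Fragile across Windows releases; not recommended.
HMODULE LoadSystemOpenGL()
{
    char path[MAX_PATH];
    UINT n = GetSystemDirectoryA(path, MAX_PATH);
    if (n == 0 || n >= MAX_PATH - 14) return NULL; // no room for the name
    lstrcatA(path, "\\opengl32.dll");
    return LoadLibraryA(path);
}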

I'm not 100% sure, but I believe that Graphic Remedy replaces the Windows opengl32.dll with their own opengl32.dll file to hook GL calls.
If that is the case, you just have to check the DLL version and terminate if it's not what you expect.
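If you wanted to try that, here is a sketch of the idea, checking where the loaded opengl32.dll actually lives (a version check via GetFileVersionInfo() would be similar; note this is a weak heuristic, not real protection, and strnicmp spelling varies by compiler):

#include <windows.h>
#include <cstring>

// Weak heuristic: a hooking opengl32.dll often lives outside the system
// directory. Compare the loaded module's path against GetSystemDirectory.
bool OpenGLFromSystemDir()
{
    HMODULE mod = GetModuleHandleA("opengl32.dll");
    if (!mod) return false;

    char loaded[MAX_PATH], sysdir[MAX_PATH];
    GetModuleFileNameA(mod, loaded, MAX_PATH);
    GetSystemDirectoryA(sysdir, MAX_PATH);

    // Case-insensitive prefix compare of the directory part.
    return _strnicmp(loaded, sysdir, std::strlen(sysdir)) == 0;
}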

Related

calling pre-compiled executables

I am working on a project that makes use of the ffmpeg library within the framework of Qt on an Intel Windows 8.1 machine. My application uses a QProcess to call ffmpeg.exe with a list of arguments, and that works perfectly. I was just wondering if it would be more efficient to build the ffmpeg source into the C++ code and call the functions directly using libav.h?
When I use the QProcess, ffmpeg runs in a separate process, so the rest of my program is unaffected by it. If I were to use the functions from libav.h directly, I would need to create my own QThread and run the functions in that.
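For reference, this is roughly what the QProcess call looks like (the ffmpeg location and arguments here are only illustrative):

#include <QProcess>
#include <QStringList>

// Illustrative only: launch ffmpeg.exe asynchronously with an argument list.
void runFfmpeg(QObject* parent)
{
    QProcess* proc = new QProcess(parent);
    QStringList args;
    args << "-i" << "input.avi" << "-vcodec" << "libx264" << "output.mp4";
    proc->start("ffmpeg.exe", args); // returns immediately; connect to finished()
}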
Any advice would be helpful.
Steve
Here is my advice: first of all, I do not know whether linking the ffmpeg source code directly will require you to use a QThread; it is possible that ffmpeg already manages threads on its own (which would be good). I also do not know precisely whether linking directly is going to be more efficient in terms of CPU and RAM.
For sure it's not going to be much more efficient: running the same code in an external process or in another thread makes little difference in terms of hardware resources.
Besides that, if you are looking for better and deeper control over what is played on screen, for example if by linking directly you think you may gain some useful functions (like fast forward or zoom in/zoom out), then it could be worth a try.
Bye

List OpenGL extensions USED at runtime

How can I get, at runtime in C++, a list of the OpenGL extensions that my program actually uses? Clarifying: I don't want the extensions available on my platform, nor the extensions I can execute; I want a list of the extensions used in my code. This is to check whether those extensions are available before execution starts. I am looking at GLEW, but GLEW is for
determining which OpenGL extensions are supported on the target platform.
And what I want is a way to get the extensions I am using. If anyone knows a non-runtime way, please tell me, because that may be useful too.
Also, I want to know how to determine the minimum OpenGL version to use.
Actually, the standard way to handle extensions is in situ: test for availability before trying to call anything.
if (GLEW_ARB_vertex_program)
{
    ... // Do something with ARB_vertex_program
}
More complex OpenGL programs will test the availability of several extensions and make decisions based on that:
if (GLEW_ARB_vertex_array_object /* && other stuff */)
{
    renderingTechnic = VERTEX;
}
else
{
    renderingTechnic = BEGIN_END;
}
... etc
If you want the list of actually used extensions, it becomes tricky.
You have to detect cases like this:
if (GLEW_ARB_vertex_program && 0) // YOU SHALL NOT PASS
{
    ... // Do something with ARB_vertex_program
}
In this case you will never enter the then-branch, but your tool may have difficulty detecting that.
Code coverage tools exist to perform this kind of job, but embedding one to compare coverage against the available extensions at runtime is not an option here.
Taking a look at the symbol table of your output is not a solution either:
The one for your exe will contain none of the OpenGL functions.
The one for the OpenGL library will contain all possible functions, whether available or used or not.
If your codebase is anything but humongous, you can simply do this with a bit of searching for "GL_EXT_", "GL_ARB_", etc. and manual inspection, plus a dash of discipline afterwards to try and document all the extensions you use.
For the minimum version: again, this is something pretty basic that you need to know already when writing the code. But here I think using GLEW can help you. If memory serves, GLEW uses preprocessor macros that you can define to set the version of the OpenGL standard you are expecting. You start by setting this to a low value (e.g. 1.1) and see if your code compiles and runs. If it does, that's probably the minimum version you need. If it doesn't, you increment the version and try again.
The usual approach is to decide which extensions you're going to use before you start the actual coding. And if you change that later on, you immediately put the availability tests somewhere close to initialization, so that you can either terminate with a runtime error message or fall back to an alternate code path.
Of course the preferred way is to not use extensions at all and stick to a plain OpenGL version profile. Yes, anything after OpenGL-1.2 is loaded through the extension mechanism, but that doesn't make those features extensions. So if you know you absolutely need OpenGL-3.0 for your program to work, but nothing else, then you can just test for that and be done.
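A minimal sketch of that single version test, assuming GLEW and an already-initialized context:

#include <GL/glew.h>
#include <cstdio>
#include <cstdlib>

// After glewInit(), GLEW exposes a boolean per core version, so one check
// at startup covers everything that version of the profile guarantees.
void RequireGL30()
{
    if (!GLEW_VERSION_3_0) {
        std::fprintf(stderr, "This program requires OpenGL 3.0\n");
        std::exit(EXIT_FAILURE);
    }
}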

How to Prevent I/O Access in C++ or Native Compiled Code

I know this may be impossible but I really hope there's a way to pull it off. Please tell me if there's any way.
I want to write a sandbox application in C++ and allow other developers to write native plugins that can be loaded right into the application on the fly. I'd probably want to do this via DLLs on Windows, but I also want to support Linux and hopefully Mac.
My issue is that I want to be able to prevent the plugins from doing I/O access on their own. I want to require them to use my wrapped routines so that I can ensure none of the plugins contain malicious code that starts harming the user's files on disk or doing undesirable things on the network.
My best guess at how to pull off something like this would be to include a compiler with the application and require the source code for the plugins to be distributed and compiled right on the end-user's platform. Then I'd need a code scanner that could search the plugins' uncompiled code for signatures that would show up in I/O operations for the hard disk, network, or other storage media.
My understanding is that standard libraries like fstream wrap platform-specific functions, so I would think that simply scanning all the code to be compiled for those platform-specific functions would let me accomplish the task. Because ultimately, native C code can't do any I/O unless it talks to the OS using one of the OS's provided methods, right?
If my line of thinking is correct on this, does anyone have a book or resource recommendation on where I could find the nuts and bolts of this stuff for Windows, Linux, and Mac?
If my line of thinking is incorrect and it's impossible for me to really prevent native code (compiled or uncompiled) from doing I/O operations on its own, please tell me, so I don't create an application that I think is secure but really isn't.
In an absolutely ideal world, I don't want to require the plugins to be distributed as uncompiled code. I'd like to allow the developers to compile and keep their code to themselves. Perhaps I could scan the binaries for signatures that pertain to I/O access?
Sandboxing executing code is certainly harder than merely scanning it for specific accesses! For example, the program could synthesize assembler statements that perform system calls directly.
The original approach on UNIXes is to chroot() the program, but I think there are problems with that approach, too. Another approach is a secured environment like SELinux, possibly combined with chroot(). The modern way to do things like this seems to be to run the program in a virtual machine: upon start of the program, fire up a suitable snapshot of a VM; upon termination, just rewind to the snapshot. That merely requires that the allowed accesses are channeled somewhere.
Even a VM doesn't block I/O. It can block network traffic very easily though.
If you want to make sure the plugin doesn't do I/O, you can scan its DLL for all its imported functions and run the function list against a blacklist of I/O functions.
Windows has the dumpbin utility and Linux has nm. Both can be run via a system() function call, with the output of the tools redirected to files.
Of course, you can write your own analyzer but it's much harder.
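A rough sketch of that approach on Windows (it assumes dumpbin is on the PATH; the blacklist below is illustrative and far from complete, and the whole check is trivially bypassed via GetProcAddress, so treat it as a first filter, not real security):

#include <cstdlib>
#include <fstream>
#include <string>

// Dump a plugin's import table with dumpbin and scan the output against a
// blacklist of I/O functions.
bool ImportsLookSafe(const std::string& dll)
{
    std::system(("dumpbin /imports \"" + dll + "\" > imports.txt").c_str());

    const char* blacklist[] = { "CreateFileA", "CreateFileW",
                                "WriteFile", "socket", "connect" };
    std::ifstream in("imports.txt");
    std::string line;
    while (std::getline(in, line))
        for (const char* bad : blacklist)
            if (line.find(bad) != std::string::npos)
                return false; // found a blacklisted import
    return true;
}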
User code can't do I/O on its own; only the kernel can. If you're worried about the plugin gaining ring-0/kernel privileges, then you need to scan the disassembly of the DLL for I/O instructions.

Point Cloud Library's multiple-inheritance with single inheritance restriction

The C++ plugin API in which I work is bad enough without STL/exception handling, but it also forbids multiple inheritance. In other words, I can build with multiple inheritance if I don't mind my plugin crashing the host application on startup, or I can go single and it will crash on the first direct instance of multiple inheritance in PCL (of which there is only one in my plugin code, but that is all it takes, one supposes, and, yes, it is a required instance).
I assume that any multiple inheritance used within the PCL libs is isolated (since they appear to use this feature often), but as soon as I use something that employs it directly: crash.
There seem to be very few options. I can try to find another library for point cloud surface meshing with commercial usage licensing (ha!), or actually write a separate executable using PCL that is called from the plugin to do the work and pass the results back to the plugin (horrendous, platform dependent, and not an integrated solution). This entire enterprise is becoming loathsome. So much time and effort expended researching, preparing, learning, adjusting projects, and carefully setting this up, only to find that it won't work under these conditions.
If you have an alternative BSD-licensed library option to mention, that would be great. If you think that I should go for a command-line application that is launched to do the processing, it would be great to hear arguments for that as well. I support both Windows and MacOS X.
Going the external executable route: I can save the point cloud to the PCD format from the application, run the executable to load and process the file, and output the results in OBJ format for the application to use. It is still a horrid solution, but at least it works.
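For anyone going the same way, a rough sketch of that standalone tool (it assumes a PCL version that provides pcl::io::saveOBJFile; the triangulation parameters are illustrative and data-dependent):

#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/obj_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>

// pcd2obj <in.pcd> <out.obj>: load, estimate normals, mesh, save.
int main(int argc, char** argv)
{
    if (argc < 3) return 1;

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile(argv[1], *cloud) < 0) return 1;

    // Normals are required by the greedy projection mesher.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    ne.compute(*normals);

    pcl::PointCloud<pcl::PointNormal>::Ptr fused(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*cloud, *normals, *fused);

    pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
    pcl::PolygonMesh mesh;
    gp3.setSearchRadius(0.025);          // tune for your cloud density
    gp3.setMu(2.5);
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(fused);
    gp3.setSearchMethod(tree2);
    gp3.reconstruct(mesh);

    return pcl::io::saveOBJFile(argv[2], mesh) == 0 ? 0 : 1;
}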

Loading DLL from a location in memory

As the question says, I want to load a DLL from a location in memory instead of from a file, similarly to LoadLibrary(Ex). I'm no expert in WinAPI, so I googled a little and found this article together with the MemoryModule library, which pretty much meets my needs.
On the other hand, the info there is quite old and the library hasn't been updated for a while either. So I wanted to know if there are different, newer and better ways to do it. Also, if somebody has used the library mentioned in the article, could they provide insight on what I might be facing when using it?
Just for the curious ones, I'm exploring the concept of encrypting some plug-ins for applications without storing the decrypted version on disk.
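What I have in mind is roughly the following, using the MemoryModule API as the article describes it (note: the exact MemoryLoadLibrary signature varies between versions of the library, and "PluginInit" is a hypothetical export name):

#include <windows.h>
#include "MemoryModule.h" // from the library mentioned above

// Sketch: map an already-decrypted plugin image straight from memory,
// resolve one export, call it, and unload.
typedef int (*PluginInitFn)(void);

int LoadEncryptedPlugin(const void* image, size_t size)
{
    HMEMORYMODULE mod = MemoryLoadLibrary(image, size);
    if (!mod) return -1;

    PluginInitFn init =
        (PluginInitFn)MemoryGetProcAddress(mod, "PluginInit");
    int rc = init ? init() : -1;

    MemoryFreeLibrary(mod);
    return rc;
}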
Implementing your own DLL loader can get really hairy really fast. Reading this article it's easy to miss what kind of crazy edge cases you can get yourself into. I strongly recommend against it.
Just for a taste: consider that you can't use any conventional debugging tools for the code in the DLL you're loading, since the code you're executing is not in the address region of any DLL known to the OS.
Another serious issue is dealing with DEP in Windows.
Well, you can create a RAM drive according to these instructions, then copy the DLL you have in memory to a file there and then use LoadLibrary().
Of course this is not very practical if you plan to deploy this as some kind of product, because people are going to notice a driver being installed, a reboot after the installation, and a new drive letter under My Computer. Also, this does nothing to actually hide the DLL, since it's just sitting there in the RAM drive for everybody to watch.
Another thing I'm interested in is why you actually want to do this. Perhaps your end result can be achieved by some means other than loading the DLL from memory. For instance, when using a binary packer such as UPX, the DLL that you have on disk is different from the one that is eventually executed: immediately after the DLL is loaded normally with LoadLibrary, the unpacker kicks in and rewrites the memory into which the DLL was loaded with the uncompressed binary (the DLL header makes sure that there is enough space allocated).
A similar question was raised here:
Load native C++ .dll from RAM in debugger friendly manner
One of the answers proposes the dllloader sample application shown on GitHub:
https://github.com/tapika/dllloader
It supports .dll debugging out of the box.