Point Cloud Library's multiple inheritance under a single-inheritance restriction - C++

The C++ plugin API I work with is bad enough without STL or exception-handling support, but it also forbids multiple inheritance. In other words, I can build with it and accept that my plugin crashes the host application on startup, or I can go single-inheritance and have it crash at the first direct use of multiple inheritance from PCL (of which there is only one in my plugin code, but that is all it takes, one supposes, and yes, it is a required one).
I assume that any multiple inheritance used within the PCL libs themselves is isolated (since they appear to use the feature often), but as soon as I use something that involves it directly - crash.
There seem to be very few options. I can try to find another point cloud surface meshing library with a license that allows commercial use (ha!), or actually write a separate executable using PCL that is called from the plugin to do the work and pass the results back to the plugin (horrendous, platform-dependent, and not an integrated solution). This entire enterprise is becoming loathsome. So much time and effort expended researching, preparing, learning, and adjusting projects, carefully setting this up, only to find that it won't work under these conditions.
If you have an alternative BSD-licensed library to suggest, that would be great. If you think I should go with a command-line application launched to do the processing, I would be glad to hear arguments for that as well. I support both Windows and Mac OS X.

Going the external executable route: I can save the point cloud in PCD format from the application, run the executable to load and process that file, and have it output the results in OBJ format for the application to use. It is still a horrid solution, but at least it works.
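For reference, a rough sketch of what that standalone executable might look like, loosely following PCL's greedy-projection triangulation tutorial; the choice of meshing algorithm, the parameter values, and the error handling here are placeholders rather than the exact code:

```cpp
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/io/pcd_io.h>
#include <pcl/io/obj_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>

// Usage: mesher input.pcd output.obj
int main(int argc, char** argv)
{
    if (argc < 3) return 1;

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile<pcl::PointXYZ>(argv[1], *cloud) < 0) return 1;

    // Greedy projection needs per-point normals, so estimate them first.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(20);
    ne.compute(*normals);

    pcl::PointCloud<pcl::PointNormal>::Ptr cloud_with_normals(new pcl::PointCloud<pcl::PointNormal>);
    pcl::concatenateFields(*cloud, *normals, *cloud_with_normals);

    // Mesh the cloud and write the result as OBJ for the host application.
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree2(new pcl::search::KdTree<pcl::PointNormal>);
    pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
    gp3.setSearchRadius(0.025);            // placeholder tuning values
    gp3.setMu(2.5);
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(cloud_with_normals);
    gp3.setSearchMethod(tree2);

    pcl::PolygonMesh mesh;
    gp3.reconstruct(mesh);

    return pcl::io::saveOBJFile(argv[2], mesh) == 0 ? 0 : 1;
}
```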

Related

calling pre-compiled executables

I am working on a project that makes use of the ffmpeg library within the Qt framework on an Intel Windows 8.1 machine. My application uses a QProcess to call ffmpeg.exe with a list of arguments, and that works perfectly. I was just wondering whether it would be more efficient to build the ffmpeg source together with the C++ code and call its functions directly using libav.h?
When I use the QProcess, ffmpeg runs as a separate process, so the rest of my program is unaffected by it. If I were to use the functions directly from libav.h, I would need to create my own QThread and run the function in that.
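For context, a minimal sketch of the QProcess approach described above (Qt 5; the executable path and argument list are only placeholders):

```cpp
#include <QObject>
#include <QProcess>
#include <QStringList>

// Launch ffmpeg.exe asynchronously; the GUI thread is never blocked.
void startTranscode(QObject* parent)
{
    auto* process = new QProcess(parent);
    QStringList args;
    args << "-i" << "input.avi" << "-c:v" << "libx264" << "output.mp4";

    QObject::connect(process,
                     QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                     [process](int exitCode, QProcess::ExitStatus) {
                         if (exitCode != 0) {
                             // report the failure to the user here
                         }
                         process->deleteLater();
                     });

    process->start("ffmpeg.exe", args);
}
```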
Any advice would be helpful.
Steve
Here is my advice. First of all, I do not know whether linking the ffmpeg code directly will require you to use a QThread; it is possible that ffmpeg already manages threads on its own (which would be good). I also do not know precisely whether linking directly is going to be more efficient in terms of CPU and RAM.
For sure it's not going to be much more efficient; running the same code in an external process or in another thread is not very different in terms of hardware resources.
Besides that, if you are looking for better and deeper control over what is being played on screen (for example, if by linking directly you think you may gain some useful functions, such as fast forward or zoom in/zoom out), then it could be worth a try.
Bye

How to Prevent I/O Access in C++ or Native Compiled Code

I know this may be impossible but I really hope there's a way to pull it off. Please tell me if there's any way.
I want to write a sandbox application in C++ and allow other developers to write native plugins that can be loaded right into the application on the fly. I'd probably want to do this via DLLs on Windows, but I also want to support Linux and hopefully Mac.
My issue is that I want to be able to prevent the plugins from performing I/O access on their own. I want to require them to use my wrapped routines so that I can ensure none of the plugins contains malicious code that starts harming the user's files on disk or doing undesirable things on the network.
My best guess on how to pull off something like this would be to include a compiler with the application and require the source code for the plugins to be distributed and compiled right on the end-user platform. Then I'd need a code scanner that could search the plugin's uncompiled code for signatures that would show up in I/O operations against the hard disk, the network, or other storage media.
My understanding is that the standard libraries like fstream wrap platform-specific functions, so I would think that simply scanning all the code that will be compiled for platform-specific functions would let me accomplish the task. Because ultimately, native C code can't do any I/O unless it talks to the OS using one of the OS's provided methods, right?
If my line of thinking is correct on this, does anyone have a book or resource recommendation on where I could find the nuts and bolts of this stuff for Windows, Linux, and Mac?
If my line of thinking is incorrect and it's impossible for me to really prevent native code (compiled or uncompiled) from doing I/O operations on its own, please tell me, so I don't create an application that I think is secure but really isn't.
In an absolutely ideal world, I don't want to require the plugins to be distributed as uncompiled code. I'd like to allow the developers to compile and keep their code to themselves. Perhaps I could scan the binaries for signatures that pertain to I/O access?
Sandboxing executing code is certainly harder than merely scanning the code for specific accesses! For example, the program could synthesize assembler statements that perform system calls directly.
The original approach on UNIXes is to chroot() the program, but I think there are problems with that approach, too. Another approach is a secured environment like SELinux, possibly combined with chroot(). The modern approach to doing things like that seems to be to run the program in a virtual machine: upon start of the program, fire up a suitable snapshot of a VM; upon termination, just rewind to the snapshot. That merely requires that the allowed accesses are somehow channeled somewhere.
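As a rough illustration of the classic chroot() jail (Unix-only, the process must start as root; the jail path and the unprivileged uid/gid are placeholders):

```cpp
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Confine the process to the jail directory.
    if (chroot("/srv/jail") != 0 || chdir("/") != 0) {
        std::perror("chroot");
        return EXIT_FAILURE;
    }
    // Drop root *after* chroot; a root process can escape the jail.
    if (setgid(1000) != 0 || setuid(1000) != 0) {
        std::perror("drop privileges");
        return EXIT_FAILURE;
    }
    // ... now exec or run the untrusted code ...
    return EXIT_SUCCESS;
}
```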
Even a VM doesn't block I/O. It can block network traffic very easily though.
If you want to make sure the plugin doesn't do I/O, you can scan its DLL for all its imported functions and run the function list against a blacklist of I/O functions.
Windows has the dumpbin utility and Linux has nm. Both can be run via a system() function call, with the output of the tools redirected to files.
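A rough sketch of that idea is below; dumpbin /imports and nm are the real tools, but the blacklist entries are only examples and nowhere near complete:

```cpp
#include <cstdlib>
#include <fstream>
#include <string>
#include <vector>

// Dump the plugin's imported symbols with the platform tool and reject the
// plugin if any blacklisted I/O function shows up in the output.
bool importsLookSafe(const std::string& pluginPath)
{
#ifdef _WIN32
    const std::string cmd = "dumpbin /imports \"" + pluginPath + "\" > imports.txt";
#else
    const std::string cmd = "nm -D \"" + pluginPath + "\" > imports.txt";
#endif
    if (std::system(cmd.c_str()) != 0)
        return false;                        // treat tool failure as unsafe

    const std::vector<std::string> blacklist = {
        "CreateFileA", "CreateFileW", "WriteFile", "fopen", "socket"
    };

    std::ifstream in("imports.txt");
    std::string line;
    while (std::getline(in, line))
        for (const std::string& bad : blacklist)
            if (line.find(bad) != std::string::npos)
                return false;                // suspicious import found
    return true;
}
```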
Of course, you can write your own analyzer but it's much harder.
User code can't do I/O on its own; only the kernel can. If you're worried about the plugin gaining ring-0/kernel privileges, then you need to scan the assembly of the DLL for I/O instructions.

"Standard I/O only" privileges under Windows

I would like to setup an online judge (automated testing software; takes potentially malicious code and runs a couple of tests on it) on Windows, but such software is usually written for *nix systems, because it's much easier to sandbox code there. Currently it looks like I'll have to write it myself.
How can I compile C++ code in a way that prevents any behaviour except stdin/stdout?
How can I run an executable in an environment that allows it to do stdio only?
I've considered deleting some .lib and header files from the standard Visual Studio setup, but I'm afraid it would still be technically possible to execute WinAPI calls.
Also, I could create one more OS user, set some rights in the Administration control panel, and runas executables from this user to obtain a "secure" environment, but I'm no good at administration and don't know whether it's possible to give a program stdio rights only.
Since this sort of problem will be the target of some rather "bad" code in all sorts of different aspects, I would suggest that ONE possible solution is to use a virtual machine to run the "foreign" code. So, rather than building your server software so that it does stuff on the real hardware (and potentially messes up or takes over the machine for malicious purposes), you run the code on a virtual machine that has limited resources and strict rules. Once the "result" is complete, you shut down that VM and start over with a "fresh" VM (created by cloning a previously constructed VM).
And yes, deleting libs and headers certainly won't stop someone from using calls/functions you don't want to be used. It will make it a tiny bit harder, but only a tiny bit. Most of the "harmful" calls live in the same system Win32 DLLs that you also need for ordinary I/O and such things.

C++: Any way to 'jail function'?

Well, it's a kind of a web server.
I load .dll(.a) files and use them as program modules.
I recursively go through directories and put the '_main' functors from these libraries into a std::map under a name, which is recorded in special '.m' files.
The main directory has a few directories, one for each host.
The problem is that I need to prevent the use of 'fopen' or any other filesystem functions that work with directories outside of this host directory.
The only way I can see for that is to write a wrapper for stdio.h (I mean, write an s_stdio.h that does a filename check).
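Something like this hypothetical s_stdio.h sketch is what I have in mind; g_host_root and the plain prefix check are simplifications, and a real version would have to canonicalize the path first:

```cpp
// s_stdio.h -- hypothetical wrapper that modules include instead of <cstdio>
#include <cstdio>
#include <string>

extern std::string g_host_root;   // set by the server to the host's directory

inline std::FILE* s_fopen(const char* path, const char* mode)
{
    // NOTE: a real check must canonicalize first (realpath()/GetFullPathName)
    // so that "../" segments and symlinks cannot escape the host directory.
    const std::string p(path);
    if (p.compare(0, g_host_root.size(), g_host_root) != 0)
        return nullptr;           // refuse anything outside the host directory
    return std::fopen(path, mode);
}
```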
Maybe it could be a daemon, catching system calls and identifying something?
Added:
Well, and what about this kind of situation: I upload only sources and then compile them directly on my server after checking them? Well, that's the only way I have found (still keeping everything inside one address space).
As C++ is a low-level language and the DLLs are compiled to machine code, they can do anything. Even if you wrap the standard library functions, the code can make the system calls directly, reimplementing the functionality you have wrapped.
Probably the only way to effectively sandbox such a DLL is some kind of virtualisation, so the code is not run directly but in a virtual machine.
The simpler solution is to use some higher-level language for the loadable modules that should be sandboxed. Some high-level languages are better at sandboxing (Lua, Java), others are not so good (e.g. AFAIK there is currently no official restricted environment implemented for Python).
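As a small illustration of that route, here is a sketch of embedding Lua from C++ with only the "safe" standard libraries opened (assumes Lua 5.2 or later; note that even luaopen_base exposes dofile/loadfile, which would still have to be removed from the globals):

```cpp
#include <lua.hpp>

// Create a Lua state whose modules get table/string/math but no io or os.
lua_State* makeSandboxedState()
{
    lua_State* L = luaL_newstate();
    luaL_requiref(L, "_G",     luaopen_base,   1); lua_pop(L, 1);
    luaL_requiref(L, "table",  luaopen_table,  1); lua_pop(L, 1);
    luaL_requiref(L, "string", luaopen_string, 1); lua_pop(L, 1);
    luaL_requiref(L, "math",   luaopen_math,   1); lua_pop(L, 1);
    // io, os and package are deliberately not opened; dofile/loadfile/load
    // from the base library should additionally be set to nil here.
    return L;
}
```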
If you are the one loading the module, you can perform a static analysis on the code to verify what APIs it calls, and refuse to link it if it doesn't check out (i.e. if it makes any kind of suspicious call at all).
Having said that, it's a lot of work to do this, and not very portable.

What are the trade-offs between procedurally copying a file versus using ShellExecute and cp?

There are at least two methods for copying a file in C/C++: procedurally and using ShellExecute. I can post an explanation of each if needed but I'm going to assume that these methods are known. Is there an advantage to using one method over the other?
Copying procedurally will give you better error checking/reporting and will work cross-platform -- ShellExecute is a Windows-only API.
You could also use a third-party filesystem library to make the task less annoying -- boost::filesystem is a good choice.
Manual methods give you complete control over how you detect and respond to errors. You can program different responses to access-control failures, out-of-space conditions, a hostile file from aliens, whatever. If you call ShellExec (or the moral equivalent on some other platform), you are left with error messages on stderr. Not so hot for an application with a windowed UI.
Cons of using built-in programs via system()/ShellExecute:
Your program will not be platform independent.
Each such call creates a separate executing unit (a new process).
Pros:
The code is well tested, so it is more reliable.
In cases like this, a well-tested library is what is more desirable.
Calls to external applications should be avoided, since they often (especially under Windows) do not return easily understandable error codes and report errors in an undesirable way (stdout, error windows, ...). Moreover, you do not have full control over the copy, and starting a new application just to do a file copy is often overkill, especially on platforms like Windows, where processes are quite heavyweight objects.
A good compromise could be to use the API that your OS provides (e.g. CopyFile on Windows), which gives you code that is sure to work and well-defined error codes.
However, if you want to be cross-platform, often the simplest thing to do is just to write your own copy code: after all, it's not rocket science; it's a simple task that can be accomplished in a few lines of standard C++ code.
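For example, a minimal portable copy using standard C++ streams might look like this (no fine-grained error classification, just success or failure):

```cpp
#include <fstream>
#include <string>

bool copyFile(const std::string& src, const std::string& dst)
{
    std::ifstream in(src, std::ios::binary);
    std::ofstream out(dst, std::ios::binary);
    if (!in || !out) return false;   // report open failures to the caller
    out << in.rdbuf();               // stream the whole file through
    return static_cast<bool>(out);   // write/flush errors surface here
}
```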
With the shell approach you gain the convenience of having the best available implementation(s) of the functionality, as provided by the OS vendor, at the cost of the added complexity of cross-process integration. Think about handling errors, the asynchronous nature of file operations, accidental loss of the returned error level, limited or non-existent control over completion progress, the inability to respond to "abort"/"retry"/"continue" interactive requests, etc.