What are the drawbacks of using BOOST_ASIO_DISABLE_IOCP? - c++

I have read the documentation and I understand that it is possible to use the BOOST_ASIO_DISABLE_IOCP preprocessor definition to be able to call cancel() on a socket on Windows XP. The Boost library will then fall back to a select-based implementation and everything should work fine.
If these statements are true, what are the drawbacks of the select-based approach? Why shouldn't we always define BOOST_ASIO_DISABLE_IOCP?
EDIT 1
I have compiled the DLL with BOOST_ASIO_DISABLE_IOCP defined without problems. Unfortunately, after the integration with the final application, I'm getting memory access errors. Is there any additional configuration I am missing?

IOCP should provide much better performance.
By the way, do you really have to use cancel()? Note that after you cancel I/O operations on a socket, you have no idea what the actual state of your data flow is, so you'll need a sophisticated way to get synchronized with your peer. Thus, usually the right way to go is to close the socket.
Unfortunately, after the integration with the final application, I'm
getting memory access errors.
Perhaps you've got several modules that use Boost.Asio headers but haven't defined BOOST_ASIO_DISABLE_IOCP for all of them, causing an ODR violation?

Related

Is there a thread safe way to get a callstack of the current thread in c/c++ on Windows?

I have been trying to get a callstack of the current thread to improve an existing tracing library used in our library code. I want to have file and line numbers, or at the very least the function/method name; raw addresses alone will not do.
The problem I have is that StackWalker and other solutions based on the dbghelp.h functions are not thread safe, and I randomly get crashes when using StackWalker even if I use a mutex inside my library. I also tried to use boost::stacktrace, but it would not work, and Boost is very unpopular in my organisation.
My goal is at first to make it work on Windows; then I would work on a Linux/POSIX implementation, which will probably be much easier. I'm not very knowledgeable about the Win32 API. Do any of you know of an API to achieve this? In the end, I'd like to make everything open source. I'd also like to make this library dead simple and small, so that anybody can use it.
Thanks!

How to Prevent I/O Access in C++ or Native Compiled Code

I know this may be impossible but I really hope there's a way to pull it off. Please tell me if there's any way.
I want to write a sandbox application in C++ and allow other developers to write native plugins that can be loaded right into the application on the fly. I'd probably want to do this via DLLs on Windows, but I also want to support Linux and hopefully Mac.
My issue is that I want to be able to prevent the plugins from doing I/O access on their own. I want to require them to use my wrapped routines so that I can ensure none of the plugins write malicious code that starts harming the user's files on disk or doing undesirable things on the network.
My best guess on how to pull off something like this would be to include a compiler with the application and require the source code for the plugins to be distributed and compiled right on the end-user platform. Then I'd need a code scanner that could search the uncompiled plugin code for signatures that would show up in I/O operations for hard disk, network, or other storage media.
My understanding is that the standard libraries like fstream wrap platform-specific functions, so I would think that simply scanning all the code that will be compiled for platform-specific functions would let me accomplish the task. Because ultimately, native C code can't do any I/O unless it talks to the OS using one of the OS's provided methods, right?
If my line of thinking is correct on this, does anyone have a book or resource recommendation on where I could find the nuts and bolts of this stuff for Windows, Linux, and Mac?
If my line of thinking is incorrect and it's impossible for me to really prevent native code (compiled or uncompiled) from doing I/O operations on its own, please tell me so I don't create an application that I think is secure but really isn't.
In an absolutely ideal world, I don't want to require the plugins to distribute uncompiled code. I'd like to allow the developers to compile and keep their code to themselves. Perhaps I could scan the binaries for signatures that pertain to I/O access?
Sandboxing a program executing code is certainly harder than merely scanning the code for specific accesses! For example, the program could synthesize assembler statements doing system calls.
The original approach on UNIXes is to chroot() the program, but I think there are problems with that approach, too. Another approach is a secured environment like SELinux, possibly combined with chroot(). The modern approach to doing things like that seems to be running the program in a virtual machine: upon start of the program, fire up a suitable snapshot of a VM; upon termination, just rewind to the snapshot. That merely requires that the allowed accesses are somehow channeled somewhere.
Even a VM doesn't block I/O. It can block network traffic very easily though.
If you want to make sure the plugin doesn't do I/O, you can scan its DLL for all of its imported functions and run the function list against a blacklist of I/O functions.
Windows has the dumpbin utility and Linux has nm. Both can be run via a system() function call and the output of the tools redirected to files.
Of course, you can write your own analyzer, but it's much harder.
User code can't do I/O on its own; only the kernel can. If you're worried about the plugin gaining ring0/kernel privileges, then you need to scan the assembly of the DLL for I/O instructions.

Segfault with libssl/libcrypto

This is more of a hypothetical whilst I'm debugging some code. Let's say I have an application (called X) that calls out to a lib to send an email over a TLS-encrypted SMTP connection, whilst at the same time X is talking to another lib which is establishing another TLS socket through the same libcrypto lib. What's the likelihood of hitting some specific (and weird) condition where one function call would fail with a segfault?
I'm kind of grasping at straws; this code worked fine up until we added the Skype SDK, which connects over TLS to the Skype servers. Since then we can actually get the issue to be repeatable, but I'm a bit baffled as to why. (I'm probably overlooking the obvious, but I'll start with the really weird possibility.)
Quite generally speaking, it could be possible, but a well-written library should be robust to multiple access. You might want to look through the documentation to see if their API is reentrant (or even thread-safe).
If it is thread-safe, then (assuming that the libcrypto authors didn't make a mistake) you can be sure that it's not the cause of the problem.
If it is merely reentrant, then anything using this lib in two (or more) threads should be synchronized on access (e.g. using mutexes), but if parts of the code are not written by you and you have no option to modify them, then you are stuck. The only thing I can think of would be to use another copy of libcrypto, so the system creates another, unrelated instance of its internal structures. This is an ugly solution and might behave weirdly on user machines.
There's a whole man page dedicated to the usage of the OpenSSL library and threads: man 3 threads. You will need to use this if your application has multiple threads that use the OpenSSL library.

Using new Windows features with fallback

I've been using dynamic libraries and the GetProcAddress stuff for quite some time, but it always seems a tedious, IntelliSense-hostile, and ugly way to do things.
Does anyone know a clean way to import new features while staying compatible with older OSes?
Say I want to use an XML library which is a part of Vista. I call LoadLibraryW and then I can use the functions if the HANDLE is non-null.
But I really don't want to go the typedef void (*PFNFOOOBAR)(int, int, int); and PFNFOOOBAR foo = reinterpret_cast<PFNFOOOBAR>(GetProcAddress(hModule, "somecoolfunction")); route, all that times 50.
Is there a non-hackish solution with which I could avoid this mess?
I was thinking of adding coolxml.lib in the project settings, then including coolxml.dll in the delay-load DLL list and, maybe, copying the few function signatures I will use into the needed file. Then checking the LoadLibraryW return for non-null, and if it's non-null, branching to the Vista branch like in regular program flow.
But I'm not sure if LoadLibrary and delay-load can work together, and if some branch prediction will not mess things up in some cases.
Also, I'm not sure if this approach will work, and whether it will cause problems after upgrading to the next SDK.
IMO, LoadLibrary and GetProcAddress are the best way to do it.
(Make some wrapper objects which take care of that for you, so you don't pollute your main code with that logic and ugliness.)
DelayLoad brings with it security problems (see this OldNewThing post) (edit: though not if you ensure you never call those APIs on older versions of windows).
DelayLoad also makes it too easy to accidentally depend on an API which won't be available on all targets. Yes, you can use tools to check which APIs you call at runtime but it's better to deal with these things at compile time, IMO, and those tools can only check the code you actually exercise when running under them.
Also, avoid compiling some parts of your code with different Windows header versions, unless you are very careful to segregate code and the objects that are passed to/from it.
It's not absolutely wrong -- and it's completely normal with things like plug-in DLLs where two entirely different teams probably worked on the two modules without knowing what SDK version each other targeted -- but it can lead to difficult problems if you aren't careful, so it's best avoided in general.
If you mix header versions you can get very strange errors. For example, we had a static object which contained an OS structure which changed size in Vista. Most of our project was compiled for XP, but we added a new .cpp file whose name happened to start with A and which was set to use the Vista headers. That new file then (arbitrarily) became the one which triggered the static object to be allocated, using the Vista structure sizes, but the actual code for that object was built using the XP structures. The constructor thought the object's members were in different places to the code which allocated the object. Strange things resulted!
Once we got to the bottom of that we banned the practice entirely; everything in our project uses the XP headers and if we need anything from the newer headers we manually copy it out, renaming the structures if needed.
It is very tedious to write all the typedef and GetProcAddress stuff, and to copy structures and defines out of headers (which seems wrong, but they're a binary interface so not going to change) (don't forget to check for #pragma pack stuff, too :(), but IMO that is the best way if you want the best compile-time notification of issues.
I'm sure others will disagree!
PS: Somewhere I've got a little template I made to make the GetProcAddress stuff slightly less tedious... Trying to find it; will update this when/if I do. Found it, but it wasn't actually that useful. In fact, none of my code even used it. :)
Yes, use delay loading. That leaves the ugliness to the compiler. Of course you'll still have to ensure that you're not calling a Vista function on XP.
Delay loading is the best way to avoid using LoadLibrary() and GetProcAddress() directly. Regarding the security issues mentioned, about the only thing you can do about that is use the delay load hooks to make sure (and optionally force) the desired DLL is being loaded during the dliNotePreLoadLibrary notification using the correct system path, and not relative to your app folder. Using the callbacks will also allow you to substitute your own fallback implementations in the dliFailLoadLib/dliFailGetProc notifications when the desired API function(s) are not available. That way, the rest of your code does not have to worry about platform differences (or very little).

What are the trade-offs between procedurally copying a file versus using ShellExecute and cp?

There are at least two methods for copying a file in C/C++: procedurally and using ShellExecute. I can post an explanation of each if needed but I'm going to assume that these methods are known. Is there an advantage to using one method over the other?
Procedurally will give you better error checking/reporting and will work cross-platform -- ShellExecute is Windows API only.
You could also use a third-party filesystem library to make the task less annoying -- boost::filesystem is a good choice.
Manual methods give you complete control over how you detect and respond to errors. You can program different responses to access control, out of space, hostile file from aliens, whatever. If you call ShellExec (or the moral equivalent on some other platform) you are left with error messages on stderr. Not so hot for an application with a windowed UI.
Why not use external programs via system()/ShellExecute:
Cons:
Your program will not be platform independent.
Each such call creates a separate process.
Pros:
The code is well tested, and so more reliable.
In such cases, a well-tested library is what is more desirable.
Calls to external applications should be avoided, since often (especially under Windows) they do not return easily understandable error codes and report errors in an undesirable way (stdout, error windows, ...). Moreover, you do not have full control over the copy, and starting a new application just to do a file copy is often overkill, especially on platforms like Windows, where processes are quite heavyweight objects.
A good compromise could be to use the API that your OS provides (e.g. CopyFile on Windows), which gives you surely-working code and well-defined error codes.
However, if you want to be cross-platform, often the simplest thing to do is just to write your own copy code: after all, it's not rocket science; it's a simple task that can be accomplished in a few lines of standard C++ code.
With the shell you gain the convenience of having the best available implementation(s) of the functionality, provided by the OS vendor, at the cost of the added complexity of cross-process integration. Think about handling errors, the asynchronous nature of file operations, accidental loss of the returned error level, limited or non-existent control over the progress of completion, inability to respond to "abort"/"retry"/"continue" interactive requests, etc.