Cross-compiler library communication - C++

I need to develop a C++ front-end GUI using MSVC that has to communicate with a back-end library compiled with C++ Builder.
How can we define our interfaces so that we don't run into CRT library problems?
For example, I believe we will be unable to safely pass STL containers back and forth. Is that true?
I know I can pass POD types safely, but I am hoping that I can use some more sophisticated data structures as well.

You might find this article interesting: Binary-compatible C++ Interfaces. The general lesson is: never pass STL containers, Boost types, or anything of the like. As the two other answers say, your best bet is to stick with PODs and functions with an explicitly specified calling convention.
Since implementations of the STL vary from compiler to compiler, it is not safe to pass STL classes. You can either require the user to use a specific implementation of the STL (and probably a specific version as well), or simply not use the STL across library boundaries.
Furthermore, stick with calling conventions whose behaviour can be considered cross-compiler friendly. For instance, __cdecl and __stdcall are handled the same way by most compilers, while the __fastcall calling convention will be a problem, especially if you wish to use the code in C++ Builder.
As the article "Binary-compatible C++ Interfaces" mentions, you can use interfaces as well, as long as you remember a few basic principles.
Always make interfaces pure virtual classes (that is no implementations).
Make sure to use a proper calling convention for the member functions in the interface (the article mentions __stdcall for Windows).
Keep memory allocation and clean-up on the same side of the DLL boundary.
And quite a few other things, like don't use exceptions, don't overload functions in the interface (compilers treat this differently), etc. Find them all at the bottom of the article.
You might want to read more about the Component Object Model (COM) if you choose to go with the C++ interfaces, to get an idea about how and why this will be able to work across compilers.
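To make this concrete, here is a minimal sketch of what such an interface could look like; the class, method and factory names are invented for illustration:

    // Shared header, compiled by both MSVC and C++ Builder.
    // Pure virtual interface: no data members, no inline implementations.
    class IBackend
    {
    public:
        virtual int  __stdcall DoWork(int input) = 0;
        virtual void __stdcall Release() = 0;   // clean-up stays inside the back-end DLL
    protected:
        ~IBackend() {}                          // never deleted through the interface pointer
    };

    // Factory exported with C linkage so the name is not mangled.
    extern "C" __declspec(dllexport) IBackend* __stdcall CreateBackend();

The client only ever holds the interface pointer and calls Release() instead of delete, so allocation and deallocation both happen on the library's side of the boundary.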

You should be able to pass data that you can safely pass via a C interface, in other words, PODs. Everything over and above PODs that is being passed by regular C or C++ function calls will run into issues with differing object layouts and different implementations of the runtime libraries.
You probably will be able to pass structs of PODs if you are very careful about how you lay them out in memory and ensure that both compilers are using the same data packing etc. Over and above C structs, you pretty much have a creek/paddle problem.
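For example, a shared struct definition with explicit packing, fixed-size fields and a C-linkage function is about as far as you can safely go; the names below are invented for illustration:

    #include <stdint.h>

    #pragma pack(push, 4)            // both sides must agree on the packing
    struct AccountRecord
    {
        int32_t id;
        double  balance;
        char    name[64];            // fixed-size buffer instead of std::string
    };
    #pragma pack(pop)

    extern "C" int __stdcall GetAccount(int32_t id, AccountRecord* out);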
For passing more sophisticated data types, I would look into object component technologies like COM, CORBA or other technologies that allow you to make remote or cross-process function calls. These would solve the problem of marshalling the data between compilers and processes and thus solve your 'pod-only' problem.
Or you could write the front end using C++ Builder and save yourself a lot of grief and headaches.

I ran into problems when passing STL containers even when using the same STL implementation, but with different levels of debug information etc. Passing PODs will be OK; C++ containers will almost certainly result in problems.

Related

Porting Symbian C++ to Android NDK

I've been given some Symbian C++ code to port over for use with the Android NDK.
The code has lots of Symbian-specific code in it and I have very little experience of C++, so it's not going very well.
The main thing that is slowing me down is trying to figure out the alternatives to use in normal C++ for the Symbian specific code.
At the minute the compiler is throwing out all sorts of errors for unrecognised types.
From my recent research these are the types that I believe are Symbian specific:
TInt, TBool, TDesC8, RSocket, TInetAddr, TBuf, HBufC, RPointerArray
Changing TInt and TBool to int and bool respectively works in the compiler but I am unsure what to use for the other types?
Can anyone help me out with them? Especially TDesC, TBuf, HBufC and RPointerArray.
Also, Symbian has a two-phase constructor using NewL and NewLC.
But would changing this to a normal C++ constructor be ok?
Finally, Symbian uses the cleanup stack to help eliminate memory leaks, I believe. Would removing the cleanup stack code be acceptable? I presume it should be replaced with try/catch statements?
I'm not sure whether you're still interested, but one possibility is that where the Symbian idioms use EUSER.DLL (i.e. TDesC-derived classes, RPointer*, etc.) you may find taking the open source EPL code from the Symbian developer site and adding it directly into your port a viable option. That is, port over the necessary bits of EUSER (and others perhaps?).
However, if your current code base already uses a lot of the other subsystems you're going to see this become very unwieldy.
You should try to read some introductory text on development for Symbian. They used to have some examples on the Symbian site, and I am sure that you can find specific docs on how the types you want are meant to be used and what they provide.
The problem is that Symbian development has its own idioms that cannot/should not be directly used outside of the Symbian environment. For example, two-phase construction with the cleanup stack is unneeded in environments where the compiler has a proper exception handling mechanism -- in Symbian a constructor that throws can lead to all sorts of mayhem.
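As an illustration (the class names are made up, and the Symbian side just follows the usual NewLC idiom), the pattern being ported away from and its rough standard C++ equivalent look something like this:

    // Symbian-style two-phase construction (what the existing code likely does)
    CParser* CParser::NewLC()
    {
        CParser* self = new (ELeave) CParser();
        CleanupStack::PushL(self);   // protected while ConstructL() may leave
        self->ConstructL();          // the part that can fail lives here, not in the constructor
        return self;                 // still on the cleanup stack; the caller pops it
    }

    // Rough standard C++ equivalent: do the risky work in the constructor,
    // let exceptions propagate, and rely on RAII instead of the cleanup stack.
    class Parser
    {
    public:
        Parser()  // may throw std::bad_alloc or similar
        {
            // acquire resources here; members with destructors clean up on throw
        }
    };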
If this is not a very large codebase it may be easier/faster to start from scratch and doing everything Android style. Even if you require NDK/C++ this approach may be quicker.
Another approach may be to use portable C/C++ for the core, and then use this in both the Symbian and Android versions while doing the UI separately for each platform. Spotify have done this on Android and iPhone.
It would typically be a bad idea to try and port Symbian OS C++ to standard C++ without having a very good understanding of what the Symbian idioms do.
This could very well be one of those projects where the right thing to do is to rewrite most of the code pretty much from scratch. If you barely know the language you are targeting, there is little point in deluding yourself into thinking you won't make mistakes, waste time and throw away new code anyway. It's all part of learning.
The CleanupStack mechanism is meant to help you deal with anything that could go wrong, including power outages and out-of-memory conditions. Technically, these days, it is implemented as C++ exceptions, but it covers more than the usual error cases standard C++ code normally handles.
Descriptors (TDesC, TBuf and HBufC all belong to the descriptor class hierarchy) and the container templates (arrays, queues, lists...) predate their equivalents in standard C++ while also dealing with issues like the CleanupStack, coding standards, memory management and integrity...
A relevant plug if you want to learn about it: Quick Recipes on Symbian OS is a recent attempt at explaining it all in as few pages as possible.
You should also definitely look at the Foundation website to get started.
Classes prefixed by T are meant to be small enough by themselves that they can be allocated on the stack.
Descriptor classes suffixed by C are meant to be immutable (A mutable descriptor can usually be created from them, though).
HBufC is pretty much the only Symbian class prefixed by H. It should always be allocated on the Heap.
A method suffixed by C will add an object on the CleanupStack when it returns successfully (usually, it's the object it returns). It's up to the calling code to Pop that object.
Classes prefixed by R are meant to be allocated on the stack but manage their own heap-based resources. They usually have some kind of Close() method that needs to be called before their destructor.
A typical way to think about the difference between a collection of objects and a collection of pointers to objects is who owns the objects in the collection. Either the collection owns the objects when they are added and loses them when they are removed (and is therefore responsible for deleting each object it still contains when it is itself destroyed), or the collection doesn't take ownership and something else must ensure the objects it contains stay valid during the collection's lifetime.
Another way to think about collections is about how much copying of objects you want to happen when you add/get objects to/from the collection.
Symbian descriptor and collection classes are meant to cover all these different ways of using memory and let you choose the one you need based on what you want to do.
It's certainly not easy to do it right but that's how this operating system works.
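If you do decide to translate rather than port EUSER, a rough (and lossy) mapping to standard C++ for the types mentioned in the question might look like the sketch below; treat these as starting points, not drop-in replacements:

    #include <vector>
    #include <memory>

    // TInt, TBool          -> int, bool (as already noted in the question)
    // TDesC8 / TBuf<n>     -> std::string, or a fixed-size char array for buffers
    // HBufC                -> a heap-allocated std::string
    // RPointerArray<T>     -> std::vector<std::unique_ptr<T>> if the array owns its
    //                         elements, std::vector<T*> if it does not
    // RSocket / TInetAddr  -> BSD sockets (socket(), sockaddr_in) on Android

    struct Widget {};  // stand-in for whatever T the RPointerArray held

    std::vector<std::unique_ptr<Widget>> owningArray;     // deletes its elements
    std::vector<Widget*>                 nonOwningArray;  // caller manages lifetimes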

How to design a C / C++ library to be usable in many client languages? [closed]

I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.
Choice of language
Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, Bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it.
Requirements
It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:
The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel.
Most of the complexity will be hidden inside the library, though
The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way
Clients may change the objects, and the library must be notified thereof
The library may change the objects, and the client must be notified thereof, if it already has a handle to that object
The library must be multi-threaded, because it will maintain network connections to several other hosts
While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure)
Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience!
My assumptions, so far
So here are some of my assumptions and conclusions, which I gathered in the past months:
Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++)
But my interface has to be a clean C interface that does not involve name mangling
Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values (see the sketch at the end of this list)
If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory
For usage in a C++ application, I might also offer an object oriented interface (which is also prone to name mangling, so the app must either use the same compiler, or include the library in source form)
Is this also true for usage in C# ?
For usage in Java SE / Java EE, the Java native interface (JNI) applies. I have some basic knowledge about it, but I should definitely double check it.
Not all client languages handle multithreading well, so there should be a single thread talking to the client
For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM
For usage in Bash scripts, there must be an executable with a command line interface
For the other client languages, I have no idea
For most client languages, it would be nice to have kind of an adapter interface written in that language. I think there are tools to automatically generate this for Java and some others
For object oriented languages, it might be possible to create an object oriented adapter which hides the fact that the interface to the library is function based - but I don't know if it's worth the effort
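To make the "functions plus primitive types plus opaque pointers" assumption above concrete, a minimal sketch of such an interface could look like the following; all names are invented for illustration:

    /* mylib.h - flat C interface over a C++ implementation */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct mylib_graph mylib_graph;   /* opaque handle, layout never exposed */

    mylib_graph* mylib_graph_create(void);
    void         mylib_graph_destroy(mylib_graph* g);   /* freed on the library side */

    int          mylib_node_count(const mylib_graph* g);
    double       mylib_node_attribute(const mylib_graph* g, int node_id);

    #ifdef __cplusplus
    }
    #endif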
Possible subquestions
is this possible with manageable effort, or is it just too much portability?
are there any good books / websites about this kind of design criteria?
are any of my assumptions wrong?
which open source libraries are worth studying to learn from their design / interface / source?
meta: This question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
Mostly correct. A straight procedural interface is best (which is not entirely the same as C, btw(**), but close enough).
I interface DLLs a lot(*), both open source and commercial, so here are some points that I remember from daily practice. Note that these are more recommended areas to research than cardinal truths:
Watch out for decoration and similar "minor" mangling schemes, especially if you use an MS compiler. Most notably, the stdcall convention sometimes leads to decoration being generated for VB's sake (decoration is stuff like an @ plus a byte count appended to the function symbol name, e.g. _Func@8)
Not all compilers can actually lay out all kinds of structures:
so avoid overusing unions.
avoid bitpacking
and preferably pack the records for 32-bit x86. While theoretically slower, at least all compilers can access packed records afaik, and the official alignment requirements have changed over time as the architecture evolved
On Windows use stdcall. It is the default for Windows DLLs. Avoid fastcall; it is not entirely standardized (especially how small records are passed)
Some tips to make automated header translation easier:
macros are hard to autoconvert because they are untyped. Avoid them; use functions
Define separate types for each pointer type, and don't use composite types (xtype **) in function declarations.
follow the "define before use" mantra as much as possible. This saves users who translate the headers from having to rearrange them if their language requires define-before-use, and makes it easier for one-pass parsers, or for tools that need context info, to translate them automatically.
Don't expose more than necessary. Leave handle types opaque if possible. It will only cause versioning troubles later.
Do not return structured types like records/structs or arrays as the return type of functions.
always have a version check function (easier to make a distinction).
be careful with enums and booleans. Other languages might have slightly different assumptions. You can use them, but document well how they behave and how large they are. Also think ahead, and make sure that enums don't become larger if you add a few values, breaking the interface. (E.g. in Delphi/Pascal, booleans are 0 or 1 by default, and other values are undefined. There are special types for C-like booleans (byte, 16-bit or 32-bit word size), though they were originally introduced for COM, not C interfacing.)
I prefer string types that are a pointer to char plus a length as a separate field (COM also does this), preferably not having to rely on zero termination. This is not just for security (overflow) reasons, but also because it is easier/cheaper to interface them with Delphi native types that way (see the first sketch after this list).
Memory: always create the API in a way that encourages a total separation of memory management. IOW, don't assume anything about memory management. This means that all structures in your lib are allocated via your own memory manager, and if a function passes a struct to you, copy it instead of storing a pointer made with the "client's" memory management, because you will sooner or later accidentally call free or realloc on it :-)
(implementation language, not interface) be reluctant to change the coprocessor exception mask. Some languages change this as part of conforming to their standard floating point error (exception) handling.
Always pair callbacks with a user-configurable context. The user can use this to give the callback state without defining global variables (e.g. an object instance); see the second sketch after this list.
be careful with the coprocessor status word. It might be changed by others and break your code, and if you change it, other code might stop working. The status word is generally not saved/restored as part of calling conventions, at least not in practice.
don't use C-style varargs parameters. Not all languages can deal with a variable number of parameters, and it is unsafe in any case
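A hedged sketch of the pointer-plus-length string point above (the names are invented):

    #include <stdint.h>

    /* A string passed as pointer + explicit length; no reliance on zero termination. */
    typedef struct
    {
        const char* data;
        int32_t     length;
    } lib_string;

    int lib_set_label(void* handle, lib_string label);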
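And a sketch of the callback-plus-context point, again with invented names:

    /* The library stores both pointers and passes 'context' back on every call. */
    typedef void (*progress_callback)(int percent_done, void* context);

    void lib_set_progress_callback(void* handle, progress_callback cb, void* context);

    /* C++ client: route the callback into an object via the context pointer. */
    class ProgressBar
    {
    public:
        static void OnProgress(int percent, void* context)
        {
            static_cast<ProgressBar*>(context)->update(percent);
        }
        void update(int percent) { /* redraw the bar */ }
    };

    // registration: lib_set_progress_callback(handle, &ProgressBar::OnProgress, &bar);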
(*) Delphi programmer by day, a job that involves interfacing a lot of hardware and thus translating vendor SDK headers. By night a Free Pascal developer, in charge of, among others, the Windows headers.
(**)
This is because what "C" means at the binary level still depends on the C compiler used, especially if there is no real universal system ABI. Think of stuff like:
C adding an underscore prefix on some binary formats (a.out, COFF?)
sometimes different C compilers have different opinions on what to do with small structures passed by value. Officially they shouldn't support it at all afaik, but most do.
structure packing sometimes varies, as do details of calling conventions (like whether integer registers are skipped or not if a parameter can be passed in an FPU register)
===== Automated header conversions =====
While I don't know SWIG that well, I know and use some Delphi-specific header tools (h2pas, Darth/headconv etc.).
However, I never use them in fully automatic mode, since more often than not the output sucks. Comments change lines or are stripped, and formatting is not retained.
I usually make a small script (in Pascal, but you can use anything with decent string support) that splits a header up, and then try a tool on relatively homogeneous parts (e.g. only structures, or only defines etc).
Then I check if I like the automated conversion output, and either use it, or try to make a specific converter myself. Since it is for a subset (like only structures) it is often way easier than making a complete header converter. Of course it depends a bit what my target is. (nice, readable headers or quick and dirty). At each step I might do a few substitutions (with sed or an editor).
The most complicated scheme I did was for the WinAPI commctrl and ActiveX/comctl headers. There I combined IDL and the C header (IDL for the interfaces, which are a bunch of unparsable macros in C, the C header for the rest), and managed to get the macros typed for about 80% (by propagating the typecasts in sendmessage macros back to the macro declaration, with reasonable (wparam, lparam, lresult) defaults).
The semi automated way has the disadvantage that the order of declarations is different (e.g. first constants, then structures then function declarations), which sometimes makes maintenance a pain. I therefore always keep the original headers/sdk to compare with.
The Jedi winapi conversion project might have more info, they translated about half of the windows headers to Delphi, and thus have enormous experience.
I don't know, but if it's for Windows then you might try either a straight C-like API (similar to the WINAPI), or packaging your code as a COM component, because I'd guess that other programming languages will want to be able to invoke the Windows API and/or use COM objects.
Regarding automatic wrapper generation, consider using SWIG. For Java, it will do all the JNI work. Also, it is able to translate complex OO-C++-interfaces properly (provided you follow some basic guidelines, i.e. no nested classes, no over-use of templates, plus the ones mentioned by Marco van de Voort).
Think C, nothing else. C is one of the most popular programming languages. It is widely used on many different software platforms, and there are few computer architectures for which a C compiler does not exist. All popular high-level languages provide an interface to C. That makes your library accessible from almost all platforms in existence. Don't worry too much about providing an object oriented interface. Once you have the library done in C, an OOP, functional or any other style of interface can be created in the appropriate client languages. No other systems programming language will give you C's flexibility and portability.
NestedVM, I think, is going to be slower than pure Java because of the array bounds checking on the int[][] that represents the MIPS virtual machine memory. It is such a good concept, but it might not perform well enough right now (until phone manufacturers add NestedVM support (if they do!), most stuff is going to be SLOW for now, n'est-ce pas?). Whilst it may be able to unpack JPEGs without error, speed is of no small concern! :)
Nothing else in what you've written sticks out, which isn't to say that it's right or wrong! The principles sound like roughly standard best practice (mainly judging by your choice of words and language, to be honest), but I haven't thought through the details of everything you've said. As you said yourself, this really ought to be several questions. But of course doing this kind of thing is not automatically easy just because you're fixed on perhaps a slightly different architecture to the last code base you've worked on...! ;)
My thoughts:
All your comments on C interface compatibility sound sensible to me, pretty much best practice, except you don't seem to properly address memory management policy - some sentences sound a bit ambiguous/vague/wrong. The design of the memory management will be determined to a large extent by the access patterns made in your application, rather than the functionality per se. I suggest you carefully study others' attempts at making portable interfaces, like the standard ANSI C API, Unix API, Win32 API, Cocoa, J2SE, etc.
If it was me, I'd write the library in a carefully chosen subset of the common elements of regular Java and Dalvik virtual machine Java, and also write my own custom parser that translates the code to C for platforms that support C, which would of course be most of them. I would suggest that if you restrict yourself to data types of various-size ints, bools, strings, dictionaries and arrays, and make careful use of them, that will help with cross-platform issues without affecting performance much most of the time.
Your assumptions seem OK, but I see trouble ahead, much of which you have already spotted in your assumptions.
As you said, you can't really export C++ classes and methods; you will need to provide a function-based C interface. Whatever facade you build around that, it will remain a function-based interface at heart.
The basic problem I see with that is that people choose a specific language and its runtime because their way of thinking (functional or object oriented) or the problem they address (web programming, databases, ...) corresponds to that language in some way or other.
A library implemented in C will probably never feel like the libraries they are used to, unless they program in C themselves.
Personally, I would always prefer a library that "feels like python" when I use python, and one that feels like java when I do Java EE, even though I know c and c++.
So your effort might be of little actual use (other than your gain in experience), because people will probably want to stick with their mindset, and rather re-implement the functionality than use a library that does the job, but does not fit.
I also fear the desired portability will seriously hamper development. Just think of the infinite build settings needed, and tests for that. I have worked on a project that tried to maintain compatibility for 5 operating systems (all posix-like, but still) and about 10 compilers, the builds were a nightmare to test and maintain.
Give it an XML interface, whether passed as a parameter and return value or as files through a command-line invocation. This may not seem as direct as a normal function interface, but is the most practical way to access an executable from, e.g., Java.

Why aren't Win32 applications very object oriented, and why are there so many pointers?

This might be a dumb question to some of you, and maybe I asked it wrong, because I am new to C++. But I notice that when working in a lot of Win32 applications, you use a lot of resources that are pointers. Why do you always have to acquire an object's pointer? Why not instantiate a new instance of the class? And speaking of that, I notice that in most cases you never instantiate new objects, but always call methods that return a pointer. What if that pointer is being used somewhere else? Couldn't you mess something up if you alter that pointer while it is being used somewhere else?
Windows APIs were designed for C, which was and still is the most used language for system programming; C APIs are the de facto standard for system APIs, and because of this almost all other languages have some way to call external C functions, so writing a C API helps compatibility with other languages.
C APIs need just a simple ABI, which consists of little more than the calling convention to use for functions (and something about structure layout). C++ and other object oriented languages, on the contrary, require a complex ABI that must define how objects are laid out in memory, how to handle inheritance, how to lay out the vtable, how to propagate exceptions, where to put RTTI data, ... Moreover, not all languages are object oriented, and using APIs designed for C++ from non-object-oriented languages can be a real pain (if you have ever used COM from C you know what I mean).
As an aside, when Windows was initially designed, C++ wasn't so widespread on PCs, and C wasn't used that much either: actually, a large part of Windows 3.11 and many applications were still written in assembly, since the memory and CPU constraints of the era were very tight; compilers were also less smart than they are now, especially C++ ones. On machines where hand-forged assembly was often the only solution, the C++ overhead was really unacceptable.
As for the pointers: Windows APIs almost always use handles, i.e. opaque pointers, to be able to change the underlying nature of every resource without affecting existing applications, and to stop applications from messing around with internal structures. It doesn't matter if the structure used by the window manager to represent a window internally changes: all applications simply use an HWND, which is always the size of a pointer. You may think of this as a kind of PIMPL idiom.
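As a toy illustration of the same idea (not actual Windows code, just a sketch), an opaque handle lets the library change its internal layout without recompiling callers:

    /* public header: callers only ever see the handle */
    typedef struct window* window_handle;

    window_handle create_window(const char* title);
    void          set_title(window_handle w, const char* title);
    void          destroy_window(window_handle w);

    /* library source: the real struct can gain or reorder fields freely */
    struct window
    {
        int  x, y, width, height;
        char title[256];
    };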
That said, Windows is in some ways object oriented (see for example the whole "window class" concept, or, at a deeper level, the inner workings of the NT kernel, which is heavily based on the "object" concept); however, its most basic APIs, being simple C functions, somewhat hide this OO nature. The shell, on the other hand, being designed many years later, is written mostly in C++ and provides a really object-oriented COM interface.
Interestingly, you can see in COM all the tradeoffs that you must face in building a cross-language but still C++ biased object-oriented interface: the result is quite complicated, in some respects ugly and not really simple to use from any language. The Windows APIs, instead, being simple functions are generally more easily callable.
If you are interested in a system based on C++ APIs you may have a look at Haiku; personally, this is one of the aspects that makes that project quite interesting to me.
By the way, if you are going to do Win32 programming just with the APIs, you'd better get a good book to get used to these "particularities" and to other Win32 idioms. Two well-known ones are the Rector-Newcomer and the Petzold.
Because the Win32 APIs are written in plain C, not C++. So a program in almost any language can make calls to those APIs.
Plus, there is no simple mechanism for using objects across different modules and different languages. I.e. you can't export a C++ class to Python. Of course, there are technologies like OLE/COM, but they are still written in plain C, and they are a bit complicated to use.
On the other hand, calls to plain C functions are standardized, so you can call routines from a DLL or static lib in any language.
Win32 was designed to work with the C language not C++.
That's why you will see return types of the defined BOOL instead of bool for example.
bool is specific to C++ and doesn't exist in C.
For Microsoft's object oriented wrapper of Win32 see MFC.
A newer framework from Microsoft since then is the .Net Framework.
The .Net framework is based on managed code though and does not run natively. The most modern way to do GUI programming on Windows is WPF or even Silverlight.
The most modern way to do unmanaged GUI programming is still using MFC, although some people still prefer using straight Win32.
Note that working with pointers is not specific to C; it is still very common in C++.
The first reason is that passing pointers around is cheap. Pointers are 4 bytes on x86 and 8 bytes on x64, while the structure or class they point to can occupy a whole lot more memory. So instantiating a class means reserving new memory again and again, which isn't efficient from the speed and memory consumption points of view.
Another way is to pass references to objects, or smart pointers, or similar constructs. But the Win32 API was designed in the C era, so this is where it is up to now ;)
As for potentially messing things up with pointers - it's possible, of course. But most of the time their lifetime is explicitly stated in the API (if not obvious).
Probably because the Win32 API is "older" than mainstream object-oriented programming, it's not a C++ API at its core.
It sounds almost like you should try one of the many OO wrappers, like MFC or .NET.
The Windows API is plain old C, hence the use of pointers everywhere. Also, the reason you ask Windows for a new pointer is that Windows needs to keep track of all objects... it allocates things and gives you a pointer (or sometimes just a numeric ID) to let you work with them.
Having C functions as the API allows both C and C++ programmers use it.
The Windows APIs go way back - C was famous in those days.
All the HWND, HANDLE, HDC types are but a weak attempt to make encapsulated, object-like data types (using struct). The C FAQ has a question on this -> http://c-faq.com/struct/oop.html.
To understand the pointers you might want to read the CPlusPlus.com tutorial on pointers.

Unmanaged C++: How to dynamically load code?

Our industry is in high-performance distributed parallel computing. We have an unmanaged C++ application being developed using Visual Studio 2008.
Our application (more like a framework) is supposed to be able to dynamically load code (algorithms) developed by 3rd parties (there can be many DLLs) that conforms to our interface specification, and call the loaded code to get some results.
Think of it like you want to call a sin(x) function, but there are many different implementations of sin(x) that you could use.
I have a few questions as I'm very new to this area of dynamically loading code:
Is a DLL (dynamic-link library) the answer to this type of requirement?
If the 3rd party used a different IDE to create the DLL than mine (say Eclipse CDT, C++ Builder, Visual C++ 6.0, etc.), would the DLL still work with my application?
Our application is supposed to work cross-platform as well (should be able to run on Linux), would using QLibrary be the most logical way to abstract away all the platform-specific dll loading?
(OPTIONAL) Are there some unforeseen problems that I might encounter?
1) Generally, yes. If the API gets complex and multi-object, I'd use COM or a similar mechanism. But if you only have to manage a little state, or can go completely state-free, a pure DLL interface is fine.
2) Use a suitable calling convention (stdcall) and data types. I would not even assume that the implementation has to be in C++. So that means char/wchar_t, explicitly-sized ints (e.g. int32), float and double, and C-style arrays of them.
3) can't say
4)
No cross-boundary memory allocations: free what you allocate, and let the plugin free what it allocated.
Your API design has a big influence on achievable performance and implementation effort. Don't make the functions too small; give the implementation some freedom in how it handles certain things, and define an error protocol, threading requirements, etc.
[edit]
Also, if you declare structures, look into alignment and the compiler options that control it (usually #pragma pack). This is required so clients see the same layout.
The compiler usually "mangles" the names of the exported symbols (e.g. adds an underscore for the STDCALL convention). Typically, this is controlled through a .def file that is passed to the linker.
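A hedged sketch of the basic mechanics on Windows (all names are invented; on Linux the equivalent calls are dlopen/dlsym):

    // ---- plugin side: exported with C linkage and an explicit calling convention ----
    extern "C" __declspec(dllexport) double __stdcall Compute(double x);

    // ---- host side: load the DLL and resolve the symbol at run time ----
    #include <windows.h>
    #include <iostream>

    typedef double (__stdcall *ComputeFn)(double);

    int main()
    {
        HMODULE dll = LoadLibraryA("third_party_algo.dll");   // hypothetical plugin
        if (!dll)
            return 1;

        // Note: with __stdcall the exported name may be decorated (e.g. _Compute@8)
        // unless a .def file is used, as mentioned above.
        ComputeFn compute = reinterpret_cast<ComputeFn>(GetProcAddress(dll, "Compute"));
        if (compute)
            std::cout << compute(0.5) << '\n';

        FreeLibrary(dll);
        return 0;
    }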
Yes, it is. But there are many issues.
May or may not. Firstly, you should be aware of calling conventions. Secondly, since there is no standardized ABI for C++, you should stick to plain C at the interface level (btw, COM technology was all about this -- making a standardized ABI). Thirdly, each plugin may have its own CRT and so problems may appear; there is a good article about this on MSDN.
Yes, this will help with loading dynamic libraries, but not with the problems in (2). Although these problems are mostly Windows-specific.
In my experience, QLibrary is the best answer to your questions.
It provides a simple interface and takes care of all the platform-specific details.
Indeed, that means that the plugins must also be written using Qt.
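For what it's worth, resolving a plain C-style symbol through QLibrary takes only a few lines; this is a minimal sketch with an invented library and symbol name:

    #include <QLibrary>

    typedef double (*ComputeFn)(double);

    void callPlugin()
    {
        QLibrary lib("third_party_algo");   // QLibrary appends .dll/.so/.dylib as needed
        ComputeFn compute = reinterpret_cast<ComputeFn>(lib.resolve("Compute"));
        if (compute) {
            double result = compute(0.5);
            // use result ...
        }
    }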

What's the advantage of using COM over a plain DLL?

Assume that you work only in the C++ world (cross-language interop is not required). What advantages/inconvenients do you see in using COM instead of a plain basic DLL? Do you think using COM is worth the trouble if you are not going to use the interface from different languages?
Everybody is mentioning things that are in COM's plus column. I'll mention a couple of drawbacks.
When you implement your system using COM, you need to register the COM 'servers' (be they in-proc or out-of-proc) at setup and unregister them at uninstall. This could increase the complexity of your setup system slightly and tends to require a reboot unless the user carefully tears down running processes first.
COM is slow compared to other standard ways of doing the same thing. This comment will probably generate a lot of hate and maybe some downvotes, but the fact of the matter is that at some point you will need to marshall data, and that is expensive.
According to the rules of COM, once an interface has been published it can never be changed. That in itself is not a negative, and you might even argue that it forces you to do thorough design before shipping the interface. But the truth is there's no such thing as never, and in production code interfaces change. You will undoubtedly need to either add methods or change the signatures of existing methods. In order to accomplish this you have to either break the rules of COM -- which has bad effects -- or follow the rules of COM, which are more complicated than just adding a parameter to a function like you would with a straight DLL.
COM can be useful in plain old C++ for:
Interprocess communication
Plugin architectures
Late binding scenarios
"Much, much, more..." (tm)
That said, if you don't need it, don't use it.
With DLL you can get much closer coupling, while COM limits interactions very precisely. This is the root of both the advantages and the disadvantages!
You get more power and flexibility (e.g. inherit from classes defined in the DLL, not feasible in COM) but the dependency is thereby much stronger (need to rebuild the user for certain changes to the DLL, etc).
Often especially galling is that all DLLs and the EXE must use the same kind of runtime library and options (e.g. all dynamically linked to the non-debug multithreaded version of msvcrt* -- you can't rebuild just one to use the debug version without very likely incurring errors!).
The looser coupling of COM is therefore often preferable, unless you really need the closer-coupling kinds of interactions in a specific case (e.g., a framework, which definitely requires user-code to inherit from its classes, should be a DLL).
If you can avoid it, don't use it. In my last project COM brought quite a few limitations to the C++ interfaces being used. Just imagine that you can't simply pass a std::string but have to use an array of characters. In that case you build the string and then copy it to an array which can be handled by COM.
You can also only use a very limited set of fundamental types, you have casts everywhere, and proprietary memory management: you can't use new/delete, but have to use COM's own functions.
You also can't simply throw an exception, but have to initialize the COM IErrorInfo interface, which will then be rethrown at the other end.
So if you don't need it, don't use it. It will definitely mess up your design. And if you need it, try to evaluate other interop possibilities: boost::interprocess, ZeroC Ice, ...
Regards,
Ovanes
Registration and discovery
Out-of-process
Remote invocation
are a few of the extra features that you would get. Even transactional support is available without the need for COM these days.
The IUnknown interface is a good base level to support anyway -- it gets you a way to add features without breaking old clients (QueryInterface) and pervasive reference counting. You can implement this without buying into everything in COM.
Then, whenever you are adding a feature to a class, if you use the COM interface for it, you at least get an interface that is known -- for example IDispatch if you want reflection features.
Your only delta away from being able to be called by another language would then be the registration and the class factory.
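Going back to the IUnknown point above, a rough sketch of "IUnknown without the rest of COM" could look like this (error handling trimmed, class name invented):

    #include <unknwn.h>   // IUnknown, IID_IUnknown
    #include <atomic>

    class Widget : public IUnknown
    {
        std::atomic<ULONG> m_refs{1};
    public:
        ULONG STDMETHODCALLTYPE AddRef() override { return ++m_refs; }

        ULONG STDMETHODCALLTYPE Release() override
        {
            ULONG n = --m_refs;
            if (n == 0)
                delete this;        // object owns its own lifetime via ref counting
            return n;
        }

        HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppv) override
        {
            if (IsEqualIID(riid, IID_IUnknown))
            {
                *ppv = static_cast<IUnknown*>(this);
                AddRef();
                return S_OK;
            }
            *ppv = nullptr;         // add further interfaces here as the class grows
            return E_NOINTERFACE;
        }
    };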
Because interfaces are independent of any particular DLL, at its simplest level a COM-like approach at the very least frees you to change the DLL serving an interface under the hood, without having to recompile your app against the new DLL name.
Using full COM with MIDL-defined interfaces and proxy/stub DLLs means that you can use COM to manage thread safety in-process, interprocess comms on the same PC, or even connect to the COM server object on a remote PC.