multiple dlls with multiple classes and global variables in dll - c++

Good day, everyone!
I have some questions about DLL programming in C++; it's rather new to me.
1) If I want to create a DLL with multiple classes, but I still want an abstract interface for each class, should I put all the interfaces in one header file, or create a separate header for each abstract class? And where should the .cpp implementations of the factory functions go?
2) If I create an object through a factory function and get a pointer to the instance, can I just call "delete" in my program when I want to free that memory? I suspect the object lives in the DLL's pages and there may be some problems. What should I do to properly free memory in this case?
3) I read that if more than one process loads a DLL, each process gets its own separate instance of the DLL's global variables. Is that right? If it is, I have two more questions:
3.1) What happens to static members in a DLL? If I want to create a singleton manager, can I place it in a DLL?
3.2) Suppose I have Core.dll plus Graphics.dll, Sound.dll and Physics.dll, and Core.dll has a global variable (a singleton manager in my real case). Will the other DLLs all work with a single instance of the singleton, or with separate ones? (Each DLL uses Core.dll.)
I apologize for my weak English and many questions in one topic :)
Thank you for your attention and answers.

1: Mostly this is up to you and depends on the scale of the project. On something small it matters little, so keep it simple and have a single header. On larger projects it is best to reduce unnecessary interdependencies as much as possible - so put them in separate files. You can always create an "all.h" which just includes the other headers. The factory functions are implemented in ordinary .cpp files compiled into the DLL; only their declarations need to appear in the headers you ship.
2: Yes, if the DLL and the EXE are both linked to the multithreaded DLL CRT. Unless you know what you are doing, always use this as it is the safest and will do what you expect - it results in the exe and dll(s) being able to share the heap as if they were a single executable. You can "new Object()" in the dll and "delete obj" in the exe freely.
NOTE: Mixing different versions of your EXE and your DLL can introduce incredibly subtle bugs (if, say, a class/struct definition changes), so don't do that.
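As a minimal sketch of what points 1 and 2 describe - using hypothetical names (IRenderer, CreateRenderer, GRAPHICS_API) rather than anything from the question - an abstract interface lives in its own header, and the factory is declared there but implemented in a .cpp inside the DLL:

    // IRenderer.h -- one header per abstract class (or gather them in an "all.h")
    #pragma once

    #ifdef GRAPHICS_EXPORTS
    #  define GRAPHICS_API __declspec(dllexport)
    #else
    #  define GRAPHICS_API __declspec(dllimport)
    #endif

    class IRenderer {
    public:
        virtual ~IRenderer() = default;
        virtual void Draw() = 0;
    };

    // Factory: declared here, implemented in a .cpp that is compiled into the DLL.
    extern "C" GRAPHICS_API IRenderer* CreateRenderer();

    // Renderer.cpp -- inside the DLL; the concrete class never leaves it
    #include "IRenderer.h"
    #include <cstdio>

    class Renderer : public IRenderer {
    public:
        void Draw() override { std::puts("drawing"); }
    };

    IRenderer* CreateRenderer() { return new Renderer(); }

With both the EXE and the DLL linked against the multithreaded DLL CRT (as point 2 assumes), the EXE may simply call "delete" on the pointer returned by CreateRenderer(); if you cannot guarantee a shared CRT, also export a matching DestroyRenderer(IRenderer*) that performs the delete inside the DLL.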
3: Every process has its own independent memory space (unless you specifically do certain things to try to get some shared memory). Processes are not allowed to touch the memory of other processes.
3.1: I strongly recommend you avoid global state. (Global static-const is OK.) Global variables lead to many unexpected and difficult problems, and globals in a Windows DLL have a number of additional complexities. It is far better in the long run for you to have explicit "Initialize/Deinitialize" functions in the DLL that the EXE must call.
But global statics in a DLL are not much different than in an executable ... they get initialized in pretty much the same way when the DLL is loaded. (Things get more complicated when you dynamically load DLLs, but let's ignore that here.)
3.2: Yes, they would work with the single instance - but don't do it anyway, you will eventually regret it. Much better to make the initialization explicit, because you cannot control the order in which global variables are constructed, and this can quickly lead to very difficult initialization problems.
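A sketch of the explicit initialization suggested in 3.1, assuming a hypothetical Core.dll that owns the manager (CORE_API, CoreInitialize and the other names are illustrative, not an established API):

    // Core.h -- shipped to the EXE and to the other DLLs
    #pragma once

    #ifdef CORE_EXPORTS
    #  define CORE_API __declspec(dllexport)
    #else
    #  define CORE_API __declspec(dllimport)
    #endif

    class Manager;                                   // opaque to callers

    extern "C" CORE_API bool     CoreInitialize();   // creates the one Manager
    extern "C" CORE_API Manager* CoreGetManager();   // returns that instance
    extern "C" CORE_API void     CoreShutdown();     // destroys it again

    // Core.cpp -- inside Core.dll
    #include "Core.h"
    #include "Manager.h"                             // full Manager definition (assumed)

    static Manager* g_manager = nullptr;

    bool     CoreInitialize() { if (!g_manager) g_manager = new Manager(); return g_manager != nullptr; }
    Manager* CoreGetManager() { return g_manager; }
    void     CoreShutdown()   { delete g_manager; g_manager = nullptr; }

The EXE (and the other DLLs) call CoreInitialize() once at startup, fetch the singleton through CoreGetManager() whenever they need it, and call CoreShutdown() at exit, so construction and destruction order is explicit instead of depending on static-initialization order across modules.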

Related

How to include classes of a DLL?

I'm new to using DLLs in C++ and I am wondering how to successfully load and use classes contained in a DLL without getting a "corrupted stack", a "null pointer" or any application crash ^^. This is a bit confusing to me for now.
Even if it might be safer, I don't want to use an interface class because it seems too complicated to use. I know that I should use the same include in both the DLL and the application to prevent crashes. That means that my include file would have to mention the members' names.
Here is the problem :
I want to distribute my software (DLL + include file) and I don't want my customers to see the architecture of my project. But the only class I want to export from my DLL has, as members, objects from other classes. So my class depends on other classes, and so on in cascade.
Is it possible to provide only one include file for one class, with just the useful methods, without risking crashes? If not, what solutions might fit my needs?
Thanks
First of all, if you use the same include in the DLL and in the application using it, and if both were compiled with the same compiler and compatible settings, and both use the same runtime library (the DLL one, not statically linked), you should have none of the problems you fear.
If you want to hide details of your DLL and provide an altered, simplified header to its users, you work at your own risk. For example, if data members are missing (even private ones) or if virtual functions are missing (even private ones), then it will probably crash, or worse, corrupt something without being noticed. So I'd strongly advise you not to consider this approach unless you're very, very experienced with the assembly generated by your compiler.
If you want to hide the implementation details of your class, the best approach would probably be to design your library around the PIMPL idiom.
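A minimal PIMPL sketch under made-up names (Widget, WIDGET_API, InternalStuff): the shipped header exposes only the useful methods, while every data member and internal dependency stays in the .cpp inside the DLL:

    // Widget.h -- the only header you ship
    #pragma once
    #include <memory>

    #ifdef WIDGET_EXPORTS
    #  define WIDGET_API __declspec(dllexport)
    #else
    #  define WIDGET_API __declspec(dllimport)
    #endif

    class WIDGET_API Widget {
    public:
        Widget();
        ~Widget();
        void DoUsefulThing();
    private:
        struct Impl;                    // defined only in Widget.cpp
        std::unique_ptr<Impl> impl_;    // customers never see the real members
    };

    // Widget.cpp -- inside the DLL, free to include your internal headers
    #include "Widget.h"
    #include "InternalStuff.h"          // hidden dependencies (assumed)

    struct Widget::Impl {
        InternalStuff stuff;            // the members you don't want to expose
    };

    Widget::Widget() : impl_(new Impl) {}
    Widget::~Widget() = default;        // defined here, where Impl is complete
    void Widget::DoUsefulThing() { impl_->stuff.Work(); }

Because the Impl struct is created and destroyed only inside the DLL, you can change your internal classes freely without touching the header your customers see.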

Is it bad practice to have an application which ships with lots of DLL's?

Is it better to have lots of DLL dependencies, or better to statically link as much as possible?
Thanks
No, it is not bad practice to ship with lots of DLLs; it is bad practice, though, to put them in %System32%. Actually, it is usually good to use DLLs instead of statically linking; for one thing, you can easily swap out just the DLL that you need to update, rather than having to replace the entire binary, and for another, if your program eventually needs multiple executables that work together, you only pay for one copy of the DLL code (whereas, with static linking, you would end up duplicating the code that was common).
Static linking gives your app a larger memory footprint, so DLLs are better from that point of view, i.e. you only load what you need. Nowadays installation is normally done by an installer, so it doesn't matter if you have lots of DLLs.
I don't think it's a bad practice. Look at Office or Adobe or any large-scale application. They end up with lots of DLLs -- because they otherwise would have to pack everything into a 100M+ exe.
Break things out into DLLs when you don't absolutely need them loaded all the time.
Generally speaking it is not a bad practice. It is better to split the code of a program into separate dynamic libraries, especially if the functions provided are used by more than one executable.
That doesn't mean that every program should have its code split into multiple dynamic libraries; for simple utilities, that is probably not needed.
As mentioned by others, lots of DLLs is not a bad practice. Put some thought into what to put in each one. I like to keep the DLLs as 'tiny-island-ish' as I can. If these will be distributed, I like to have a specific naming convention that reflects the product and/or company name and/or initials of some sort.
Just wanted to add another observation from other programs that use many dynamically loaded DLLs. For example, the GIMP and its plug-ins. The way you load your DLLs will affect your client's perceived application speed, if that's a factor among the other very good ones (updates, reuse, etc.). There's some overhead for the OS to load a DLL, and you might run into process limits (like open file handles). Having very many very small DLLs might not be as desirable as a somewhat smaller number of somewhat larger ones.

Benefits of exporting a class from a dll vs. static library

I have a C++ class I'm writing now that will be used all over a project I'm working on. I have the option to put it in a static library, or export the class from a dll. What are the benefits/penalties for each approach. The only one I can think of is compiled code size which I don't really care about. Thanks!
Advantages of a DLL:
You can have multiple different exe's that access this functionality, so you will have a smaller project size overall.
You can dynamically update your component without replacing the whole exe. If you do this though be careful that the interface remains the same.
Sometimes like in the case of LGPL you are forced into using a DLL.
You could have some components as C#, Python or other languages that tie into your DLL.
You can build programs that consume your DLL that work with different versions of the DLL. For example you could check if a function exists in a certain operating system DLL and only call it if it exists, and otherwise do some other processing (see the sketch after the static-library list below).
Advantages of Static library:
You cannot have DLL versioning problems that way.
Less to distribute; you aren't forced into a full installer if you only have a small application.
You don't have to worry about anyone else tying into your code that would have been accessible if it was a DLL.
Easier to develop a static library as you don't need to worry about exports and imports.
Memory management is easier.
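To illustrate the "only call it if it exists" point from the DLL list above, a sketch of a run-time check against an operating-system DLL (the particular API probed here, GetPhysicallyInstalledSystemMemory, is just an example of a function that only exists on newer Windows versions):

    #include <windows.h>

    // Signature of the API we may or may not have at run time.
    typedef BOOL (WINAPI *GetInstalledMemoryFn)(PULONGLONG);

    ULONGLONG PhysicalMemoryInKb()
    {
        HMODULE kernel32 = ::GetModuleHandleW(L"kernel32.dll");
        if (!kernel32)
            return 0;

        auto fn = reinterpret_cast<GetInstalledMemoryFn>(
            ::GetProcAddress(kernel32, "GetPhysicallyInstalledSystemMemory"));

        ULONGLONG kb = 0;
        if (fn && fn(&kb))      // the function exists: ask the OS directly
            return kb;
        return 0;               // it doesn't: fall back to some other processing
    }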
One of the most significant and often unnoted features of dynamic libraries on Windows is that a DLL can end up with its own heap (for example, if it links the CRT statically rather than using the shared DLL runtime). This can be an advantage or a disadvantage depending on your point of view, but you need to be aware of it. Also note that a global variable in a DLL is shared among all the modules within a single process that load that library, not across processes; each process gets its own copy unless you explicitly set up a shared data section. Either behaviour can be a useful design tool or the source of an obscure run-time error.

C++ How to communicate between DLLs of an application?

I have an application and two DLLs. Both libraries are loaded by the application. I don't have the source of the application, but I do have the source of the libs. I want to instantiate a class in lib A and use this instance in lib B as well.
How do I do this? I'm not sure, but I think that both libs are used in different threads of the application.
I have no idea where I have to search for a solution.
Not necessarily. Think of a DLL as just a normal library; both can be used from a single thread.
If you want to use an instance of class A in library X, you must pass it a pointer/reference to that instance. The same applies to library Y. This way both libraries can work with the same object/data.
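A sketch of that idea with made-up names: lib A owns the instance and exports an accessor, and lib B links against A's import library and asks for the same pointer (LIBA_API, GetSharedA and class A here are illustrative):

    // Shared.h -- included by both libs
    #pragma once

    #ifdef LIBA_EXPORTS
    #  define LIBA_API __declspec(dllexport)
    #else
    #  define LIBA_API __declspec(dllimport)
    #endif

    class A {
    public:
        virtual ~A() = default;
        virtual void Work() = 0;
    };

    // Implemented in lib A, which creates and owns the single instance.
    extern "C" LIBA_API A* GetSharedA();

    // In lib B (includes Shared.h and links against A.lib):
    void UseSharedInstance()
    {
        A* shared = GetSharedA();   // the very same object lib A created
        if (shared)
            shared->Work();
    }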
Two dll's loaded into the same process is a fairly simple setup.
You just need to be careful with module scope, which will be the same as dll scope.
e.g. each dll will have its own set of static instances for any static objects.
Next you need to understand how to reference functions/classes across the boundary and what sort of types are safe to use as arguments.
Have a look on any documentation for dllexport and dllimport - there are several useful questions on this site if you search with those terms.
You have to realize that even though your DLLs are used by the host application, nothing prevents you (that is, your DLLs) from using your DLLs as well. So in your DLL A you could load and use your DLL B and call its functions. When DLL A is unloaded, free DLL B as well. DLLs are reference counted, so your DLL A will have a reference count of 1 (the host application) and your DLL B a count of 2 (the host application and DLL A). You will not have two instances of DLL B loaded in the same process.
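A sketch of that arrangement, assuming DLL A has (or can be given) an init/shutdown pair that gets called; B.dll, RegisterShared and the function names are illustrative:

    // Inside A.dll
    #include <windows.h>

    typedef void (*RegisterSharedFn)(void* instance);

    static HMODULE g_libB = nullptr;

    void InitA(void* sharedInstance)            // called from A's initialization
    {
        g_libB = ::LoadLibraryA("B.dll");       // bumps B.dll's reference count
        if (!g_libB)
            return;
        auto reg = reinterpret_cast<RegisterSharedFn>(
            ::GetProcAddress(g_libB, "RegisterShared"));
        if (reg)
            reg(sharedInstance);                // hand the shared object over to B
    }

    void ShutdownA()
    {
        if (g_libB) {
            ::FreeLibrary(g_libB);              // drop the reference A holds
            g_libB = nullptr;
        }
    }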
Named pipes might be the solution to your problems.
If you're targetting windows, you can check this reference
[EDIT] Didn't see that you wanted to work on the same instance. In that case you need shared memory, however you really have to know what you're doing as it's quite dangerous :)
A better solution could be to apply OO Networking principles to your libs and communicate with, say CORBA, using interprocess middleware or the 127.0.0.1 loopback interface (firewalls will need to allow this).
Seems the simple solution would be to include in some initialization function of library A (or in DllMain if needed) a simple call to a function in library B passing a pointer to the common object. The only caveat is that you must ensure the object is deleted by the same DLL that newed it to avoid problems with the heap manager.
If these DLLs are in fact used from different threads, you may have to protect access to said data structure with some kind of mutex.
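A sketch of that protection with std::mutex; the Shared struct and the accessor are illustrative, and the important point is that the object and its mutex live in one module while the other library reaches them through exported functions:

    #include <mutex>

    struct Shared { int counter = 0; };

    // Defined once, in the module that owns the data (e.g. library A),
    // and reached from the other library only through exported functions.
    static Shared     g_shared;
    static std::mutex g_mutex;      // guards every access to g_shared

    void TouchShared()              // exported; callable from either library
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        ++g_shared.counter;
    }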

How to implement monkey patch in C++?

Is it possible to implement monkey patching in C++?
Or any other similar approach to that?
Thanks.
Not portably, and given the dangers for larger projects you had better have a good reason.
The preprocessor is probably the best candidate, due to its ignorance of the language itself. It can be used to rename attributes, methods and other symbol names - but the replacement is global, at least for a single #include or sequence of code.
I've used that before to beat "library diamonds" into submission - Library A and B both importing an OS library S, but in different ways so that some symbols of S would be identically named but different. (namespaces were out of the question, for they'd have much more far-reaching consequences).
Similarly, you can replace symbol names with compatible-but-superior classes.
e.g. in VC, #import generates an import library that uses _bstr_t as a type adapter. In one project I successfully replaced these _bstr_t uses with a compatible-enough class that interoperated better with other code, just by #define'ing _bstr_t as my replacement class for the #import.
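A sketch of that kind of preprocessor swap, with made-up names (third_party.h and their_string do not refer to a real library, and the layout/API compatibility of the replacement class is assumed):

    // my_string.h defines MyString, assumed compatible enough with their_string
    // for the way the third-party header uses it.
    #include "my_string.h"

    #define their_string MyString   // every mention of their_string now means MyString
    #include "third_party.h"        // the third-party code is compiled against MyString
    #undef their_string             // keep the rename local to this include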
Patching the Virtual Method Table - either replacing the entire VMT or individual methods - is something else I've come across. It requires a good understanding of how your compiler implements VMTs. I wouldn't do that in a real-life project, because it depends on compiler internals, and you don't get any warning when things have changed. It's a fun exercise for learning about the implementation details of C++, though. One application would be switching at runtime from an initializer/loader stub to a full - or even data-dependent - implementation.
Generating code on the fly is common in certain scenarios, such as forwarding/filtering COM Interface calls or mapping OS Window Handles to library objects. I'm not sure if this is still "monkey-patching", as it isn't really toying with the language itself.
To add to other answers, consider that any function exposed through a shared object or DLL (depending on platform) can be overridden at run-time. Linux provides the LD_PRELOAD environment variable, which specifies a shared object to load before all others, so its symbols take precedence and can be used to override arbitrary function definitions. It's actually about the best way to provide a "mock object" for unit-testing purposes, since it is not really invasive. However, unlike other forms of monkey-patching, be aware that a change like this is global. You can't make one particular call behave differently without impacting all the other calls.
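A minimal LD_PRELOAD sketch on Linux: a tiny interposing shared object that overrides the C library's rand() for the whole process (the file names and build line are just one reasonable way to do it):

    // fake_rand.cpp
    // build: g++ -shared -fPIC -o libfakerand.so fake_rand.cpp
    // run:   LD_PRELOAD=./libfakerand.so ./your_program
    extern "C" int rand(void)
    {
        return 42;   // every module in the process now gets this "random" number
    }

As the answer notes, the override is global: every call to rand() anywhere in the process sees the replacement, which is exactly why it works so well for mocking and so badly for anything subtler.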
Considering the "guerilla third-party library use" aspect of monkey-patching, C++ offers a number of facilities:
const_cast lets you work around zealous const declarations.
#define private public prior to header inclusion lets you access private members (see the sketch after this list).
subclassing and using Parent::protected_field lets you access protected members.
you can redefine a number of things at link time.
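A sketch of the first two tricks in that list, using a made-up ThirdParty.h and Gadget class; note that #define private public is formally undefined behaviour and is shown only because the answer mentions it:

    #define private public           // deliberately evil: debugging/poking only
    #include "ThirdParty.h"          // Gadget's private members are now reachable
    #undef private

    void Poke(const Gadget& g)
    {
        // const_cast works around an overly zealous const declaration
        Gadget& writable = const_cast<Gadget&>(g);
        writable.hidden_counter = 0; // a member that is private in the real header
    }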
If the third-party content you're working around is provided already compiled, though, most of the things feasible in dynamic languages aren't as easy, and often aren't possible at all.
I suppose it depends what you want to do. If you've already linked your program, you're gonna have a hard time replacing anything (short of actually changing the instructions in memory, which might be a stretch as well). However, before this happens, there are options. If you have a dynamically linked program, you can alter the way the linker operates (e.g. LD_LIBRARY_PATH environment variable) and have it link something else than the intended library.
Have a look at valgrind for example, which replaces (among a lot of other magic stuff it's dealing with) the standard memory allocation mechanisms.
As monkey patching refers to dynamically changing code, I can't imagine how this could be implemented in C++...