Preserve state in DLL between successive calls from C++ application - c++

I am explicitly linking to a DLL in my application. Is it possible to preserve state in that DLL between successive calls to it? My attempts using a global variable have failed so far.
Would I have to use implicit linking for this to work?

The type of linking shouldn't have any influence here. It just defines when the DLL is loaded and whether it's required to actually start your program. E.g. with runtime loading you're able to load DLLs that aren't there at compile time (e.g. plugins) and you're able to handle missing dependencies yourself. With compile-time linking you'd just get a Windows error telling you that a DLL is missing.
As for the unloading, you don't have direct control over whether your DLL will stay in memory, so it's possible it's unloaded between being used by two different programs. Also, what do you actually consider "successive calls"? Two calls from the same program? Two calls from the same program happening during two different executions? Two programs running at the same time? Depending on the scenario you might need some shared memory (or disk space) to actually pass data.
You might have a look at DllCanUnloadNow to tell Windows whether you're ready to be unloaded, but depending on your use case this might be the wrong tool.
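For the simplest case - two calls from the same program during one run - a plain global inside the DLL does persist for as long as the DLL stays loaded in that process. A minimal sketch, with made-up names (counter_dll.cpp, built as a DLL):

static int g_counter = 0;   // lives as long as the DLL stays loaded in the calling process

extern "C" __declspec(dllexport) int IncrementCounter()
{
    return ++g_counter;     // successive calls from the same process see the previous value
}

If the state has to survive the DLL being unloaded, or be shared between two processes or two executions, you need shared memory or a file instead, as described above.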

Related

In dynamic linking, how does the .exe know where to search for the library when it is updated?

It is my understanding that when a C program uses dynamic linking, the compiled version of the program (.exe) stores the memory address of the library somewhere. But when the program is installed on someone else's computer, isn't the location of the library different? Or, when you update the library, wouldn't its memory address be different?
Neither C nor C++ specifies how this works. It's different for the different operating systems and exe formats. To know the specifics you need to look into how your implementation does things.
The short answer to your question is that the OS sets the environment up within which your program runs. It has to attach the program to the right places, or at the least notify it. Generally you start your program and the format tells the OS what libraries it should load and then it links up the addresses in some way.
There's usually a way to do this manually as well and directly request a library to be loaded during runtime. The automatic linking of calls, though, may not happen in these cases.
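On Windows the manual route looks roughly like this (the DLL and function names are made up; Linux has the same idea with dlopen/dlsym):

#include <windows.h>
#include <cstdio>

int main()
{
    // Load by name at runtime - no address is baked into the exe, only the name.
    HMODULE lib = LoadLibraryA("counter_dll.dll");
    if (!lib) { std::printf("DLL not found\n"); return 1; }

    // Resolve the function's current address ourselves; with implicit linking
    // the loader performs the equivalent fix-up in the import table for us.
    typedef int (*IncrementFn)();
    IncrementFn increment =
        reinterpret_cast<IncrementFn>(GetProcAddress(lib, "IncrementCounter"));
    if (increment)
        std::printf("counter = %d\n", increment());

    FreeLibrary(lib);
    return 0;
}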
Yes, the location of the library is different on different computers. And yes, when you update the library, its memory address is different. That is why the address of a dynamically linked function cannot be hardwired into the executable file. Instead, only its name and the name of the hosting library (without path specification) are stored in the PE format.
Before program.exe starts, the OS loader looks for the required DLL, loads it into the virtual memory space of the starting program, finds the current addresses of the required functions in that DLL, and writes them into the Import Address Table (IAT).
When your program calls some dynamically linked function, it actually makes an indirect CALL through its address in the IAT.

Are shared objects/DLLs loaded by different processes into different areas of memory?

I'm trying to figure out how an operating system handles multiple unrelated processes loading the same DLL/shared library. The OSes I'm concerned with are Linux and Windows, but to a lesser extent Mac as well. I presume the answers to my questions will be identical for all operating systems.
I'm particularly interested in regards to explicit linking, but I'd also like to know for implicit linking. I presume the answers for both will also be identical.
This is the best explanation I've found so far concerning Windows:
"The system maintains a per-process reference count on all loaded modules. Calling LoadLibrary increments the reference count. Calling the FreeLibrary or FreeLibraryAndExitThread function decrements the reference count. The system unloads a module when its reference count reaches zero or when the process terminates (regardless of the reference count)." - http://msdn.microsoft.com/en-us/library/windows/desktop/ms684175%28v=vs.85%29.aspx
But it leaves some questions.
1.) Do unrelated processes load the same DLL redundantly (that is, the DLL exists more than once in memory) instead of using reference counting? (i.e., into each process's own "address space", as I think I understand it)
If the DLL is unloaded as soon as a process terminates, that leads me to believe the other processes using the exact same DLL will have it redundantly loaded into memory; otherwise the system should not be allowed to ignore the reference count.
2.) If that is true, then what's the point of reference counting DLLs when you load them multiple times in the same process? What would be the point of loading the same DLL twice into the same process? The only feasible reason I can come up with is that if an EXE references two DLLs, and one of the DLLs references the other, there will be at least two LoadLibrary() and two FreeLibrary() calls for the same library.
I know it seems like I'm answering my own questions here, but I'm just postulating. I'd like to know for sure.
The shared library or DLL will be loaded once for the code part, and multiple times for any writable data parts [possibly via "copy-on-write", so if you have a large chunk of memory which is mostly read, with only some small parts being written, all the processes mapping the DLL can share the same pages as long as they haven't been changed from the original value].
It is POSSIBLE that a DLL will be loaded more than once, however. When a DLL is loaded, it is loaded at a base address, which is where the code starts. If we have some process using, say, two DLLs that, because of their previous loading, use the same base address [because the other processes using them don't use both], then one of the DLLs will have to be loaded again at a different base address. For most DLLs this is rather unusual, but it can happen.
The point of reference counting every load is that it allows the system to know when it is safe to unload the module (when the reference count is zero). If we have two distinct parts of the system, both wanting to use the same DLL, and they both load that DLL, you don't really want the system to crash when the first part closes the DLL. But we also don't want the DLL to stay in memory when the second part has closed it, because that would be a waste of memory. [Imagine that this application is a process that runs on a server, and new DLLs are downloaded from a server every week, so each week the "latest" DLL (which has a different name) is loaded. After a few months, you'd have the entire memory full of this application's "old, unused" DLLs.] There are of course also scenarios such as what you describe, where a DLL loads another DLL using the LoadLibrary call, and the main executable loads the very same DLL. Again, you do need two FreeLibrary calls to close it.
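As a rough illustration of the per-process reference count (user32.dll is used only because it is guaranteed to exist; in practice it is already loaded by the process, so it won't actually be unloaded here):

#include <windows.h>
#include <cstdio>

int main()
{
    HMODULE a = LoadLibraryA("user32.dll");   // increments this process's reference count
    HMODULE b = LoadLibraryA("user32.dll");   // same module, count incremented again
    std::printf("same handle: %s\n", (a == b) ? "yes" : "no");   // prints "yes"

    FreeLibrary(a);   // count decremented, module stays loaded
    FreeLibrary(b);   // only when the count reaches zero can the module be unloaded
    return 0;
}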

Is a DLL slower than a static link?

I made a GUI library for games. My test demo runs at 60 fps. When I run this demo with the static version of my library it takes 2-3% CPU in Task Manager. When I use the DLL version it uses around 13-15%. Is that normal? If so, how could I optimize it? I already asked it to use /O2 for the most function inlining.
Do not start your performance timer until the DLL has had an opportunity to execute its functionality one time. This gives it time to load into memory. Then start the timer and check performance. It should then basically match that of the static lib.
Also keep in mind that the load location of the DLL can greatly affect how quickly it loads. The traditional default base address for DLLs is 0x10000000. If you already have some other DLL at that location, then the load process must perform an expensive re-addressing (relocation) step, which will throw off your timing even more.
If you have such a conflict, just choose a different base address in Visual Studio (the /BASE linker option).
You will have the overhead of loading the DLL (should be just once at the beginning). It isn't statically linked in with direct calls, so I would expect a small amount of overhead but not much.
However, some DLLs will have much higher overheads. I'm thinking of COM objects although there may be other examples. COM adds a lot of overhead on function calls between objects.
If you call DLL functions they cannot be inlined for the caller. You should think a little about your DLL boundaries.
Maybe it is better for your application to have a small bootstrap exe which just executes the main loop in your DLL. This way you can avoid much of the overhead of function calls.
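A sketch of that bootstrap idea (the export name is made up):

// bootstrap.cpp - the exe makes a single call across the DLL boundary;
// the frame-by-frame work stays inside the DLL where it can be inlined.
__declspec(dllimport) int RunMainLoop();   // implemented and exported by the GUI DLL

int main()
{
    return RunMainLoop();
}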
It's a little unclear as to what's being statically/dynamically linked. Is the DLL of your lib statically linked with its dependencies? Is it possible that the DLL is calling other DLLs (that will be slow)? Maybe try running a profiler from valgrind on your executable to determine where all the CPU usage is coming from.

Overhead of DLL

I have a quite basic question.
When a library is used only by a single process, should I keep it as a static library?
If I use the library as a DLL, but only a single process uses it, what will be the overhead?
There is almost no overhead to having a separate DLL. Basically, the first call to a function exported from a DLL will run a tiny stub that fixes up the function addresses so that subsequent calls are performed via a single jump through a jump table. The way CPUs work, this extra indirection is practically free.
The main "overhead" is actually an opportunity cost, not an "overhead" per-se. That is, modern compilers can do something called "whole program optimization" in which the entire module (.exe or .dll) is compiled and optimized at once, at link time. This means the compiler can do things like adjust calling conventions, inline functions and so on across all .cpp files in the whole program, rather than just within a single .cpp file.
This can result in fairly nice performance boost, for certain kinds of applications. But of course, whole program optimization cannot happen across DLL boundaries.
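With MSVC, for example, whole program optimization is the /GL compiler switch plus /LTCG at link time; within one module it lets the optimizer inline across .cpp files. A sketch, assuming two source files built into the same exe or DLL:

// helper.cpp
int small_helper(int x) { return x * 2; }

// main.cpp
int small_helper(int x);                  // defined in another .cpp file
int main() { return small_helper(21); }   // with /GL + /LTCG this call can be inlined;
                                          // across a DLL boundary it cannot be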
There are two overheads to a DLL. First, as the DLL is loaded into memory, the internal addresses must be fixed up for the actual address the DLL is loaded at, versus the addresses assumed by the linker. This can be minimized by re-basing the DLLs. The second overhead is when the program and DLL are loaded: the program's calls into the DLL have the addresses of the functions filled in. These overheads are generally negligible except for very large programs and DLLs.
If this is a real concern you can use delay-loaded DLLs which only get loaded as they are called. If the DLL is never used, for example it implements a very uncommon function, then it never gets loaded at all. The downside is that there's a short delay the first time the DLL is called.
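A sketch of the delay-load setup (names are made up; with MSVC you link with /DELAYLOAD:rarely_used.dll and delayimp.lib):

// Declared as a normal import; the DLL itself is only loaded on the first call.
__declspec(dllimport) void RarelyUsedFunction();

void MaybeDoRareThing(bool needed)
{
    if (needed)
        RarelyUsedFunction();   // rarely_used.dll gets loaded here, on first use
    // if 'needed' is never true, the DLL is never loaded at all
}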
I like to use statically linked libraries, not to decrease the overhead but to minimize the hassle of having to keep the DLL with the program.
Imported functions have no more overhead than virtual functions.

How do I decide whether having more than one VC++ CRT state is a problem for my application?

This MSDN article says that if my application loads the VC++ runtime multiple times, because either it or some DLLs it depends on are statically linked against the VC++ runtime, then the application will have multiple CRT states and this can lead to undefined behaviour.
How exactly do I decide if this is a problem for me? For example, in this MSDN article several examples are provided that basically say that objects maintained by the C++ runtime, such as file handles, should not be passed across DLL boundaries. What exactly is the list of things to check if I want my project to statically link against the VC++ runtime?
It's OK to have multiple copies of the CRT as long as you aren't doing certain things:
Each copy of the CRT will manage its own heap.
This can cause unexpected problems if you allocate a heap-based object in module A using 'new', then pass it to module B where you attempt to release it using 'delete'. The CRT for module B will attempt to identify the pointer in terms of its own heap, and that's where the undefined behavior comes in: if you're lucky, the CRT in module B will detect the problem and you'll get a heap corruption error. If you're unlucky, something weird will happen, and you won't notice it until much later...
You'll also have serious problems if you pass other CRT-managed things like file handles between modules.
If all of your modules dynamically link to the CRT, then they'll all share the same heap, and you can pass things around and not have to worry about where they get deleted.
If you statically link the CRT into each module, then you have to take care that your interfaces don't include these types, and that you don't allocate using 'new' or malloc in one module, and then attempt to clean up in another.
You can get around this limitation by using an OS allocation function like MS Windows's GlobalAlloc()/GlobalFree(), but this can be a pain since you have to keep track of which allocation scheme you used for each pointer.
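A common way to stay safe is to let the module that allocates also be the module that frees, by exporting a matching pair of functions. A sketch, with made-up names:

// In the statically linked DLL:
extern "C" __declspec(dllexport) char* CreateBuffer(int size)
{
    return new char[size];    // allocated on the DLL's own CRT heap
}

extern "C" __declspec(dllexport) void DestroyBuffer(char* p)
{
    delete[] p;               // freed on the same heap it came from
}

// In the exe: call CreateBuffer/DestroyBuffer as a pair and never 'delete' the
// pointer yourself.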
Having to worry about whether the target machine has the CRT DLL your modules require, or packaging one to go along with your application can be a pain, but it's a one-time pain - and once done, you won't need to worry about all that other stuff above.
It is not that you can't pass CRT handles around. It is that you should be careful when you have two modules reading/writing the handle.
For example, in DLL A, you write
char* p = new char[10];
and in the main program, write
delete[] p;
When DLL A and your main program have two CRTs, the new and delete operations will happen in different heaps. The heap in the DLL can't manage memory from the heap in the main program, resulting in memory leaks.
The same thing applies to file handles. Internally, a file handle may have different implementations in addition to its memory resource. Calling fopen in one module and calling fwrite and fclose in another could cause problems, as the data the FILE* points to could differ between CRT runtimes.
But you can certainly pass a FILE* or a memory pointer back and forth between two modules.
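In other words, the pointer itself can cross the boundary; what matters is that the module whose CRT created it is also the one that operates on it. A sketch, with made-up names:

// In the DLL - all FILE* operations stay inside this module's CRT:
#include <cstdio>

extern "C" __declspec(dllexport) FILE* OpenLog(const char* path)
{
    return std::fopen(path, "w");
}

extern "C" __declspec(dllexport) void WriteLog(FILE* f, const char* msg)
{
    std::fputs(msg, f);
}

extern "C" __declspec(dllexport) void CloseLog(FILE* f)
{
    std::fclose(f);
}

// The exe only holds the FILE* and hands it back; it never calls fwrite or
// fclose on it with its own CRT.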
One issue I've seen with having different versions of the MSVC runtime loaded is that each runtime has its own heap, so you can end up with allocations failing with "out of memory" even though there is plenty of memory available, but in the wrong heap.
The major problem with this is that the problems can occur at runtime. At the binary level, you've got DLLs making function calls, eventually to the OS. The calls themselves are fine, but at runtime the arguments are not. Of course, this can depend on the exact code paths exercised.
Worse, it could even be time- or machine-dependent. An example of that would be a shared resource, used by two DLLs and protected by a proper reference count. The last user will delete it, which could be the DLL that created it (lucky) or the other (trouble).
As far as I know, there isn't a (safe) way to check which version of the CRT a library is statically linked with.
You can, however, check the binaries with a program such as DependencyWalker. If you don't see any DLLs beginning with MSVC in the dependency list, it's most likely statically linked.
If the library is from a third party, I'd just assume it's using a different version of the CRT.
This isn't 100% accurate, of course, because the library you're looking at could be linked with another library that links with the CRT.
If CRT handles are a problem, then don't expose CRT handles in your DLL exports :-)
Don't pass memory pointers around (if an exported class allocates some memory, it should also free it... which means don't mess with inline constructors/destructors?)
This MSDN article (here) says that you should be alerted to the problem by linker warning LNK4098. It also suggests that passing any CRT handles across a CRT boundary is likely to cause trouble, and mentions locale, low-level file I/O and memory allocation as examples.