I have a big program A that at some point calls my (big) program B. Program B is called only once in program A. At the moment, program B is an executable (B.exe, compiled C++ code).
Somebody proposed using a DLL of Program B instead of using the executable.
Are there any advantages in using a DLL (like security, size, etc.)?
Is it easy to create a DLL from my source code? (I use Qt Creator.)
Are there any advantages in using a DLL (like security, size, etc.)?
No. As a matter of fact, if you're looking at things like security, size, etc., using a DLL makes things worse. When you load a DLL, everything happens inside the loading process's address space, so any bug inside the DLL directly affects the rest of the program. A crash in the DLL code will crash the whole program.
Is it easy to create a DLL from my source code? (I use Qt Creator.)
Yes, it is. But to me it seems there's hardly any benefit for your particular use case. As a matter of fact, for rarely used code paths I'd strongly encourage putting them into a separate process (i.e. linking them into a .EXE).
BTW: .dll and .exe files are exactly the same format (PE). You can load a .exe as if it were a DLL; give it a DllMain and you can use it either way! Of course, loading an EXE with LoadLibrary won't make it run in a separate process; instead it imports all its bugs into your main program.
Related
I'd like my Windows application to be able to reference an extensive set of classes and functions wrapped inside a DLL, but I need to be able to guide the application into choosing the correct version of this DLL before it's loaded. I'm familiar with using dllexport / dllimport and generating import libraries to accomplish load-time dynamic linking, but I cannot seem to find any information on the interwebs with regard to finding some kind of entry-point function into the import library itself, so I can, specifically, use CPUID to detect the host CPU configuration and decide which DLL to load based on that information. Even more specifically, I'd like to build 2 versions of a DLL: one built with /ARCH:AVX that takes full advantage of SSE through AVX instructions, and another that assumes nothing newer than SSE2 is available.
One requirement: Either the DLL must be linked at load-time, or there needs to be a super easy way of manually binding the functions referenced from outside the DLL, and there are many, mostly wrapped inside classes.
Bonus question: Since my libraries will be cross-platform, is there an equivalent for Linux based shared objects?
I recommend that you avoid dynamic resolution of your DLL from your executable if at all possible, since it is just going to make your life hard, especially since you have a lot of exposed interfaces and they are not pure C.
Possible Workaround
Create a "chooser" process that presents the necessary UI for deciding which DLL you need, or maybe it can even determine it automatically. Let that process move whatever DLL has been decided on into the standard location (and name) that your main executable is expecting. Then have the chooser process launch your main executable; it will pick up its DLL from your standard location without having to know which version of the DLL is there. No delay loading, no wonkiness, no extra coding; very easy.
If this just isn't an option for you, then here are your starting points for delay loading DLLs. It's a much rockier road.
Windows
LoadLibrary() to get the DLL in memory: https://msdn.microsoft.com/en-us/library/windows/desktop/ms684175(v=vs.85).aspx
GetProcAddress() to get pointer to a function: https://msdn.microsoft.com/en-us/library/windows/desktop/ms683212(v=vs.85).aspx
OR possibly the special delay-loaded DLL functionality using a custom helper function, although there are limitations and potential behavior changes; I've never tried this myself: https://msdn.microsoft.com/en-us/library/151kt790.aspx (suggested by Igor Tandetnik and seems reasonable).
Linux
dlopen() to get the SO in memory: http://pubs.opengroup.org/onlinepubs/009695399/functions/dlopen.html
dlsym() to get a pointer to a function: http://man7.org/linux/man-pages/man3/dlsym.3.html
To add to qexyn's answer, one can mimic delay loading on Linux by generating a small static stub library which dlopens the real shared library on the first call to any of its functions and then forwards execution to it. Such a stub can be generated automatically by a custom project-specific script or by Implib.so:
# Generate stub
$ implib-gen.py libxyz.so
# Link it instead of -lxyz
$ gcc myapp.c libxyz.tramp.S libxyz.init.c
I have a strange issue that I am trying to work out for someone. I don't have any access to the code. There is a program that loads a DLL and has somewhat of a plugin framework. They provide virtually no documentation beyond how to import functions from the DLL and what calling convention to use for exports.
This person's plugin imports functions from a DLL (let's assume they used the proper calling conventions and imported properly). It periodically runs into access violations (usually an access violation reading or writing address 0x00000000). Sometimes it crashes the program, and Event Viewer shows exception code 0xc0000005 (another access violation) with faulting module SHLWAPI.dll.
Using Dependency Walker (depends.exe), I have determined that the program is statically linked to the CRT. I found that the plugin DLL dynamically links to msvcr120.dll.
Yes, I am aware that this is just asking for trouble and the access violations are no surprise, but unfortunately, I have to deal with someone else's problem.
Anyway, my question is this:
Let's say a function is imported from this DLL, and inside that function is a call to a function provided by msvcr120. When the program calls the imported function, is it possible that the CRT call resolves to the statically linked CRT rather than to msvcr120?
I realize that it probably depends on the main program's plugin framework, but general feedback would be appreciated.
Thanks in advance!
There are known issues when using multiple copies of the CRT in one program, even when they all use the same version of the CRT (see Potential Errors Passing CRT Objects Across DLL Boundaries). If the CRTs are different versions, there are lots of other problems due to different size or layout of internal structures.
Since the program you use statically links with the CRT, it cannot reliably be plugged into. The anti-debugger code is just plain silly; there are several ways around it. If you paid for it, send it back and demand a refund.
I've snooped around a little bit in MS Office DLLs, and I noticed that some of the DLLs don't have any exported functions. What I don't quite understand is how an application can use these DLLs without any exported functions?!
I mean, DllMain() does get executed on LoadLibrary(), but what's the point? Why would anyone create a DLL without exported functions?
thanks! :-)
One way of dealing with versions of a program destined for different languages is to put all of the resources into a language DLL. The DLL doesn't contain any code, just resources that have been translated to a target language. When the main program starts up, all it needs to do is load the proper language DLL.
I haven't looked at the DLLs in question, but it's possible that in something like MS Office, Microsoft has done this to obfuscate the DLL and make it more difficult to debug or reverse engineer.
However, as you ask: how would you use such a DLL? Well, if the application knows the layout of the DLL, it can create a function pointer with the address of a known function and call it.
If you really want to dig further you could objdump the DLL and look for standard C / C++ ABI function prologues & epilogues and possibly work out where the functions start.
When you call LoadLibrary, the DLL gets a call to its DllMain.
That is the DLL's entry point. It is called on process attach and thread attach.
So you do have an entry point.
As long as it has at least one entry point, the DLL can create an instance of some interface (e.g. a factory) and store it in, e.g., a TLS variable where other modules will pick it up.
So you can have a COM-like system of interfaces that are not exposed to anything outside the application. Something like that - many other variations are possible.
Resources
The DLL likely has resources, like string tables, images, icons, etc., used by the rest of Office.
It's always possible that they just don't export the functions as C interfaces. A DLL isn't magic; it's just bits and bytes, and nothing says you can't get code out of a DLL without asking Windows for it. I believe .NET takes this approach: it saves metadata in the DLL that tells the CLR what's in it, instead of making .NET functions available through the normal GetProcAddress mechanism unless you explicitly ask for it.
I have a C++ class I'm writing now that will be used all over a project I'm working on. I have the option to put it in a static library, or export the class from a dll. What are the benefits/penalties for each approach. The only one I can think of is compiled code size which I don't really care about. Thanks!
Advantages of a DLL:
You can have multiple different exe's that access this functionality, so you will have a smaller project size overall.
You can dynamically update your component without replacing the whole exe. If you do this, though, be careful that the interface remains the same.
Sometimes like in the case of LGPL you are forced into using a DLL.
You could have some components as C#, Python or other languages that tie into your DLL.
You can build programs that consume your DLL that work with different versions of the DLL. For example you could check if a function exists in a certain operating system DLL and only call it if it exists, and otherwise do some other processing.
Advantages of Static library:
You avoid DLL versioning problems that way.
Less to distribute, you aren't forced into a full installer if you only have a small application.
You don't have to worry about anyone else tying into your code that would have been accessible if it was a DLL.
Easier to develop a static library as you don't need to worry about exports and imports.
Memory management is easier.
One of the most significant and often unnoted features of dynamic libraries on Windows is that a DLL can have its own heap (for example, when it links its own copy of the CRT). This can be an advantage or a disadvantage depending on your point of view, but you need to be aware of it. Likewise, a global variable in a DLL is shared by all the modules in a process that use that DLL, which can be a useful form of de facto communication between parts of a program or the source of an obscure run-time error. It is not shared across processes unless you explicitly set up a shared data segment.
I have a system that runs like this:
main.exe runs sub.exe runs sub2.exe
and etc. and etc...
Well, would it be any faster or more efficient to change sub and sub2 to DLLs?
And if it would, could someone point me in the right direction for making them dlls without changing a lot of the code?
DLLs really are executables too. They conform to the PE standard, which covers multiple common file extensions on Windows, like .exe, .dll, .ocx...
When you start 2 executables they each get their own address space, their own memory and such. However when you load an executable and a dll, the dll is loaded into the process space of the executable so they share a lot of things.
Now, depending on how your 3 executables communicate together (if they even communicate together), you might have to rewrite some code. Basically, the general approach with DLLs is to simply call the DLL function from inside your program. This is usually much simpler than interprocess communication.
DLLs would definitely be faster than separate executables. But keeping them separate allows more flexibility and reuse (think Unix shell scripting).
This seems to be a good DLL tutorial for Win32.
As for not changing the code much, I'm assuming you are just passing information to these subs with command-line arguments. In that case, just rename the main functions, export them from the DLL, and call these renamed "main" functions from the main program.
If your program (main.exe) is merely starting programs that really have nothing to with it, keep doing what you're doing. If sub.exe and sub2.exe contain functionality that main.exe would benefit from, convert them to dlls, so main.exe can call functions in them.
When it comes to efficiency, it depends on how large sub.exe and sub2.exe are. Remember that loading a dll also implies overhead.
There are several factors to take into consideration. For starters, how often do you run that sequence, and how long is the job executed by the other executables? If you do not call them very often, and the job they execute is not very short, the load time itself becomes insignificant. In that case, I'd say go with whatever fits the other needs. If, OTOH, you do call them quite a lot, I'd say make them DLLs. Load them once, and from that point onward every call is as fast as a call to a local function.
As for converting an exe to a dll - it should not be very complicated, but there are some points when working with DLLs that require special care: using DllMain for initialization has some limitations a regular main doesn't have (synchronization issues); you'll have to keep in mind that a DLL shares the address space with the exe; CRT version discrepancies might cause you grief; and so on.
That depends on how often they are run - that is, on whether the tear-up/tear-down of processes is a significant cost.
Wouldn't it be safer to convert them to DLLs to prevent the user from accidentally running sub1 or sub2 without main starting them?