C++: allow program to run even if 3rd party DLL is missing?

I have a C++ application that is linked against some 3rd party dynamic libraries. Some of my main classes inherit from these libraries, make function calls into them, etc. My application would work without those libraries in principle (i.e., if I manually stripped out all code and references pertaining to them, it would still work); it would just be more limited in functionality. To craft an analogy, imagine I've created a Windows Notepad clone and included a 3rd party library that allows users to embed pictures and video in the document.
When I distribute my application, there is a chance my clients may not have those libraries installed. Is there a way I can have my program detect if the required DLL library exists, and simply ignore all related code if it's not installed?
Currently if I run my application without the 3rd party libraries installed, it displays errors related to the missing DLLs and crashes. One obvious solution is to simply release two versions of my application...one without the external dependencies and one with, but I'd like to avoid managing two products independently like that.

There is a linker option for this: "delay load DLL".
For a DLL xxx.dll you configure the linker to use /DELAYLOAD.
Until you call a function from that DLL it will not be loaded, so your application will start successfully even if the DLL is not present.
You then use LoadLibrary to check whether xxx.dll is available.
If LoadLibrary fails, you disable the module that uses xxx.dll.
If LoadLibrary succeeds, you unload the handle again with FreeLibrary (you do not need dynamic loading; the call only tests for the DLL's presence) and use the library as if it were linked regularly. No code that uses xxx.dll functionality needs to be modified.
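The probe-and-disable step above can be sketched as follows. This is a sketch using POSIX dlopen/dlclose as a stand-in for LoadLibrary/FreeLibrary, and "libextra.so" is a hypothetical optional dependency:

```cpp
#include <dlfcn.h>

// Probe for an optional shared library without keeping it loaded.
// dlopen/dlclose play the role of LoadLibrary/FreeLibrary on Windows;
// the library name is a placeholder for your real dependency.
bool library_present(const char *name) {
    void *handle = dlopen(name, RTLD_NOW);
    if (!handle)
        return false;   // missing: disable the module that needs it
    dlclose(handle);    // present: unload again, we only tested availability
    return true;        // delay-loaded calls into it will now succeed
}
```

At startup you would call `library_present("libextra.so")` once and hide or disable the dependent features when it returns false.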

See this answer here:
Dynamically load a function from a DLL
You basically use LoadLibrary to load the DLL; the result will be null if it could not be loaded.
Then you use GetProcAddress to get the functions.

If you want to distribute the binary of your application, you can use dynamic loading (take a look here).
But if you have the option to build your application on the client's system, you can use CMake and set a compilation flag when it cannot find the library you want. Your code can then behave according to this flag's value (i.e., this way your code knows whether the library exists or not).
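The flag-dependent behavior could look like the sketch below; HAVE_EXTRA_LIB is a hypothetical macro that the CMake check would define (e.g. via target_compile_definitions) when the library is found:

```cpp
#include <string>

// Behavior switched on a compile-time flag set by the build system.
// HAVE_EXTRA_LIB is a hypothetical define that CMake would add only when
// find_library located the optional dependency at configure time.
std::string media_support_status() {
#ifdef HAVE_EXTRA_LIB
    return "media embedding enabled";   // real path: calls the optional library
#else
    return "media embedding disabled";  // fallback path: library absent at build time
#endif
}
```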

Another approach is to load the dynamic library (or inject it into the process) yourself; the DLL can then react to attach/detach in its DllMain:
BOOL WINAPI DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID)
{
    if (ul_reason_for_call == DLL_PROCESS_ATTACH)
    {
        inj_hModule = hModule;
        DisableThreadLibraryCalls(hModule); // Disable DllMain calls for DLL_THREAD_*
        HANDLE hThreadProc = CreateThread(nullptr, 0, LPTHREAD_START_ROUTINE(ThreadProc), hModule, 0, nullptr);
        CloseHandle(hThreadProc);
        Beep(2000, 500);
    }
    if (ul_reason_for_call == DLL_PROCESS_DETACH)
    {
        Beep(2000, 500);
        FreeLibraryAndExitThread(hModule, 0); // also exits this thread
    }
    return TRUE;
}

Well, you should revise your software architecture and adopt the Plugin design pattern, as introduced by Martin Fowler.
You don't want plain dynamic linkage for such a case, though it's quite similar.
Regarding implementations in C++, that basically means you first introduce an abstract interface

struct Foo {
    virtual void Bar() = 0;
    virtual ~Foo() {}
};

and look up suitable implementations coming with shared libraries at runtime1.
The exported interface of the shared library implementation needs to match your abstract interface declaration.
You can use a Factory to do that lookup, load the shared library, and create an instance of the implementation.
Here's some more information on which OS functions are involved in dynamically loading and binding the plugin.
1) Note that shared libraries don't need to have the extension .dll or .so if you're loading them explicitly.
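A minimal factory for the Foo interface above might look like this. It is a sketch using POSIX dlopen (LoadLibrary/GetProcAddress are the Windows equivalents), and the extern "C" symbol name create_foo is an assumed convention that every plugin would export:

```cpp
#include <dlfcn.h>
#include <memory>

struct Foo {
    virtual void Bar() = 0;
    virtual ~Foo() {}
};

// Factory symbol each plugin is assumed to export with C linkage,
// so its name is not mangled and dlsym/GetProcAddress can find it.
using create_foo_t = Foo *(*)();

std::unique_ptr<Foo> load_plugin(const char *path) {
    void *lib = dlopen(path, RTLD_NOW);
    if (!lib)
        return nullptr;                 // library absent: run without the plugin
    auto create = reinterpret_cast<create_foo_t>(dlsym(lib, "create_foo"));
    if (!create) {
        dlclose(lib);
        return nullptr;                 // present, but not a valid plugin
    }
    // Note: the library must stay loaded for as long as the object lives.
    return std::unique_ptr<Foo>(create());
}
```

The caller simply treats a null return as "feature unavailable" and carries on.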

My advice would be to carefully split the code dependent on that DLL. Design a clean interface to that part of your project that can be easily dynamically loaded (LoadLibrary), and have only pure virtual interfaces (no DLL export) on any classes you interact with in the main part of your program. Ideally it has a pure function interface, as those are most stable.
Now, that DLL can depend on the 3rd party DLL like normal. Your main code calls your "wrapping" DLL by loading it manually, and handles load failure gracefully.
This is easier than using a DLL not designed for this, and you can exercise the failure path without installing/uninstalling the 3rd party DLL simply by omitting your wrapping DLL.
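The boundary of such a wrapping DLL can be as small as one pure-virtual interface plus one extern "C" factory. The sketch below uses hypothetical names and an in-process stub in place of the real wrapper DLL, just to show the call pattern:

```cpp
// The only contract shared between the EXE and the wrapper DLL.
// No class is DLL-exported; only the single C factory function is, which
// keeps the boundary stable across compilers and runtime versions.
struct ICamera {
    virtual bool Connect() = 0;
    virtual void Release() = 0;   // the DLL deletes the object, not the EXE
protected:
    ~ICamera() {}                 // destruction only via Release()
};

// In a real build this lives in the wrapper DLL and is resolved with
// GetProcAddress after LoadLibrary succeeds. Here an in-process stub
// stands in so the pattern is visible end to end.
struct StubCamera : ICamera {
    bool Connect() override { return false; }  // pretend no hardware attached
    void Release() override { delete this; }
};

extern "C" ICamera *CreateCamera() { return new StubCamera; }
```

If LoadLibrary on the wrapper fails, the main program never calls CreateCamera and simply greys out the camera features.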

Related

Understanding lua extension dll building/loading in statically linked embedded lua environment

I have a relatively complex lua environment and I'm trying to understand how the following would/could work. The starting setup includes the following two modules:
Main application (no lua environment)
DLL (statically linked to lua lib, including interpreter)
The dll is loaded into the main application and runs a lua console interpreter and a lua API accessible from the console.
Now, let's say I want to expand this setup to include another dll that would extend that lua API, luasql for example. The new dll needs to link against lua in order to build, and my understanding is that I cannot link against lua statically since there would now be two unshared copies of the lua code in process when I load the extension dll. However, even if I built the lua core lib as a dll and linked against it with the extension dll, that lua core dll would not be loaded at runtime by the main application or the primary dll. So my questions are:
What happens if I load that extension dll from the lua interpreter in the primary dll, considering that the lua core dll will not be loaded?
If I loaded the lua core dll at runtime, how would that conflict with the statically linked lua lib?
Would both scenarios (linking statically in extension dll and dynamically linking/loading the lua dll) result in having two copies of the lua core code in process?
In that event, what would happen if I tried to call an API function from the primary dll's lua environment/interpreter that was built/loaded in the extension dll?
OR does lua have some kind of special mechanism for loading native dlls that provide new C API functions that allows it to bypass normal linking rules?
Hopefully I have provided enough details to make the questions specific, if not I will be happy to refine the scenario/questions further.
Edit: I have looked at Bundling additional Lua libraries for embedded and statically linked Lua runtime and I believe it may be helpful in providing a solution ultimately but I'd like to understand it at the linker level.
You can't have the situation where you load one interpreter (let's say it's linked statically) and then load a module X that is linked against a dll with a Lua interpreter, which loads another copy of the interpreter. This is likely to cause an application crash. You need to make the loaded dll use the interpreter that is already loaded, either by linking it against the dll with the interpreter or by using a proxy dll (see below).
You have two main options: (1) make dllA that is loaded by the main application that in turn depends on Lua dll; you can then link all other lua modules against Lua dll without any issues; or (2) include Lua dll into dllA, but keep Lua methods exposed so that lua modules can be linked against that dllA.
I think the first option is much simpler and likely not to require any changes to the Lua modules (as long as you can keep the name of the Lua dll the same as the one that the modules are compiled against).
Another option I should mention is that you can still use Lua modules compiled against a Lua DLL even with applications that have the Lua interpreter statically compiled. You need to use a proxy DLL; see this maillist thread for the solution and related discussion.
The answer boils down to this:
Don't try to load any Lua extensions from a dll linked against a different Lua-core. Doing so will cause utter chaos.
As long as any Lua extension loaded resolves all its dependencies to the proper Lua core, it does not matter (aside from bloat) how many Lua cores you use.
Keep in mind that Windows always resolves symbols according to their name and their providing dll.

C++ - Does LoadLibrary() actually link to the library?

I'm using Code::Blocks and hate manually linking DLLs. I found the LoadLibrary() function, and am wondering if it works like a .a or .lib file would. Does this function work like that? If not, what can I do programming-wise (if anything) to use a DLL without the Project > Build options > Linker settings > Add... method?
LoadLibrary loads the requested library (and all the libraries it needs) into your process' address space. In order to access any of the code/data in that library, you need to find out the code or data address in the newly loaded region of memory. You need to use GetProcAddress.
The difference between this process and adding a library at build time is that for a build-time library, the compiler prepares a list of locations that refer to a given function, the linker puts that list into the .exe, and the run-time linker loads the library, does the equivalent of GetProcAddress for the function name, and places the address into all the locations the compiler marked.
When you don't have this automated support, you have to declare a pointer to function, call GetProcAddress yourself, and assign the returned value to your pointer. You can then call the function like any other C function. (Note the "C" part: the process above is complicated by name mangling when you use C++, so make use of extern "C".)
LoadLibrary() loads a DLL at runtime. Normally you link against a DLL's import library when you build the EXE, much like a static library. If you need to load libraries dynamically at runtime, you use LoadLibrary().
This is useful, for example, when you implement a plugin system, since you don't know the libraries beforehand.
Not how it works at all. LoadLibrary is used to load a DLL that is unknown at compile time, such as program extensions/plugins, or "this DLL for SSE, that DLL for non-SSE" chosen based on what the hardware can do. One could also have one DLL per connection type to email servers, so that the email program doesn't have to carry all the different variants when only one is used for any particular email address.
Further, to use a DLL that has been loaded this way, you need to use GetProcAddress to get the address of the functions in the DLL. This is very different from linking the DLL into the project at buildtime, where functions simply appear "automagically" by help of the system loader functions that load the DLL's that are added to the project at build-time.
Contrary to static or dynamic library linking, LoadLibrary doesn't make the library's symbols directly available to your program. You need to call GetProcAddress at runtime to get a pointer to the functions you want to call.
As #Devolus mentioned, this is a good way to implement plugin systems and/or to access optional components. However, since the symbols are not available to your program in a transparent way, this is not really practical for ordinary usage.
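The declare-a-pointer, resolve-and-call sequence the answers describe looks like this sketch; it uses the POSIX equivalents dlopen/dlsym and resolves cos from the glibc math library (the soname libm.so.6 is platform-specific, and GetProcAddress after LoadLibrary plays the same role on Windows):

```cpp
#include <dlfcn.h>
#include <cmath>

// Runtime binding instead of link-time binding: declare a pointer type
// matching the target function, resolve the symbol, then call through it.
// "cos" has C linkage, so its name is not mangled.
using cos_fn = double (*)(double);

double call_cos(double x) {
    void *libm = dlopen("libm.so.6", RTLD_NOW);  // LoadLibrary on Windows
    if (!libm)
        return NAN;
    auto fn = reinterpret_cast<cos_fn>(dlsym(libm, "cos"));  // GetProcAddress
    double result = fn ? fn(x) : NAN;
    dlclose(libm);
    return result;
}
```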

Loosen DLL dependencies at start of application

In my Windows C++ program I have a few dependencies on DLLs (coming with the drivers of input devices). I don't actually load the DLLs myself; the drivers provide small .lib libraries that I statically link against (and I assume it is those libraries that make sure the DLLs are present on the system and load them). I'm writing an application that can take input from a series of video cameras. At run-time, the user chooses which one to use. Currently my problem is that my routines that query whether a camera is connected already require that camera's functionality to be present on the system. I.e., say there are camera models A and B: the user has to install the drivers for both A and B, even if he knows he only owns model B. The user has to do this, as otherwise my program won't even start (once started, it will of course tell the user which of the two cameras is actually connected).
I'd like to know whether there is any possibility, at run-time, to determine which of the DLLs are present, and for those that aren't, somehow disable loading even the static (and, thus, dynamic) component.
So basically my problem is that you cannot do if(DLL was found){ #include "source that includes header using functions defined in lib which loads DLL"}
I think using the DELAYLOAD linker flag may provide the functionality required. It would allow linking with the .lib files but would only attempt to load the DLL if it is used:
link.exe ... /DELAYLOAD:cameraA.dll /DELAYLOAD:cameraB.dll Delayimp.lib
The code would be structured something similar to:
if (/* user selected A */)
{
    // Use camera A functions, resulting in load of cameraA's DLL.
}
else
{
    // Use camera B functions, resulting in load of cameraB's DLL.
}
From Linker Support for Delay-Loaded DLLs:
Beginning with Visual C++ 6.0, when statically linking with a DLL, the
linker provides options to delay load the DLL until the program calls
a function in that DLL.
An application can delay load a DLL using the /DELAYLOAD (Delay Load Import)
linker option with a helper function (default implementation provided by
Visual C++). The helper function will load the DLL at run time by calling
LoadLibrary and GetProcAddress for you.
You should consider delay loading a DLL if:
- Your program may not call a function in the DLL.
- A function in the DLL may not get called until late in your program's
execution.
You need to load the libs at run-time. Take a look at LoadLibrary.
This is an MSDN article about it: DLLs the Dynamic Way. I just glanced over it; it's very old.
This one shows the usage of LoadLibrary: Using Run-Time Dynamic Linking

Load shared library by path at runtime

I am building a Java application that uses a shared library written in C++ and compiled for different operating systems. The problem is, that this shared library itself depends on an additional library it normally finds under the appropriate environment variable (PATH, LIBRARY_PATH or LD_LIBRARY_PATH).
I can - but don't want to - set these environment variables. I'd rather load the needed shared libraries from a given path at runtime - just like a plugin. And no - I don't want any starter application that starts a new process with a new environment. Does anybody know how to achieve this?
I know that this must be possible, as one of the libraries I use is capable of loading its plugins from a given path. Of course I'd prefer platform-independent code, but if this isn't possible, separate solutions for Windows, Linux and macOS would also do it.
EDIT
I should have mentioned that the shared library I'd wish to use is object oriented, which means that a binding of single functions won't do it.
On UNIX/Linux systems you can use dlopen. The issue then is that you have to fetch all the symbols you need via dlsym.
Simple example:
#include <dlfcn.h>

typedef int (*some_func)(char *param);

void *myso = dlopen("/path/to/my.so", RTLD_NOW);
some_func func = (some_func)dlsym(myso, "function_name_to_fetch");
func("foo");
dlclose(myso);
This will load the .so and execute function_name_to_fetch() from in there. See the man page dlopen(3) for more.
On Windows, you can use LoadLibrary, and on Linux, dlopen. The APIs are extremely similar and can load a so/dll directly by providing the full path. That works if it is a run-time dependency (after loading, you "link" by calling GetProcAddress/dlsym.)
I concur with the other posters about the use of dlopen and LoadLibrary. The libltdl library gives you a platform-independent interface to these functions.
I do not think you can do it.
Most DLLs have some sort of init() function that must be called after the DLL has been loaded, and sometimes that init() function needs parameters and returns a handle to be used to call the dll's functions. Do you know the definition of the additional library?
Also, the first library cannot simply check whether DLL X is in memory by name alone. The one it needs may be in a different directory, or be a different build/version. The OS only treats a library as already loaded if its full path matches one loaded before, in which case it shares it instead of loading it a second time.
The other library can load its plugins from another path because it is written not to depend on PATH, and they are its own plugins.
Have you tried updating the process's environment variables from your code before loading the DLL? That does not depend on a starter process.

Returning C++ objects from Windows DLL

Due to how Microsoft implements the heap in their non-DLL versions of the runtime, returning a C++ object from a DLL can cause problems:
// dll.h
DLL_EXPORT std::string somefunc();
and:
// app.c - not part of DLL but in the main executable
void doit()
{
    std::string str(somefunc());
}
The above code runs fine provided both the DLL and the EXE are built with the Multi-threaded DLL runtime library.
But if the DLL and EXE are built without the DLL runtime library (either the single or multi-threaded versions), the code above fails (with a debug runtime, the code aborts immediately due to the assertion _CrtIsValidHeapPointer(pUserData) failing; with a non-debug runtime the heap gets corrupted and the program eventually fails elsewhere).
Two questions:
Is there a way to solve this other than requiring that all code use the DLL runtime?
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
Is there a way to solve this other than requiring that all code use the DLL runtime?
Not that I know of.
For people who distribute their libraries to third parties, how do you handle this? Do you not use C++ objects in your API? Do you require users of your library to use the DLL runtime? Something else?
In the past I distributed an SDK with dlls, but it was COM based. With COM, all the marshalling of parameters and IPC is done for you automatically. Users can also integrate with any language that way.
Your code has two potential problems. You addressed the first one, the CRT runtime. You have another problem here: the layout of std::string can change between VC++ versions. In fact, it did change in the past.
The safe way to deal with this is to export only basic C types, and to export both create and release functions from the DLL. Instead of exporting a std::string, export a pointer:
__declspec(dllexport) void* createObject()
{
    std::string* p = __impl_createObject();
    return (void*)p;
}
__declspec(dllexport) void releasePSTRING(void* pObj)
{
    delete ((std::string*)(pObj));
}
There is a way to deal with this, but it's somewhat non-trivial. Like most of the rest of the library, std::string doesn't allocate memory directly with new -- instead, it uses an allocator (std::allocator<char>, by default).
You can provide your own allocator that uses your own heap allocation routines that are common to the DLL and the executable, such as by using HeapAlloc to obtain memory, and suballocate blocks from there.
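A sketch of that allocator idea, with plain malloc/free standing in for the HeapAlloc/HeapFree calls on a heap handle that both modules would share:

```cpp
#include <cstdlib>
#include <new>
#include <string>

// Minimal allocator that funnels every allocation through one agreed-upon
// pair of functions. In a real DLL/EXE split these would be HeapAlloc and
// HeapFree on a shared heap HANDLE; malloc/free stand in here.
template <class T>
struct shared_heap_allocator {
    using value_type = T;
    shared_heap_allocator() = default;
    template <class U>
    shared_heap_allocator(const shared_heap_allocator<U> &) noexcept {}
    T *allocate(std::size_t n) {
        if (void *p = std::malloc(n * sizeof(T)))
            return static_cast<T *>(p);
        throw std::bad_alloc();
    }
    void deallocate(T *p, std::size_t) noexcept { std::free(p); }
};
template <class T, class U>
bool operator==(const shared_heap_allocator<T> &, const shared_heap_allocator<U> &) { return true; }
template <class T, class U>
bool operator!=(const shared_heap_allocator<T> &, const shared_heap_allocator<U> &) { return false; }

// A string type that is safe to pass across the module boundary because
// both sides allocate and free from the same heap.
using shared_string =
    std::basic_string<char, std::char_traits<char>, shared_heap_allocator<char>>;
```

Note that shared_string is a distinct type from std::string, so the DLL's public interface has to use it consistently on both sides.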
If you have a DLL that you want to distribute and you don't want to bind your callers to a specific version to the C-Runtime, do any of the following:
I. Link the DLL to the static version of the C-Runtime library. From the Visual Studio Project Properties page, select the tab for Configuration Properties -> C/C++ -> Code Generation. There is an option for selecting the "Runtime Library". Select "Multi-threaded" or "Multi-threaded Debug" instead of the DLL versions. (The command-line equivalent is /MT or /MTd.)
There are a couple of different drawbacks to this approach:
a. If Microsoft ever releases a security patch for the CRT, your shipped components may be vulnerable until you recompile and redistribute your binary.
b. Heap pointers allocated by "malloc" or "new" in the DLL can not be "free"d or "delete"d by the EXE or another binary. You'll crash otherwise. The same holds true for FILE handles created by fopen. You can't call fopen in the DLL and expect the EXE to be able to fclose it. Again, crash if you do. You will need to build the interface to your DLL to be resilient to all of these issues. For starters, a function that returns an instance of std::string is likely going to be an issue. Provide functions exported by your DLL to handle the releasing of resources as needed.
Other options:
II. Ship with no C-runtime dependency. This is a bit harder. You first have to remove all calls to the CRT from your code, provide some stub functions to get the DLL to link, and specify the "no default libraries" linking option. It can be done.
III. C++ classes can be exported cleanly from a DLL by using COM interface pointers. You'll still need to address the issues in 1a above, but ATL classes are a great way to get the overhead of COM out of the way.
The simple fact here is that, Microsoft's implementation aside, C++ is NOT an ABI. You cannot export C++ objects from a dynamic module, on any platform, and expect them to work with a different compiler or language.
Exporting C++ classes from DLLs is a largely pointless exercise: because of name mangling, and the lack of support in C++ for dynamically loaded classes, the DLLs have to be loaded statically. So you lose the biggest benefit of splitting a project into DLLs: the ability to load functionality only as needed.