Is there any standard way to force users of my DLL to use exactly the same version that was used during compilation?
Let's assume I have a library in version 1.0 with a function:
extern "C" void A();
In version 1.1 I have added a breaking change, e.g. another parameter to this function, so I have:
extern "C" void A(int);
The exported name in the DLL is exactly the same, but if the developer compiles the product with version 1.1, ships it to the customer, and the customer updates only the product (the exe file), then everything will fail. And it may fail at a random point at runtime (depending on when the changed function is executed).
Are there any standard ways to prevent loading the library in the wrong version? I'm mostly interested in a solution for Windows DLL files. (But if there are solutions specific to other platforms, please leave a comment.)
The example above is simplified. C++ name mangling solves this particular problem, but I'm looking for a more general solution.
My idea is to add a static object in the library header file. In its constructor, this static object could call a method from the library whose name encodes the current version, e.g. init_library_1_1(). If the method is missing from the DLL, the user sees at the very beginning that something is wrong. But this solution looks like a dirty workaround, and I have to make sure the developer includes this file.
Is there any better solution for such a problem?
There is no standard answer but many compiler/linker pairs have non-standard features to perform these kinds of tests.
Visual C++, for example, has #pragma detect_mismatch, which places a record in the output object file; when the link is performed, the records are compared, and if they don't match an error is reported. I don't believe this would do any good if the DLL were loaded dynamically at run time rather than statically during load. I believe gcc/clang have something similar, but I don't know the details.
The best solution I was able to do:
In your .dll project, add an exported function:
extern "C"
{
DLL_FUNCTION void library_dll_required_in_version_10_0_1() {}
}
In the public header, add:
namespace library_impl {
    extern "C" {
        DLL_FUNCTION void library_dll_required_in_version_10_0_1();
    }

    class VersionProtection {
    public:
        VersionProtection() {
            library_dll_required_in_version_10_0_1();
        }
    };

    static VersionProtection aversion_protection_verifier;
}
Finally, if the user tries to run the application with the wrong version of the DLL file, they receive a nice error message at program startup:
The procedure entry point library_dll_required_in_version_10_0_1
could not be located in the dynamic link library
You need to update the version number in the function name after each breaking change.
Let's say I have this lib
//// testlib.h
#pragma once
#include <iostream>
void __declspec(dllexport) test();
int __declspec(dllexport) a();
If I omit the definitions for a() and test() from my testlib.cpp, the library still compiles, because the interface [1] is still valid. (If I use them from a client app, then it won't link, obviously.)
Is there a way I can ensure that when the obj is created (which I gather is the compiler's job), it actually looks for the definitions of the functions that I explicitly exported, and fails if it doesn't?
This is not related to any real world issue. Just curious.
[1] MSVC docs
No, it's not possible.
Partially because a dllexport declaration might legally not even be implemented in the same DLL, let alone the same library, but be a mere forward declaration for something provided by yet another DLL.
In particular, it's impossible to decide at the object-file level; there it's just another forward declaration like any other.
You can dump the exported symbols once the DLL has been linked, but there is no common tool for checking completeness.
Ultimately, you can't do without a client test application which attempts to load all exported interfaces. You can't check that at compile time. Even just successfully linking the test application isn't enough; you have to actually run it.
It gets even worse if there are delay-loaded DLLs (and yes, there usually are), because now you can't even check for completeness unless you actually call at least one symbol from each involved DLL.
Tools like Dependency Walker etc. exist for this very reason.
You asked
"Is there a way I can ensure that when the obj is created (which I gather is the compiler's job) it actually looks for the definitions of the functions that I explicitly exported, and fails if doesn't ?"
When you load a DLL, the actual function binding happens at runtime (late binding of functions), so it is not possible for the compiler to know whether the function definitions are available in the DLL or not. Hope this answers your query.
In general, in standard C++11 (read the document n3337 and see this C++ reference), you cannot, because C++11 does not know about DLLs or dynamic linking. It might sometimes make sense to dynamically load code which does not define a function which you pragmatically know will never be called (e.g. an incomplete DLL for drawing shapes, where you happen to know that circles will never be drawn in your particular usage, so the loaded DLL might not define any class Circle related code). In standard C++11, every called function should be defined somewhere (in some other translation unit).
Look also into Qt's vision of plugins.
Read Levine's Linkers and Loaders book. Notice that on Linux, plugins loaded with dlopen(3) have different semantics than Windows DLLs. The devil is in the details.
In practice, you might consider using some recent variant of the GCC compiler and develop your GCC plugin to check that. This could require several weeks of work.
Alternatively, adapt the Clang static analyzer for your needs. Again, budget several weeks of work.
See also this draft report and think about C++ cross-compilers (e.g. compiling on Windows a plugin for a Raspberry Pi).
Consider also runtime code generation frameworks like asmjit or libgccjit. You might think of generating the missing stubs or functions at runtime (and filling function pointers, or even vtables, with them appropriately). C++ exceptions could also be an additional issue.
If your DLL contains calls to the functions, the linker will fail if a definition isn't provided for those functions. It doesn't matter if the calls are never executed, only that they exist.
//// testlib.h
#pragma once
#ifdef __cplusplus
#include <iostream>
#endif
#ifndef DLLEXPORT
#define DLLEXPORT(TYPE) TYPE __declspec(dllexport)
#endif
DLLEXPORT(void) test();
DLLEXPORT(int) a();
//// testlib_verify.c
#define DLLEXPORT(TYPE)
void DummyFunc()
{
#include "testlib.h"
}
With DLLEXPORT expanded to nothing, each declaration in the header becomes a plain call (test(); a();), so linking this file into the DLL forces the linker to resolve every exported function. (The __cplusplus guard above keeps the C file from pulling in <iostream>.)
This macro-based solution only works for functions with no parameters, but it should be easy to extend.
I'm writing this Editor.exe program that loads a game.dll, gets the address of a function inside the dll, and passes a pointer to a Core object.
gameInitFuncPtr init =
(gameInitFuncPtr) GetProcAddress(LoadLibraryA("game.dll"),"gameInit");
init(&core); // core is already instanced somewhere, maybe on the stack
The game.dll includes the core.h where the Core class is defined.
The Core class is implemented and compiled into Editor.exe.
On the dll side, calling functions through the passed object pointer results in an unresolved external symbol error.
An example of a call the game.dll would do with the given object pointer would be:
void gameInit(ldk::Core* core)
{
    core->renderer.drawText("initializing...");
}
How can I compile the dll so that it does not try to find, for example, the drawText() implementation within the dll module ?
1 - Please, note that this is NOT a question about how to declare pointers to member functions.
2 - I know it could easily be fixed if I passed a struct with only pointers to the methods, but I'm really curious about this.
3 - I'm using Microsoft's cl compiler 18.00, the one that ships with Visual studio 2013
It is not clear where you initialize _core. At first glance, gameInit should do it.
Declare an interface class Core, i.e. it should be abstract. Implement it in a derived class, for example CoreImpl, in the exe. This will fix the unresolved external symbols.
Looks like I was overthinking it.
When compiling the editor.exe, Core should be declared just like any class:
struct Core
{
    struct Renderer
    {
        void drawText(const char* text);
    };
    ...
};
But since the editor and the game.dll share the same Core.h, I used a macro to change the declarations of Core.h's member functions to be pure virtual, for example:
struct Core
{
    struct Renderer
    {
        virtual void drawText(const char* text) = 0;
    };
    ...
};
So the unresolved external symbol linking error is gone.
BUT: it does not work as expected at RUNTIME! :(
I had a similar problem as you with almost the same setting - Game as dll and Engine as exe. Here are some notes on how to tackle this problem.
Call only virtual methods. As you pointed out, if the method you call is not declared virtual, the linker tries to find an implementation for it and fails (if it's not in the header - a thing we try to avoid). The method does not need to be abstract; virtual is enough. Also, note that in your struct Renderer you can have methods that are not virtual, as long as you don't call them from the dll (if you do, the linker complains). It is probably not advisable to have such an interface; it would be much better to have some sort of API class which has only virtual public methods, so users of this class cannot make a mistake.
All classes used from the dll need to be shared or header-only. What I mean by this is that, as far as I know, there is no magic way to have classes declared in a header, implemented in a cpp which is compiled into the exe, and then use these classes from the dll. E.g., if you have a custom string class, it needs to be in a shared library. If it's just in the exe, you will not be able to instantiate it in the dll (return it from functions etc.). A solution to this is to use header-only classes. E.g., your string may be implemented in a header in the Editor project and this header may be included by your Game project. This way you essentially compile the same code into both exe and dll.
To see a small working example, see my repository with a VS 2017 solution which demonstrates this exact problem and nothing else: repo link.
A much larger working example of this problem can be seen in the idTech4 engine - the DOOM 3 version here. It also uses a game as a dll and an engine as an exe, and it also needs to exchange pointers to the engine's systems which are used from the game. The project is big, but if you take a look at project Game-d3xp, class Game.h, all the way down, they have the game's API with a single function GetGameAPI_t which expects a gameImport_t struct with pointers to engine systems and returns a gameExport_t with game information. The loading then happens in Common.cpp.
As you can see, they use the shared library idLib in the respective project for things such as idString. All engine classes used from the dll are usually very small and implemented in headers only (they are mostly structs).
Note that id themselves are moving away from this architecture; even their latest version of DOOM 3, the DOOM 3 BFG edition, compiles to a single exe, and the modules are static libraries instead of dlls.
I'm new to C++ and I'm having a hard time getting my dll references to work. I've been trying to get this working for a couple of days, but the few explanations I've found often refer to doing x or y without telling me how to do x or y. Since I'm not a C++ veteran, I need someone to walk me through it. What I want to do is the following:
MySolution
MyExe (Win32 .exe)
Main.h
Main.cpp
(constructs ImplementationB, calls the methods as defined by InterfaceA, then deletes the instances)
(calls/fills HelperC.Foobar)
MyInterfaces (dll)
InterfaceA.h
~InterfaceA();
virtual void DoSomething();
MyUtils (dll)
HelperC.h
static float Foobar;
HelperD.cpp
float HelperC::Foobar = 1.0f;
MyImplementations (dll)
ImplementationB : InterfaceA
(uses the value from HelperC.Foobar)
The MyExe and MyImplementations projects contain most of the executing code. But, I need an interface, so I need an interface project (MyInterfaces). I need some helper classes that need to be accessible from both MyExe and MyImplementations, hence MyUtils. I would like this helper class to be statically available, though it is not mandatory.
I had a compiling version before I started adding MyUtils with the HelperC class. I had to mark the interface destructor with __declspec(dllexport), along with the DoSomething method. I also had to mark the constructor of ImplementationB in order to instantiate it from MyExe, which makes sense. However, when I tried to mark the entire class (both the implementation and the interface) with __declspec(dllexport), the example wouldn't compile (which does not make sense).
From what I've read, having static fields in a dll and using them from external code doesn't really work all too well. So, as an alternative, I made Foobar non-static and passed a HelperC instance to the method as described by InterfaceA. Since I had already gotten simple classes to work, I figured that should work as well. However, now the linker is throwing errors (LNK2019) on the constructor of ImplementationB.
In short: I'm getting link errors all over the place in sections that have nothing to do with my changes, and there's little documentation describing the specific steps I need to perform in order to get a simple dll reference to work.
Can someone point out what I need to add and where I need to add it in order to make it compile? Also, some do's and don't's about C++ dll references would help a lot (e.g. don't use statics across projects).
After much digging, I found out that the culprit was a magical project setting. It is called Ignore Import Library and is located at Project Properties->Linker->General; it is set to Yes by default, while it should be set to No in most cases. When set to No, projects that reference the dll link against the dll's .lib file. This still sounds strange to me (it sounds like duplicate build output), but as far as I understand it, the lib file describes how to link to the dll. If your dll produces a lib during its build, you probably want to set the setting to No.
I also learned that to be able to use the HelperC class as a statically accessible helper, I needed to use dllimport in combination with the macro trick, as described by @drescherjm. The dllimport declaration is only strictly required for data members used across libraries (static class fields or globally defined variables). It may be applied to functions as well, though it is not required there; in that case it lets the compiler generate slightly more efficient calls into the library.
For completeness, my project structure after getting it to work:
MySolution
MyExe (Win32 .exe, Debugger Type=Mixed)
Main.h
Main.cpp
(constructs ImplementationB, calls the methods as defined by InterfaceA, then deletes the instances)
(calls/fills HelperC::Foobar)
MyInterfaces (dll, Ignore Import Library=Yes, because there is no .lib after building)
InterfaceA.h
class __declspec(dllexport) InterfaceA
virtual ~InterfaceA() {};
virtual void DoSomething() = 0;
MyUtils (dll, Ignore Import Library=No)
HelperC.h
class __declspec(dllimport/dllexport) HelperC // (see macro trick)
static float Foobar;
HelperD.cpp
float HelperC::Foobar = 1.0f;
MyImplementations (dll, Ignore Import Library=No)
ImplementationB.h
class __declspec(dllexport) ImplementationB : public InterfaceA
ImplementationB();
~ImplementationB();
void DoSomething();
ImplementationB.cpp
ImplementationB::ImplementationB() {};
ImplementationB::~ImplementationB() {};
void ImplementationB::DoSomething() { /* Omitted */ };
(uses HelperC::Foobar in implementation)
On a side note: if you added a default C++ class library project in Visual Studio, you may need to flip the Project Properties->Debugging->Debugger Type setting to Mixed before you will be able to set/use breakpoints in the dll code. See this.
I hope this helps others who are wrestling with dll's in C++ (and Visual Studio).
I have a library, StudentModelLib, in which CStudentModeler is the main class. It has a logging option that I made conditional on whether PRETTY_LOG is defined. Only if PRETTY_LOG is defined do I include the CPrettyLogger, initialize it (later), and actually log things.
Another project in the same solution, StudentModel2, statically links to StudentModelLib. It includes StudentModeler.h from the library and instantiates CStudentModeler at runtime.
How to set up the weirdness:
Set PRETTY_LOG inside the library in the project's preprocessor definitions
Unset PRETTY_LOG in the EXE project
Compile the entire solution, which builds the library, then the EXE
The weirdness starts when CStudentModeler is instantiated within the code for the executable. At that point, the debugger seems confused about which version of CStudentModeler it should be using, and hovering over variables in the IDE leads to really confusing results. When the EXE runs, memory corruption also shows up.
My hypothesis is that the compiled library's CStudentModeler has a prettyLogger member, but the compiled EXE uses the .h file with the directive disabled and assumes CStudentModeler does not have a prettyLogger member. I'm guessing the memory corruption occurs because the library and the EXE disagree about the class's size and the layout of its member variables.
My questions are as follows:
Have I correctly identified the problem?
Is it possible to have library features be optional based on compiler directives but not break other projects that use that library?
What is the proper way to ensure that the projects using the library assume the correct enabled features based on how the library was compiled?
How is it that no part of the VS2010 compilation/linking process warns me about this seemingly huge bug?
For the sake of this test, CPrettyLogger has an empty default constructor and all other code related to it is commented out. Simply instantiating it causes the bug.
StudentModeler.h
This is part of the library and contains the conditional member variable.
class CStudentModeler : public CDataProcessor2
{
// Configuration variables
string student_id;
// Submodules
CContentSelector contentSelector;
EventLog eventLog;
#ifdef PRETTY_LOG
CPrettyLogger prettyLogger; // <--- the problem?
#endif
// Methods
void InitConcepts();
void InitLOs();
public:
CStudentModeler( string sm_version, string session_id, string url,
string db_user, string db_password, string db_name,
SMConfig config );
~CStudentModeler();
};
It seems your assessment is spot on.
Yes, it is possible. Don't make externally visible declarations depend on preprocessing directives and you should be OK. Internal stuff may be as configurable as you want, but interfaces should be set in stone. In your case, the library should export an interface and a class factory. The client should either not know whether the selected instance has an additional feature, or be able to access it only through a potentially fallible interface. If it fails, the feature is not supported.
If you do what I'm suggesting in (2), you shouldn't need to. If you still want to, have a variable whose name macro-expands to library_options_opt1_yes_opt2_no_opt3_42_... in the library, and have the client code in the header reference it. You will get a link error in case of a mismatch.
The C++ standard specifically allows the compiler not to warn you when you do such things; detecting them is actually not easy for the compiler. The corresponding rule is called the One Definition Rule.
I have a big C++ project in which I'm trying to implement a debug function which needs classes from other libraries. Unfortunately, these classes share the same names and namespaces as classes used inside the project. I tried to use a static library to avoid multiple definitions, but of course the compiler complains about that. So my question:
Is it possible to create that library for the function without the compiler knowing about the called classes inside the function?
I don't know, something like a "protected function", or like putting all the code from the libraries inside the function body.
Edit: I'm using the g++ compiler.
Max, I know, but so far I see no other way.
Schematically, the problem is:
Project:
#include "a.h" // (old one)
#include "a2.h"
return a->something();
return a2->something(); // debug function
debug function a2:
#include "a.h" // (new one!)
return a->something(); // (new one!)
The compile step so far looks like:
g++ project -la -la2
That is a very simplified draft. But that's it actually.
Maybe you can create a wrapper library which internally links to that outside library and exports its definitions under a different name or namespace.
Try enclosing the #includes for the declarations of the classes that you are using in your debug function in a namespace, but don't use a using clause for that namespace.
There are a few techniques that may help you, but that depends on what the "debug version" of the library does.
First, it's not unheard of to have #ifdef blocks inside functions that do additional checking depending on whether the program was built in debug mode. The C assert macro behaves this way.
Second, it's possible that the "debug version" does nothing more than log messages. It's easy enough to include the logging code in both debug and release versions, and make the decision to actually log based on some kind of "priority" parameter for each log message.
Third, you may consider using an event-based design where functions can, optionally, take objects as parameters that have certain methods, and then if interesting things happen and the function was passed an event object, the function can call those methods.
Finally, if you're actually interested in what happens at a lower level than the library you're working on, you can simply link to debug versions of those lower level libraries. This is a case of the first option mentioned above, applied to a different library than the one you're actually working on. Microsoft's runtime libraries do this, as do Google's perftools and many "debugging malloc" libraries.