Say, there is a third-party library that has the following in a header file:
foo.h:
namespace tpl {
template <class T, class Enable = void>
struct foo {
static void bar(T const&) {
// Default implementation...
}
};
}
In the interface of my own library, I'm supposed to provide an explicit specialization of this foo for my own type(s). So, let's say I have:
xxx.h:
#include <foo.h>
namespace ml {
struct ML_GLOBAL xxx {
// Whatever...
};
}
namespace tpl {
template <>
struct ML_GLOBAL foo<::ml::xxx> {
static void bar(::ml::xxx const&);
};
}
where ML_GLOBAL is a compiler-specific visibility attribute to ensure that the symbols are available for dynamic linkage (that is, by default my build system hides all symbols in the produced shared library).
Now, I don't want to disclose my implementation of bar, so I employ explicit template instantiation:
xxx.cpp:
#include "xxx.h"
namespace tpl {
void foo<::ml::xxx>::bar(::ml::xxx const&) {
// My implementation...
}
template struct foo<::ml::xxx>; // explicit instantiation definition (an extern template declaration belongs in the header, not here)
}
When the time comes to actually use this tpl::foo<::ml::xxx>::bar function in some consumer application (where my shared library is linked as well), I get an undefined reference error for the tpl::foo<::ml::xxx, void>::bar symbol. And indeed, running nm -CD on the produced shared library shows no trace of any tpl::foo<::ml::xxx, void> symbol.
What I've tried so far are different combinations of where to put ML_GLOBAL (e.g. on the explicit template instantiation itself, which GCC clearly complains about, unlike Clang) and with/without the second template argument void.
The question is whether this is related to the fact that the original definition has no visibility attribute (ML_GLOBAL) attached, by virtue of coming from a third-party library, or whether I actually missed something here. If I didn't miss anything, am I really forced to expose my implementation in such a scenario? [... *cough* looks more like a compiler deficiency to be honest *cough* ...]
It turned out to be a false alarm. Nevertheless, it took me a couple of hours to finally remember why this symbol might be invisible to consumers. It's really trivial, but I'm posting it here for future visitors who happen to have the same setup. Basically, if you use either a linker script or a (pure) version script (specified with the --version-script linker option), then don't forget to set global visibility for those tpl::foo* third-party-based symbols (or whatever they are in your case). In my case, I originally had the following:
{
global:
extern "C++" {
ml::*;
typeinfo*for?ml::*;
vtable*for?ml::*;
};
local:
extern "C++" {
*;
};
};
which I clearly had to change to
{
global:
extern "C++" {
tpl::foo*;
ml::*;
typeinfo*for?ml::*;
vtable*for?ml::*;
};
local:
extern "C++" {
*;
};
};
in order to link everything properly and get the expected result.
Hope this helps. Regards.
BONUS
A curious reader could ask though, "Why the hell are you combining explicit visibility attributes and a linker/version script to control visibility of symbols when there are already the -fvisibility=hidden and -fvisibility-inlines-hidden options which are supposed to do just that?".
The answer is that they of course do, and I do indeed use them to build my shared libraries. However, there is one catch. It is common practice to statically link internal libraries (privately used by your shared library) into that shared library, primarily to completely conceal such dependencies (keep in mind, though, that the header files accompanying your shared library must also be properly designed to support this). The benefits are clear: a clean and controllable ABI, and reduced compile times for your shared library's consumers.
Take, for example, Boost, the most widespread candidate for such a use case. Encapsulating all of the heavily templated Boost code privately inside your shared library and eliminating any Boost symbols from the ABI will greatly reduce interface pollution and the compile times of your shared library's consumers, not to mention that your software components will also look professionally developed.
Anyway, to the point: it turns out that unless the static libraries you want to link into your shared library were themselves built with the -fvisibility=hidden and -fvisibility-inlines-hidden options (which would be a ridiculous expectation, as nobody distributes static libraries with hidden interface symbols by default; that would defeat their purpose), their symbols will still be visible (for instance, through nm -CD <shared-library>) regardless of the fact that you build the shared library itself with those options. In this case, you have two options to resolve it:
Manually rebuild those static libraries (your shared library's dependencies) with the -fvisibility=hidden and -fvisibility-inlines-hidden options, which is clearly not always possible or practical given their potential third-party origin.
Supply a linker/version script (as done above) at link time to instruct the linker which symbols to export from and hide in your shared library.
Related
This code:
void undefined_fcn();
void defined_fcn() {}
struct api_t {
void (*first)();
void (*second)();
};
api_t api = {undefined_fcn, defined_fcn};
defines a global variable api with a pointer to a non-existent function. However, it compiles and, to my surprise, links with absolutely no complaints from GCC, even with all those -Wall -Wextra -Werror -pedantic flags.
This code is part of a shared library. Only when I load the library at run time does it finally fail. How do I check, at library link time, that I didn't forget to define any function?
Update: this question mentions the same problem, and the answer is the same: -Wl,--no-undefined. (By the way, I guess this could even be marked as a duplicate.) However, according to the accepted answer below, you should be careful when using -Wl,--no-undefined.
This code is part of a shared library.
That's the key. The whole point of a shared library is to be an "incomplete" shared object, with undefined symbols that must be resolved when the main executable loads it along with all the other shared libraries it is linked with. At that time, the runtime loader attempts to resolve all undefined symbols; and all of them must be resolved, otherwise the executable will not start.
You stated you're using gcc, so you are likely using GNU ld. For the reason stated above, ld will link a shared library with undefined symbols, but will fail to link an executable unless all undefined symbols are resolved against the shared libraries the executable is linked with. So the expected behavior at run time is that the runtime loader successfully resolves all symbols too; the only situation in which the loader fails to start the executable indicates a fatal runtime environment failure (such as a shared library being replaced with an incompatible version).
There are some options that can be used to override this behavior. The --no-undefined option instructs ld to report a link failure for undefined symbols when linking a shared library, just as for executables. When invoking ld indirectly via gcc, this becomes -Wl,--no-undefined.
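For instance, a hypothetical invocation (library and file names made up for illustration):
g++ -shared -fPIC -Wl,--no-undefined -o libapi.so api.cpp
With that flag, the link of libapi.so itself fails with an undefined reference to undefined_fcn(), instead of the failure being deferred to load time.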
However, you are likely to discover that this is a losing proposition. You had better hope that none of the code in your shared library uses any class from the standard C++ or C library. Because, guess what? Those references will be undefined symbols, and you will fail to link your shared library!
In other words, this is a necessary evil that you need to deal with.
You can't have the compiler tell you whether you forgot to define the function in that implementation file. The reason is that when you declare a function, it is implicitly marked extern in C++. And you cannot tell what is in a shared library until after it is linked (the compiler does not know whether the reference will be defined at link time).
If you are not familiar with what extern means: things marked extern have external linkage, so if you have a variable that is extern, the compiler does not require a definition for that variable in the translation unit that uses it. The definition can be in another implementation file, and the reference is resolved at link time (when you link with a translation unit that defines the variable). The same applies to functions, which are essentially variables of a function type.
To get the behavior you want, make the function static, which tells the compiler that the function is not extern and is part of the current translation unit; in that case it must be defined. Clang's -Wundefined-internal diagnostic picks up on this (it is on by default, and -Werror turns it into a hard error).
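As a minimal sketch of that idea (reusing the question's names; the diagnostic text shown is Clang's):
// lib.cpp
void undefined_fcn();           // external linkage: the linker may find it elsewhere
static void undefined_static(); // internal linkage: must be defined in this file
void defined_fcn() {}
void use() {
    undefined_static(); // warning: function 'undefined_static' has internal
                        // linkage but is not defined [-Wundefined-internal]
}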
I have created several C++ libraries that currently are header-only. Both the interface and the implementation of my classes are written in the same .hpp file.
I've recently started thinking that this kind of design is not very good:
If the user wants to compile the library and link it dynamically, he/she can't.
Changing a single line of code requires full recompilation of existing projects that depend on the library.
I really enjoy the aspects of header-only libraries though: all functions get potentially inlined and they're very very easy to include in your projects - no need to compile/link anything, just a simple #include directive.
Is it possible to get the best of both worlds? I mean - allowing the user to choose how he/she wants to use the library. It would also speed up development, as I'd work on the library in "dynamically-linking mode" to avoid absurd compilation times, and release my finished products in "header-only mode" to maximize performance.
The first logical step is dividing the interface and the implementation into .hpp and .inl files.
I'm not sure how to go forward, though. I've seen many libraries prepend LIBRARY_API macros to their function/class declarations - maybe something similar would be needed to allow the user to choose?
All of my library functions are prefixed with the inline keyword, to avoid "multiple definition of..." errors. I assume the keyword would be replaced by a LIBRARY_INLINE macro in the .inl files? The macro would resolve to inline for "header-only mode", and to nothing for the "dynamically-linking mode".
Preliminary note: I am assuming a Windows environment, but this should be easily transferable to other environments.
Your library has to be prepared for four situations:
Used as header-only library
Used as static library
Used as dynamic library (functions are imported)
Built as dynamic library (functions are exported)
So let's make up four preprocessor defines for those cases: INLINE_LIBRARY, STATIC_LIBRARY, IMPORT_LIBRARY, and EXPORT_LIBRARY (it is just an example; you may want to use some sophisticated naming scheme).
The user has to define one of them, depending on what he/she wants.
Then you can write your headers like this:
// foo.hpp
#if defined(INLINE_LIBRARY)
#define LIBRARY_API inline
#elif defined(STATIC_LIBRARY)
#define LIBRARY_API
#elif defined(EXPORT_LIBRARY)
#define LIBRARY_API __declspec(dllexport)
#elif defined(IMPORT_LIBRARY)
#define LIBRARY_API __declspec(dllimport)
#endif
LIBRARY_API void foo();
#ifdef INLINE_LIBRARY
#include "foo.cpp"
#endif
Your implementation file looks just like usual:
// foo.cpp
#include "foo.hpp"
#include <iostream>
void foo()
{
std::cout << "foo";
}
If INLINE_LIBRARY is defined, the functions are declared inline and the implementation gets included like a .inl file.
If STATIC_LIBRARY is defined, the functions are declared without any specifier, and the user has to include the .cpp file into his/her build process.
If IMPORT_LIBRARY is defined, the functions are imported, and there isn't a need for any implementation.
If EXPORT_LIBRARY is defined, the functions are exported and the user has to compile those .cpp files.
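For completeness, consumption might look like this (hypothetical MSVC command lines; any build system works the same way):
REM build foo.dll plus its import library foo.lib
cl /LD /DEXPORT_LIBRARY foo.cpp
REM link a consumer against the import library
cl /DIMPORT_LIBRARY main.cpp foo.lib
REM header-only use: nothing to build or link besides main.cpp
cl /DINLINE_LIBRARY main.cpp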
Switching between static / import / export is a really common thing, but I'm not sure if adding header-only to the equation is a good thing. Normally, there are good reasons for defining something inline or not to do so.
Personally, I like to put everything into .cpp files unless it really has to be inlined (like templates) or it makes sense performance-wise (very small functions, usually one-liners). This reduces both compile time and - way more important - dependencies.
But if I choose to define something inline, I always put it in separate .inl files, just to keep the header files clean and easy to understand.
It is operating-system and compiler specific. On Linux with a very recent GCC compiler (version 4.9) you might produce a static library using interprocedural link-time optimization.
This means that you build your library with g++ -O2 -flto both at compile time and at library link time, and that you use your library with g++ -O2 -flto both at compile and link time of the invoking program.
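Concretely, that might look like the following sketch (hypothetical file names; gcc-ar is the ar wrapper that understands LTO object files):
g++ -O2 -flto -c mylib.cpp -o mylib.o     # object file carries LTO bytecode
gcc-ar rcs libmylib.a mylib.o             # archive it via the LTO plugin
g++ -O2 -flto main.cpp libmylib.a -o app  # cross-module optimization happens here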
This is to complement @Horstling's answer.
You can create either a static or a dynamic library. When you create a statically-linked library, the compiled code for all functions/objects is saved to a file (with a .lib extension on Windows). At the main project's link time (the project using the library), this code is linked into your final executable together with the main project's code, so the final executable has no runtime dependency on the library.
Dynamically linked libraries are merged into the main project at run time (not link time). When you compile the library you get a .dll file (which contains the actual compiled code) and a .lib file (which contains enough data for the compiler/linker to find functions/objects in the .dll). At link time, the executable is configured to load the .dll and use its compiled code as needed. You will need to distribute the .dll file with your executable for it to run.
There is no need to choose between static and dynamic linking (or header-only) when designing your library. You can create multiple projects/makefiles, one producing a static .lib, another producing a .lib/.dll pair, and distribute both versions for the user to choose between. (You'll need to use preprocessor macros like the ones @Horstling suggested.)
You cannot put any templates in a pre-compiled library, unless you use a technique called explicit instantiation, which limits consumers to the template parameters you instantiated (see the sketch below).
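A minimal sketch of explicit instantiation (a hypothetical Stack template; consumers can only use the parameter types instantiated in the library):
// stack.hpp -- shipped header: interface only, no implementation
#include <vector>
template <class T>
class Stack {
public:
    void push(T const& value);
private:
    std::vector<T> data_;
};
// stack.cpp -- compiled into the library
#include "stack.hpp"
template <class T>
void Stack<T>::push(T const& value) { data_.push_back(value); }
template class Stack<int>;    // explicit instantiation definitions: only
template class Stack<double>; // these two exist in the compiled library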
Also note that modern compilers/linkers usually do not treat the inline keyword as binding: they may inline a function even if it is not marked inline, or keep an out-of-line call to one that is, as they see fit. (Regardless, I'd advise explicitly writing inline where applicable, for maximum compatibility.) So there won't be any runtime performance penalty if you use a statically linked library instead of a header-only library (with compiler/linker optimizations enabled, of course). As others have suggested, really small functions that are sure to benefit from inlining are best placed in header files, so dynamically linked libraries will also not suffer any significant performance loss. (In any case, inlining only affects performance for functions that are called very often, inside loops executed thousands or millions of times.)
Instead of putting inline functions in header files (with an #include "foo.cpp" in your header), you can change the makefile/project settings and add foo.cpp to the list of source files to be compiled. This way, if you change a function's implementation, there is no need to recompile the whole project; only foo.cpp is recompiled. As mentioned earlier, your small functions will still be inlined by the optimizing compiler, so you don't need to worry about that.
If you use/design a pre-compiled library, you should consider the case where the library is compiled with a different compiler version than the main project. Each compiler version (and even each configuration, like Debug or Release) uses a different C runtime (things like memcpy, printf, fopen, ...) and C++ standard library runtime (things like std::vector<>, std::string, ...). These different library implementations may complicate linking, or even create runtime errors.
As a general rule, always avoid sharing compiler runtime objects (data structures that are not defined by standards, like FILE*) across libraries, because incompatible data structures will lead to runtime errors.
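A hedged sketch of that rule (a hypothetical logging API; the point is the exported signatures, not the implementation):
#include <cstdio>
// Risky: the std::FILE* is created by the caller's runtime but consumed by
// the library's runtime, which may be a different, incompatible instance.
__declspec(dllexport) void log_write_to(std::FILE* f, char const* msg);
// Safer: the library owns the resource behind an opaque handle, so only one
// runtime ever touches the FILE internals.
__declspec(dllexport) void* log_open(char const* path);
__declspec(dllexport) void log_write(void* handle, char const* msg);
__declspec(dllexport) void log_close(void* handle);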
When linking your project, the C/C++ runtime functions must be linked into your library (.lib or .lib/.dll) or your executable (.exe). The C/C++ runtime itself can be linked as a static or dynamic library (you can set this in the makefile/project settings).
You will find that dynamically linking to the C/C++ runtime in both the library and the main project (even when you compile the library itself as a static library) avoids most linking problems (such as duplicate function implementations from multiple runtime versions). Of course, you would then need to distribute the runtime DLLs for all used versions with your executable and library.
There are scenarios where statically linking to the C/C++ runtime is needed; the best approach in those cases is to compile the library with the same compiler settings as the main project, to avoid linking problems.
Rationale
Put as little as necessary in header files and as much as possible in library modules, for the very reasons you mentioned: compile-time dependencies and long compilation times. The only good reasons for header-only modules are:
generic templates for user-defined template parameters;
very short convenience functions, when inlining gives a significant performance benefit.
In case 1, it is often possible to hide some functionality that does not depend on the user-defined type in a .cpp file.
Conclusion
If you stick to this rationale, then there is no choice: templated functionality that must allow user-defined types cannot be pre-compiled, but requires a header-only implementation. Other functionality should be hidden from the user in a library to avoid exposing them to the implementation details.
Rather than a dynamic library, you could have a precompiled static library and a thin header file. In an interactive quick build, you get the benefit of not having to recompile the world when implementation details change. But a fully optimized release build can do global optimization and still figure out that it can inline functions. Basically, with "link-time code generation" the toolset does the trick you were thinking about.
I'm familiar with Microsoft's compiler, which I know for sure does this as of Visual Studio 2010 (if not earlier).
Templated code will necessarily be header-only: to instantiate this code, the type parameters must be known at compile time. There is no way to embed template code in shared libraries. Only .NET and Java support JIT instantiation from byte-code.
Regarding non-template code: for short one-liners I suggest keeping it header-only. Inline functions give the compiler many more opportunities to optimize the final code.
To avoid "insane compilation times", Microsoft Visual C++ has a "precompiled headers" feature; GCC supports precompiled headers as well.
Long functions should not be inlined in any case.
I had one project which had header-only parts, compiled library parts, and some parts where I could not decide what belonged where. I ended up having .inc files, conditionally included in either .hpp or .cxx files depending on an #ifdef. Truth be told, the project was always compiled in "max inline" mode, so after a while I got rid of the .inc files and simply moved their contents into the .hpp files.
Is it possible to get the best of both worlds?
Within limits, yes; the limitations arise because the tools aren't smart enough. This answer gives the current best effort that is still portable enough to be used effectively.
I've recently started thinking that this kind of design is not very good.
It ought to be. Header-only libraries are ideal because they simplify deployment: they make the language's reuse mechanism similar to that of almost every other language, which is just the sane thing to do. But this is C++. Current C++ tools still rely on half-a-century-old linking models that remove important degrees of flexibility, such as choosing which entry points to import or export on an individual level without being forced to change the library's original source code. Also, C++ lacks a proper module system and still relies on glorified copy-paste to work (although this is only a side factor in the problem in question).
In fact, MSVC is a little better in this regard. It is the only major implementation pushing for some degree of modularity in C++ (e.g. by pursuing C++ modules), and it is the only compiler that actually allows the following:
//// Module.c++
#pragma once
inline void Func() { /* ... */ }
//// Program1.c++
#include <Module.c++>
// Inlines or "vague" links Func(), whatever is better.
int main() { Func(); }
//// Program2.c++
// This forces Func() to be imported.
// The declaration must come *BEFORE* the definition.
__declspec(dllimport) __declspec(noinline) void Func();
#include <Module.c++>
int main() { Func(); }
//// Program3.c++
// This forces Func() to be exported.
__declspec(dllexport) __declspec(noinline) void Func();
#include <Module.c++>
Note that this can be used to selectively import and export individual symbols from the library, although still cumbersomely.
GCC also accepts this (but the order of the declarations must be changed), and Clang has no way to achieve the same effect without changing the library's source.
I have made my own implementations of many STL features like vectors, lists, BSTs, queues, and stacks, and given them all the functions that the corresponding STL components have.
Now I want to use this library via
#include "myLibName.h"
What I did:
g++ -c myLib.cpp -o myLib.o
From this I got the object file.
But when I compile programs, I have to link the object file myself.
Is there any way to avoid that, so the library is linked automatically, like iostream and the other standard libraries?
I know that a shared object file (e.g. libc.so in C) is where all the implementations are held.
If that's the solution, then how do I make one and use it like the other standard libraries, without linking the object file every time?
PS: After a lot of effort I have created these libraries myself. Now I'm stuck at the final step. Please help.
You can't unless you're going to write your own toolchain. GCC links in its runtime and standard library because it's GCC and knows that it should; it won't magically do the same with your library.
Conventionally, either make your library header-only, or ship a .a/.so/.dll for developers to link against at link time. In the latter two cases you'll also need to ship the .so/.dll for users to load at run time.
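For the static-library route, a minimal sketch (hypothetical file names):
g++ -c myLib.cpp -o myLib.o      # compile the implementation once
ar rcs libmylib.a myLib.o        # package it into a static archive
g++ main.cpp -L. -lmylib -o app  # consumers still link, but with one flag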
To make your build process cleaner for large projects in which you need to link multiple pieces, you can use Makefiles.
After that you just need to type make at the terminal to compile and build the whole project.
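A minimal hypothetical Makefile for the example above (note that recipe lines must be indented with tabs):
app: main.o myLib.o
	g++ main.o myLib.o -o app
%.o: %.cpp
	g++ -c $< -o $@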
Another solution is the following, although many people don't recommend it:
header.h
class Foo
{
// some variable and method declarations.
};
header.h is your header file which will contain your declarations.
implement.cpp // this is the implementation file
#include "header.h"
// Now implement various methods you declared in your "header.h" file.
implement.cpp is your implementation file which contains the implementation and the definition of static members.
main.cpp
#include "header.cpp"
// use your methods.
Now you don't need to link your object files; just do g++ -Wall main.cpp.
First of all, you should probably differentiate between the STL and the C++ Standard Library.
Each compiler comes with its own implementation of the Standard Library, some of them being (at least mostly) compatible (see clang++ and g++). So basically your way to go would be to modify the compiler you are using.
If you are writing header-only implementations, then no library needs to be built and you can use it without linking. But in that case your work has to be distributed as source, not as a library plus headers.
If you want to simply distribute your library and don't mind consumers linking against the shared or static library you distributed, you should build a shared or static library, depending on the case. But it will have to be linked when it is used.
I compiled a shared library with gcc and linked it to my main program. The main program initializes a logger class which should be visible inside the shared library, but it looks as if the shared library has its own instance of it.
The include file looks like this:
extern Log gLog;
In main it is defined:
Log gLog(new StreamWriter());
When I try to link it, I get the linker error undefined symbol _gLog in the shared library. I thought it might be because it is a class instance, so I changed it to a pointer, but I get the same error. To make it worse, I figured I could create a small dummy module that creates the same global variable in the shared library, and then call a function to initialize it. But for this function I also get a linker error, because it is not visible in main.
In the shared library:
Log *gLogger;
void initLibrary(Log *pLogger)
{
gLogger = pLogger;
}
And in main:
Log gLog(new StreamWriter());
void initLibrary(Log *pLogger);
int main()
{
initLibrary(&gLog);
}
Again I get an undefined symbol from the linker, this time for my initLibrary function.
For now I work around the problem by creating a dummy class, which works. However, I would like to know how to properly define symbols across shared library boundaries, as my understanding of this seems to be wrong.
Searching on Google I found some threads, e.g. Using a global variable in a shared library and Global variables, shared libraries and -fPIC effect (there are several others with this problem as well). However, I tried to recompile everything with -fpic, including the main module, and it still doesn't work. The -rdynamic option is unknown to my compiler, so I don't know where that comes from.
I can use classes from the shared library and vice versa, so this affects only global symbols. What am I doing wrong that the main code and the shared library cannot see each other's symbols?
The right approach is to create the instance of the logger inside the shared library, using either a global variable (better if encapsulated in a namespace) or a singleton class, and then let your main program use it.
You need to declare your variables with default visibility to make them visible to other shared libraries or the main program. It appears that you are compiling with -fvisibility=hidden, so a symbol in the library does not resolve to a definition in the main program or other libraries, and vice versa. With GCC's visibility attribute, you can reverse this effect.
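As a minimal sketch using the names from the question (assuming GCC):
// In the shared header: keep the symbol exported even under -fvisibility=hidden.
__attribute__((visibility("default"))) extern Log gLog;
// In main.cpp, the definition as before:
Log gLog(new StreamWriter());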
In a nutshell:
scope: visibility of a declaration across entities within a single object file (global, local, ...)
linkage: visibility of a declaration across entities in multiple object files (external, internal)
visibility: visibility of a declaration across entities in different shared libraries (default, hidden)
Another possibility is that you are mixing C and C++ code and getting the language linkage wrong.
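For illustration, a hedged sketch of that pitfall, reusing the question's names:
// library.cpp -- the definition has C++ linkage (a mangled symbol name):
void initLibrary(Log *pLogger) { /* ... */ }
// main.cpp -- this declaration has C linkage (an unmangled name):
extern "C" void initLibrary(Log *pLogger);
// The linker searches for plain "initLibrary", but the library only provides
// the mangled C++ name, so the symbol appears undefined.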
I finally found out why this didn't work for global symbols (there was no problem with classes).
I'm compiling under Cygwin, and apparently the shared object is compiled as a DLL. I looked into the library and noticed that it is in PE format (the Windows executable format), not ELF. So I tried the Microsoft __declspec syntax for exporting symbols, and suddenly it worked. Adding __declspec(dllexport) to the symbol did the trick.
I expected that I would have to use __declspec(dllimport) in the main project for importing the symbols, but this doesn't work. I'm not sure whether I misunderstood this attribute or whether the Cygwin version of gcc does some magic.
So when you compile a shared library under Cygwin, it must look like this if you want to export symbols:
Shared library foo.h:
__declspec(dllexport) int foo(int a);
Shared library foo.cpp:
int foo(int a)
{
....
}
In the executable foo.h:
int foo(int a);
In the executable main.cpp:
int main()
{
foo(1);
}
The shared library must be compiled with the -fpic switch and linked with -shared.
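In command form, that would be something like this (hypothetical file names, Cygwin/MinGW gcc):
g++ -fpic -c foo.cpp -o foo.o  # position-independent code
g++ -shared foo.o -o foo.dll   # produce the shared library
g++ main.cpp foo.dll -o main   # gcc on Windows can link against the DLL directly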
I have a circular dependency between two functions. I would like each of these functions to reside in its own dll. Is it possible to build this with visual studio?
void foo(int i)
{
if (i > 0)
bar(i - 1);
}
-> should compile into foo.dll
void bar(int i)
{
if (i > 0)
foo(i - 1);
}
-> should compile into bar.dll
I have created two projects in Visual Studio, one for foo and one for bar. By playing with the 'References' setting and compiling a few times, I managed to get the DLLs that I want. I would like to know, however, whether Visual Studio offers a way to do this cleanly.
If foo changes, bar does not need to be recompiled, because I only depend on the signature of bar, not on the implementation of bar. If both DLLs have the .lib present, I can recompile new functionality into either of the two and the whole system still works.
The reason I am trying this is that I have a legacy system with circular dependencies which is currently statically linked. We want to move towards DLLs for various reasons, and we don't want to wait until all the circular dependencies have been cleaned up. I was thinking about solutions and tried out some things with gcc on Linux, where it is possible to do what I suggest: you can have two shared libraries that depend on each other and can be built independently of each other.
I know that circular dependencies are not a good thing to have, but that is not the discussion I want to have.
The reason it works on Unix-like systems is that they perform the actual linking resolution at load time. A shared library does not know where its function definitions will come from until it is loaded into a process. The downside is that you don't know either: a library can find and call functions in any other library (or even in the main binary that launched the process in the first place). Also, by default, everything in a shared library is exported.
Windows doesn't work like that at all. Only explicitly exported things are exported, and all imports must be resolved at library link time, by which point the identity of the DLL that will supply each imported function has been determined. This requires an import library to link against.
However, you can (with some extra work) get around this. Use LoadLibrary to open any DLL you like, and then use GetProcAddress to locate the functions you want to call. This way, there are no restrictions. But the restrictions in the normal method are there for a reason.
As you want to transition from static libraries to DLLs, it sounds like you're assuming that you should make each static library into a DLL. That's not your only option. Why not start moving code into DLLs only when you identify it as a self-contained module that fits into a layered design with no circularity? That way you can begin the process now but still attack it a piece at a time.
I deeply sympathise with your situation (as clarified by your edit), but as a firm believer in doing the correct thing, not the thing which works for now, if there's any possibility at all I think you need to refactor these projects.
Fix the problem not the symptom.
It's possible to use the LIB utility with .EXP files to "bootstrap" (build without prior .LIB files) a set of DLLs with a circular reference such as this one. See the MSDN article for details.
I agree with other people above that this kind of situation should be avoided by revising the design.
This question was at the top of my search for 'dll cyclic dependency', and even though it is 10 years old, it is a shame that most answers point to 'refactoring', which is very poor advice for a large project and was not the question anyway.
So I need to point out that cyclic dependencies are not so dangerous. They are totally fine on Unix/Linux. They are mentioned in many MSDN articles as possible situations, with ways to work around them. They happen in Java (the compiler handles them by multi-pass compiling). Saying that refactoring is the only way is like forbidding 'friends' in classes.
This paragraph in a beginner's guide to linkers (by David Drysdale) explains it well for Visual Studio linkers.
So the trick is to use two-pass linking: a first pass that creates just the import libs, and a second pass that generates the DLLs themselves.
For Visual Studio and any graphical-IDE build, this is probably still something strange. But if you write your own Makefiles and have better control of the linker process and flags, it is not so hard to do.
Using the OP's example files and mingw-gcc syntax as a demonstration (because I tested it and know for sure that it works on Windows), one must:
- compile/link a.lib and b.lib without specifying the cyclic libraries:
g++ -shared -Wl,--out-implib=a.lib -o a.dll a.obj //without specifying b.lib
g++ -shared -Wl,--out-implib=b.lib -o b.dll b.obj //without specifying a.lib
This will show 'undefined reference' errors and fail to produce the DLLs, but it will create a.lib and b.lib, which is what we want for the second-pass linking:
g++ -shared -Wl,--out-implib=a.lib -o a.dll a.obj b.lib
g++ -shared -Wl,--out-implib=b.lib -o b.dll b.obj a.lib
and the result is a.dll and b.dll, obtained with a pretty clean method. Using Microsoft's compilers should be similar, following their advice to switch from link.exe to lib.exe (I did not test it; it seems even cleaner, but probably harder to make something productive out of it compared to mingw + make tools).
How about this:
Project A
Public Class A
Implements C.IA
Public Function foo(ByVal value As C.IB) As Integer Implements C.IA.foo
Return value.bar(Me)
End Function
End Class
Project B
Public Class B
Implements C.IB
Public Function bar(ByVal value As C.IA) As Integer Implements C.IB.bar
Return value.foo(Me)
End Function
End Class
Project C
Public Interface IA
Function foo(ByVal value As IB) As Integer
End Interface
Public Interface IB
Function bar(ByVal value As IA) As Integer
End Interface
Project D
Sub Main()
Dim a As New A.A
Dim b As New B.B
a.foo(b)
End Sub
The only way you'll get around this "cleanly" (and I use the term loosely) is to eliminate one of the static/link-time dependencies and change it into a run-time dependency.
Maybe something like this:
// foo.h
#include <windows.h>
#include <tchar.h>
#if defined(COMPILING_BAR_DLL)
inline void foo(int x)
{
HMODULE hm = LoadLibrary(_T("foo.dll"));
typedef void (*PFOO)(int);
PFOO pfoo = (PFOO)GetProcAddress(hm, "foo");
pfoo(x); // call the function!
FreeLibrary(hm);
}
#else
extern "C" {
__declspec(dllexport) void foo(int);
}
#endif
Foo.dll will export the function. Bar.dll no longer tries to import the function; instead, it resolves the function address at runtime.
Roll your own error handling and performance improvements.
As I came across this very problem recently, I wanted to share a solution using CMake.
The idea is that we have two DLLs, a and b, with a circular dependency, and a main executable that calls into a.dll. To make the linking work, we create an additional library b_init that is linked with the /FORCE:UNRESOLVED flag in order to allow building with unresolved symbols, since it does not link against a. Additionally, b_init's output name is set to b, so it will create b.lib.
In the next step we link a against the newly created b.lib. Note the PRIVATE keyword, which avoids transitively adding b.lib to the b library below.
Here's the CMakeLists.txt:
cmake_minimum_required(VERSION 3.15)
project(circ_dll CXX)
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
add_library(b_init SHARED b_dll.cpp)
set_target_properties(b_init PROPERTIES LINK_FLAGS "/FORCE:UNRESOLVED")
set_target_properties(b_init PROPERTIES OUTPUT_NAME "b")
add_library(a SHARED a_dll.cpp)
target_link_libraries(a PRIVATE b_init)
add_library(b SHARED b_dll.cpp)
target_link_libraries(b a)
add_executable(main main.cpp)
target_link_libraries(main a b)
It is not possible to do this cleanly. Because the two depend on each other, if A changes, then B must be recompiled. Because B was recompiled, it has changed, so A needs to be recompiled, and so on.
That is part of the reason circular dependencies are bad, and whether you want to or not, you cannot leave that out of the discussion.
Visual Studio will enforce the dependencies, generally speaking, since function addresses may change within the newly compiled DLL. Even though the signature may stay the same, the exposed address may change.
However, if you notice that Visual Studio typically manages to keep the same function addresses between builds, you can use one of the "Project Only" build settings (which ignore dependencies). If you do that and get an error about not being able to load the dependent DLL, just rebuild both.
You need to decouple the two DLLs, placing the interfaces and the implementations in different DLLs, and then use late binding to instantiate the class.
// IFoo.cs: (build IFoo.dll)
public interface IFoo {
void foo(int i);
}
public class FooFactory {
public static IFoo CreateInstance()
{
return (IFoo)Activator.CreateInstance("Foo", "Foo").Unwrap();
}
}
// IBar.cs: (build IBar.dll)
public interface IBar {
void bar(int i);
}
public class BarFactory {
public static IBar CreateInstance()
{
return (IBar)Activator.CreateInstance("Bar", "Bar").Unwrap();
}
}
// foo.cs: (build Foo.dll, references IFoo.dll and IBar.dll)
public class Foo : IFoo {
public void foo(int i) {
IBar objBar = BarFactory.CreateInstance();
if (i > 0) objBar.bar(i - 1);
}
}
// bar.cs: (build Bar.dll, references IBar.dll and IFoo.dll)
public class Bar : IBar {
public void bar(int i) {
IFoo objFoo = FooFactory.CreateInstance();
if (i > 0) objFoo.foo(i - 1);
}
}
The "Factory" classes are technically not necessary, but it's much nicer to say:
IFoo objFoo = FooFactory.CreateInstance();
in application code than:
IFoo objFoo = (IFoo)Activator.CreateInstance("Foo", "Foo").Unwrap();
because of the following reasons:
You can avoid a "cast" in application code, which is a good thing
If the DLL that hosts the class changes, you don't have to change all the clients, just the factory.
Code completion still works.
Depending on your needs, you may have to load culture-aware or strong-named DLLs, in which case you can add more parameters to the CreateInstance call in the factory, in one place.
--
Kenneth Kasajian