I am creating a library (.lib) in C++ with Visual Studio 2008. I would like to set a variable that changes the behaviour of the library. Simplifying a lot, something like this:
#ifdef OPTION1
i = 1;
#else
i = 0;
#endif
But the variable (in this case OPTION1) should not be defined in the library itself, but in the code that links to the library, so that just by changing the definition of the variable I could obtain different behaviour from the program while always linking to the same library.
Is this possible, and how? Or is there a more elegant way to achieve what I want?
To pull this off, the code which depends on the macro must be compiled as part of the code which links to the library, not as part of the library itself. The best you could do is something like this:
In your public .h file:
namespace LibraryPrivate {
void functionForOptionSet();
void functionForOptionUnset();
}
#ifdef OPTION1
inline void dependentBehaviour() {
LibraryPrivate::functionForOptionSet();
}
#else
inline void dependentBehaviour() {
LibraryPrivate::functionForOptionUnset();
}
#endif
In your library's .cpp file:
namespace LibraryPrivate {
void functionForOptionSet()
{ i = 1; }
void functionForOptionUnset()
{ i = 0; }
}
That is, you have to implement both options in the library, but you can (partially) limit the interface based on the macro. It is somewhat like what WinAPI does with char vs. wchar_t functions: it provides both SomeFunctionA(char*) and SomeFunctionW(wchar_t*) and then a macro SomeFunction which expands to one of those.
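For illustration, a minimal sketch of that WinAPI-style pattern (the function names stand in for real API names):

// The library always exports both variants:
void SomeFunctionA(const char* s);
void SomeFunctionW(const wchar_t* s);

// The header maps the undecorated name onto one of them, depending on
// what the client defines before including the header:
#ifdef UNICODE
#define SomeFunction SomeFunctionW
#else
#define SomeFunction SomeFunctionA
#endif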
The simple answer is no. Things like #ifdef are entirely processed by the compiler (in fact, by a preprocessor phase of the compiler, before it even parses the code); a .lib file has already been compiled.
One solution would be to supply the library in source form, and let the client compile it as part of his project. This has an additional advantage: you automatically support all versions of the compiler, with all possible combinations of compiler options. And the disadvantage that your library will be used with versions of the compiler and compiler options that you've never tested, and possibly cannot even test.
Otherwise, you'll need to use a variable, with ifs and ?: rather than #ifdef. And you'll have to arrange some means of setting the variable.
Finally, if there's only one such variable, you might consider furnishing two different sets of versions of the library: one with it set, and one without. The client then decides which one he wants to use. In many ways, this is the simplest solution, but it definitely doesn't scale: with a hundred such variables, if they're independent, you'd need 2^100 different sets of variants, and that won't fit on any disk.
I don't know what this concept is called, so the title may sound weird. Imagine the following scenario:
main.cpp:
#define SOME_KEYWORD
int main()
{
foo();
return 0;
}
other.cpp:
void foo()
{
//Do some stuff
#ifdef SOME_KEYWORD
//Do some additional stuff
#endif
}
I've tried it out, and it doesn't work if the #define is present in the other file. Is there a way around this? (I'd rather not modify function parameters just to achieve this, since it will only be present at development time, and the functions can be many layers of abstraction away.)
And, I guess, this is a C way to do things. I don't know whether it would be considered good practice in C++; if not, what are the alternatives?
In C++, as of C++17, a constexpr-if would be a good way to go about doing this. E.g. in some header file:
// header.hpp
#pragma once
constexpr bool choice = true; // or false, if you don't want to compile some additional stuff
and in an implementation file:
#include "header.hpp"
void foo()
{
//Do some stuff
if constexpr(choice)
{
//Do some additional stuff
}
}
Note that this is not a drop-in replacement for #define, but it works in many cases.
A preprocessor symbol defined in one translation unit is not visible in a different translation unit. As suggested in a comment, you can define it in a header and then include that header where needed (it's not a keyword, so I chose a better name):
// defines.h
#define SOME_SYMBOL
// other.cpp
#include "defines.h
Conditional compilation via preprocessor macros has some uses, e.g. conditionally compiling platform-specific code or excluding debug code from release builds. For anything else I would not use it, because when overused it can create a big mess and is error-prone (e.g. it's too easy to forget to include defines.h). Consider making foo a template:
template <bool SOME_FLAG>
void foo()
{
//Do some stuff
if constexpr (SOME_FLAG) {
//Do some additional stuff
}
}
And if you still want to make use of the preprocessor, this allows you to concentrate usage of macros to a single location:
// main.cpp
#define SOME_SYMBOL

#ifdef SOME_SYMBOL
constexpr bool flag = true;
#else
constexpr bool flag = false;
#endif

int main()
{
    foo<flag>();
    return 0;
}
I don't know what this concept is called
Generally, pre-processing. More specifically, the pre-processor is used here to conditionally compile the program.
This is a common technique used to create portable interfaces over platform-specific ones. Sometimes it is used to enable or suppress debugging features.
I've tried it out and it doesn't work if #define is present in other file.
Macros only affect the file where they are defined.
Is there a way around this?
Define the macro in all of the files where you use it. Typically, this is achieved by including the definition from a header, or by specifying a compiler option.
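For example, passing the definition on the command line makes it visible to every translation unit in the build:

g++ -DSOME_SYMBOL main.cpp other.cpp -o app    # GCC/Clang
cl /DSOME_SYMBOL main.cpp other.cpp            # MSVC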
And, I guess this is a C way to do things, I don't know if that would be considered as a good practice in C++, if not, what are the alternative ways?
There is no complete alternative in C++. In some cases macro usage can be replaced with, or combined with, templates and if constexpr.
There is quite a bit of discussion on feature flags/toggles and why you would use them, but most of the discussion on implementing them centers around (web or client) apps. If your product/artifact is a C or C++ library and your public headers are affected by the flags, how would you implement them?
The "naive" way of doing it doesn't really work:
/// Does something
/**
* Does something really cool
#ifdef FEATURE_FOO
* #param fooParam describe param for foo
#endif
*/
void doSomethingCool(
#ifdef FEATURE_FOO
int fooParam = 42
#endif
);
You wouldn't want to ship something like this.
The library you ship was built for a certain feature-flag combination; clients shouldn't need to #define the same feature flags to make things work
The ifdefs in your public header are ugly
And most importantly, if you disable your flag, you don't want clients to see anything about the disabled features; maybe it is something upcoming and you don't want to show your stuff until it is ready
Running the preprocessor on the file to get the header for distribution doesn't really work because that would not only act on feature flags but also do everything else the preprocessor does.
What would be a technical solution to this that doesn't have these flaws?
This kind of goo ends up in a codebase due to versioning, a broad topic with very few happy answers. But you certainly want to avoid making it more difficult than it needs to be. Focus on the kind of compatibility you want to provide.
The syntax proposed in the snippet is only required when you need binary compatibility. It keeps the library compatible with a doSomethingCool() call in the client code (passing no argument) without having to recompile that client code. In other words, the client programmer does nothing at all beyond copying the updated .dll or .so file; he does not need any updated headers, and it is entirely your burden to get the feature flags right. Binary compatibility is pretty difficult to pull off reliably; beyond the flag wrangling, it is easy to make a mistake.
But what you are actually talking about is source compatibility: you provide the user with an updated header and he rebuilds his code to use the library update. In that case you don't need the feature flag; the C++ compiler by itself ensures that an argument is passed, and it will be 42. No flag required at all, on either your end or the user's end.
Another way to do it is by providing an overload: in other words, both a doSomethingCool() and a doSomethingCool(int) function. The client programmer keeps using the original overload until he's ready to move ahead. You would also favour an overload when the function body has to change too much. If these functions are not virtual then it even provides link compatibility, which could be useful in some select cases. No feature flags required.
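As a sketch of those two alternatives, using the doSomethingCool example from the question:

// Alternative 1, source compatibility: a default argument keeps existing
// doSomethingCool() call sites valid once the client recompiles.
void doSomethingCool(int fooParam = 42);

// Alternative 2, an overload pair (don't combine it with the default
// argument above, or calls with no argument become ambiguous):
void doSomethingCool();             // kept for existing client code
void doSomethingCool(int fooParam); // new entry point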
I'd say it's a relatively broad question, but I'll throw in my two cents.
First, you really want to separate the public headers from the implementation (sources and internal headers, if any). The public header that gets installed (e.g., at /usr/include) should contain function declarations and, preferably, a constant boolean to inform the client whether the library has a certain feature compiled in or not, like so:
#define FEATURE_FOO 1
void doSomethingCool();
Such a header is generally generated. Autotools is the de facto standard tool for this purpose on GNU/Linux. Otherwise you can write your own scripts to do it.
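For instance, with CMake (one alternative to Autotools) the installed header can be generated from a template; this is just a sketch, and the file names are arbitrary:

# CMakeLists.txt (excerpt)
option(FEATURE_FOO "Compile the foo feature into the library" ON)
configure_file(feature_config.h.in feature_config.h)

together with the template:

/* feature_config.h.in -- #cmakedefine01 expands to #define FEATURE_FOO 1 or 0 */
#cmakedefine01 FEATURE_FOO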
For completeness, in the .c file you would have (note that the = 42 default argument from the question is C++-only, so it is dropped here):
void doSomethingCool(
#ifdef FEATURE_FOO
int fooParam
#endif
);
It's also up to your distribution tools to keep the installed headers and library binaries in sync.
Use forward declarations
Hide implementation by using a pointer (Pimpl idiom)
This code is quoted from the previous link:
// Foo.hpp
class Foo {
public:
//...
private:
struct Impl;
Impl* _impl;
};
// Foo.cpp
struct Foo::Impl {
// stuff
};
Binary compatibility is not a forte of C++; it probably isn't worth considering.
For C, you might construct something like an interface class, so that your first touch with the library is something like:
struct kv {
char *tag;
int val;
};
int Bind(struct kv *compat, void **funcs, void **stamp);
and your access to the library is now:
#define MyStrcpy(src, dest) (funcs->mystrcpy((stamp)(src),(dest)))
The contract is that Bind provides/constructs an appropriate (funcs, stamp) pair for the attribute set you provided, or fails if it cannot. Note that Bind is the only bit that has to know about multiple layouts of *funcs and *stamp, so it can transparently provide a robust interface for this reduced version of the problem.
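To make the idea concrete, here is one hedged sketch of a Bind implementation; the table layout, the attribute name, and the helper are all invented, since the answer leaves them open:

#include <stddef.h>
#include <string.h>

struct kv { char *tag; int val; };

/* One hypothetical layout for *funcs; Bind could select among several. */
struct funcs_v1 {
    char *(*mystrcpy)(void *stamp, const char *src, char *dest);
};

static char *mystrcpy_impl(void *stamp, const char *src, char *dest)
{
    (void)stamp;              /* no per-binding state in this sketch */
    return strcpy(dest, src);
}

int Bind(struct kv *compat, void **funcs, void **stamp)
{
    static struct funcs_v1 table = { mystrcpy_impl };
    struct kv *p;
    /* Walk the (tag, val) attribute pairs; fail on anything unsupported. */
    for (p = compat; p->tag != NULL; ++p) {
        if (strcmp(p->tag, "version") == 0 && p->val != 1)
            return -1;
    }
    *funcs = (void *)&table;  /* table matching the requested attributes */
    *stamp = NULL;            /* opaque per-binding state, none here */
    return 0;
}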
If you wanted to get really fancy, you might be able to achieve the same by re-writing the PLT that the dlopen/dlsym prepare for you, but:
You are grossly expanding your attack surface.
You are adding a lot of complexity for very little gain.
You are adding platform / architecture specific code where none is warranted.
A few downsides remain. You have to invoke Bind before any part of your program/library attempts to use it. Attempts to solve that lead straight to hell (finding C++ static initialization order problems), which must make N. Wirth smile. If you get too clever with your Bind(), you will wish you hadn't. You might want to be careful about re-entrancy, since a given client might Bind multiple times for different attribute sets (users are such a pain).
That's how I would manage this in pure C.
First of all, the features: I would pack them into a single unsigned integer, 32 or 64 bits long, to keep them as compact as possible.
Second step: a private header, used only when compiling the library, where I would define a macro to create the API function wrapper and the internal function:
#define CoolFeature1 0x00000001 //code the value as 0 to disable a feature
#define CoolFeature2 0x00000010
#define CoolFeature3 0x00000100
.... // Other features
#define Cool (CoolFeature1 | CoolFeature2 | CoolFeature3 | ... | CoolFeature_n)

#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__) \
{ return Internal_##fname(Cool, __VA_ARGS__);} \
ret Internal_##fname(unsigned long Cool, __VA_ARGS__)
#include "user_header.h" //Include the standard user header where there is no reference to Cool features
Now we have a wrapper with a standard prototype that will be available in the user definition header, and an internal version which keeps an additional flag group to specify optional features.
When coding using the macro you can write:
ImplementApi(int, MyCoolFunction, int param1, float param2, ...)
{
// Your code goes here
if (Cool & CoolFeature2)
{
// Do something cool
}
else
{
// Flat life ...
}
...
return 0;
}
In the case above you'll get 2 definitions:
int Internal_MyCoolFunction(unsigned long Cool, int param1, float param2, ...);
int MyCoolFunction(int param1, float param2, ...)
You can eventually add to the macro, for the API function, the attributes for export if you're distributing a dynamic library.
You can even use the same definition header if the definition of the ImplementApi macro is done on the compiler command line; in that case the following simple definition in the header will do:
#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__);
This variant generates only the exported API prototypes.
This suggestion, of course, is not exhaustive. There are a lot more adjustments you can make to render the definitions more elegant and automatic, e.g. including a sub-header with a function list to create only the API function prototypes for users, and both, internal and API, for developers.
Why are you using defines for feature flags? Feature flags are supposed to enable you to turn features on and off at runtime, not at compile time.
In the code you would then case out the implementation as early as possible, using interfaces and concrete classes that are chosen based on the feature flag.
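A minimal sketch of that runtime approach (all names here are invented):

#include <memory>

// Interface shared by the regular and the feature-flagged implementation.
struct Cooler {
    virtual ~Cooler() = default;
    virtual void doSomethingCool() = 0;
};

struct RegularCooler : Cooler {
    void doSomethingCool() override { /* regular impl */ }
};

struct Feature1Cooler : Cooler {
    void doSomethingCool() override { /* cool new way */ }
};

// The flag is read once at startup (config file, environment, ...);
// everything downstream only sees the interface.
std::unique_ptr<Cooler> makeCooler(bool feature1)
{
    if (feature1)
        return std::unique_ptr<Cooler>(new Feature1Cooler);
    return std::unique_ptr<Cooler>(new RegularCooler);
}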
If users of the header files aren't supposed to be able to access the feature flags, then create header files that you don't distribute, which are only included in the implementation .c/.cpp files. You can then flip the flags in the private headers when you compile the library that clients link to.
If you are keeping features internal until you are ready to release, you can move the feature flag into the public header, or just remove the feature flag entirely and switch to using the new implementation.
A sloppy example, if you want this at compile time:
public_class.h
class Thing
{
public:
    void DoSomething();
};
private_class_feature1.h
#define USE_FEATURE_1
class NewFeatureImpl
{
public:
    static void CoolNewWay1();
};
public_class.cpp
#include "public_class.h"
#include "private_class_feature1.h"
void Thing::DoSomething()
{
#ifdef USE_FEATURE_1
NewFeatureImpl::CoolNewWay1();
#else
// Regular impl
#endif
}
I stumbled upon the following code:
//
// Top-level file that includes all of the C/C++ files required
//
// The C code may be compiled by compiling this top file only,
// or by compiling individual files then linking them together.
#ifdef __cplusplus
extern "C" {
#endif
#include <stdlib.h>
#include "my_header.h"
#include "my_source1.cc"
#include "my_source2.cc"
#ifdef __cplusplus
}
#endif
This is definitely unusual but is it considered bad practice and if so why?
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
First off: extern "C" { #include "my_cpp_file.cc" } just doesn't add up... anyway, I'll attempt to answer your question using a practical example.
Note that sometimes, you do see #include "some_file.c" in a source file. Often this is done because the code in the other file is under development, or it's not certain that the feature that is being developed in that file will make the release.
Another reason is quite simple: to improve readability (not having to scroll too much), or even to reflect your threading. To some, having the child thread's code in a separate file helps, especially when learning threading.
Of course, the major benefit of including translation units into one master translation unit (which, to me, is abusing the pre-processor, but that's not the point) is simple: less I/O while compiling, hence, faster compilation. It's all been explained here.
That's one side of the story, though. This technique is not perfect. Here's a couple of considerations. And just to balance out the "the magic of unity builds" article, here's the "the evils of unity builds" article.
Anyway, here's a short list of my objections, and some examples:
static global variables (be honest, we've all used them)
extern and static functions alike: both are callable everywhere
Debugging would require you to build everything, unless (as the "pro" article suggests) you have both a unity build and a modular build ready for the same project. IMO a bit of a faff
Not suitable if you're looking to extract a lib from your project you'd like to re-use later on (think generic shared libraries or DLL's)
Just compare these two situations:
//foo.h
struct foo
{
char *value;
int checksum;
struct foo *next;
};
extern struct foo * get_foo(const char *val);
extern void free_foo( struct foo **foo);
//foo.c
#include <stdlib.h>
#include <string.h>
#include "foo.h"

static int get_checksum( const char *val);

struct foo * get_foo( const char *val)
{
    //call get_checksum
    struct foo *retVal = malloc(sizeof *retVal);
    retVal->value = calloc(strlen(val) + 1, 1);
    strcpy(retVal->value, val);
    retVal->checksum = get_checksum(val);
    retVal->next = NULL;
    return retVal;
}

void free_foo ( struct foo **foo)
{
    free((*foo)->value);
    if ((*foo)->next != NULL)
        free_foo(&(*foo)->next);
    free(*foo);
    *foo = NULL;
}
If I were to include this C file in another source file, the get_checksum function would be callable in that file, too. Here, this is not the case.
Name conflicts would be a lot more common, too.
Imagine, too, if you wrote some code to easily perform certain quick MySQL queries. I'd write my own header, and source files, and compile them like so:
gcc -Wall -std=c99 -c mysql_file.c `mysql_config --cflags` -o mysql.o
And simply use that mysql.o compiled file in other projects, by linking it simply like this:
//another_file.c
#include "mysql_file.h"
int main ( void )
{
my_own_mysql_function();
return 0;
}
Which I can then compile like so:
gcc another_file.c mysql.o `mysql_config --libs` -o my_bin
This saves development time and compilation time, and makes your projects easier to manage (provided you know your way around a makefile).
Another advantage of these .o files shows up when collaborating on projects. Suppose I announce a new feature for our mysql.o file. All projects that have my code as a dependency can safely continue to use the last stable compiled mysql.o file while I'm working on my piece of the code.
Once I'm done, we can test my module using stable dependencies (other .o files) and make sure I didn't add any bugs.
The problem is that each of your *.cc files will be compiled every time the header is included.
For example, if you have:
// foo.cc:
// also includes implementations of all the functions
// due to my_source1.cc being included
#include "main_header.h"
And:
// bar.cc:
// implementations included (again!)
// ... you get far more object code at best, and a linker error at worst
#include "main_header.h"
Unrelated, but still relevant: Sometimes, compilers have trouble when your headers include C stdlib headers in C++ code.
Edit: As mentioned above, there is also the problem of having extern "C" around your C++ sources.
This is definitely unusual but is it considered bad practice and if so why?
You're likely looking at a "Unity Build". Unity builds are a fine approach, if configured correctly. It can be problematic to configure a library to be built this way initially because there may be conflicts due to expanded visibility -- including implementations which were intended by an author to be private to a translation.
However, the definitions (in *.cc) should be outside of the extern "C" block.
One potential negative I can think of is that a typical build system would have difficulty analysing dependencies. Are there any other reasons that this technique isn't widely used?
It reduces dependency/complexity because the translation count goes down.
Assume we are using gcc/g++ and a C API specified by a random committee. This specification defines the function
void foo(void);
Now, there are several implementations according to this specification. Let's pick two as a sample and call them nfoo and xfoo (provided by libnfoo and libxfoo as static and dynamic libraries respectively).
Now, we want to create a C++ framework for the foo-API. Thus, we specify an abstract class
class Foo
{
public:
virtual void foo(void) = 0;
};
and corresponding implementations
#include <nfoo.h>
#include "Foo.h"
class NFoo : public Foo
{
public:
virtual void foo(void)
{
::foo(); // calling foo from the nfoo C-API
}
};
as well as
#include <xfoo.h>
#include "Foo.h"
class XFoo : public Foo
{
public:
virtual void foo(void)
{
::foo(); // calling foo from the xfoo C-API
}
};
Now, we are facing a problem: How do we create (i.e. link) everything into one library?
I see that there will be a symbol clash with the foo function symbols of the C API implementations.
I already tried to split the C++ wrapper implementations into separate static libraries, but then I realized (again) that static libraries is just a collection of unlinked object files. So this will not work at all, unless there is a way to fully link the C libraries into the wrapper and remove/hide their symbols.
Suggestions are highly appreciated.
Update: Optimal solutions should support both implementations at the same time.
Note: The code is not meant to be functional. Perceive it as pseudo code.
Could you use dlopen/dlsym at runtime to resolve your foo calls?
Something like this example code from the link (may not compile):
void *handle, *handle2;
void (*fnfoo)(void) = NULL;
void (*fxfoo)(void) = NULL;

/* open the needed objects */
handle = dlopen("/usr/home/me/libnfoo.so", RTLD_LOCAL | RTLD_LAZY);
handle2 = dlopen("/usr/home/me/libxfoo.so", RTLD_LOCAL | RTLD_LAZY);

fnfoo = (void (*)(void))dlsym(handle, "foo");
fxfoo = (void (*)(void))dlsym(handle2, "foo");

/* invoke functions */
(*fnfoo)();
(*fxfoo)();

/* don't forget dlclose()'s */
Otherwise, the symbols in the libraries would need to be modified.
Note that this is not portable to Windows.
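On Windows, the rough equivalent uses LoadLibrary / GetProcAddress / FreeLibrary; a minimal sketch (library names invented, equally untested):

#include <windows.h>
#include <stdio.h>

typedef void (*foo_fn)(void);

int main(void)
{
    /* Each DLL is loaded as its own module, so the two "foo" symbols
       never collide. */
    HMODULE hn = LoadLibraryA("nfoo.dll");
    HMODULE hx = LoadLibraryA("xfoo.dll");
    if (!hn || !hx) { fprintf(stderr, "load failed\n"); return 1; }

    foo_fn fnfoo = (foo_fn)GetProcAddress(hn, "foo");
    foo_fn fxfoo = (foo_fn)GetProcAddress(hx, "foo");
    if (fnfoo) fnfoo();
    if (fxfoo) fxfoo();

    FreeLibrary(hn);
    FreeLibrary(hx);
    return 0;
}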
First things first: if you are going to be wrapping up a C API in C++ code, you should hide that dependency behind a compilation firewall. This is to (1) avoid polluting the global namespace with the names from the C API, and (2) free the user code from the dependency on the third-party headers. In this example, a rather trivial modification can be done to isolate the dependency on the C APIs. You should do this:
// In NFoo.h:
#include "Foo.h"
class NFoo : public Foo
{
public:
virtual void foo(void);
};
// In NFoo.cpp:
#include "NFoo.h"
#include <nfoo.h>
void NFoo::foo(void) {
::foo(); // calling foo from the nfoo C-API
}
The point of the above is that the C API header, <nfoo.h>, is only included in the cpp file, not in the header file. This means that user code will not need the C API headers in order to compile code that uses your library, nor will the global namespace names from the C API risk clashing with anything else being compiled. Also, if your C API (or any other external dependency, for that matter) requires creating a number of things (e.g., handles, objects, etc.) when using the API, then you can also wrap them in a PImpl (a pointer to a forward-declared implementation class that is only declared and defined in the cpp file) to achieve the same isolation of the external dependency (i.e., a "compilation firewall").
Now, that the basic stuff is out of the way, we can move to the issue at hand: simultaneously linking to two C APIs with name-clashing symbols. This is a problem and there is no easy way out. The compilation firewall technique above is really about isolating and minimizing dependencies during compilation, and by that, you could easily compile code that depends on two APIs with conflicting names (which isn't true in your version), however, you will still be hit hard with ODR (One Definition Rule) errors when reaching the linking phase.
This thread has a few useful tricks to resolving C API name conflicts. In summary, you have the following choices:
If you have access to static libraries (or object files) for at least one of the two C APIs, then you can use a utility like objcopy (in Unix/Linux) to add a prefix to all the symbols in that static library (object files), e.g., with the command objcopy --prefix-symbols=libn_ libn.o to prefix all the symbols in libn.o with libn_. Of course, this implies that you will need to add the same prefix to the declarations in the API's header file(s) (or make a reduced version with only what you need), but this is not a problem from a maintenance perspective as long as you have a proper compilation firewall in place for that external dependency.
If you don't have access to static libraries (or object files) or don't want to use the above (somewhat troublesome) approach, you will have to go with a dynamic library. However, this isn't as trivial as it sounds (and I'm not even gonna go into the topic of DLL Hell). You must use dynamic loading of the dynamic-link library (or shared-object file), as opposed to the more usual static loading. That is, you must use LoadLibrary / GetProcAddress / FreeLibrary (on Windows) or dlopen / dlsym / dlclose (on Unix-like OSes). This means that you have to individually load and set the function-pointer address for each function that you wish to use. Again, if the dependencies are properly isolated in the code, this is going to be just a matter of writing all this repetitive code; there is not much danger involved here.
If your use of the C APIs is much simpler than the C APIs themselves (i.e., you use only a few functions out of hundreds), it might be a lot easier to create two dynamic libraries, one for each C API, each exporting only the limited subset of functions you need, under unique names, wrapping calls to the underlying C API. Then your main application or library can link to those two dynamic libraries directly (statically loaded). Of course, if you need to do that for all the functions in the C API, then there is no point in going through all this trouble.
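For example, the wrapper library for the nfoo side might be no more than this (built as its own shared library and linked against libnfoo; the exported name is invented):

// nfoo_wrapper.cpp -- compile into libnfoo_wrapper.so / nfoo_wrapper.dll,
// linking it against libnfoo; only nfoo_foo is exported.
#include <nfoo.h>

extern "C" void nfoo_foo(void)
{
    ::foo();  // the clashing symbol stays private to this wrapper library
}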
So, choose whatever seems more reasonable or feasible for you; there is no doubt that it will require quite a bit of manual work to fix this up.
If you only want to access one library implementation at a time, a natural way to go about it is as a dynamic library.
On Windows that also works for accessing two or more library implementations at a time, because Windows dynamic libraries provide total encapsulation of whatever's inside.
IIUC, ifdef is what you need. Put #define _NFOO in the nfoo lib and #define XFOO in the xfoo lib.
Also remember that if the nfoo lib and the xfoo lib both have a function called foo, there will be an error during compilation. To avoid this, GCC/G++ uses function overloading through name mangling.
You can then check whether xfoo is linked using ifdefs:
#ifdef XFOO
//call xfoo's foo()
#endif
A linker cannot distinguish between two different definitions of the same symbol name, so if you're trying to use two functions with the same name you'll have to separate them somehow.
The way to separate them is to put them in dynamic libraries. You can choose which things to export from a dynamic library, so you can export the wrappers while leaving the underlying API functions hidden. You can also load the dynamic library at runtime and bind to symbols one at a time, so even if the same name is defined in more than one library, they won't interfere with each other.
I am trying to write something in c++ with an architecture like:
App --> Core (.so) <-- Plugins (.so's)
for linux, mac and windows. The Core is implicitly linked to App and Plugins are explicitly linked with dlopen/LoadLibrary to App. The problem I have:
static variables in Core are duplicated at run-time: Plugins and App have different copies of them.
at least on Mac, when a Plugin returns a pointer to App, dynamically casting that pointer in App always results in NULL.
Can anyone give me some explanations and instructions for the different platforms, please? I know this may seem lazy to ask them all here, but I really cannot find a systematic answer to this question.
What I did in the entry_point.cpp for a plugin:
#include "raw_space.hpp"
#include <gamustard/gamustard.hpp>
using namespace Gamustard;
using namespace std;
namespace
{
struct GAMUSTARD_PUBLIC_API RawSpacePlugin : public Plugin
{
RawSpacePlugin(void):identifier_("com.gamustard.engine.space.RawSpacePlugin")
{
}
virtual string const& getIdentifier(void) const
{
return identifier_;
}
virtual SmartPtr<Object> createObject(std::string const& name) const
{
if(name == "RawSpace")
{
Object* obj = NEW_EX RawSpaceImp::RawSpace;
Space* space = dynamic_cast<Space*>(obj);
Log::instance().log(Log::LOG_DEBUG, "createObject: %x -> %x.", obj, space);
return SmartPtr<Object>(obj);
}
return SmartPtr<Object>();
}
private:
string identifier_;
};
SmartPtr<Plugin> __plugin__;
}
extern "C"
{
int GAMUSTARD_PUBLIC_API gamustardDLLStart(void) throw()
{
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStart");
__plugin__.reset(NEW_EX RawSpacePlugin);
PluginManager::instance().install(weaken(__plugin__));
return 0;
}
int GAMUSTARD_PUBLIC_API gamustardDLLStop(void) throw()
{
PluginManager::instance().uninstall(weaken(__plugin__));
__plugin__.reset();
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStop");
return 0;
}
}
Some Background
Shared libraries in C++ are quite difficult because the standard says nothing about them. This means that every platform has a different way of doing them. If we restrict ourselves to Windows and some *nix variant (anything ELF), the differences are subtle. The first difference is Shared Object Visibility. It is highly recommended that you read that article so you get a good overview of what visibility attributes are and what they do for you, which will help save you from linker errors.
Anyway, you'll end up with something that looks like this (for compiling with many systems):
#if defined(_MSC_VER)
# define DLL_EXPORT __declspec(dllexport)
# define DLL_IMPORT __declspec(dllimport)
#elif defined(__GNUC__)
# define DLL_EXPORT __attribute__((visibility("default")))
# define DLL_IMPORT
# if __GNUC__ >= 4
# define DLL_LOCAL __attribute__((visibility("hidden")))
# else
# define DLL_LOCAL
# endif
#else
# error("Don't know how to export shared object libraries")
#endif
Next, you'll want to make some shared header (standard.h?) and put a nice little #ifdef thing in it:
#ifdef MY_LIBRARY_COMPILE
# define MY_LIBRARY_PUBLIC DLL_EXPORT
#else
# define MY_LIBRARY_PUBLIC DLL_IMPORT
#endif
This lets you mark classes, functions and whatever like this:
class MY_LIBRARY_PUBLIC MyClass
{
// ...
}
MY_LIBRARY_PUBLIC int32_t MyFunction();
This tells the compiler and linker how these symbols should be exported from, or imported into, each module.
Now: To the actual point!
If you're sharing constants across libraries, then you actually should not care if they are duplicated, since your constants should be small and duplication allows for much optimization (which is good). However, since you appear to be working with non-constants, the situation is a little different. There are a billion patterns to make a cross-library singleton in C++, but I naturally like my way the best.
In some header file, let's assume you want to share an integer, so you would have in myfuncts.h:
#ifndef MY_FUNCTS_H__
#define MY_FUNCTS_H__
// include the standard header, which has the MY_LIBRARY_PUBLIC definition
#include "standard.h"
// Notice that it is a reference
MY_LIBRARY_PUBLIC int& GetSingleInt();
#endif//MY_FUNCTS_H__
Then, in the myfuncts.cpp file, you would have:
#include "myfuncs.h"
int& GetSingleInt()
{
// keep the actual value as static to this function
static int s_value(0);
// but return a reference so that everybody can use it
return s_value;
}
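Every module that links against the library then goes through the one exported accessor, and therefore sees the same object; for example:

#include "myfuncts.h"

void setFromLibraryA() { GetSingleInt() = 42; }    // writes the shared value
int readFromLibraryB() { return GetSingleInt(); }  // reads 42, same object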
Dealing with templates
C++ has super-powerful templates, which is great. However, pushing templates across libraries can be really painful. When a compiler sees a template, it is the message to "fill in whatever you need to make this work," which is perfectly fine if you only have one final target. However, it can become an issue when you're working with multiple dynamic shared objects, since they could theoretically all be compiled with different versions of different compilers, each of which thinks that its own template fill-in-the-blanks method is correct (and who are we to argue; it's not defined in the standard). This means that templates can be a huge pain, but you do have some options.
Don't allow different compilers.
Pick one compiler (per operating system) and stick to it. Only support that compiler and require that all libraries be compiled with that same compiler. This is actually a really neat solution (that totally works).
Don't use templates in exported functions/classes
Only use template functions and classes when you're working internally. This does save a lot of hassle, but overall is quite restrictive. Personally, I like using templates.
Force exporting of templates and hope for the best
This works surprisingly well (especially when paired with not allowing different compilers).
Add this to standard.h:
#ifdef MY_LIBRARY_COMPILE
#define MY_LIBRARY_EXTERN
#else
#define MY_LIBRARY_EXTERN extern
#endif
And in some consuming class definition (before you declare the class itself):
// force exporting of templates
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::allocator<int>;
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::vector<int, std::allocator<int> >;
class MY_LIBRARY_PUBLIC MyObject
{
private:
std::vector<int> m_vector;
};
This is almost completely perfect... the compiler won't yell at you and life will be good, unless your compiler starts changing the way it fills in templates and you recompile one of the libraries but not the other (and even then, it might still work... sometimes).
Keep in mind that if you're using things like partial template specialization (or type traits, or any of the more advanced template metaprogramming stuff), you must make sure that the producer and all its consumers see the same template specializations. That is, if you have a specialized implementation of vector<T> for ints or whatever, and the producer sees it but the consumer does not, the consumer will happily create the wrong type of vector<T>, which will cause all sorts of really screwed-up bugs. So be very careful.