Classes and static variables in shared libraries - C++

I am trying to write something in C++ with an architecture like:
App --> Core (.so) <-- Plugins (.so's)
for Linux, Mac and Windows. The Core is implicitly linked to the App, and Plugins are loaded at runtime by the App with dlopen/LoadLibrary. The problems I have:
static variables in Core are duplicated at run-time -- Plugins and App have different copies of them.
at least on Mac, when a Plugin returns a pointer to the App, dynamic casting that pointer in the App always results in NULL.
Can anyone give me some explanations and instructions for the different platforms, please? I know this may seem lazy to ask them all here, but I really cannot find a systematic answer to this question.
What I did in the entry_point.cpp for a plugin:
#include "raw_space.hpp"
#include <gamustard/gamustard.hpp>
using namespace Gamustard;
using namespace std;
namespace
{
struct GAMUSTARD_PUBLIC_API RawSpacePlugin : public Plugin
{
RawSpacePlugin(void):identifier_("com.gamustard.engine.space.RawSpacePlugin")
{
}
virtual string const& getIdentifier(void) const
{
return identifier_;
}
virtual SmartPtr<Object> createObject(std::string const& name) const
{
if(name == "RawSpace")
{
Object* obj = NEW_EX RawSpaceImp::RawSpace;
Space* space = dynamic_cast<Space*>(obj);
Log::instance().log(Log::LOG_DEBUG, "createObject: %x -> %x.", obj, space);
return SmartPtr<Object>(obj);
}
return SmartPtr<Object>();
}
private:
string identifier_;
};
SmartPtr<Plugin> __plugin__;
}
extern "C"
{
int GAMUSTARD_PUBLIC_API gamustardDLLStart(void) throw()
{
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStart");
__plugin__.reset(NEW_EX RawSpacePlugin);
PluginManager::instance().install(weaken(__plugin__));
return 0;
}
int GAMUSTARD_PUBLIC_API gamustardDLLStop(void) throw()
{
PluginManager::instance().uninstall(weaken(__plugin__));
__plugin__.reset();
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStop");
return 0;
}
}

Some Background
Shared libraries in C++ are quite difficult because the standard says nothing about them, which means every platform does them differently. If we restrict ourselves to Windows and some *nix variant (anything ELF), the differences are subtle. The first difference is shared object symbol visibility; it is well worth reading up on what visibility attributes are and what they do for you, which will help save you from linker errors.
Anyway, you'll end up with something that looks like this (for compiling with many systems):
#if defined(_MSC_VER)
#  define DLL_EXPORT __declspec(dllexport)
#  define DLL_IMPORT __declspec(dllimport)
#elif defined(__GNUC__)
#  define DLL_EXPORT __attribute__((visibility("default")))
#  define DLL_IMPORT
#  if __GNUC__ >= 4  // visibility attributes are available from GCC 4 onward
#    define DLL_LOCAL __attribute__((visibility("hidden")))
#  else
#    define DLL_LOCAL
#  endif
#else
#  error("Don't know how to export shared object libraries")
#endif
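A side note on the GCC/Clang case (general toolchain knowledge, not something from the original question): the visibility attributes pay off most when the default visibility is hidden, which you typically request on the command line when building the shared object:
g++ -fvisibility=hidden -fPIC -shared core.cpp -o libcore.so
With that flag, only symbols explicitly marked DLL_EXPORT remain visible, which keeps the library's symbol table small and avoids accidental exports.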
Next, you'll want to make some shared header (standard.h?) and put a nice little #ifdef thing in it:
#ifdef MY_LIBRARY_COMPILE
# define MY_LIBRARY_PUBLIC DLL_EXPORT
#else
# define MY_LIBRARY_PUBLIC DLL_IMPORT
#endif
This lets you mark classes, functions and whatever like this:
class MY_LIBRARY_PUBLIC MyClass
{
    // ...
};
MY_LIBRARY_PUBLIC int32_t MyFunction();
This tells the compiler and linker which symbols to export from the library and which to import into its consumers.
Now: To the actual point!
If you're sharing constants across libraries, then you actually should not care if they are duplicated, since your constants should be small and duplication allows for much optimization (which is good). However, since you appear to be working with non-constants, the situation is a little different. There are a billion patterns to make a cross-library singleton in C++, but I naturally like my way the best.
In some header file, let's assume you want to share an integer; you would have in myfuncts.h:
#ifndef MY_FUNCTS_H__
#define MY_FUNCTS_H__
// include the standard header, which has the MY_LIBRARY_PUBLIC definition
#include "standard.h"
// Notice that it is a reference
MY_LIBRARY_PUBLIC int& GetSingleInt();
#endif//MY_FUNCTS_H__
Then, in the myfuncts.cpp file, you would have:
#include "myfuncs.h"
int& GetSingleInt()
{
// keep the actual value as static to this function
static int s_value(0);
// but return a reference so that everybody can use it
return s_value;
}
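To make the sharing concrete, here is a minimal consumer sketch (the file name and the printed value are illustrative): because the App and every plugin all call the single exported GetSingleInt, they all observe the same object instead of per-module copies.
// app.cpp -- links against the library that exports GetSingleInt
#include <iostream>
#include "myfuncts.h"

int main()
{
    GetSingleInt() = 42;                  // write through the shared reference
    std::cout << GetSingleInt() << "\n";  // prints 42; a plugin calling
                                          // GetSingleInt() sees the same 42
    return 0;
}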
Dealing with templates
C++ has super-powerful templates, which is great. However, pushing templates across libraries can be really painful. When a compiler sees a template, it takes it as an instruction to "fill in whatever is needed to make this work," which is perfectly fine if you only have one final target. However, it can become an issue when you're working with multiple dynamic shared objects, since they could theoretically all be compiled with different versions of different compilers, each of which thinks that its own fill-in-the-blanks method is correct (and who are we to argue -- it's not defined in the standard). This means that templates can be a huge pain, but you do have some options.
Don't allow different compilers.
Pick one compiler (per operating system) and stick to it. Only support that compiler and require that all libraries be compiled with that same compiler. This is actually a really neat solution (that totally works).
Don't use templates in exported functions/classes
Only use template functions and classes when you're working internally. This does save a lot of hassle, but overall is quite restrictive. Personally, I like using templates.
Force exporting of templates and hope for the best
This works surprisingly well (especially when paired with not allowing different compilers).
Add this to standard.h:
#ifdef MY_LIBRARY_COMPILE
#define MY_LIBRARY_EXTERN
#else
#define MY_LIBRARY_EXTERN extern
#endif
And in some consuming class definition (before you declare the class itself):
// force exporting of templates
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::allocator<int>;
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::vector<int, std::allocator<int> >;
class MY_LIBRARY_PUBLIC MyObject
{
private:
    std::vector<int> m_vector;
};
This is almost completely perfect...the compiler won't yell at you and life will be good, unless your compiler starts changing the way it fills in templates and you recompile one of the libraries and not the other (and even then, it might still work...sometimes).
Keep in mind that if you're using things like partial template specialization (or type traits or any of the more advanced template metaprogramming stuff), you must make sure the producer and all of its consumers see the same template specializations. That is, if the producer sees a specialized implementation of vector<T> for int but a consumer does not, the consumer will happily instantiate the wrong kind of vector<T>, which will cause all sorts of really screwed-up bugs. So be very careful.
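A minimal sketch of that hazard (the header and the trait are invented for illustration):
// shared_traits.hpp as the producer sees it:
template <typename T> struct Traits { static const int id = 0; };
template <>           struct Traits<int> { static const int id = 1; };

// If a consumer is built against an older shared_traits.hpp without the int
// specialization, Traits<int>::id is 1 in one binary and 0 in the other.
// That is an ODR violation, and neither compiler nor linker will warn you.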

Related

Activating blocks of code with [#define & #ifdef] cross-file

I don't know what this concept is called, so the title may sound weird. Imagine the following scenario:
main.cpp:
#define SOME_KEYWORD

int main()
{
    foo();
    return 0;
}
other.cpp:
void foo()
{
    // Do some stuff
#ifdef SOME_KEYWORD
    // Do some additional stuff
#endif
}
I've tried it out, and it doesn't work if the #define is present in the other file. Is there a way around this? (I'd rather not modify function parameters just to achieve this, since it will only be present at development time and the functions can be many layers of abstraction away.)
And I guess this is a C way to do things; I don't know whether it would be considered good practice in C++. If not, what are the alternative ways?
In C++, from C++17 onward, constexpr if is a good way to go about doing this, e.g. in some header file:
// header.hpp
#pragma once
constexpr bool choice = true; // or false, if you don't want to compile some additional stuff
and in an implementation file:
#include "header.hpp"
void foo()
{
    // Do some stuff
    if constexpr(choice)
    {
        // Do some additional stuff
    }
}
Note that this is not a drop-in replacement for #define: the discarded branch must still parse, and outside a template it is still fully checked by the compiler, so it cannot hide code that would not otherwise compile. But it works in many cases.
A preprocessor symbol defined in one translation unit is not visible in a different translation unit. As suggested in a comment, you can define it in a header and then include that where needed (it's not a keyword, so I chose a better name):
// defines.h
#define SOME_SYMBOL
// other.cpp
#include "defines.h
Conditional compilation via preprocessor macros has some uses, e.g. conditionally compiling platform-specific code or excluding debug code from release builds. For anything else I would not use it, because when overused it can create a big mess and is error-prone (e.g. it is too easy to forget to include defines.h). Consider making foo a template:
template <bool SOME_FLAG>
void foo()
{
    // Do some stuff
    if constexpr (SOME_FLAG) {
        // Do some additional stuff
    }
}
And if you still want to make use of the preprocessor, this allows you to concentrate usage of macros to a single location:
// main.cpp
#define SOME_SYMBOL

#ifdef SOME_SYMBOL
constexpr bool flag = true;
#else
constexpr bool flag = false;
#endif

int main()
{
    foo<flag>();
    return 0;
}
I don't know what this concept is called
Generally, pre-processing. More specifically, the pre-processor is used here to conditionally compile the program.
This is a common technique, used to create portable interfaces over platform-specific ones. Sometimes it is used to enable or suppress debugging features.
I've tried it out and it doesn't work if #define is present in other file.
Macros only affect the translation unit in which they are defined.
Is there a way around this?
Define the macro in all of the files where you use it. Typically, this is achieved by including the definition from a header, or by specifying a compiler option.
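For the compiler-option route, the flag is -D with GCC/Clang (and /D with MSVC); for example:
g++ -DSOME_SYMBOL main.cpp other.cpp
defines SOME_SYMBOL in every translation unit named on the command line, with no header needed.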
And I guess this is a C way to do things; I don't know whether it would be considered good practice in C++. If not, what are the alternative ways?
There is no complete alternative in C++. In some cases macros can be replaced by, or combined with, templates and if constexpr.

Feature flags / toggles when artifact is a library and flags affect C or C++ headers

There exist quite a few discussions of feature flags/toggles and why you would use them, but most of the discussion on implementing them centers around (web or client) apps. If your product/artifact is a C or C++ library and your public headers are affected by the flags, how would you implement them?
The "naive" way of doing it doesn't really work:
/// Does something
/**
 * Does something really cool
#ifdef FEATURE_FOO
 * @param fooParam describe param for foo
#endif
 */
void doSomethingCool(
#ifdef FEATURE_FOO
    int fooParam = 42
#endif
);
You wouldn't want to ship something like this.
- Your shipped library was built for one particular feature-flag combination; clients shouldn't need to #define the same flags to make things work.
- The ifdefs in your public header are ugly.
- Most importantly, if you disable a flag, you don't want clients to see anything about the disabled feature -- maybe it is something upcoming that you don't want to reveal until it is ready.
Running the preprocessor on the file to get the header for distribution doesn't really work because that would not only act on feature flags but also do everything else the preprocessor does.
What would be a technical solution to this that doesn't have these flaws?
This kind of goo ends up in a codebase due to versioning, a broad topic with very few happy answers. But you certainly want to avoid making it more difficult than it needs to be. Focus on the kind of compatibility you want to provide.
The syntax proposed in the snippet is only required when you need binary compatibility. It keeps the library compatible with a doSomethingCool() call in the client code (passing no argument) without having to recompile that client code. In other words, the client programmer does nothing at all beyond copying the updated .dll or .so file; he does not need any updated headers, and it is entirely your burden to get the feature flags right. Binary compatibility is pretty difficult to pull off reliably; beyond the flag wrangling, it is easy to make a mistake.
But what you are actually talking about is source compatibility: you provide the user with an updated header, and he rebuilds his code to use the library update. In that case you don't need the feature flag at all; the C++ compiler by itself ensures that an argument is passed, and it will be 42. No flag required, on either your end or the user's.
Another way to do it is by providing an overload: in other words, both a doSomethingCool() and a doSomethingCool(int) function. The client programmer keeps using the original overload until he's ready to move ahead. You would also favor an overload when the function body has to change too much. If these functions are not virtual, this even provides link compatibility, which could be useful in select cases. No feature flags required.
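A sketch of the overload approach (reusing the 42 default from the question's snippet; the forwarding shown here is one possible arrangement, not the only one):
void doSomethingCool(int fooParam);   // new, parameterized entry point

inline void doSomethingCool()         // original signature, kept for old clients
{
    doSomethingCool(42);              // preserves the old behaviour
}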
I'd say it's a relatively broad question, but I'll throw in my two cents.
First, you really want to separate the public headers from the implementation (source and internal headers, if any). The public header that gets installed (e.g. at /usr/include) should contain the function declarations and, preferably, a constant to inform the client whether the library has a certain feature compiled in or not, like so:
#define FEATURE_FOO 1
void doSomethingCool();
Such a header is generally generated. Autotools is the de facto standard tool for this purpose on GNU/Linux; otherwise you can write your own scripts.
For completeness, in the .c file you would then have the corresponding guarded signature:
void doSomethingCool(
#ifdef FEATURE_FOO
    int fooParam = 42
#endif
);
It's also up to your distribution tools to keep the installed headers and library binaries in sync.
Use forward declarations and hide the implementation by using a pointer (the Pimpl idiom):
// Foo.hpp
class Foo {
public:
    //...
private:
    struct Impl;
    Impl* _impl;
};

// Foo.cpp
struct Foo::Impl {
    // stuff
};
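The quoted snippet omits lifetime management; assuming Foo owns its Impl, you would also declare a constructor and destructor in Foo.hpp and define them in Foo.cpp, roughly:
Foo::Foo() : _impl(new Impl) {}    // allocate the hidden state
Foo::~Foo() { delete _impl; }      // release it; clients never see Impl's layout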
Binary compatibility is not a forte of C++; it probably isn't worth considering.
For C, you might construct something like an interface class, so that your first touch with the library is something like:
struct kv {
    char *tag;
    int   val;
};

int Bind(struct kv *compat, void **funcs, void **stamp);
and your access to the library is now:
#define MyStrcpy(src, dest) (funcs->mystrcpy((stamp)(src),(dest)))
The contract is that Bind provides/constructs an appropriate (funcs, stamp) pair for the attribute set you provided, or fails if it cannot. Note that Bind is the only bit that has to know about the multiple layouts of *funcs and *stamp, so it can transparently provide a robust interface for this reduced version of the problem.
If you wanted to get really fancy, you might be able to achieve the same by re-writing the PLT that the dlopen/dlsym prepare for you, but:
- You are grossly expanding your attack surface.
- You are adding a lot of complexity for very little gain.
- You are adding platform/architecture specific code where none is warranted.
A few downsides remain. You have to invoke Bind before any part of your program/library attempts to use it. Attempts to solve that lead straight to hell (see: finding C++ static initialization order problems), which must make N. Wirth smile. If you get too clever with your Bind(), you will wish you hadn't. You might also want to be careful about re-entrancy, since a given client might Bind multiple times for different attribute sets (users are such a pain).
That's how I would manage this in pure C.
First of all, the features: I would pack them into a single unsigned integer, 32 or 64 bits long, to keep them as compact as possible.
Second step: a private header to use only when compiling the library, where I would define a macro that creates both the API function wrapper and the internal function:
#define CoolFeature1 0x00000001  // code the value as 0 to disable a feature
#define CoolFeature2 0x00000010
#define CoolFeature3 0x00000100
....  // Other features
#define Cool (CoolFeature1 | CoolFeature2 | CoolFeature3 | ... | CoolFeature_n)

#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__) \
    { return Internal_##fname(Cool, __VA_ARGS__); } \
    ret Internal_##fname(unsigned long Cool, __VA_ARGS__)

#include "user_header.h"  // Include the standard user header, which has no reference to Cool features
Now we have a wrapper with a standard prototype that will be available in the user definition header, and an internal version which keeps an additional flag group specifying the optional features.
When coding using the macro you can write:
ImplementApi(int, MyCoolFunction, int param1, float param2, ...)
{
    // Your code goes here
    if (Cool & CoolFeature2)
    {
        // Do something cool
    }
    else
    {
        // Flat life ...
    }
    ...
    return 0;
}
In the case above you'll get 2 definitions:
int Internal_MyCoolFunction(unsigned long Cool, int param1, float param2, ...);
int MyCoolFunction(int param1, float param2, ...)
For the API function, you can also add export attributes to the macro if you're distributing a dynamic library.
You can even use the same definition header if the ImplementApi macro is defined on the compiler command line; in that case the following simple definition in the header will do:
#define ImplementApi(ret, fname, ...) ret fname(__VA_ARGS__);
The latter will generate only the exported API prototypes.
This suggestion, of course, is not exhaustive. There are a lot more adjustments you can make to render the definitions more elegant and automatic, e.g. including a sub-header with a function list to create only the API prototypes for users, and both the internal and API prototypes for developers.
Why are you using defines for feature flags? Feature flags are supposed to let you turn features on and off at runtime, not at compile time.
In the code you would then split out the implementation as early as possible, using interfaces and concrete classes that are chosen based on the feature flag.
If users of the header files aren't supposed to be able to access the feature flags, then create header files that you don't distribute and that are only included in the implementation .c/.cpp files. You can then flip the flags in the private headers when you compile the library they link to.
If you are keeping features internal until you are ready to release, you can then move the feature flag into the public header, or just remove it entirely and switch to the new implementation.
Sloppy example if you want this at compile time:
public_class.h
class Thing
{
public:
    void DoSomething();
};
private_class_feature1.h
#define USE_FEATURE_1

class NewFeatureImpl
{
public:
    static void CoolNewWay1();
};
public_class.cpp
#include "public_class.h"
#include "private_class_feature1.h"

void Thing::DoSomething()
{
#ifdef USE_FEATURE_1
    NewFeatureImpl::CoolNewWay1();
#else
    // Regular impl
#endif
}

Static Library with extern #define and typedef struct

I am trying to make a static library, where certain aspects of the library can be defined externally (outside the compiled library code).
For function definitions, I can compile the library without issue using extern void foo() declarations in the library, and then define the contents of foo() in the code referencing the static library.
I also want to make some #define values and typedef structs in the static library editable externally.
If I remove the #define or typedef struct declarations, then I am unable to compile the library.
All attempts at using extern also fail.
Is this possible? If so, how do I do it?
#defines are handled at compile time, so you can't make these editable outside the (compiled) library.
typedefs and structs define a memory layout, and the offsets into these data types are baked into the compiled code at compile time so that member accesses land in the right place; they therefore also can't be edited outside the (compiled) library.
You can, though, pass the library functions void * pointers to your data structures, along with function pointers that handle these externally defined data types. For example:
void genericSort(void *ArrayToSort, int (*cmp)(void *, void *));
Here, you pass the library function an array to sort and a function to compare two elements, without the library knowing anything about what this array holds.
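A hypothetical usage sketch (note that the declaration above omits the element count and size, which a real genericSort would also need):
// Application-defined comparison -- the library stays type-agnostic.
int compareInts(void *a, void *b)
{
    return *(int *)a - *(int *)b;
}

int values[] = { 3, 1, 2 };
genericSort(values, compareInts);   // the library sorts via the callback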
For the #define values, you can declare these in your library's header file as external constants, similar to the functions.
extern const int LIBRARY_USERS_VALUE;
This forces the application code to define the constant itself, which it can do with a #define as well.
// Value used by the library, and elsewhere in this code.
#define ARBITRARY_NUMBER 69
// Define the constant declared in the library.
const int LIBRARY_USERS_VALUE = ARBITRARY_NUMBER;
As has been mentioned elsewhere, the struct and typedef cases are a little trickier. However, you can separate these into the bits required by the library and the bits used by the application. One commonly used technique is to define a header required by the library that also has a 'generic' marker at the end, which the application can fill in.
// Declare a type that points to a named, but undefined
// structure that the application code must provide.
typedef struct user_struct_tag* user_struct_pointer;

// Declare a type for a library structure, that refers to
// application data using the pointer to the undefined struct.
typedef struct
{
    int userDataItemSize;
    int userDataItemCount;
    user_struct_pointer userDataPointer;
} library_type;
The application code then has to declare the structure (with the tag) itself.
// Define the structure referred to by the library in the application.
struct user_struct_tag
{
    int  dataLength;
    char dataString[32];
};

// And typedef it if you need to.
typedef struct user_struct_tag user_data_type;
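Crucially, the library can store and pass user_struct_pointer values without ever dereferencing them, because the type is incomplete on the library side. A hypothetical library function:
// Library code: holds the application's data without knowing its layout.
void library_attach(library_type *lib, user_struct_pointer data,
                    int itemSize, int itemCount)
{
    lib->userDataPointer   = data;      // stored, never dereferenced here
    lib->userDataItemSize  = itemSize;
    lib->userDataItemCount = itemCount;
}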
There are lots of other similar methods you can use, provided that your library has no need to know about the structure of the data in the application code. If it does, then the declaration of that structure needs to be available to the library at compile time. In those instances you would need to think about what the library is actually for, and whether you need some kind of data abstraction for passing the information, for example XML, TLV, etc.
A #define is a preprocessor symbol, so while compiling it is replaced by its value; it therefore makes no sense to make one extern in your library.
If you want something like edit access, use compile-time defines: for gcc you can use -D and -U.
For a typedef or structure definition, extern lets you tell the compiler that something will be defined in some other file, but when you build the library the type definition itself must be present. So what you want to do is not possible.

Use #ifdef in .lib and defining variable in linking project

I am creating a library (.lib) in C++ with Visual Studio 2008. I would like to set a variable to change the behaviour of the library. Simplifying a lot, something like this:
#ifdef OPTION1
i = 1;
#else
i = 0;
#endif
But the variable (in this case OPTION1) should not be defined in the library itself but in the code that links to the library, so that just by changing the definition of the variable I could obtain different behaviours from the program while always linking to the same library.
Is this possible, and how? Or is there a more elegant way to achieve what I want?
To pull this off, the code which depends on the macro must be compiled as part of the code which links to the library, not as part of the library itself. The best you could do is something like this:
In your public .h file:
namespace LibraryPrivate {
    void functionForOptionSet();
    void functionForOptionUnset();
}

#ifdef OPTION1
inline void dependentBehaviour() {
    LibraryPrivate::functionForOptionSet();
}
#else
inline void dependentBehaviour() {
    LibraryPrivate::functionForOptionUnset();
}
#endif
In your library's .cpp file:
namespace LibraryPrivate {
    void functionForOptionSet()
    { i = 1; }
    void functionForOptionUnset()
    { i = 0; }
}
That is, you have to implement both options in the library, but you can (partially) limit the interface based on the macro. It's kind of like what the WinAPI does with char vs. wchar_t functions: it provides both SomeFunctionA(char*) and SomeFunctionW(wchar_t*), and then a macro SomeFunction which expands to one of those.
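That A/W pattern, sketched generically (the idea, not the actual WinAPI headers):
void SomeFunctionA(const char *s);     // narrow-character variant
void SomeFunctionW(const wchar_t *s);  // wide-character variant

#ifdef UNICODE
#  define SomeFunction SomeFunctionW
#else
#  define SomeFunction SomeFunctionA
#endif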
The simple answer is no. Things like #ifdef are entirely processed by the compiler (and in fact, by a preprocessor phase of the compiler, before it even parses the code); a .lib file has already been compiled.
One solution would be to supply the library in source form, and let the client compile it as part of his project. This has an additional advantage in that you automatically support all versions of the compiler, with all possible combinations of compiler options. And the disadvantage that your library will be used with versions of the compiler and compiler options that you've never tested, and that possibly you cannot even test.
Otherwise, you'll need to use a variable, and ifs and ?:, rather than #ifdef. And you'll have to arrange some means of setting the variable.
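A sketch of that variable-based alternative (the names are illustrative, not from the question):
// library header
extern bool option1;         // the client sets this before calling in

// library source
bool option1 = false;        // some default

int computeI()
{
    return option1 ? 1 : 0;  // a runtime branch replaces the #ifdef
}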
Finally, if there's only one such variable, you might consider furnishing two different versions of the library: one with it set, and one without. The client then decides which one he wants to use. In many ways, this is the simplest solution, but it definitely doesn't scale: with a hundred such variables, if they're independent, you'll need 2^100 different variants, and that won't fit on any disk.

Different definitions for the same classes in C++ - handling multiple targets

My problem is that I would like to organize my code so I can have a debug and a release version of the same methods, and multiple definitions of the same methods for different target platforms.
Basically the core of the problem is the same for both: I need to have the same signature but with different definitions associated.
What is the best way to organize my code on the filesystem, and for compilation and production, so I can keep this clean and separated?
Thanks.
// #define DEBUG - we're making a non debug version
#ifdef DEBUG
// function definition for debug
#else
// function definition for release
#endif
The same can be done for different operating systems. There's of course the problem of recompiling all of it, which can be a pain in the ass in C++.
I suggest you intervene at the source level and not in the header files (just to be sure to keep the same interfaces), something like:
// Foo.h
class Foo {
public:
    void methodA();
    void methodB();
};

// Foo.cpp
// common method
void Foo::methodA() { }

#ifdef _DEBUG_
void Foo::methodB() { }
#elif defined(_PLATFORM_BAR_)
void Foo::methodB() { }
#else
void Foo::methodB() { }
#endif
If, instead, you want to keep everything separated, you will have to work at a higher level; the preprocessor is not enough to conditionally include one .cpp file instead of another. You will have to work with the makefile or whatever build system you use.
Another choice could be having source files that simply drop out of the build when not on a specific platform, e.g.:
// Foo.h
class Foo {
public:
    void methodA();
    void methodB();
};

// FooCommon.cpp
void Foo::methodA() { }

// FooDebug.cpp
#ifdef _DEBUG_
void Foo::methodB() { }
#endif

// FooRelease.cpp
#ifndef _DEBUG_
void Foo::methodB() { }
#endif
If your compiler allows, you can try keeping the source files for each version in a separate subfolder (e.g. #include "x86dbg/test.h"), then using global macro definitions to control the flow:
#define MODE_DEBUG
#ifdef MODE_DEBUG
#include "x86dbg/test.h"
#else
#include "x86rel/test.h"
#endif
You can also use a similar structure for member function definitions, so that you can have two different definitions in the same file. Many compilers predefine macros of their own as well; for example, MSVC defines _DEBUG for debug-runtime builds, so instead of the #define MODE_DEBUG above you might check such a compiler-provided macro, or define your own through a compiler flag.