When to put a whole C++ class framework into a DLL? - c++

I am about to write a C++ framework that will later be used by different C++ applications. The framework will provide one main class, which will be instantiated by the application. That main class will make use of some other classes within the framework. And there will be some helper classes, to be used directly by the application.
Now I am thinking of how I should encapsulate that class framework. I could write the header and source files as usual, and then include those in the applications that will make use of the framework, so that all will be compiled in one go together with the application.
But I am not sure whether this is "the best" approach in my case. Wouldn't it be an option to put the whole framework into a DLL and then link that DLL to the applications? However, I also read that it is often not the best idea to let a DLL export whole classes, and that this approach might lead to difficulties when using STL templates as data members.
Can you recommend an approach, maybe something else I have not mentioned above, including the pros and cons of each option?

You can create a C interface using opaque pointers, which is needed in your case because of the varying types and versions of compilers involved. Note that you may not accept or return non-C types unless you wrap them in opaque pointers as well. It's not hard, but there's a bit of work needed on your part.
Assuming a class 'YourClass', you would create a YourClassImpl.h and YourClassImpl.cpp (if needed) containing your C++ class code.
YourClassImpl.h
class YourClass
{
private:
int value = 12345;
public:
YourClass() {}
~YourClass() {}
int getThing() { return value; }
void setThing(int newValue) { value = newValue; }
};
You would then create a YourClass.h which would be your C header file (to be included by the users of your DLL), containing an opaque pointer typedef and the declarations of your C-style interface.
YourClass.h
#ifdef MAKEDLL
# define EXPORT __declspec(dllexport)
#else
# define EXPORT __declspec(dllimport)
#endif
extern "C"
{
typedef struct YourClass *YC_HANDLE;
EXPORT YC_HANDLE CreateYourClass();
EXPORT void DestroyYourClass(YC_HANDLE h);
EXPORT int YourClassGetThing(YC_HANDLE h);
EXPORT void YourClassSetThing(YC_HANDLE h, int v);
}
In YourClass.cpp you would define those functions.
YourClass.cpp
#include "YourClass.h"
#include "YourClassImpl.h"
extern "C"
{
EXPORT YC_HANDLE CreateYourClass()
{
return new YourClass{};
}
EXPORT void DestroyYourClass(YC_HANDLE h)
{
delete h;
}
EXPORT int YourClassGetThing(YC_HANDLE h)
{
return h->getThing();
}
EXPORT void YourClassSetThing(YC_HANDLE h, int v)
{
h->setThing(v);
}
}
In your users' code, they would include YourClass.h.
TheirCode.cpp
#include "YourClass.h"
int ResetValue(int newValue)
{
YC_HANDLE h = CreateYourClass();
auto v = YourClassGetThing(h);
YourClassSetThing(h, newValue);
DestroyYourClass(h);
return v;
}
The most usual way of linking to your DLL would be with LoadLibrary/GetProcAddress. I'd also advise adding a .def file to your project to ensure that the functions are named 'nicely' and are not difficult to access because of name decoration.
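For instance, a minimal module-definition file for this DLL (the library name YourClass here is an assumption) could list the four exported functions:

```
LIBRARY YourClass
EXPORTS
    CreateYourClass
    DestroyYourClass
    YourClassGetThing
    YourClassSetThing
```

With the functions listed in EXPORTS, GetProcAddress can look them up by these plain names rather than any decorated variants.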
Some issues to keep an eye out for:
Only the standard C fundamental types can be passed back and forth across the interface. Don't use any C++ specific types or classes.
PODs and arrays of PODs may be safe for you to use, but watch out for any packing or alignment issues.
Exceptions must not cross the interface boundaries - catch anything that gets thrown and convert it to a return code or equivalent.
Ensure that memory allocated on either side of the boundary is deallocated on the same side.

Related

Exporting cpp mangled functions from a 3rd party library

I have a third party library as source code. It comes with a Visual Studio project that builds a .lib from it.
I want to access the functionality from C# though, so I copied the project and changed it to create a dll.
The DLL didn't contain any exported functions though, so I also created a module definition (.def) file, and added the functions I need in the EXPORTS section.
This works for all the functions that are declared in extern "C" blocks. However, some of the functions that I need are declared outside of those blocks.
If I add those to the EXPORTS section I get the error LNK2001 unresolved external symbol XYZ :(
I don't want to change the sourcecode of the 3rd party library if I can avoid it.
What's the most elegant and hopefully simplest way to access those functions as well?
Edit
One more point for clarification: As far as I can tell there is no C++ functionality involved in the interface I want to expose. I honestly don't understand why the 3rd party authors did not just include the few remaining functions in the extern "C" blocks. These functions are at the bottom of the header file; maybe the person that added them just made a mistake and put them outside the extern "C" blocks' scope.
For C++ one way (IMHO the most elegant) would be using C++/CLI, which is designed for that. Especially if you have not only functions but also classes.
You create a thin wrapper layer which is fairly simple:
Create a CLI class
put a member instance of the original class
wrap all public methods from the original class
Like this (untested):
Native C++:
// original c++ class
class Foo {
public:
Foo(int v) : m_value(v) {}
~Foo() {}
int value() { return m_value; }
void value(int v) { m_value = v; }
private:
int m_value;
};
CLI wrapper:
// C++/CLI wrapper
ref class FooWrap
{
public:
FooWrap(int v) { m_instance = new Foo(v); }
!FooWrap() { delete m_instance; }
~FooWrap() { this->!FooWrap(); }
property int value {
int get() { return m_instance->value(); }
void set(int v) { m_instance->value(v); }
}
private:
Foo *m_instance;
};
Here you can find a short howto, which describes it in more detail.
Edit:
From your comments:
[...] I never worked with C++/CLI though and the language looks a little confusing. Since the project deadline is pretty close I'm not sure if there's enough time to learn it. I'll definitely keep this option in mind though!
If you are not looking for the most elegant way, as in your original question, but for the fastest/shortest way: For C++ one way (IMHO the shortest way) would be using C++/CLI, which is designed for that. Especially if you have not only functions but also classes. You create a thin wrapper layer which is fairly simple... Here you can find a short (in 10 min) howto, which describes it in more detail.

Static member initialization using CRTP in separate library

After digging the web, I found some reference to a powerful pattern which exploits CRTP to allow instantiation at run-time of static members:
C++: Compiling unused classes
Initialization class for other classes - C++
And so on.
The proposed approach works well, unless such class hierarchy is placed into an external library.
Doing so, run-time initialization no longer works unless I manually #include somewhere the header file of the derived classes. However, this defeats my main purpose - having the chance to add new commands to my application without the need to change other source files.
Some code, hoping it helps:
class CAction
{
protected:
// some non relevant stuff
public:
// some other public API
CAction(void) {}
virtual ~CAction(void) {}
virtual std::wstring Name() const = 0;
};
template <class TAction>
class CCRTPAction : public CAction
{
public:
static bool m_bForceRegistration;
CCRTPAction(void) { m_bForceRegistration; } // odr-use the static member so its initialization is not dropped
~CCRTPAction(void) { }
static bool init() {
CActionManager::Instance()->Add(std::shared_ptr<CAction>(new TAction));
return true;
}
};
template<class TAction> bool CCRTPAction<TAction>::m_bForceRegistration = CCRTPAction<TAction>::init();
Implementations being done this way:
class CDummyAction : public CCRTPAction<CDummyAction>
{
public:
CDummyAction() { }
~CDummyAction() { }
std::wstring Name() const { return L"Dummy"; }
};
Finally, here is the container class API:
class CActionManager
{
private:
CActionManager(void);
~CActionManager(void);
std::vector<std::shared_ptr<CAction>> m_vActions;
static CActionManager* instance;
public:
void Add(std::shared_ptr<CAction>& Action);
const std::vector<std::shared_ptr<CAction>>& AvailableActions() const;
static CActionManager* Instance() {
if (nullptr == instance) {
instance = new CActionManager();
}
return instance;
}
};
Everything works fine in a single-project solution. However, if I place the above code in a separate .lib, the magic somehow breaks and the implementation classes (CDummyAction and so on) are no longer instantiated.
I see that #include "DummyAction.h" somewhere, either in my library or in the main project makes things work, but
For our project, it is mandatory that adding Actions does not require changes in other files.
I don't really understand what's happening behind the scene, and this makes me uncomfortable. I really hate depending on solutions I don't fully master, since a bug could get out anywhere, anytime, possibly one day before shipping our software to the customer :)
Even stranger, putting the #include directive but not defining constructor/destructor in the header file still breaks the magic.
Thanks all for attention. I really hope someone is able to shed some light...
I can describe the cause of the problem; unfortunately I can't offer a solution.
The problem is that initialisation of a variable with static storage duration may be deferred until any time before the first use of something defined in the same translation unit. If your program never uses anything in the same translation unit as CCRTPAction<CDummyAction>::m_bForceRegistration, then that variable may never be initialised.
As you found, including the header in the translation unit that defines main will force it to be initialised at some point before the start of main; but of course that solution won't meet your first requirement. My usual solution to the problems of initialising static data across multiple translation units is to avoid static data altogether (and the Singleton anti-pattern doubly so, although that's the least of your problems here).
As explained in Mike's answer, the compiler determines that the static member CCRTPAction<CDummyAction>::m_bForceRegistration is never used, and therefore does not need to be initialised.
The problem you're trying to solve is to initialise a set of 'plugin' modules without having to #include their code in a central location. CRTP and templates will not help you here. I'm not aware of a (portable) way in C++ to generate code to initialise a set of plugin modules that are not referenced from main().
If you're willing to make the (reasonable) concession of having to list the plugin modules in a central location (without including their headers), there's a simple solution. I believe this is one of those extremely rare cases where a function-scope extern declaration is useful. You may consider this a dirty hack, but when there's no other way, a dirty hack becomes an elegant solution ;).
This code compiles to the main executable:
core/module.h
template<void (*init)()>
struct Module
{
Module()
{
init();
}
};
// generates: extern void initDummy(); Module<initDummy> DummyInstance
#define MODULE_INSTANCE(name) \
extern void init ## name(); \
Module<init ## name> name ## Instance
core/action.h
struct Action // an abstract action
{
};
void addAction(Action& action); // adds the abstract action to a list
main.cpp
#include "core/module.h"
int main()
{
MODULE_INSTANCE(Dummy);
}
This code implements the Dummy module and compiles to a separate library:
dummy/action.h
#include "core/action.h"
struct DummyAction : Action // a concrete action
{
};
dummy/init.cpp
#include "action.h"
void initDummy()
{
addAction(*new DummyAction());
}
If you wanted to go further (this part is not portable) you could write a separate program to generate a list of MODULE_INSTANCE calls, one for each module in your application, and output a generated header file:
generated/init.h
#include "core/module.h"
#define MODULE_INSTANCES \
MODULE_INSTANCE(Module1); \
MODULE_INSTANCE(Module2); \
MODULE_INSTANCE(Module3);
Add this as a pre-build step, and core/main.cpp becomes:
#include "generated/init.h"
int main()
{
MODULE_INSTANCES
}
If you later decide to load some or all of these modules dynamically, you can use exactly the same pattern to dynamically load, initialise and unload a dll. Please note that the following example is windows-specific, untested and does not handle errors:
core/dynamicmodule.h
struct DynamicModule
{
HMODULE dll;
DynamicModule(const char* filename, const char* init)
{
dll = LoadLibrary(filename);
FARPROC function = GetProcAddress(dll, init);
function();
}
~DynamicModule()
{
FreeLibrary(dll);
}
};
#define DYNAMICMODULE_INSTANCE(name) \
DynamicModule name ## Instance = DynamicModule(#name ".dll", "init" #name)
As Mike Seymour stated, the static template stuff will not give you the dynamic loading facilities you want. You could load your modules dynamically as plug-ins: put DLLs containing one action each into the working directory of the application and load these DLLs dynamically at run-time. This way you will not have to change your source code in order to use different or new implementations of CAction.
Some frameworks make it easy to load custom plug ins, for example Qt.

C++ handling specific impl - #ifdef vs private inheritance vs tag dispatch

I have some classes implementing some computations which I have
to optimize for different SIMD implementations e.g. Altivec and
SSE. I don't want to pollute the code with #ifdef ... #endif blocks
for each method I have to optimize so I tried a couple of other
approaches, but unfortunately I'm not very satisfied with how they turned
out, for reasons I'll try to clarify. So I'm looking for some advice
on how I could improve what I have already done.
1. Different implementation files with crude includes
I have the same header file describing the class interface with different
"pseudo" implementation files for plain C++, Altivec and SSE only for the
relevant methods:
// Algo.h
#ifndef ALGO_H_INCLUDED_
#define ALGO_H_INCLUDED_
class Algo
{
public:
Algo();
~Algo();
void process();
protected:
void computeSome();
void computeMore();
};
#endif
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
computeSome();
computeMore();
}
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
// Algo_Altivec.cpp
void Algo::computeSome()
{
}
void Algo::computeMore()
{
}
... same for the other implementation files
Pros:
the split is quite straightforward and easy to do
there is no "overhead" (don't know how to say it better) to objects of my class,
by which I mean no extra inheritance, no addition of member variables etc.
much cleaner than #ifdef-ing all over the place
Cons:
I have three additional files to maintain; I could put the Scalar
implementation in the Algo.cpp file though and end up with just two, but the
inclusion part will look and feel a bit dirtier
they are not compilable units per-se and have to be excluded from the
project structure
if I do not have the specific optimized implementation yet for let's say
SSE I would have to duplicate some code from the plain(Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed; is it even possible
to do that in the described scenario?
I do not feel any structural cohesion in the approach
2. Different implementation files with private inheritance
// Algo.h
class Algo : private AlgoImpl
{
... as before
}
// AlgoImpl.h
#ifndef ALGOIMPL_H_INCLUDED_
#define ALGOIMPL_H_INCLUDED_
class AlgoImpl
{
protected:
AlgoImpl();
~AlgoImpl();
void computeSomeImpl();
void computeMoreImpl();
};
#endif
// Algo.cpp
...
void Algo::computeSome()
{
computeSomeImpl();
}
void Algo::computeMore()
{
computeMoreImpl();
}
// Algo_SSE.cpp
AlgoImpl::AlgoImpl()
{
}
AlgoImpl::~AlgoImpl()
{
}
void AlgoImpl::computeSomeImpl()
{
}
void AlgoImpl::computeMoreImpl()
{
}
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef-ing all over the place
still there is no "overhead" to my class - EBCO should kick in
the semantic of the class is much more cleaner at least comparing to the above
that is private inheritance == is implemented in terms of
the different files are compilable, can be included in the project
and selected via the build system
Cons:
I have three additional files for maintenance
if I do not have the specific optimized implementation yet for let's say
SSE I would have to duplicate some code from the plain(Scalar) C++ implementation file
I cannot fall back to the plain C++ implementation if needed
3. Basically method 2, but with virtual functions in the AlgoImpl class. That
would allow me to overcome the duplicate implementation of plain C++ code if needed,
by providing an empty implementation in the base class and overriding it in the derived class,
although I will have to disable that behavior when I actually implement the optimized
version. Also, the virtual functions will bring some "overhead" to objects of my class.
4. A form of tag dispatching via enable_if<>
Pros:
the split is quite straightforward and easy to do
much cleaner than #ifdef-ing all over the place
still there is no "overhead" to my class
will eliminate the need for different files for different implementations
Cons:
templates will be a bit more "cryptic" and seem to bring an unnecessary
overhead(at least for some people in some contexts)
if I do not have the specific optimized implementation yet for let's say
SSE I would have to duplicate some code from the plain(Scalar) C++ implementation
I cannot fall back to the plain C++ implementation if needed
What I couldn't figure out yet for any of the variants is how to properly and
cleanly fallback to the plain C++ implementation.
Also I don't want to over-engineer things and in that respect the first variant
seems the most "KISS" like even considering the disadvantages.
You could use a policy based approach with templates kind of like the way the standard library does for allocators, comparators and the like. Each implementation has a policy class which defines computeSome() and computeMore(). Your Algo class takes a policy as a parameter and defers to its implementation.
template <class policy_t>
class algo_with_policy_t {
policy_t policy_;
public:
algo_with_policy_t() { }
~algo_with_policy_t() { }
void process()
{
policy_.computeSome();
policy_.computeMore();
}
};
struct altivec_policy_t {
void computeSome();
void computeMore();
};
struct sse_policy_t {
void computeSome();
void computeMore();
};
struct scalar_policy_t {
void computeSome();
void computeMore();
};
// let user select exact implementation
typedef algo_with_policy_t<altivec_policy_t> algo_altivec_t;
typedef algo_with_policy_t<sse_policy_t> algo_sse_t;
typedef algo_with_policy_t<scalar_policy_t> algo_scalar_t;
// let user have default implementation
typedef
#if defined(ALTIVEC)
algo_altivec_t
#elif defined(SSE)
algo_sse_t
#else
algo_scalar_t
#endif
algo_default_t;
This lets you have all the different implementations defined within the same file (like solution 1) and compiled into the same program (unlike solution 1). It has no performance overheads (unlike virtual functions). You can either select the implementation at run time or get a default implementation chosen by the compile time configuration.
template <class algo_t>
void use_algo(algo_t algo)
{
algo.process();
}
void select_algo(bool use_scalar)
{
if (!use_scalar) {
use_algo(algo_default_t());
} else {
use_algo(algo_scalar_t());
}
}
As requested in the comments, here's a summary of what I did:
Set up policy_list helper template utility
This maintains a list of policies, and gives them a "runtime check" call before calling the first suitable implementation
#include <cassert>
template <typename P, typename N=void>
struct policy_list {
static void apply() {
if (P::runtime_check()) {
P::impl();
}
else {
N::apply();
}
}
};
template <typename P>
struct policy_list<P,void> {
static void apply() {
assert(P::runtime_check());
P::impl();
}
};
Set up specific policies
These policies implement both a runtime test and an actual implementation of the algorithm in question. For my actual problem, impl took another template parameter that specified exactly what they were implementing; here, though, the example assumes there is only one thing to be implemented. The runtime tests are cached in a static bool because for some (e.g. the Altivec one I used) the test was really slow. For others (e.g. the OpenCL one) the test is actually "is this function pointer NULL?" after one attempt at setting it with dlsym().
#include <iostream>
// runtime SSE detection (That's another question!)
extern bool have_sse();
struct sse_policy {
static void impl() {
std::cout << "SSE" << std::endl;
}
static bool runtime_check() {
static bool result = have_sse();
// have_sse lives in another TU and does some cpuid asm stuff
return result;
}
};
// Runtime OpenCL detection
extern bool have_opencl();
struct opencl_policy {
static void impl() {
std::cout << "OpenCL" << std::endl;
}
static bool runtime_check() {
static bool result = have_opencl();
// have_opencl lives in another TU and does some LoadLibrary or dlopen()
return result;
}
};
struct basic_policy {
static void impl() {
std::cout << "Standard C++ policy" << std::endl;
}
static bool runtime_check() { return true; } // All implementations do this
};
Set per architecture policy_list
This trivial example sets one of two possible lists based on the ARCH_HAS_SSE preprocessor macro. You might generate this from your build script, or use a series of typedefs, or hack support for "holes" in the policy_list that might be void on some architectures, skipping straight to the next one without trying to check for support. GCC sets some preprocessor macros for you that might help, e.g. __SSE2__.
#ifdef ARCH_HAS_SSE
typedef policy_list<opencl_policy,
policy_list<sse_policy,
policy_list<basic_policy
> > > active_policy;
#else
typedef policy_list<opencl_policy,
policy_list<basic_policy
> > active_policy;
#endif
You can use this to compile multiple variants on the same platform too, e.g. an SSE and a no-SSE binary on x86.
Use the policy list
Fairly straightforward, call the apply() static method on the policy_list. Trust that it will call the impl() method on the first policy that passes the runtime test.
int main() {
active_policy::apply();
}
If you take the "per operation template" approach I mentioned earlier it might be something more like:
int main() {
Matrix m1, m2;
Vector v1;
active_policy::apply<matrix_mult_t>(m1, m2);
active_policy::apply<vector_mult_t>(m1, v1);
}
In that case you end up making your Matrix and Vector types aware of the policy_list in order that they can decide how/where to store the data. You can also use heuristics for this too, e.g. "small vector/matrix lives in main memory no matter what" and make the runtime_check() or another function test the appropriateness of a particular approach to a given implementation for a specific instance.
I also had a custom allocator for containers, which produced suitably aligned memory always on any SSE/Altivec enabled build, regardless of if the specific machine had support for Altivec. It was just easier that way, although it could be a typedef in a given policy and you always assume that the highest priority policy has the strictest allocator needs.
Example have_altivec():
I've included a sample have_altivec() implementation for completeness, simply because it's the shortest and therefore most appropriate for posting here. The x86/x86_64 CPUID one is messy because you have to support the compiler specific ways of writing inline ASM. The OpenCL one is messy because we check some of the implementation limits and extensions too.
#if HAVE_SETJMP_H && !(defined(__APPLE__) && defined(__MACH__))
jmp_buf jmpbuf;
void illegal_instruction(int sig) {
// Bad in general - https://www.securecoding.cert.org/confluence/display/seccode/SIG32-C.+Do+not+call+longjmp%28%29+from+inside+a+signal+handler
// But actually Ok on this platform in this scenario
longjmp(jmpbuf, 1);
}
#endif
bool have_altivec()
{
volatile sig_atomic_t altivec = 0;
#ifdef __APPLE__
int selectors[2] = { CTL_HW, HW_VECTORUNIT };
int hasVectorUnit = 0;
size_t length = sizeof(hasVectorUnit);
int error = sysctl(selectors, 2, &hasVectorUnit, &length, NULL, 0);
if (0 == error)
altivec = (hasVectorUnit != 0);
#elif HAVE_SETJMP_H
void (*handler) (int sig);
handler = signal(SIGILL, illegal_instruction);
if (setjmp(jmpbuf) == 0) {
asm volatile ("mtspr 256, %0\n\t" "vand %%v0, %%v0, %%v0"::"r" (-1));
altivec = 1;
}
signal(SIGILL, handler);
#endif
return altivec;
}
Conclusion
Basically you pay no penalty for platforms that can never support an implementation (the compiler generates no code for them) and only a small penalty (potentially just a very predictable by the CPU test/jmp pair if your compiler is half-decent at optimising) for platforms that could support something but don't. You pay no extra cost for platforms that the first choice implementation runs on. The details of the runtime tests vary between the technology in question.
If the virtual function overhead is acceptable, option 3 plus a few ifdefs seems a good compromise IMO. There are two variations that you could consider: one with abstract base class, and the other with the plain C implementation as the base class.
Having the C implementation as the base class lets you gradually add the vector optimized versions, falling back on the non-vectorized versions as you please, using an abstract interface would be a little cleaner to read.
Also, having separate C++ and vectorized versions of your class lets you easily write unit tests that
Ensure the vectorized code is giving the right result (it's easy to mess this up, and vector floating-point registers can have different precision than the FPU, causing different results)
Compare the performance of the C++ version vs the vectorized one. It's often good to make sure the vectorized code is actually doing any good. Compilers can generate very tight C++ code that sometimes does as well as or better than vectorized code.
Here's one with the plain-c++ implementations as the base class. Adding an abstract interface would just add a common base class to all three of these:
// Algo.h:
class Algo_Impl // Default plain C++ implementation
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Algo_SSE.h:
class Algo_Impl_SSE : public Algo_Impl // SSE implementation
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Algo_Altivec.h:
class Algo_Impl_Altivec : public Algo_Impl // Altivec implementation
{
public:
virtual void ComputeSome();
virtual void ComputeSomeMore();
...
};
// Client.cpp:
Algo_Impl *myAlgo = 0;
#ifdef SSE
myAlgo = new Algo_Impl_SSE;
#elif defined(ALTIVEC)
myAlgo = new Algo_Impl_Altivec;
#else
myAlgo = new Algo_Impl; // the plain C++ default
#endif
...
You may consider employing the adapter pattern. There are a few types of adapters and it's quite an extensible concept. Here is an interesting article, Structural Patterns: Adapter and Façade,
that discusses a matter very similar to the one in your question - the Accelerate framework as an example of the adapter pattern.
I think it is a good idea to discuss a solution at the level of design patterns, without focusing on implementation details like the C++ language. Once you decide that the adapter is the right solution for you, you can look for variants specific to your implementation. For example, in the C++ world there is a known adapter variant called the generic adapter pattern.
This isn't really a whole answer: just a variant on one of your existing options. In option 1 you've assumed that you include algo_altivec.cpp &c. into algo.cpp, but you don't have to do this. You could omit algo.cpp entirely, and have your build system decide which of algo_altivec.cpp, algo_sse.cpp, &c. to build. You'd have to do something like this anyway whichever option you use, since each platform can't compile every implementation; my suggestion is only that whichever option you choose, instead of having #if ALTIVEC_ENABLED everywhere in the source, where ALTIVEC_ENABLED is set from the build system, you just have the build system decide directly whether to compile algo_altivec.cpp .
This is a bit trickier to achieve in MSVC than in make, scons, &c., but still possible. It's commonplace to switch in a whole directory rather than individual source files; that is, instead of algo_altivec.cpp and friends, you'd have platform/altivec/algo.cpp, platform/sse/algo.cpp, and so on. This way, when you have a second algorithm you need platform-specific implementations for, you can just add the extra source file to each directory.
Although my suggestion's mainly intended to be a variant of option 1, you can combine this with any of your options, to let you decide in the build system and at runtime which options to offer. In that case, though, you'll probably need implementation-specific header files too.
In order to hide the implementation details you may just use an abstract interface with static creator and provide three 3 implementation classes:
// --------------------- Algo.h ---------------------
#pragma once
typedef boost::shared_ptr<class Algo> AlgoPtr;
class Algo
{
public:
static AlgoPtr Create(std::string type);
virtual ~Algo();
void process();
protected:
virtual void computeSome() = 0;
virtual void computeMore() = 0;
};
// --------------------- Algo.cpp ---------------------
class PlainAlgo: public Algo { ... };
class AltivecAlgo: public Algo { ... };
class SSEAlgo: public Algo { ... };
AlgoPtr Algo::Create(std::string type) { /* Factory implementation */ }
Please note that since the PlainAlgo, AltivecAlgo and SSEAlgo classes are defined in Algo.cpp, they are only seen from this compilation unit, and therefore the implementation details are hidden from the outside world.
Here is how one can use your class then:
AlgoPtr algo = Algo::Create("SSE");
algo->process();
It seems to me that your first strategy, with separate C++ files and #including the specific implementation, is the simplest and cleanest. I would only add some comments to your Algo.cpp indicating which methods are in the #included files.
e.g.
// Algo.cpp
#include "Algo.h"
Algo::Algo() { }
Algo::~Algo() { }
void Algo::process()
{
computeSome();
computeMore();
}
// The following methods are implemented in separate,
// platform-specific files.
// void Algo::computeSome()
// void Algo::computeMore()
#if defined(ALTIVEC)
#include "Algo_Altivec.cpp"
#elif defined(SSE)
#include "Algo_SSE.cpp"
#else
#include "Algo_Scalar.cpp"
#endif
Policy-like templates (mixins) are fine until you hit the requirement to fall back to the default implementation. That is a run-time operation and should be handled by runtime polymorphism; the Strategy pattern can handle this fine.
There's one drawback of this approach: a Strategy-like algorithm cannot be inlined. Such inlining can provide a reasonable performance improvement in rare cases. If this is an issue, you'll need to cover the higher-level logic with the Strategy.

Declaring multiple instances of a c++ class which uses C code

I have some C libraries which I would love to be able to wrap in a C++ class and create multiple, completely separate instances of. I tried doing this, but the problem is that the variables in the C code are simply shared among all C++ class instances.
I've tried making a static library and referencing that but to no avail. Does anyone know how one can do this?
Code example below: I have a C file called CCodeTest which simply adds some numbers to some variables in memory. I have a class in MathFuncsLib.cpp which uses this. I want to be able to create multiple instances of the MyMathFuncs class and have the variables in the C code be independent.
CCodeTest.h
#ifndef C_CODE_TEST_H
#define C_CODE_TEST_H
extern int aiExternIntArray[3];
#if defined(__cplusplus)
extern "C" {
#endif
#define CCODE_COUNT 3
void CCodeTest_AddToIntArray(int iIndex_);
int CCodeTest_GetInternInt(int iIndex_);
int CCodeTest_GetExternInt(int iIndex_);
#if defined(__cplusplus)
}
#endif
#endif //defined(C_CODE_TEST_H)
MathFuncsLib.cpp
#include "MathFuncsLib.h"
#include "CCodeTest.h"
using namespace std;
namespace MathFuncs
{
void MyMathFuncs::Add(int iNum_)
{
CCodeTest_AddToIntArray(iNum_);
}
void MyMathFuncs::Print(void)
{
for(int i = 0; i < CCODE_COUNT; i++)
{
printf("Intern Index %i: %i\n", i, CCodeTest_GetInternInt(i));
printf("Extern Index %i: %i\n", i, CCodeTest_GetExternInt(i));
}
}
}
Any help would be much appreciated!
You have a global variable called aiExternIntArray. That is the problem. Each instance of your C++ class operates on that array.
What you need to do is create a struct that holds an int[3] so that you can create separate instances of this type.
typedef struct
{
int aiIntArray[3];
} CodeTestStruct;
void CCodeTest_AddToIntArray(CodeTestStruct* ct, int iIndex_);
int CCodeTest_GetInternInt(CodeTestStruct* ct, int iIndex_);
int CCodeTest_GetExternInt(CodeTestStruct* ct, int iIndex_);
In C++, your class should encapsulate the CodeTestStruct.
class CodeTestClass
{
public:
void AddToIntArray(int iIndex_)
{
CCodeTest_AddToIntArray(&m_ct, iIndex_);
}
int GetInternInt(int iIndex_)
{
return CCodeTest_GetInternInt(&m_ct, iIndex_);
}
int GetExternInt(int iIndex_)
{
return CCodeTest_GetExternInt(&m_ct, iIndex_);
}
private:
CodeTestStruct m_ct;
};
Thank you for the answers - the suggestions above are all very valid.
My problem is an uncommon one as the C code I'm trying to run will be on an embedded system with very limited resources and so I am trying to minimize pointer dereferencing and state management etc...
I am using this C code in a computer exe to simulate multiple microprocessors running this code in parallel. So I've decided to create two DLLs that both share the same C source but have slightly different interfaces so that the C code, when loaded by importing the DLLs will be in a separate memory space and allow me to simulate multiple micros in one execution stream.
This is why you should never use global variables. That includes euphemisms for global variables like "singletons". There are exceptions to this rule, but they almost all involve systems-level programming and are not applicable to applications or (non-system-level) libraries.
To solve the problem, fix the C code so that it operates on an argument passed to it rather than a global variable.
If you're okay with providing your own version of the library, you can refactor out the global variables fairly trivially. This might annoy your users if the library is large enough, though.
Similarly to the above, you can use a different library or write your own.
If you don't want to do that, and the library can be dynamically linked (i.e. libfoo.so), you might be able to load the library multiple times (once for each instance), although this will guzzle memory and is likely to be unacceptably slow in any serious program.
If you have to use this version of the library and it is statically linked, then you're pretty much screwed.

Wrapping C++ class API for C consumption

I have a set of related C++ classes which must be wrapped and exported from a DLL in such a way that it can be easily consumed by C / FFI libraries. I'm looking for some "best practices" for doing this. For example, how to create and free objects, how to handle base classes, alternative solutions, etc...
Some basic guidelines I have so far is to convert methods into simple functions with an extra void* argument representing the 'this' pointer, including any destructors. Constructors can retain their original argument list, but must return a pointer representing the object. All memory should be handled via the same set of process-wide allocation and free routines, and should be hot-swappable in a sense, either via macros or otherwise.
For each public method you need a C function.
You also need an opaque pointer to represent your class in the C code.
It is simpler to just use a void*, though you could build a struct that contains a void* plus other information (for example, if you wanted to support arrays).
Fred.h
--------------------------------
#ifdef __cplusplus
class Fred
{
public:
Fred(int x,int y);
int doStuff(int p);
};
#endif
//
// C Interface.
typedef void* CFred;
//
// Need an explicit constructor and destructor.
extern "C" CFred newCFred(int x,int y);
extern "C" void delCFred(CFred);
//
// Each public method. Takes an opaque reference to the object
// that was returned from the above constructor plus the methods parameters.
extern "C" int doStuffCFred(CFred,int p);
The implementation is then trivial: convert the opaque pointer back to a Fred* and call the method.
CFred.cpp
--------------------------------
// Functions implemented in a cpp file.
// But note that they were declared above as extern "C"; this gives them
// C linkage, so they are callable from C code.
CFred newCFred(int x,int y)
{
return reinterpret_cast<void*>(new Fred(x,y));
}
void delCFred(CFred fred)
{
delete reinterpret_cast<Fred*>(fred);
}
int doStuffCFred(CFred fred,int p)
{
return reinterpret_cast<Fred*>(fred)->doStuff(p);
}
While Loki Astari's answer is very good, his sample code puts the wrapping code in the same file as the C++ class. I prefer to have the wrapping code in a separate file. Also, I think it is better style to prefix the wrapping C functions with the class name.
The following blog posts shows how to do that:
http://blog.eikke.com/index.php/ikke/2005/11/03/using_c_classes_in_c.html
I copied the essential part because the blog is abandoned and might finally vanish (credit to Ikke's Blog):
First we need a C++ class, using one header file (Test.hh)
class Test {
public:
void testfunc();
Test(int i);
private:
int testint;
};
and one implementation file (Test.cc)
#include <iostream>
#include "Test.hh"
using namespace std;
Test::Test(int i) {
this->testint = i;
}
void Test::testfunc() {
cout << "test " << this->testint << endl;
}
This is just basic C++ code.
Then we need some glue code. This code is something in-between C and C++. Again, we got one header file (TestWrapper.h, just .h as it doesn't contain any C++ code)
typedef void CTest;
#ifdef __cplusplus
extern "C" {
#endif
CTest * test_new(int i);
void test_testfunc(const CTest *t);
void test_delete(CTest *t);
#ifdef __cplusplus
}
#endif
and the function implementations (TestWrapper.cc, .cc as it contains C++ code):
#include "TestWrapper.h"
#include "Test.hh"
extern "C" {
CTest * test_new(int i) {
Test *t = new Test(i);
return (CTest *)t;
}
void test_testfunc(const CTest *test) {
Test *t = (Test *)test;
t->testfunc();
}
void test_delete(CTest *test) {
Test *t = (Test *)test;
delete t;
}
}
First, you might not need to convert all your methods to C functions. If you can simplify the API and hide some of the C++ interface, that is better, since you minimize the chance of having to change the C API when the C++ logic behind it changes.
So think of a higher-level abstraction to be provided through that API. Use the void* solution you described; it looks the most appropriate to me (or typedef void* as HANDLE :) ).
Some opinions from my experience:
functions should return codes to represent errors. It's useful to have a function that returns an error description in string form. All other return values should be out parameters.
E.g.:
C_ERROR BuildWidget(HUI ui, HWIDGET* pWidget);
put signatures into the structures/classes your handles point to, so you can check handles for validity.
E.g. your function should look like:
C_ERROR BuildWidget(HUI hui, HWIDGET* pWidget){
Ui* ui = (Ui*)hui;
if(ui->Signature != 1234)
return BAD_HUI;
// ... build the widget ...
return C_OK;
}
objects should be created and released using functions exported from the DLL, since the memory allocation method used in the DLL and in the consuming app can differ.
E.g.:
C_ERROR CreateUi(HUI* ui);
C_ERROR CloseUi(HUI hui); // usually error codes don't matter here, so may use void
if you are allocating memory for some buffer or other data that may need to persist outside of your library, provide the size of this buffer/data. This way users can save it to disk, a DB, or wherever they want, without hacking into your internals to find out its actual size. Otherwise you'll eventually need to provide your own file I/O API, which users will use only to convert your data to a byte array of known size.
E.g.:
C_ERROR CreateBitmap(HUI* ui, SIZE size, char** pBmpBuffer, int* pSize);
if your objects have some typical representation outside of your C++ library, provide a means of converting to that representation (e.g. if you have some class Image accessed via an HIMG handle, provide functions to convert it to and from e.g. a Windows HBITMAP). This will simplify integration with existing APIs.
E.g.
C_ERROR BitmapToHBITMAP(HUI* ui, char* bmpBuffer, int size, HBITMAP* phBmp);
Use vector (and string::c_str) to exchange data with non C++ APIs. (Guideline #78 from C++ Coding Standards, H. Sutter/ A. Alexandrescu).
PS: It's not quite true that "constructors can retain their original argument list". This only holds for argument types which are C-compatible.
PS2 Of course, listen to Cătălin and keep your interface as small and simple as possible.
This may be of interest: "Mixing C and C++" at the C++ FAQ Lite. Specifically [32.8] How can I pass an object of a C++ class to/from a C function?