PREFACE: I'm relatively inexperienced in C++ so this very well could be a Day 1 n00b question.
I'm working on something whose long term goal is to be portable across multiple operating systems. I have the following files:
Utilities.h
#include <string>
class Utilities
{
public:
Utilities() { };
virtual ~Utilities() { };
virtual std::string ParseString(std::string const& RawString) = 0;
};
UtilitiesWin.h (for the Windows class/implementation)
#include <string>
#include "Utilities.h"
class UtilitiesWin : public Utilities
{
public:
UtilitiesWin() { };
virtual ~UtilitiesWin() { };
virtual std::string ParseString(std::string const& RawString);
};
UtilitiesWin.cpp
#include <string>
#include "UtilitiesWin.h"
std::string UtilitiesWin::ParseString(std::string const& RawString)
{
// Magic happens here!
// I'll put in a line of code to make it seem valid
return "";
}
So then elsewhere in my code I have this
#include <string>
#include "Utilities.h"
void SomeProgram::SomeMethod()
{
Utilities *u = new Utilities();
StringData = u->ParseString(StringData); // StringData defined elsewhere
}
The compiler (Visual Studio 2008) is dying on the instance declaration
c:\somepath\somecode.cpp(3) : error C2259: 'Utilities' : cannot instantiate abstract class
due to following members:
'std::string Utilities::ParseString(const std::string &)' : is abstract
c:\somepath\utilities.h(9) : see declaration of 'Utilities::ParseString'
So in this case what I'm wanting to do is use the abstract class (Utilities) like an interface and have it know to go to the implemented version (UtilitiesWin).
Obviously I'm doing something wrong but I'm not sure what. It occurs to me as I'm writing this that there's probably a crucial connection between the UtilitiesWin implementation and the Utilities abstract class that I've missed, but I'm not sure where. I mean, the following works
#include <string>
#include "UtilitiesWin.h"
void SomeProgram::SomeMethod()
{
Utilities *u = new UtilitiesWin();
StringData = u->ParseString(StringData); // StringData defined elsewhere
}
but it means I'd have to conditionally go through the different versions later (i.e., UtilitiesMac(), UtilitiesLinux(), etc.)
What have I missed here?
Utilities *u = new Utilities();
tells the compiler to make a new instance of the Utilities class; the fact that UtilitiesWin extends it isn't necessarily known and doesn't affect it. There could be lots of classes extending Utilities, but you told the compiler to make a new instance of Utilities, not those subclasses.
It sounds like you want to use the factory pattern, which is to make a static method in Utilities that returns a Utilities* that points to a particular instance:
// declared inside the Utilities class: static Utilities* make();
Utilities* Utilities::make() { return new UtilitiesWin(); } // defined in a .cpp file
At some point you're going to have to instantiate a non-abstract subclass; there's no way around specifying UtilitiesWin at that point
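As a rough sketch (the _WIN32 check and the exact file split are my assumptions, not from the question), it could look like this:
In Utilities.h, inside the class:
static Utilities* make();
Utilities.cpp (the only file that knows about the platform-specific classes):
#include "Utilities.h"
#ifdef _WIN32
#include "UtilitiesWin.h"
#endif
Utilities* Utilities::make()
{
#ifdef _WIN32
    return new UtilitiesWin();
#else
#error "No Utilities implementation for this platform"
#endif
}
Client code then never names a concrete class:
Utilities *u = Utilities::make();
StringData = u->ParseString(StringData);
delete u;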
You seem a bit confused as to what you want; you have to tell the computer at some stage which implementation of Utilities it is to use, but with the shape you've set out you only need to have
#ifdef windows
Utilities* u = new UtilitiesWin();
#endif
#ifdef spaceos3
Utilities* u = new UtilitiesSpaceOS3();
#endif
once in the program, and most of the source files can just call methods of u without knowing what kind of a u it is - which is I think what you were aiming at.
In C++ you cannot instantiate abstract classes, which is precisely what you are trying to do here:
Utilities *u = new Utilities();
It's very unclear to me why you would want to instantiate such a class, and what you would do with it if you could do so (which you can't). You cannot use an instantiation as an interface - the class definition provides that.
You are "getting" it right, you have to instantiate a concrete type. There are common solutions to this.
Yes, you have to make that decision which class to instantiate somewhere.
The implementation of that depends on the criteria for this decision: is it fixed for the binary? The same choice for each process? Or does it change for every instance of SomeProgram?
For the concrete classes you mention, the decision can probably be made at compile time, similar to what Tom suggests.
Second, SomeProgram should not make this choice itself. Rather, the type or the instance should be configurable from the outside. The simplest approach is to pass the concrete instance to the constructor of SomeProgram:
class SomeProgram
{
private:
Utilities * m_utilities;
public:
SomeProgram(Utilities * util) : m_utilities(util) {}
};
Note that SomeProgram only "knows" the abstract class, none of the concrete classes.
For delayed construction, use a factory. If the utilities class should be injected as above, but is expensive to create and isn't needed most of the time, you would inject a factory instead: you pass a UtilityFactory to the class, which SomeProgram can use to create the required instance on demand. The actual factory implementation decides the concrete class to choose. See the Factory pattern for more.
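A minimal sketch of that factory injection (apart from UtilityFactory, the class and member names below are made up for illustration; Utilities and UtilitiesWin are the classes from the question):
class UtilityFactory
{
public:
    virtual ~UtilityFactory() { }
    virtual Utilities * create() = 0;   // caller owns the returned object
};

class WinUtilityFactory : public UtilityFactory
{
public:
    virtual Utilities * create() { return new UtilitiesWin(); }
};

class SomeProgram
{
private:
    UtilityFactory * m_factory;
    Utilities      * m_utilities;   // created on demand
public:
    explicit SomeProgram(UtilityFactory * factory)
        : m_factory(factory), m_utilities(0) { }

    void SomeMethod()
    {
        if (!m_utilities)
            m_utilities = m_factory->create();   // expensive construction happens only when needed
        // ... use m_utilities->ParseString(...) here ...
    }
};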
If that's a common problem, look at Inversion of Control (IoC) - there are several library implementations out there that make it easier. It has become a buzzword in the wake of aggressive unit testing, where "real" implementations constantly have to be replaced with mocks. (I'm still waiting for a complete MockOS, though.) I haven't worked on any application that seriously needed such a library in practice, though, and it is very likely overkill for your problem.
Related
I'm currently working on a project where everything is horribly mixed with everything else. Every file includes many others, etc.
I want to separate part of this spaghetti code into a library which has to be completely independent from the rest of the code.
The current problem is that some functions FunctionInternal of my library use some functions FunctionExternal declared somewhere else, hence my library includes other files from the project, which does not conform to the requirement "independent from the rest of the code".
It goes without saying that I can't move FunctionExternal in my library.
My first idea to tackle this problem was to implement a public interface as described below:
But I can't get it to work. Is my overall pattern something I can implement this way, or is there another way, if possible, to interface two functions without including one file in the other and causing an unwanted dependency?
How could I abstract my ExternalClass so my library would still be independent of the rest of my code?
Edit 1:
External.h
#include "lib/InterfaceInternal.h"
class External : public InterfaceInternal {
private:
void ExternalFunction() {};
public:
virtual void InterfaceInternal_foo() override {
ExternalFunction();
};
};
Internal.h
#pragma once
#include "InterfaceInternal.h"
class Internal {
// how can I receive here the InterfaceInternal_foo overridden in External.h?
};
InterfaceInternal.h
#pragma once
class InterfaceInternal {
public:
virtual void InterfaceInternal_foo() = 0;
};
You can do as you suggested: override the internal interface in your external code. Then
// how can I receive here the InterfaceInternal_foo overridden in External.h?
just pass a pointer/reference to your class External that extends class InterfaceInternal. Of course your class Internal needs to have methods that accept InterfaceInternal*.
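For example, a minimal sketch of Internal (the constructor and member names here are assumptions, not from your code):
// Internal.h
#pragma once
#include "InterfaceInternal.h"

class Internal {
public:
    explicit Internal(InterfaceInternal* external) : m_external(external) {}

    void FunctionInternal() {
        // The library only talks to the abstract interface; it never
        // includes External.h or knows the concrete type.
        if (m_external)
            m_external->InterfaceInternal_foo();
    }

private:
    InterfaceInternal* m_external;
};
The application then does something like Internal internal(&myExternal); where myExternal is an External instance.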
Or you can just pass the function to your internal interface as an argument. Something like this:
class InterfaceInternal {
public:
void InterfaceInternal_foo(std::function<void()> f);
};
or more generic:
class InterfaceInternal {
public:
template <typename F> // + maybe some SFINAE magic, or C++20 concept to make sure it's actually callable
void InterfaceInternal_foo(F f);
};
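And a usage sketch for the std::function variant (the wiring function below is hypothetical):
#include <functional>
#include "lib/InterfaceInternal.h"

void wireUp(InterfaceInternal& internal) {
    // The library receives a callable; it never needs to know which
    // external code ends up being executed.
    internal.InterfaceInternal_foo([]() {
        // ... call whatever external function is needed here ...
    });
}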
I am experimenting with the Builder/Fluent style of creating objects trying to extend some ideas presented in a course. One element I immediately didn't like with my test implementation was the large number of additional header files the client needs to include for the process to work, particularly when I wish to make use of public/private headers via the pImpl idiom for purposes of providing a library interface. I'm not entirely certain whether the problem lies with my implementation or I'm just missing an obvious 'last step' to achieve what I want.
The general gist is as follows (using the toy example of Pilots):
Firstly the client code itself:
(Note: for brevity, various boilerplate and irrelevant code has been omitted)
Pilot p = Pilot::create()
.works().atAirline("Sun Air").withRank("Captain")
.lives().atAddress("123 Street").inCity("London");
What's happening here is:
In Pilot.h, the Pilot class is defined with a static member method called create() that returns an instance of a PilotBuilder class defined in PilotBuilder.h and forward declared in Pilot.h
Essentially the PilotBuilder class is a convenience builder only used to present builders of the two different facets of a Pilot (.works() and .lives()), letting you switch from one builder to another.
Pilot.h:
class PilotBuilder;
class Pilot {
private:
// Professional
string airline_name_, rank_;
// Personal
string street_address_, city_;
Pilot(){}
public:
Pilot(Pilot&& other) noexcept;
static PilotBuilder create();
friend class PilotBuilder;
friend class PilotProfessionalBuilder;
friend class PilotPersonalBuilder;
};
Pilot.cpp:
#include "PilotBuilder.h"
PilotBuilder Pilot::create() {
return PilotBuilder();
}
// Other definitions etc
PilotBuilder.h
#include "public/includes/path/Pilot.h"
class PilotProfessionalBuilder;
class PilotPersonalBuilder;
class PilotBuilder {
private:
Pilot p;
protected:
Pilot& pilot_;
explicit PilotBuilder(Pilot& pilot) : pilot_{pilot} {};
public:
PilotBuilder() : pilot_{p} {}
operator Pilot() {
return std::move(pilot_);
}
PilotProfessionalBuilder works();
PilotPersonalBuilder lives();
};
PilotBuilder.cpp
#include "PilotBuilder.h"
#include "PilotProfessionalBuilder.h"
#include "PilotPersonalBuilder.h"
PilotPersonalBuilder PilotBuilder::lives() {
return PilotPersonalBuilder{pilot_};
}
PilotProfessionalBuilder PilotBuilder::works() {
return PilotProfessionalBuilder{pilot_};
}
As you can imagine, the PilotProfessionalBuilder class and the PilotPersonalBuilder class simply implement the methods relevant to that particular facet (e.g. .atAirline()) in the fluent style, using the reference provided by the PilotBuilder class; their implementation isn't relevant to my query.
Avoiding the slightly contentious issue of providing references to private members, my dilemma is that to make use of my pattern as it stands, the client has to look like this:
#include "public/includes/path/Pilot.h"
#include "private/includes/path/PilotBuilder.h"
#include "private/includes/path/PilotProfessionalBuilder.h"
#include "private/includes/path/PilotPersonalBuilder.h"
int main() {
Pilot p = Pilot::create()
.works().atAirline("Sun Air").withRank("Captain")
.lives().atAddress("123 Street").inCity("London");
}
What I cannot figure out is:
How do I reorder or reimplement the code so that I can simply use #include "public/includes/path/Pilot.h" in the client - imagining, say, that I'm linking against a Pilots library where the rest of the implementation resides - and still keep the same behaviour?
Provided someone can enlighten me on point 1., is there any way it would then be possible to move the private members of Pilot into a unique_ptr<Impl> pImpl and still keep hold of the static create() method? - because the following is obviously not allowed:
PilotBuilder Pilot::create() {
pImpl = std::make_unique<Impl>(); /* Impl is the struct holding the private members */
return PilotBuilder();
}
Finally, I am by no means an expert at any of this so if any of my terminology is incorrect or coding practices really need fixing I will gladly receive any advice people have to give. Thank you!
In the case where there are multiple desired implementations for a given interface, but where the specific implementation desired is known before compile time, is it wrong simply to direct the makefile to different implementation files for the same header?
For example, if I have a program defining a car (Car.h)
// Car.h
#include <string>
class Car {
public:
std::string WhatCarAmI();
};
and at build time we know whether we want it to be a Ferrari or a Fiat, so we give it one of the corresponding files:
// Ferrari.cpp
#include "Car.h"
std::string Car::WhatCarAmI() { return "Ferrari"; }
whilst for the other case (unsurprisingly)
// Fiat.cpp
#include "Car.h"
std::string Car::WhatCarAmI() { return "Fiat"; }
Now, I am aware that I could make both Fiat and Ferrari derived objects of Car and at runtime pick which I would like to build. Similarly, I could templatize it and make the compiler pick at compile time which to build. However, in this case the two implementations both refer to separate projects which should never intersect.
Given that, is it wrong to do what I propose and simply to select the correct .cpp in the makefile for the given project? What is the best way to do this?
Implementation
As this is static polymorphism, the Curiously Recurring Template Pattern is probably vastly more idiomatic than swapping a cpp file - which seems pretty hacky. CRTP seems to be required if you want to let multiple implementations coexist within one project, while being easy to use with an enforced single-implementation build system. I'd say its well-documented nature and ability to do both (since you never know what you'll need later) give it the edge.
In brief, CRTP looks a little like this:
#include <iostream>
#include <string>

template<typename T_Derived>
class Car {
public:
std::string getName() const
{
// compile-time cast to derived - trivially inlined
return static_cast<T_Derived const *>(this)->getName();
}
// and same for other functions...
int getResult()
{
return static_cast<T_Derived *>(this)->getResult();
}
void playSoundEffect()
{
static_cast<T_Derived *>(this)->playSoundEffect();
}
};
class Fiat: public Car<Fiat> {
public:
// Shadow the base's function, which calls this:
std::string getName() const
{
return "Fiat";
}
int getResult()
{
// Do cool stuff in your car
return 42;
}
void playSoundEffect()
{
std::cout << "varooooooom" << std::endl;
}
};
(I've previously prefixed derived implementation functions with d_, but I'm not sure this gains anything; in fact, it probably increases ambiguity...)
To understand what's really going on in the CRTP - it's simple once you get it! - there are plenty of guides around. You'll probably find many variations on this, and pick the one you like best.
Compile-time selection of implementation
To get back to the other aspect, if you do want to restrict to one of the implementations at compile-time, then you could use some preprocessor macro(s) to enforce the derived type, e.g.:
g++ -DMY_CAR_TYPE=Fiat
and later
// #include "see_below.hpp"
#include <iostream>
int main(int, char**)
{
MY_CAR_TYPE myCar; // instantiate the derived type itself; a bare Car<MY_CAR_TYPE> would have no derived part behind the casts
// Do stuff with your car
std::cout << myCar.getName();
myCar.playSoundEffect();
return myCar.getResult();
}
You could either declare all Car variants in a single header and #include that, or use something like the methods discussed in these threads - Generate include file name in a macro / Dynamic #include based on macro definition - to generate the #include from the same -D macro.
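For instance, the "see_below.hpp" placeholder could simply be a header that pulls in every known variant (illustrative file names):
// all_cars.hpp
#include "fiat.hpp"     // defines class Fiat    : public Car<Fiat>
#include "ferrari.hpp"  // defines class Ferrari : public Car<Ferrari>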
Choosing a .cpp file at compile time is OK and perfectly reasonable... if the ignored .cpp file would not compile. This is one way to choose a platform specific implementation.
But in general - when possible (such as in your trivial example case) - it's better to use templates to achieve static polymorphism. If you need to make a choice at compile time, use a preprocessor macro.
If the two implementations refer to separate projects which should never intersect but still are implementations for a given interface, I would recommend to extract that interface as a separate "project". That way the separate projects are not directly related to each other, even though they both depend on the third project which provides the interface.
In your use case I think it would be best to use #ifdef blocks. These are evaluated by the preprocessor, before compilation! This method is also sometimes used to distinguish between different platforms for the same code.
// Car.cpp
#include "Car.h"
#define FERRARI
//#define FIAT
#ifdef FERRARI
std::string Car::WhatCarAmI() { return "Ferrari"; }
#endif
#ifdef FIAT
std::string Car::WhatCarAmI() { return "Fiat"; }
#endif
In this code the compiler will ignore the #ifdef block for FIAT, because only FERRARI is defined. This way you can still use methods you want to have for both cars. Everything you want to be different, you can put in #ifdefs and simply swap out the defines.
Actually instead of swapping out the defines, you'd leave your code alone and
provide the definitions on the GCC command line using the -D build switch,
depending on which build configuration was selected.
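For example (illustrative commands), instead of editing the #define lines in Car.cpp you would pass the macro on the command line, so the two configurations differ only in the compiler flag:
g++ -DFERRARI -c Car.cpp -o Car.o   # Ferrari build
g++ -DFIAT    -c Car.cpp -o Car.o   # Fiat build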
This question is specifically about C++ architecture on embedded, hard real-time systems. This implies that large parts of the data-structures as well as the exact program-flow are given at compile-time, performance is important and a lot of code can be inlined. Solutions preferably use C++03 only, but C++11 inputs are also welcome.
I am looking for established design-patterns and solutions to the architectural problem where the same code-base should be re-used for several, closely related products, while some parts (e.g. the hardware-abstraction) will necessarily be different.
I will likely end up with a hierarchical structure of modules encapsulated in classes that might then look something like this, assuming 4 layers:
Product A Product B
Toplevel_A Toplevel_B (different for A and B, but with common parts)
Middle_generic Middle_generic (same for A and B)
Sub_generic Sub_generic (same for A and B)
Hardware_A Hardware_B (different for A and B)
Here, some classes inherit from a common base class (e.g. Toplevel_A from Toplevel_base) while others do not need to be specialized at all (e.g. Middle_generic).
Currently I can think of the following approaches:
(A): If this was a regular desktop-application, I would use virtual inheritance and create the instances at run-time, using e.g. an Abstract Factory.
Drawback: However, the *_B classes will never be used in product A, and the indirection of all the virtual function calls - whose targets are not resolved to an address until run-time - will lead to quite some overhead.
(B) Using template specialization as inheritance mechanism (e.g. CRTP)
template<class Derived>
class Toplevel { /* generic stuff ... */ };
class Toplevel_A : public Toplevel<Toplevel_A> { /* specific stuff ... */ };
Drawback: Hard to understand.
(C): Use different sets of matching files and let the build-scripts include the right one
// common/toplevel_base.h
class Toplevel_base { /* ... */ };
// product_A/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// product_B/toplevel.h
class Toplevel : Toplevel_base { /* ... */ };
// build_script.A
compiler -Icommon -Iproduct_A
Drawback: Confusing, tricky to maintain and test.
(D): One big typedef (or #define) file
//typedef_A.h
typedef Toplevel_A Toplevel_to_be_used;
typedef Hardware_A Hardware_to_be_used;
// etc.
// sub_generic.h
class sub_generic {
Hardware_to_be_used the_hardware;
// etc.
};
Drawback: One file has to be included everywhere, and another mechanism is still needed to actually switch between the different configurations.
(E): A similar, "Policy based" configuration, e.g.
template <class Policy>
class Toplevel {
Middle_generic<Policy> the_middle;
// ...
};
// ...
template <class Policy>
class Sub_generic {
typename Policy::Hardware_to_be_used the_hardware;
// ...
};
// used as
struct Policy_A { // struct, so the typedef below is public
typedef Hardware_A Hardware_to_be_used;
};
Toplevel<Policy_A> the_toplevel;
Drawback: Everything is a template now; a lot of code needs to be re-compiled every time.
(F): Compiler switch and preprocessor
// sub_generic.h
class Sub_generic {
#if PRODUCT_IS_A
Hardware_A _hardware;
#endif
#if PRODUCT_IS_B
Hardware_B _hardware;
#endif
};
Drawback: Brrr..., only if all else fails.
Is there any (other) established design-pattern or a better solution to this problem, such that the compiler can statically allocate as many objects as possible and inline large parts of the code, knowing which product is being built and which classes are going to be used?
I'd go for A. Until it's PROVEN that this is not good enough, go for the same decisions as for desktop (well, of course, some things are obviously not going to work, such as allocating several kilobytes on the stack or using global variables that are many megabytes large). Yes, there is SOME overhead in calling virtual functions, but I would go for the most obvious and natural C++ solution FIRST, then redesign if it's not "good enough" (obviously, try to determine performance and such early on, and use tools like a sampling profiler to determine where you are spending time, rather than "guessing" - humans are proven to be pretty poor guessers).
I'd then move to option B if A is proven to not work. This is indeed not entirely obvious, but it is, roughly, how LLVM/Clang solves this problem for combinations of hardware and OS, see:
https://github.com/llvm-mirror/clang/blob/master/lib/Basic/Targets.cpp
First I would like to point out that you basically answered your own question in the question :-)
Next I would like to point out that in C++
the exact program-flow are given at compile-time, performance is
important and a lot of code can be inlined
is called templates. The other approaches that leverage language features as opposed to build system features will serve only as a logical way of structuring the code in your project to the benefit of developers.
Further, as noted in other answers, C is more common for hard real-time systems than C++, and in C it is customary to rely on MACROS to make this kind of optimization at compile time.
Finally, you have noted under your B solution above that template specialization is hard to understand. I would argue that this depends on how you do it and also on how much experience your team has on C++/templates. I find many "template ridden" projects to be extremely hard to read and the error messages they produce to be unholy at best, but I still manage to make effective use of templates in my own projects because I respect the KISS principle while doing it.
So my answer to you is: go with B, or ditch C++ for C.
I understand that you have two important requirements:
Data types are known at compile time
Program-flow is known at compile time
The CRTP wouldn't really address the problem you are trying to solve as it would allow the HardwareLayer to call methods on the Sub_generic, Middle_generic or TopLevel and I don't believe it is what you are looking for.
Both of your requirements can be met using the Trait pattern (another reference). Here is an example proving both requirements are met. First, we define empty shells representing two Hardwares you might want to support.
class Hardware_A {};
class Hardware_B {};
Then let's consider a class that describes a general case which corresponds to Hardware_A.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long int64_t;
static int64_t getCPUSerialNumber() {return 0;}
};
Now let's see a specialization for Hardware_B:
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int int64_t;
static int64_t getCPUSerialNumber() {return 1;}
};
Now, here is a usage example within the Sub_generic layer:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::int64_t int64_t;
int64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And finally, a short main that executes both code paths and uses both data types:
#include <iostream>

int main(int argc, const char * argv[]) {
std::cout << "Hardware_A : " << Sub_generic<Hardware_A>().doSomething() << std::endl;
std::cout << "Hardware_B : " << Sub_generic<Hardware_B>().doSomething() << std::endl;
}
Now, if your HardwareLayer needs to maintain state, here is another way to implement the HardwareLayer and Sub_generic classes.
template <typename Hardware>
class HardwareLayer
{
public:
typedef long hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 0;
};
template <>
class HardwareLayer<Hardware_B>
{
public:
typedef int hwint64_t;
hwint64_t getCPUSerialNumber() {return mySerial;}
private:
hwint64_t mySerial = 1;
};
template <typename Hardware>
class Sub_generic : public HardwareLayer<Hardware>
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return HwLayer::getCPUSerialNumber();}
};
And here is a last variant where only the Sub_generic implementation changes:
template <typename Hardware>
class Sub_generic
{
public:
typedef HardwareLayer<Hardware> HwLayer;
typedef typename HwLayer::hwint64_t hwint64_t;
hwint64_t doSomething() {return hw.getCPUSerialNumber();}
private:
HwLayer hw;
};
On a similar train of thought to F, you could just have a directory layout like this:
Hardware/
common/inc/hardware.h
hardware1/src/hardware.cpp
hardware2/src/hardware.cpp
Simplify the interface to only assume a single hardware exists:
// sub_generic.h
class Sub_generic {
Hardware _hardware;
};
And then only compile the folder that contains the .cpp files for the hardware for that platform.
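A build-script fragment for that selection might look something like this (an illustrative make sketch; the variable names are made up):
# e.g. invoked as: make HARDWARE=hardware2
HARDWARE ?= hardware1
CXXFLAGS += -IHardware/common/inc
SRCS     += Hardware/$(HARDWARE)/src/hardware.cpp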
The benefits to this approach are:
It's simple to understand what's happening and to add a hardware3
hardware.h still serves as your API
It takes away the abstraction from the compiler (for your speed concerns)
Compiler 1 doesn't need to compile hardware2.cpp or hardware3.cpp which may contain things Compiler 1 can't do (like inline assembly, or some other specific Compiler 2 thing)
hardware3 might be much more complicated for some reason you haven't considered yet, so giving it a whole directory structure encapsulates it.
Since this is for a hard real-time embedded system, usually you would go for a C-style solution, not C++.
With modern compilers I'd say that the overhead of C++ is not that great, so it's not entirely a matter of performance, but embedded systems tend to prefer C over C++.
What you are trying to build would resemble a classic device drivers library (like the one for ftdi chips).
The approach there would be (since it's written in C) something similar to your F, but with no compile-time options - you would specialize the code, at runtime, based on something like PID, VID, SN, etc...
Now if you want to use C++ for this, templates should probably be your last option (code readability usually ranks higher than any advantage templates bring to the table). So you would probably go for something similar to A: a basic class inheritance scheme, but no particularly fancy design pattern is required.
Hope this helps...
I am going to assume that these classes only need to be created a single time, and that their instances persist throughout the entire program run time.
In this case I would recommend using the Object Factory pattern since the factory will only get run one time to create the class. From that point on the specialized classes are all a known type.
After digging around the web, I found some references to a powerful pattern which exploits CRTP to allow run-time registration of classes through the initialization of static members:
C++: Compiling unused classes
Initialization class for other classes - C++
And so on.
The proposed approach works well, unless such a class hierarchy is placed into an external library.
When that is done, the run-time initialization no longer works, unless I manually #include the header files of the derived classes somewhere. However, this defeats my main purpose - having the chance to add new commands to my application without the need to change other source files.
Some code, hoping it helps:
class CAction
{
protected:
// some non relevant stuff
public:
// some other public API
CAction(void) {}
virtual ~CAction(void) {}
virtual std::wstring Name() const = 0;
};
template <class TAction>
class CCRTPAction : public CAction
{
public:
static bool m_bForceRegistration;
CCRTPAction(void) { m_bForceRegistration; } // reference the static member so its definition must be instantiated
~CCRTPAction(void) { }
static bool init() {
CActionManager::Instance()->Add(std::shared_ptr<CAction>(new TAction));
return true;
}
};
template<class TAction> bool CCRTPAction<TAction>::m_bForceRegistration = CCRTPAction<TAction>::init();
Implementations being done this way:
class CDummyAction : public CCRTPAction<CDummyAction>
{
public:
CDummyAction() { }
~CDummyAction() { }
std::wstring Name() const { return L"Dummy"; }
};
Finally, here is the container class API:
class CActionManager
{
private:
CActionManager(void);
~CActionManager(void);
std::vector<std::shared_ptr<CAction>> m_vActions;
static CActionManager* instance;
public:
void Add(const std::shared_ptr<CAction>& Action); // const ref so a temporary shared_ptr can bind
const std::vector<std::shared_ptr<CAction>>& AvailableActions() const;
static CActionManager* Instance() {
if (nullptr == instance) {
instance = new CActionManager();
}
return instance;
}
};
Everything works fine in a single-project solution. However, if I place the above code in a separate .lib, the magic somehow breaks and the implementation classes (CDummyAction and so on) are no longer instantiated.
I see that adding #include "DummyAction.h" somewhere, either in my library or in the main project, makes things work, but
For our project, it is mandatory that adding Actions does not require changes in other files.
I don't really understand what's happening behind the scenes, and this makes me uncomfortable. I really hate depending on solutions I don't fully master, since a bug could pop up anywhere, anytime, possibly one day before shipping our software to the customer :)
Even stranger, putting the #include directive but not defining constructor/destructor in the header file still breaks the magic.
Thanks all for attention. I really hope someone is able to shed some light...
I can describe the cause of the problem; unfortunately I can't offer a solution.
The problem is that initialisation of a variable with static storage duration may be deferred until any time before the first use of something defined in the same translation unit. If your program never uses anything in the same translation unit as CCRTPAction<CDummyAction>::m_bForceRegistration, then that variable may never be initialised.
As you found, including the header in the translation unit that defines main will force it to be initialised at some point before the start of main; but of course that solution won't meet your first requirement. My usual solution to the problems of initialising static data across multiple translation units is to avoid static data altogether (and the Singleton anti-pattern doubly so, although that's the least of your problems here).
As explained in Mike's answer, the compiler determines that the static member CCRTPAction<CDummyAction>::m_bForceRegistration is never used, and therefore does not need to be initialised.
The problem you're trying to solve is to initialise a set of 'plugin' modules without having to #include their code in a central location. CTRP and templates will not help you here. I'm not aware of a (portable) way in C++ to generate code to initialise a set of plugin modules that are not referenced from main().
If you're willing to make the (reasonable) concession of having to list the plugin modules in a central location (without including their headers), there's a simple solution. I believe this is one of those extremely rare cases where a function-scope extern declaration is useful. You may consider this a dirty hack, but when there's no other way, a dirty hack becomes an elegant solution ;).
This code compiles to the main executable:
core/module.h
template<void (*init)()>
struct Module
{
Module()
{
init();
}
};
// generates: extern void initDummy(); Module<initDummy> DummyInstance
#define MODULE_INSTANCE(name) \
extern void init ## name(); \
Module<init ## name> name ## Instance
core/action.h
struct Action // an abstract action
{
};
void addAction(Action& action); // adds the abstract action to a list
main.cpp
#include "core/module.h"
int main()
{
MODULE_INSTANCE(Dummy);
}
This code implements the Dummy module and compiles to a separate library:
dummy/action.h
#include "core/action.h"
struct DummyAction : Action // a concrete action
{
};
dummy/init.cpp
#include "action.h"
void initDummy()
{
addAction(*new DummyAction());
}
If you wanted to go further (this part is not portable) you could write a separate program to generate a list of MODULE_INSTANCE calls, one for each module in your application, and output a generated header file:
generated/init.h
#include "core/module.h"
#define MODULE_INSTANCES \
MODULE_INSTANCE(Module1); \
MODULE_INSTANCE(Module2); \
MODULE_INSTANCE(Module3);
Add this as a pre-build step, and core/main.cpp becomes:
#include "generated/init.h"
int main()
{
MODULE_INSTANCES
}
If you later decide to load some or all of these modules dynamically, you can use exactly the same pattern to dynamically load, initialise and unload a dll. Please note that the following example is windows-specific, untested and does not handle errors:
core/dynamicmodule.h
#include <windows.h>

struct DynamicModule
{
HMODULE dll;
DynamicModule(const char* filename, const char* init)
{
dll = LoadLibrary(filename);
FARPROC function = GetProcAddress(dll, init);
function();
}
~DynamicModule()
{
FreeLibrary(dll);
}
};
#define DYNAMICMODULE_INSTANCE(name) \
DynamicModule name ## Instance = DynamicModule(#name ".dll", "init" #name)
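Usage would then mirror the static case (assuming the module is built as Dummy.dll and exports initDummy):
DYNAMICMODULE_INSTANCE(Dummy);   // loads Dummy.dll and calls its initDummy()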
As Mike Seymour stated, the static template stuff will not give you the dynamic loading facilities you want. You could load your modules dynamically as plug-ins. Put DLLs, each containing one action, into the working directory of the application and load these DLLs dynamically at run-time. This way you will not have to change your source code in order to use different or new implementations of CAction.
Some frameworks make it easy to load custom plug-ins, for example Qt.