Singletons and factories with multiple libraries in C++ - c++

I've been reading and searching the web for a while now but haven't found a nice solution. Here's what I want to do:
I am writing a library that defines an abstract base class - let's call it IFoo.
class IFoo {
public:
    virtual void doSomething() = 0;
    virtual ~IFoo() {}
};
The library also defines a couple of implementations of it - let's call them FooLibOne and FooLibTwo.
To encapsulate the creation process and decide which concrete implementation is used depending on some runtime parameter, I use a factory FooFactory that maps std::string to factory methods (in my case boost::function, but that should not be the point here). It also allows new factory methods to be registered. It looks something like this:
class FooFactory {
public:
    typedef boost::function<IFoo* ()> CreatorFunction;
    IFoo* create(std::string name);
    void registerCreator(std::string name, CreatorFunction f);
private:
    std::map<std::string, CreatorFunction> mapping_;
};
For now, I register the implementations provided by the library (FooLibOne, FooLibTwo) directly in the constructor of FooFactory - thus they are always available. Some of the library code uses the FooFactory to initialize certain objects etc. I have refrained from using the Singleton pattern for the factories so far, since the pattern is debated often enough and I wasn't sure how different implementations of the Singleton pattern would work in combination with possibly multiple shared libraries etc.
However, passing around the factories can be a little cumbersome, and I still think this is one of the occasions where the Singleton pattern could be of good use. Especially if I consider that users of the library should be able to add more implementations of IFoo which should also be accessible to the (already existing) library code. Of course, Dependency Injection - meaning I pass an instance of a factory through the constructor - could do the trick (and does, for now). But this approach fails once I want to be even more flexible and introduce a second layer of dynamic object creation. Meaning: I want to dynamically create objects (see above) within dynamically created objects (say, implementations of an abstract base class IBar - BarOne and BarTwo - again via a factory BarFactory).
Let's say BarOne requires an IFoo object but BarTwo doesn't. I still have to provide the FooFactory to the BarFactory in any case, since one of the IBar implementations might need it. Globally accessible factories would mitigate this problem, and I wouldn't be forced to foresee which factories may be needed by implementations of a specific interface. In addition, I could register the creation methods directly in the source file of each implementation.
FooFactory::Instance().registerCreator("new_creator", boost::bind(...));
Assuming it is a good idea, what would be the right way to implement it? I was going for a templated approach like the SingletonHolder from Modern C++ Design (see also the Loki library) to wrap the factories. However, I'd rather implement it as a Meyers singleton instead. But I still think there will be issues with shared libraries. The solution should work with GCC (and preferably MSVC). I'm also open to other ideas from a design point of view, but please avoid the common "Singletons are evil" rants. ;-)
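To make it concrete, here is roughly the wrapper I have in mind - a sketch only, using std::function in place of boost::function, with the FooFactory trimmed down to its essentials:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

class IFoo {
public:
    virtual void doSomething() = 0;
    virtual ~IFoo() {}
};

class FooFactory {
public:
    typedef std::function<IFoo*()> CreatorFunction;
    IFoo* create(const std::string& name) {
        std::map<std::string, CreatorFunction>::iterator it = mapping_.find(name);
        return it == mapping_.end() ? 0 : it->second();
    }
    void registerCreator(const std::string& name, CreatorFunction f) {
        mapping_[name] = f;
    }
private:
    std::map<std::string, CreatorFunction> mapping_;
};

// Meyers-style holder: the local static is constructed on first use and,
// since C++11, its initialization is thread-safe within one binary.
template <typename T>
struct SingletonHolder {
    static T& Instance() {
        static T instance;
        return instance;
    }
};
```

The caveat I suspect remains: with multiple shared libraries, each module can end up with its own copy of the static unless the instantiation is exported from exactly one binary.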
Thanks in advance.

Hopefully, 76 lines of code speak more than a few words - (using C++11 version of these features instead of the boost ones, but they're pretty much the same anyway)
I would put the definition of the factory (or factories) and the definitions of the creators in the same (or a nearby) scope, so that each creator can "see" any factories it depends on - avoiding the need to pass factories around too much, and avoiding singletons.
Cars & Sirens:
class ISiren {};

class Siren : public ISiren
{
public:
    Siren() { std::cout << "Siren Created" << std::endl; }
};

class ICar {};

class EstateCar : public ICar
{
public:
    EstateCar() { std::cout << "EstateCar created" << std::endl; }
};

class PoliceCar : public ICar
{
    std::shared_ptr<ISiren> siren;
public:
    PoliceCar( std::shared_ptr<ISiren> siren )
      : siren( siren )
    {
        std::cout << "PoliceCar created" << std::endl;
    }
};
Factories:
typedef std::function< std::shared_ptr<ICar> () > CreatorType;

class CarFactory
{
    std::map<std::string, CreatorType> creators;
public:
    void AddFactory( std::string type, CreatorType func )
    {
        creators.insert( std::make_pair(type, func) );
    }

    std::shared_ptr<ICar> CreateCar( std::string type )
    {
        CreatorType& create( creators[type] );
        return create();
    }
};

class SirenFactory
{
public: // Simple factory creating 1 siren type just for brevity
    std::shared_ptr<ISiren> CreateSiren() { return std::make_shared<Siren>(); }
};
"Factory Root" (main function, wherever factories are defined) :
int main()
{
    CarFactory car_factory;      // Car factory unaware of Siren factory
    SirenFactory siren_factory;

    auto EstateCarLambda = []() {
        return std::make_shared<EstateCar>();
    }; // Estate car lambda knows nothing of the Siren Factory

    auto PoliceCarLambda = [&siren_factory]() {
        return std::make_shared<PoliceCar>( siren_factory.CreateSiren() );
    }; // Police car creation lambda using the Siren Factory

    car_factory.AddFactory( "EstateCar", EstateCarLambda );
    car_factory.AddFactory( "PoliceCar", PoliceCarLambda );

    std::shared_ptr<ICar> car1 = car_factory.CreateCar( "EstateCar" );
    std::shared_ptr<ICar> car2 = car_factory.CreateCar( "PoliceCar" );
}
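One caveat worth noting: `creators[type]` default-constructs an empty `std::function` for an unknown name, and calling it throws `std::bad_function_call`. A `find`-based lookup makes the failure explicit. A self-contained sketch (repeating a trimmed CarFactory, error message invented):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct ICar { virtual ~ICar() {} };

typedef std::function< std::shared_ptr<ICar> () > CreatorType;

class CarFactory
{
    std::map<std::string, CreatorType> creators;
public:
    void AddFactory( const std::string& type, CreatorType func )
    {
        creators.insert( std::make_pair(type, func) );
    }

    std::shared_ptr<ICar> CreateCar( const std::string& type )
    {
        // find() instead of operator[]: unknown names fail loudly,
        // and no empty entry is inserted into the map as a side effect.
        std::map<std::string, CreatorType>::const_iterator it = creators.find(type);
        if (it == creators.end())
            throw std::runtime_error("unknown car type: " + type);
        return it->second();
    }
};
```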

Related

C++ API Design Patterns

I have an API implementation / design question to ask. I need to design a modern, usable DLL API, but I am unsure which approach is best and why. I'm not even sure if what I have below could be considered either!
My implementation of the classes needs to be abstracted from the client, but the client also needs to be able to extend and define additional (supported) classes as needed. Many of the internal classes of the library will need references to these external components, necessitating the use of pure virtual classes or polymorphism.
I'm not too worried about GNU/Clang compatibility - that may come later (as in much later or possibly never) - but the primary need is the ABI layers of MSVC 15/17.
The two ways I can see to implement this are to use either a Pimpl idiom or an external factory method. I'm not a big fan of either (Pimpl adds code complexity and maintenance burden, the factory abstracts class creation), so a well thought out third option is a possibility if viable.
For the sake of the simplicity in this post, I've put the "implementations" in the headers below, but they would in their own respective CPPs. I've also cut out any implied constructors, destructors or includes.
I'll define two pure virtual classes here for the meantime
//! IFoo.h
class IFoo
{
public:
    virtual void someAction() noexcept = 0;
};

//! IBar.h
class IBar
{
public:
    virtual void Foo(IFoo& other) noexcept = 0;
};
Edit, I'll put this here too:
//! BarImpl.h (Internal)
class BarImpl : public IBar
{
public:
    virtual void Foo(IFoo& foo) noexcept
    {
        foo.someAction();
    }
};
First up, we have the PIMPL approach
//! Bar.h (External)
class API_EXPORT Bar : public IBar
{
private:
    BarImpl* impl;
public:
    virtual void Foo(IFoo& foo) noexcept
    {
        impl->Foo(foo);
    }
};
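For completeness, the cut-out pieces would look roughly like this - a self-contained sketch with simplified stand-ins for the interfaces, using std::unique_ptr for the impl (an assumption, not part of the original):

```cpp
#include <cassert>
#include <memory>

class IFoo {
public:
    virtual ~IFoo() {}
    virtual void someAction() noexcept = 0;
};

class IBar {
public:
    virtual ~IBar() {}
    virtual void Foo(IFoo& other) noexcept = 0;
};

class BarImpl : public IBar {
public:
    void Foo(IFoo& foo) noexcept override { foo.someAction(); }
};

class Bar : public IBar {
    std::unique_ptr<BarImpl> impl;  // owned by Bar
public:
    Bar() : impl(new BarImpl) {}
    // In the real DLL these live out-of-line in Bar.cpp, where BarImpl is
    // a complete type; only then can the header forward-declare BarImpl.
    ~Bar() {}
    void Foo(IFoo& foo) noexcept override { impl->Foo(foo); }
};
```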
Ideally, the client code therefore looks a little something like this:
//! client_code.cpp
class CustomFoo : public IFoo { /* ... */ };

int
main(int argc, char** argv)
{
    auto foo = CustomFoo();
    auto bar = Bar();
    bar.Foo(foo); // do a barrel roll
}
Secondly, we have a Factory approach
//! BarFactory.h (External)
class API_EXPORT BarFactory
{
private:
    BarImpl impl;
public:
    virtual IBar& Allocate() noexcept
    {
        return impl; // For simplicity
    }
};
The resulting client code looking something like:
//! client_code.cpp
class CustomFoo : public IFoo { /* ... */ };

int
main(int argc, char** argv)
{
    auto foo = CustomFoo();
    auto barfactory = BarFactory();
    IBar& bar = barfactory.Allocate();
    bar.Foo(foo); // do a barrel roll
}
In my eyes, both approaches have their own merits. The Pimpl method, although doubly abstracted and possibly slower due to all the virtual dispatch, brings a humble simplicity to the client code. The factory avoids this but necessitates a separate object creation / management class. As these are external functions, I don't think I can easily get away with templating them as you might with an internal class, so all this extra code will have to be written either way.
Both methods seem to ensure that the interface remains stable between versions - ensuring binary compatibility between future minor releases.
Would anyone be able to lend a hand to the conundrum I have created for myself above? It would be much appreciated!
Many Thanks,
Chris

What is the purpose of having such complicated C++ class design?

I came across an open source C++ code and I got curious, why do people design the classes this way?
So first things first, here is the Abstract class:
class BaseMapServer
{
public:
    virtual ~BaseMapServer() {}
    virtual void LoadMapInfoFromFile(const std::string &file_name) = 0;
    virtual void LoadMapFromFile(const std::string &map_name) = 0;
    virtual void PublishMap() = 0;
    virtual void SetMap() = 0;
    virtual void ConnectROS() = 0;
};
Nothing special here; an abstract class can exist for several well understood reasons. So from this point, I thought maybe the author wanted to share common features among other classes. Here is the next class, a separate class which actually returns a pointer of the abstract class type mentioned above (an actual cpp file; the other two classes are header files):
class MapFactory
{
    BaseMapServer *CreateMap(
        const std::string &map_type,
        rclcpp::Node::SharedPtr node, const std::string &file_name)
    {
        if (map_type == "occupancy") return new OccGridServer(node, file_name);
        else
        {
            RCLCPP_ERROR(node->get_logger(), "map_factory.cpp 15: Cannot load map %s of type %s",
                file_name.c_str(), map_type.c_str());
            throw std::runtime_error("Map type not supported");
        }
    }
};
And now the interesting thing comes, here is the child class of the abstract class:
class OccGridServer : public BaseMapServer
{
public:
    explicit OccGridServer(rclcpp::Node::SharedPtr node) : node_(node) {}
    OccGridServer(rclcpp::Node::SharedPtr node, std::string file_name);
    OccGridServer() {}
    ~OccGridServer() {}

    virtual void LoadMapInfoFromFile(const std::string &file_name);
    virtual void LoadMapFromFile(const std::string &map_name);
    virtual void PublishMap();
    virtual void SetMap();
    virtual void ConnectROS();

protected:
    enum MapMode { TRINARY, SCALE, RAW };

    // Info got from the YAML file
    double origin_[3];
    int negate_;
    double occ_th_;
    double free_th_;
    double res_;
    MapMode mode_ = TRINARY;
    std::string frame_id_ = "map";
    std::string map_name_;

    // In order to do ROS2 stuff like creating a service we need a node:
    rclcpp::Node::SharedPtr node_;

    // A service to provide the occupancy grid map and the message with response:
    rclcpp::Service<nav_msgs::srv::GetMap>::SharedPtr occ_service_;
    nav_msgs::msg::OccupancyGrid map_msg_;

    // Publish map periodically for the ROS1 via bridge:
    rclcpp::TimerBase::SharedPtr timer_;
};
So what is the purpose of the MapFactory class?
To be more specific: what is the advantage of creating a class which holds a method returning a pointer of the abstract class type BaseMapServer - a method which (I believe) acts like a constructor, allocating memory for a new OccGridServer object and returning it? I got confused just by writing this. I really want to become a better C++ coder and I am desperate to know the secret behind these code designs.
The MapFactory class is used to create the correct subclass instance of BaseMapServer based on the parameters passed to it.
In this particular case there is only one child class instance, but perhaps there are plans to add more. Then when more are added the factory method can look something like this:
BaseMapServer *CreateMap(
    const std::string &map_type,
    rclcpp::Node::SharedPtr node, const std::string &file_name)
{
    if (map_type == "occupancy") return new OccGridServer(node, file_name);
    // create Type2Server
    else if (map_type == "type2") return new Type2Server(node, file_name);
    // create Type3Server
    else if (map_type == "type3") return new Type3Server(node, file_name);
    else
    {
        RCLCPP_ERROR(node->get_logger(),
            "map_factory.cpp 15: Cannot load map %s of type %s",
            file_name.c_str(), map_type.c_str());
        throw std::runtime_error("Map type not supported");
    }
}
This has the advantage that the caller doesn't need to know the exact subclass being used, and in fact the underlying subclass could potentially change or even be replaced under the hood without the calling code needing to be modified. The factory method internalizes this logic for you.
It's a Factory pattern. See https://en.wikipedia.org/wiki/Factory_method_pattern. It looks like the current code only supports one implementation (OccGridServer), but more could be added at a future date. Conversely, if there's only ever likely to be one concrete implementation, then it's overdesign.
This is an example of the factory design pattern. The use case is this: there are several types of very similar classes that will be used in code. In this case, OccGridServer is the only one actually shown, but a generic explanation might reference hypothetical Dog, Cat, Otter, etc. classes. Because of their similarity, some polymorphism is desired: if they all inherit from a base class Animal they can share virtual methods like ::genus, ::species, etc., and the derived classes can be pointed to or referred to with base class pointers/references. In your case, OccGridServer inherits from BaseMapServer; presumably there are other derived classes as well, referred to through BaseMapServer pointers/references.
If you know which derived class is needed at compile time, you would normally just call its constructor. The point of the factory design pattern is to simplify selection of a derived class when the particular derived class is not known until runtime. Imagine that a user picks their favorite animal by selecting a button or typing in a name. This generally means that somewhere there's a big if/else block that maps from some type of I/O disambiguator (string, enum, etc.) to a particular derived class type, calling its constructor. It's useful to encapsulate this in a factory pattern, which can act like a named constructor that takes this disambiguator as a "constructor" parameter and finds the correct derived class to construct.
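The hypothetical animal example might be sketched like this (all names illustrative, not from the original code):

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>
#include <string>

class Animal {
public:
    virtual ~Animal() {}
    virtual std::string species() const = 0;
};

class Dog : public Animal {
public:
    std::string species() const override { return "Canis familiaris"; }
};

class Cat : public Animal {
public:
    std::string species() const override { return "Felis catus"; }
};

// The factory encapsulates the big if/else mapping from a runtime
// disambiguator (here a string) to a concrete derived type.
std::unique_ptr<Animal> CreateAnimal(const std::string& name) {
    if (name == "dog") return std::unique_ptr<Animal>(new Dog);
    if (name == "cat") return std::unique_ptr<Animal>(new Cat);
    throw std::runtime_error("Animal type not supported: " + name);
}
```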
Typically, by the way, CreateMap would be a static method of BaseMapServer. I don't see why a separate class for the factory function is needed in this case.

Dependency inversion principle: trying to understand

I'm learning design patterns and related ideas (like SOLID, and the Dependency Inversion Principle in particular) and it seems I'm missing something:
Following the DIP, I should be able to make classes less fragile by not creating an object inside the class (composition) but instead passing the object reference/pointer to the class constructor (aggregation). But this means that I have to create the instance somewhere else: so the more flexible one class becomes through aggregation, the more fragile another becomes.
Please explain to me where I am wrong.
You just need to follow the idea through to its logical conclusion. Yes you have to make the instance somewhere else, but this might not be just in the class that is one level above your class, it keeps needing to be pushed out and out until objects are only created at the very outer layer of your application.
Ideally you create all of your objects in a single place, this is called the composition root (the exception being objects which are created from factories, but the factories are created in the composition root). Exactly where this is depends on the type of app you are building.
In a desktop app, that would be in the Main method (or very close to it)
In an ASP.NET (including MVC) application, that would be in Global.asax
In WCF, that would be in a ServiceHostFactory
etc.
This place may end up being 'fragile', but you only have a single place to change things in order to reconfigure your application, and then all other classes are testable and configurable.
See this excellent answer (which is quoted above)
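In C++ terms, the composition root is simply the top of main: everything is wired there and handed down by reference. An illustrative sketch (ILogger/Worker are invented names, not from the question):

```cpp
#include <cassert>
#include <string>

// A service interface and one concrete implementation.
class ILogger {
public:
    virtual ~ILogger() {}
    virtual void log(const std::string& msg) = 0;
};

class MemoryLogger : public ILogger {
public:
    std::string last;
    void log(const std::string& msg) override { last = msg; }
};

// The consumer aggregates its dependency: it is injected, never created here.
class Worker {
    ILogger& logger;
public:
    explicit Worker(ILogger& l) : logger(l) {}
    void run() { logger.log("work done"); }
};

int run_app() {
    // Composition root: the only place where concrete types are chosen.
    MemoryLogger logger;
    Worker worker(logger);
    worker.run();
    return logger.last == "work done" ? 0 : 1;
}
```

Only run_app (standing in for main) knows that MemoryLogger is the chosen implementation; swapping it for another ILogger touches this one spot.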
Yes, you would need to instantiate the class somewhere. If you follow the DIP correctly, you will end up creating all the instances in one place; I call this place the class composition. My blog goes into more depth on this topic.
One important possibility that you've missed is the injection of factories, rather than the class itself. One advantage of this is that it allows your ownership of instances to be cleaner. Note the code is a little uglier since I'm explicitly giving the container ownership of its components. Things can be a little neater if you use shared_ptr, rather than unique_ptr, but then ownership is ambiguous.
So starting with code that looks like this:
struct MyContainer {
    std::unique_ptr<IFoo> foo;
    MyContainer() : foo(new Foo()) {}
    void doit() { foo->doit(); }
};

void useContainer() {
    MyContainer container;
    container.doit();
}
The non-factory version would look like this
struct MyContainer {
    std::unique_ptr<IFoo> foo;
    template<typename T>
    explicit MyContainer(std::unique_ptr<T> && f) :
        foo(std::move(f))
    {}
    void doit() { foo->doit(); }
};

void useContainer() {
    std::unique_ptr<Foo> foo( new Foo());
    MyContainer container(std::move(foo));
    container.doit();
}
The factory version would look like
struct FooFactory : public IFooFactory {
    std::unique_ptr<IFoo> createFoo() override {
        return std::make_unique<Foo>();
    }
};

struct MyContainer {
    std::unique_ptr<IFoo> foo;
    MyContainer(IFooFactory & factory) : foo(factory.createFoo()) {}
    void doit() { foo->doit(); }
};

void useContainer() {
    FooFactory fooFactory;
    MyContainer container(fooFactory);
    container.doit();
}
IMO being explicit about ownership & lifetime in C++ is important - it's a crucial part of C++'s identity - it lies at the heart of RAII and many other C++ patterns.

How to write java like argument-level implementation of interface in C++?

One of the nice things in Java is implementing an interface in place. For example, consider the following snippet:
interface SimpleInterface
{
    void doThis();
}
...
SimpleInterface simple = new SimpleInterface()
{
    @Override public void doThis() { /* Do something here */ }
};
The only way I could see this being done in C++ is through a lambda or by passing an instance of function<> to a class. But I am actually checking whether this is possible somehow. I have classes which implement a particular interface, and these interfaces contain just 1-2 methods. I can't write a new file for each one or add a method to a class which accepts a function<> or lambda so that it can determine what to do. Is this strictly a C++ limitation? Will it ever be supported?
Somehow, I wanted to write something like this:
thisClass.setAction(i, new SimpleInterface()
{
    protected:
        virtual void doThis() {}
});
One thing though is that I haven't check the latest spec for C++14 and I wanted to know if this is possible somehow.
Thank you!
Will it ever be supported?
You mean, will the language designers ever add a dirty hack where the only reason it ever existed in one language was because those designers were too stupid to add the feature they actually needed?
Not in this specific instance.
You can create a derived class that derives from it and then uses a lambda, and then use that at your various call sites. But you'd still need to create one converter for each interface.
struct FunctionalInterfaceImpl : SimpleInterface {
    FunctionalInterfaceImpl(std::function<void()> f)
      : func(f) {}
    std::function<void()> func;
    void doThis() { func(); }
};
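For example, call sites can then pass a lambda wherever the interface is expected. A self-contained sketch, repeating the adapter with a hypothetical runAction consumer standing in for your setAction:

```cpp
#include <cassert>
#include <functional>

struct SimpleInterface {
    virtual ~SimpleInterface() {}
    virtual void doThis() = 0;
};

// The adapter from above: wraps any callable as a SimpleInterface.
struct FunctionalInterfaceImpl : SimpleInterface {
    explicit FunctionalInterfaceImpl(std::function<void()> f)
      : func(f) {}
    std::function<void()> func;
    void doThis() override { func(); }
};

// Hypothetical consumer that only knows the interface.
void runAction(SimpleInterface& action) { action.doThis(); }
```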
You seem to think each class needs a separate .h and .cpp file. C++ allows you to define a class at any scope, including local to a function:
void foo() {
    struct SimpleInterfaceImpl : SimpleInterface
    {
    protected:
        void doThis() override {}
    };

    thisClass.setAction(i, new SimpleInterfaceImpl());
}
Of course, you have a naked new in there which is probably a bad idea. In real code, you'd want to allocate the instance locally, or use a smart pointer.
This is indeed a "limitation" of C++ (and of C#, as I found when researching this some time ago). Anonymous Java classes are one of Java's unique features.
The closest way you can emulate this is with function objects and/or local types. C++11 and later offer lambdas, which are syntactic sugar for those two things, for this very reason, and save us a lot of writing. Thank goodness for that; before C++11 one had to define a type for every little thing.
Please note that for interfaces consisting of a single method, function objects/lambdas/delegates (C#) are actually the cleaner approach. Java uses interfaces for this case as a "limitation" of its own. It would be considered a Java-ism to use single-method interfaces as callbacks in C++.
Local types are actually a pretty good approximation, the only drawback being that you are forced to name the types (see edit) (a tiresome obligation, which one takes on when using static languages of the C family).
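To illustrate the single-method case: a hypothetical setAction can simply take a std::function, so callers pass a lambda and no interface type is involved at all (all names invented):

```cpp
#include <cassert>
#include <functional>
#include <map>

// Hypothetical holder: one callback per integer key, no interface needed.
class ActionHolder {
    std::map<int, std::function<void()>> actions;
public:
    void setAction(int i, std::function<void()> action) {
        actions[i] = action;
    }
    void fire(int i) {
        actions.at(i)();  // throws std::out_of_range for unknown keys
    }
};
```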
You don't need to allocate an object with new to use it polymorphically. It can be a stack object, which you pass by reference (or pointer, for extra anachronism). For instance:
struct This {};
struct That {};

class Handler {
public:
    virtual ~Handler () { }
    virtual void handle (This) = 0;
    virtual void handle (That) = 0;
};

class Dispatcher {
    Handler& handler;
public:
    Dispatcher (Handler& handler): handler(handler) { }

    template <typename T>
    void dispatch (T&& obj) { handler.handle(std::forward<T>(obj)); }
};

void f ()
{
    struct: public Handler {
        void handle (This) override { }
        void handle (That) override { }
    } handler;

    Dispatcher dispatcher { handler };
    dispatcher.dispatch(This {});
    dispatcher.dispatch(That {});
}
Also note the override specifier offered by c++11, which has more or less the same purpose as the #Override annotation (generate a compile error in case this member function (method) does not actually override anything).
I have never heard about this feature being supported or even discussed, and I personally don't see it even being considered as a feature in C++ community.
EDIT right after finishing this post, I realised that there is no need to name local types (naturally), so the example becomes even more java-friendly. The only difference being that you cannot define a new type within an expression. I have updated the example accordingly.
In C++, interfaces are classes which have pure virtual functions in them, e.g.

class Foo {
public:
    virtual void Function() = 0;
};

Every class that inherits this class must implement this function in order to be instantiable.

Is a "factory" method the right pattern?

So I'm working to improve an existing implementation. I have a number of polymorphic classes that are all composed into a higher level container class. The problem I'm dealing with at the moment is that the higher level container class, well, sucks. It looks something like this, which I really don't have a problem with (as the polymorphic classes in the container should be public). My real issue is the constructor...
/*
 * class1 and class2 derive from the same superclass
 */
class Container
{
public:
    boost::shared_ptr<ComposedClass1> class1;
    boost::shared_ptr<ComposedClass2> class2;
private:
    ...
};

/*
 * Constructor - builds the objects that we need in this container.
 */
Container::Container(some params)
{
    class1.reset(new ComposedClass1(...));
    class2.reset(new ComposedClass2(...));
}
What I really need is to make this container class more re-usable. By hard-coding the member objects and instantiating them, it basically isn't, and can only be used one way. A factory is one way to build what I need (potentially by supplying a list of objects and their specific types to be created?). Are there other ways around this problem? It seems like someone should have solved it before... Thanks!
Dependency injection springs to mind.
You use a factory method if you change the created type in derived classes, i.e. if descendants of Container use different descendants of class1 and class2:

class Container
{
protected:
    virtual ComposedClass1* GetClass1()
    {
        return new ComposedClass1;
    }
    // GetClass2 analogous
};

class SpecialContainer : public Container
{
protected:
    ComposedClass1* GetClass1() override
    {
        return new SpecialComposedClass1;
    }
};

Note that in C++ a virtual call made from the Container constructor will not reach SpecialContainer's override, so the creation has to happen after construction, e.g. in a separate Init():

void Container::Init()
{
    class1.reset(GetClass1());
    class2.reset(GetClass2());
}
A different approach is decoupling creation of Container's dependencies from Container.
For starters, you can change the constructor to dependency-inject class1 and class2.
Container::Container(ComposedClass1* aclass1, ComposedClass2* aclass2)
{
    class1.reset(aclass1);
    class2.reset(aclass2);
}
You do know you can just have a second constructor that doesn't do that, right?

Container::Container(some params)
{
    // any other initialization you need to do
}

You can then set class1 and class2 as you please, since they're public.
Of course they shouldn't be, but given this code that's what you could do.
As far as I can see from what you've described the factory pattern wouldn't be that helpful.
Based on your description I would recommend an Abstract Factory passed into the constructor.
class ClassFactory {
public:
    virtual ~ClassFactory() {}
    virtual Class1 * createClass1(...) const = 0; // obviously these are real params not varargs
    virtual Class2 * createClass2(...) const = 0;
};

class DefaultClassFactory : public ClassFactory {
    // ....
};

Container::Container(
    some params,
    boost::shared_ptr<ClassFactory> factory =
        boost::shared_ptr<ClassFactory>(new DefaultClassFactory))
{
    class1.reset(factory->createClass1(...));
    class2.reset(factory->createClass2(...));
}

This is a form of dependency injection without the framework. Normal clients can use the DefaultClassFactory transparently as before. Other clients, for example unit tests, can inject their own ClassFactory implementation.
Depending on how you want to control ownership of the factory parameter, this could be a shared_ptr, scoped_ptr or a reference. A const reference gives the cleanest syntax, since a temporary default argument can bind to it:

Container::Container(
    some params,
    const ClassFactory & factory = DefaultClassFactory())
{
    class1.reset(factory.createClass1(...));
    class2.reset(factory.createClass2(...));
}

void SomeClient()
{
    Container c(some params, SpecialClassFactory());
    // do special stuff
}
class Class1
{};

class Factory
{
public:
    static Class1 Create()
    {
        return Class1();
    }
};

int main()
{
    Class1 c = Factory::Create();
}
Generally, I agree with Tom's comment: not enough info.
Based on your clarification:
It's just a container for a number of "components"
If the list of components varies, provide Add/Enumerate/Remove methods on the container.
If the list of components should be fixed for each instance of the container, provide a builder object with the methods
Builder.AddComponent
Builder.CreateContainer
The first option allows the container to be reused, but in my experience often ends up more complex. The second often requires a way to swap / replace containers, but is easier to implement.
Dependency Injection libraries can help you configure the actual types outside of the application (though I still need to see why this is such a great leap forward).
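A minimal sketch of the builder variant (all names invented for illustration; a real version would hold the polymorphic classes rather than a plain Component):

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Stand-in for one of the polymorphic member objects.
struct Component {
    std::string name;
    explicit Component(const std::string& n) : name(n) {}
};

class Container {
public:
    std::vector<std::shared_ptr<Component>> components;
};

// The builder accumulates components, then produces a container whose
// contents are fixed from that point on.
class ContainerBuilder {
    std::vector<std::shared_ptr<Component>> pending;
public:
    ContainerBuilder& AddComponent(const std::string& name) {
        pending.push_back(std::make_shared<Component>(name));
        return *this;  // allow chained calls
    }
    Container CreateContainer() {
        Container c;
        c.components = pending;
        return c;
    }
};
```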