Implementing a ReaderWriter class based upon separate stateful Reader and Writer bases - c++

Suppose I have two classes...
We can call the first FooReader and it looks something like this:
class FooReader {
public:
FooReader(const Foo* const foo)
: m_foo(foo) {
}
FooData readFooDataAndAdvance() {
// the point here is that the algorithm is stateful
// and relies upon the m_offset member
return m_foo[m_offset++];
}
private:
const Foo* const m_foo;
size_t m_offset = 0; // used in readFooDataAndAdvance
};
We can call the second FooWriter and it looks something like this:
class FooWriter {
public:
FooWriter(Foo* const foo)
: m_foo(foo) {
}
void writeFooDataAndAdvance(const FooData& foodata) {
// the point here is that the algorithm is stateful
// and relies upon the m_offset member
m_foo[m_offset++] = foodata;
}
private:
Foo* const m_foo;
size_t m_offset = 0;
};
These both work wonderfully and do their job as intended. Now suppose I want to create a FooReaderWriter class.
I naturally want to say that this new class "is a" FooReader and "is a" FooWriter; the interface is simply the amalgamation of the two classes and the semantics remain the same. I don't want to reimplement perfectly good member functions.
One could model this relationship using inheritance like so:
class FooReaderWriter : public FooReader, public FooWriter { };
This is nice because I get the shared interface, I get the implementation and I nicely model the relationship between the classes. However there are problems:
The Foo* member is duplicated in the base classes. This is a waste of memory.
The m_offset member is separate for each base type, but they need to share it (i.e. calling either readFooDataAndAdvance or writeFooDataAndAdvance should advance the same m_offset member).
I can't use the PIMPL pattern and store m_foo and m_offset in there, because I'd lose the const-ness of the m_foo pointer in the base FooReader class.
Is there anything else I can do to resolve these issues, without reimplementing the functionality contained within those classes?

This seems ready-made for the mixin pattern. We have our base-most class, which just declares the members:
template <class T>
class members {
public:
members(T* f) : m_foo(f) { }
protected:
T* const m_foo;
size_t m_offset = 0;
};
and then we write some wrappers around it to add reading:
template <class T>
struct reader : T {
using T::T;
Foo readAndAdvance() {
return this->m_foo[this->m_offset++];
};
};
and writing:
template <class T>
struct writer : T {
using T::T;
void writeAndAdvance(Foo const& f) {
this->m_foo[this->m_offset++] = f;
}
};
and then you just use those as appropriate:
using FooReader = reader<members<Foo const>>;
using FooWriter = writer<members<Foo>>;
using FooReaderWriter = writer<reader<members<Foo>>>;
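For illustration, here is a minimal usage sketch built on the aliases above (it assumes something like struct Foo { int value; }; was defined before the templates, and the buffer name is just for the example):
int main() {
    Foo buffer[4] = {};
    FooReaderWriter rw(buffer);   // one m_foo pointer, one shared m_offset
    rw.writeAndAdvance(Foo{1});   // writes buffer[0], offset becomes 1
    Foo f = rw.readAndAdvance();  // reads buffer[1], offset becomes 2
    (void)f;                      // reads and writes advance the same offset
}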

CRTP.
template<class Storage>
class FooReaderImpl {
public:
FooData readFooDataAndAdvance() {
// the point here is that the algorithm is stateful
// and relies upon the m_offset member
return get_storage()->m_foo[get_storage()->m_offset++];
}
private:
Storage const* get_storage() const { return static_cast<Storage const*>(this); }
Storage * get_storage() { return static_cast<Storage*>(this); }
};
template<class Storage>
class FooWriterImpl {
public:
void writeFooDataAndAdvance(const FooData& foodata) {
// the point here is that the algorithm is stateful
// and relies upon the m_offset member
get_storage()->m_foo[get_storage()->m_offset++] = foodata;
}
private:
Storage const* get_storage() const { return static_cast<Storage const*>(this); }
Storage * get_storage() { return static_cast<Storage*>(this); }
};
template<class T>
struct storage_with_offset {
T* m_foo = nullptr;
std::size_t m_offset = 0;
};
struct FooReader:
FooReaderImpl<FooReader>,
storage_with_offset<const Foo>
{
FooReader(Foo const* p):
storage_with_offset<const Foo>{p}
{}
};
struct FooWriter:
FooWriterImpl<FooWriter>,
storage_with_offset<Foo>
{
FooWriter(Foo* p):
storage_with_offset<Foo>{p}
{}
};
struct FooReaderWriter:
FooWriterImpl<FooReaderWriter>,
FooReaderImpl<FooReaderWriter>,
storage_with_offset<Foo>
{
FooReaderWriter(Foo* p):
storage_with_offset<Foo>{p}
{}
};
If you need an abstract interface for runtime polymorphism, inherit FooReaderImpl and FooWriterImpl from them.
Now, FooReaderWriter obeys the duck-type contract of FooReader and FooWriter. So if you use type erasure instead of inheritance, it will qualify as either (at the point of use).
I'd be tempted to change them to
using FooReader = std::function<FooData()>;
using FooWriter = std::function<void(FooData const&)>;
and then implement a multi-signature std::function for FooReaderWriter. But I'm strange and a bit unhinged that way.
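For what it's worth, a minimal sketch of that type-erased direction might look as follows (FooData, State and makeReaderWriter are made-up names for the example; both callables share one offset through the captured shared_ptr):
#include <cstddef>
#include <functional>
#include <memory>
#include <utility>

struct FooData { int value; };

using FooReaderFn = std::function<FooData()>;
using FooWriterFn = std::function<void(FooData const&)>;

// The shared stateful part lives in one heap object...
struct State {
    explicit State(FooData* b) : buffer(b), offset(0) {}
    FooData* buffer;
    std::size_t offset;
};

// ...and is exposed through two erased callables that capture the same state.
std::pair<FooReaderFn, FooWriterFn> makeReaderWriter(FooData* buffer) {
    auto state = std::make_shared<State>(buffer);
    FooReaderFn read  = [state]() { return state->buffer[state->offset++]; };
    FooWriterFn write = [state](FooData const& d) { state->buffer[state->offset++] = d; };
    return std::make_pair(read, write);
}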


Overwrite Base Class Member with New Type

I'm trying to use C++ to emulate something like dynamic typing. I'm approaching the problem with inherited classes. For example, a function could be defined as
BaseClass* myFunction(int what) {
if (what == 1) {
return new DerivedClass1();
} else if (what == 2) {
return new DerivedClass2();
}
}
The base class and each derived class would have the same members, but of different types. For example, BaseClass may have int xyz = 0 (denoting nothing), DerivedClass1 might have double xyz = 123.456, and DerivedClass2 might have bool xyz = true. Then, I could create functions that returned one type but in reality returned several different types. The problem is, whenever I try to do this, I always access the base class's version of xyz. I've tried using pointers (void* for the base, and "correct" ones for the derived classes), but then every time I want to access the member, I have to do something like *(double*)(obj->xyz) which ends up being very messy and unreadable.
Here's an outline of my code:
#include <iostream>
using std::cout;
using std::endl;
class Foo {
public:
Foo() {};
void* member;
};
class Bar : public Foo {
public:
Bar() {
member = new double(123.456); // Make member a double
};
};
int main(int argc, char* args[]) {
Foo* obj = new Bar;
cout << *(double*)(obj->member);
return 0;
};
I guess what I'm trying to ask is, is this "good" coding practice? If not, is there a different approach to functions that return multiple types or accept multiple types?
That is not actually the way to do it.
There are two typical ways to implement something akin to dynamic typing in C++:
the Object-Oriented way: a class hierarchy and the Visitor pattern
the Functional-Programming way: a tagged union
The latter is rather simple using boost::variant, the former is well documented on the web. I would personally recommend boost::variant to start with.
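For example, a minimal boost::variant sketch of the tagged-union approach, modelling the question's xyz member (the names are illustrative):
#include <boost/variant.hpp>
#include <iostream>

// xyz can hold exactly one of these types at a time:
typedef boost::variant<int, double, bool> Xyz;

// A visitor dispatches on whichever type is currently stored:
struct Print : boost::static_visitor<void> {
    void operator()(int i) const    { std::cout << "int: "    << i << "\n"; }
    void operator()(double d) const { std::cout << "double: " << d << "\n"; }
    void operator()(bool b) const   { std::cout << "bool: "   << b << "\n"; }
};

int main() {
    Xyz xyz = 123.456;                  // currently holds a double
    boost::apply_visitor(Print(), xyz);
    xyz = true;                         // now holds a bool
    boost::apply_visitor(Print(), xyz);
}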
If you want to go down the full dynamic typing road, then things get trickier. In dynamic typing, an object is generally represented as a dictionary containing both other objects and functions, and a function takes a list/dictionary of objects and returns a list/dictionary of objects. Modelling it in C++ is feasible, but it'll be wordy...
How is an object represented in a dynamically typed language ?
The more generic representation is for the language to represent an object as both a set of values (usually named) and a set of methods (named as well). A simplified representation looks like:
struct Object {
using ObjectPtr = std::shared_ptr<Object>;
using ObjectList = std::vector<ObjectPtr>;
using Method = std::function<ObjectList(ObjectList const&)>;
std::map<std::string, ObjectPtr> values;
std::map<std::string, Method> methods;
};
If we take Python as an example, we realize we are missing a couple things:
We cannot implement getattr for example, because ObjectPtr is a different type from Method
This is a recursive implementation, but without the basis: we are lacking innate types (typically Bool, Integer, String, ...)
Dealing with the first issue is relatively easy, we transform our object to be able to become callable:
class Object {
public:
using ObjectPtr = std::shared_ptr<Object>;
using ObjectList = std::vector<ObjectPtr>;
using Method = std::function<ObjectList(ObjectList const&)>;
virtual ~Object() {}
//
// Attributes
//
virtual bool hasattr(std::string const& name) {
throw std::runtime_error("hasattr not implemented");
}
virtual ObjectPtr getattr(std::string const&) {
throw std::runtime_error("gettattr not implemented");
}
virtual void setattr(std::string const&, ObjectPtr) {
throw std::runtime_error("settattr not implemented");
}
//
// Callable
//
virtual ObjectList call(ObjectList const&) {
throw std::runtime_error("call not implemented");
}
virtual void setcall(Method) {
throw std::runtime_error("setcall not implemented");
}
}; // class Object
class GenericObject: public Object {
public:
//
// Attributes
//
virtual bool hasattr(std::string const& name) override {
return values.count(name) > 0;
}
virtual ObjectPtr getattr(std::string const& name) override {
auto const it = values.find(name);
if (it == values.end()) {
throw std::runtime_error("Unknown attribute");
}
return it->second;
}
virtual void setattr(std::string const& name, ObjectPtr object) override {
values[name] = std::move(object);
}
//
// Callable
//
virtual ObjectList call(ObjectList const& arguments) override {
if (not method) { throw std::runtime_error("call not implemented"); }
return method(arguments);
}
virtual void setcall(Method m) {
method = std::move(m);
}
private:
std::map<std::string, ObjectPtr> values;
Method method;
}; // class GenericObject
And dealing with the second issue requires seeding the recursion:
class BoolObject final: public Object {
public:
explicit BoolObject(bool v) : value(v) {}
static BoolObject const True;  // defined out of line: BoolObject const BoolObject::True{true};
static BoolObject const False; // defined out of line: BoolObject const BoolObject::False{false};
bool value;
}; // class BoolObject
class IntegerObject final: public Object {
public:
int value;
}; // class IntegerObject
class StringObject final: public Object {
public:
std::string value;
}; // class StringObject
And now you need to add capabilities, such as value comparison.
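A brief usage sketch of the GenericObject above (C++11; the attribute name and the "method" are purely illustrative):
#include <memory>

int main() {
    auto obj  = std::make_shared<GenericObject>();
    auto name = std::make_shared<GenericObject>();

    obj->setattr("name", name);                       // attach an attribute
    if (obj->hasattr("name")) {
        Object::ObjectPtr found = obj->getattr("name");
        (void)found;
    }

    obj->setcall([](Object::ObjectList const& args) { // make obj callable
        return args;                                  // identity "method"
    });
    Object::ObjectList result = obj->call(Object::ObjectList());
    (void)result;
}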
You can try the following design:
#include <iostream>
using std::cout;
using std::endl;
template<typename T>
class Foo {
public:
Foo() {};
virtual T& member() = 0;
};
class Bar : public Foo<double> {
public:
Bar() : member_(123.456) {
};
virtual double& member() { return member_; }
private:
double member_;
};
int main(int argc, char* args[]) {
Foo<double>* obj = new Bar;
cout << obj->member();
return 0;
};
But as a consequence the Foo class already needs to be specialized and isn't a container for any type anymore.
Other ways to do so, are e.g. using a boost::any in the base class
If you need a dynamic solution you should stick to using void* plus a size, or boost::any. You also need to pass around some type information, such as an integer code or a string, so that you can decode the actual type of the content.
See also property design pattern.
For example, you can have a look at zeromq socket options https://github.com/zeromq/libzmq/blob/master/src/options.cpp
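To illustrate the boost::any suggestion, here is a minimal reworking of the original example (a sketch only; ownership and error handling are left out):
#include <boost/any.hpp>
#include <iostream>

class Foo {
public:
    virtual ~Foo() {}
    boost::any member;   // holds "some value"; derived classes decide what
};

class Bar : public Foo {
public:
    Bar() { member = 123.456; }   // member now holds a double
};

int main() {
    Foo* obj = new Bar;
    // any_cast throws boost::bad_any_cast if the stored type is not double
    std::cout << boost::any_cast<double>(obj->member) << "\n";
    delete obj;
}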

What are alternatives to this typelist-based class hierarchy generation code?

I'm working with a simple object model in which objects can implement interfaces to provide optional functionality. At its heart, an object has to implement a getInterface method which is given a (unique) interface ID. The method then returns a pointer to an interface - or null, in case the object doesn't implement the requested interface. Here's a code sketch to illustrate this:
struct Interface { };
struct FooInterface : public Interface { enum { Id = 1 }; virtual void doFoo() = 0; };
struct BarInterface : public Interface { enum { Id = 2 }; virtual void doBar() = 0; };
struct YoyoInterface : public Interface { enum { Id = 3 }; virtual void doYoyo() = 0; };
struct Object {
virtual Interface *getInterface( int id ) { return 0; }
};
To make things easier for clients who work in this framework, I'm using a little template which automatically generates the 'getInterface' implementation so that clients just have to implement the actual functions required by the interfaces. The idea is to derive a concrete type from Object as well as all the interfaces and then let getInterface just return pointers to this (casted to the right type). Here's the template and a demo usage:
struct NullType { };
template <class T, class U>
struct TypeList {
typedef T Head;
typedef U Tail;
};
template <class Base, class IfaceList>
class ObjectWithIface :
public ObjectWithIface<Base, typename IfaceList::Tail>,
public IfaceList::Head
{
public:
virtual Interface *getInterface( int id ) {
if ( id == IfaceList::Head::Id ) {
return static_cast<typename IfaceList::Head *>( this );
}
return ObjectWithIface<Base, IfaceList::Tail>::getInterface( id );
}
};
template <class Base>
class ObjectWithIface<Base, NullType> : public Base
{
public:
virtual Interface *getInterface( int id ) {
return Base::getInterface( id );
}
};
class MyObjectWithFooAndBar : public ObjectWithIface< Object, TypeList<FooInterface, TypeList<BarInterface, NullType> > >
{
public:
// We get the getInterface() implementation for free from ObjectWithIface
virtual void doFoo() { }
virtual void doBar() { }
};
This works quite well, but there are two problems which are ugly:
A blocker for me is that this doesn't work with MSVC6 (which has poor support for templates, but unfortunately I need to support it). MSVC6 yields a C1202 error when compiling this.
A whole range of classes (a linear hierarchy) is generated by the recursive ObjectWithIface template. This is not a problem for me per se, but unfortunately I can't just do a single switch statement to map an interface ID to a pointer in getInterface. Instead, each step in the hierarchy checks for a single interface and then forwards the request to the base class.
Does anybody have suggestions how to improve this situation? Either by fixing the above two problems with the ObjectWithIface template, or by suggesting alternatives which would make the Object/Interface framework easier to use.
dynamic_cast exists within the language to solve this exact problem.
Example usage:
class Interface {
public:
virtual ~Interface() {}
}; // Must have at least one virtual function
class X : public Interface {};
class Y : public Interface {};
void func(Interface* ptr) {
if (Y* yptr = dynamic_cast<Y*>(ptr)) {
// Returns a valid Y* if ptr is a Y, null otherwise
}
if (X* xptr = dynamic_cast<X*>(ptr)) {
// same for X
}
}
dynamic_cast will also seamlessly handle things like multiple and virtual inheritance, which you may well struggle with.
Edit:
You could check COM's QueryInterface for this- they use a similar design with a compiler extension. I've never seen COM code implemented, only used the headers, but you could search for it.
What about something like that ?
struct Interface
{
virtual ~Interface() {}
virtual std::type_info const& type() = 0;
};
template <typename T>
class InterfaceImplementer : public virtual Interface
{
std::type_info const& type() { return typeid(T); }
};
struct FooInterface : InterfaceImplementer<FooInterface>
{
virtual void foo();
};
struct BarInterface : InterfaceImplementer<BarInterface>
{
virtual void bar();
};
struct InterfaceNotFound : std::exception {};
struct Object
{
void addInterface(Interface *i)
{
// Add error handling if interface exists
interfaces.insert(std::make_pair(&i->type(), i));
}
template <typename I>
I* queryInterface()
{
typedef std::map<std::type_info const*, Interface*>::iterator Iter;
Iter i = interfaces.find(&typeid(I));
if (i == interfaces.end())
throw InterfaceNotFound();
else return static_cast<I*>(i->second);
}
private:
std::map<std::type_info const*, Interface*> interfaces;
};
You may want something more elaborate than type_info const* if you want to do this across dynamic libraries boundaries. Something like std::string and type_info::name() will work fine (albeit a little slow, but this kind of extreme dispatch will likely need something slow). You can also manufacture numeric IDs, but this is maybe harder to maintain.
Storing hashes of type_infos is another option:
template <typename T>
struct InterfaceImplementer
{
std::string const& type(); // This returns a unique hash
static std::string hash(); // This memoizes a unique hash
};
and use FooInterface::hash() when you add the interface, and the virtual Interface::type() when you query.
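A possible sketch of that variant, assuming Interface::type() is changed to return std::string const& and using typeid(T).name() as a stand-in for the "unique hash" (names are implementation-defined, so a real project might hash them or keep an explicit registry instead):
#include <string>
#include <typeinfo>

template <typename T>
class InterfaceImplementer : public virtual Interface
{
public:
    std::string const& type() { return hash(); }
    static std::string const& hash()
    {
        static const std::string h = typeid(T).name(); // computed once (memoized)
        return h;
    }
};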

minimal reflection in C++

I want to create a class factory and I would like to use reflection for that. I just need to create an object from a given string and invoke only a few known methods. How can I do that?
You will have to roll your own. Usually you have a map of strings to object creation functions.
You will need something like the following:
#include <map>
#include <memory>
#include <string>

class thing {...};
/*
class thing_A : public thing {...};
class thing_B : public thing {...};
class thing_C : public thing {...};
*/
std::shared_ptr<thing> create_thing_A();
std::shared_ptr<thing> create_thing_B();
std::shared_ptr<thing> create_thing_C();
namespace {
typedef std::shared_ptr<thing> (*create_func)();
typedef std::map<std::string,create_func> creation_map;
typedef creation_map::value_type creation_map_entry;
const creation_map_entry creation_map_entries[] = { {"A", create_thing_A}
, {"B", create_thing_B}
, {"C", create_thing_C} };
const creation_map creation_funcs(
creation_map_entries,
creation_map_entries + sizeof(creation_map_entries)
/ sizeof(creation_map_entries[0]) );
}
std::shared_ptr<thing> create_thing(const std::string& type)
{
const creation_map::const_iterator it = creation_funcs.find(type);
if( it == creation_funcs.end() ) {
throw "Dooh!"; // or return NULL or whatever suits you
}
return it->second();
}
There are other ways to do this (like having a map of strings to objects from which to clone), but I think they all boil down to having a map of strings to something related to the specific types.
There is no reflection in C++, directly supported by the standard.
However C++ is sufficiently low-level that you can implement some minimal support for reflection to complete the task at hand.
For the simple task of creating a Factory, you usually use the Prototype approach:
class Base
{
public:
virtual Base* clone() const = 0;
virtual ~Base();
};
class Factory
{
public:
std::unique_ptr<Base> get(std::string const& name);
void set(std::string const& name, std::unique_ptr<Base> b);
private:
boost::ptr_map<std::string,Base> mExemplars;
};
Of course, those "known methods" that you are speaking about should be defined within the Base class, which acts as an interface.
There is no reflection in C++, so you should restate your question trying to explain what are the requirements that you would have fulfilled with the reflection part of it.
Depending on your actual constraints and requirements, there are a few things that you can do. The first approach that I would take would be creating an abstract factory where concrete factories can register and provide a simple interface:
class Base {}; // shared base by all created objects
class ConcreteFactoryBase {
public:
virtual ~ConcreteFactoryBase() {}
virtual Base* create() const = 0; // actual construction
virtual std::string id() const = 0; // id of the types returned
};
class AbstractFactory
{
typedef std::map<std::string, ConcreteFactoryBase*> factory_map_t;
public:
void registerFactory( ConcreteFactoryBase* factory ) {
factories[ factory->id() ] = factory;
}
Base* create( std::string const & id ) const {
factory_map_t::const_iterator it = factories.find( id );
if ( it == factories.end() ) {
return 0; // or throw, or whatever makes sense in your case
}
return it->second->create();
}
~AbstractFactory(); // ensure that the concrete factories are deleted
private:
factory_map_t factories;
};
The actual concrete factories can be implemented manually but they can probably be templated, unless the constructors for the different types require different arguments:
template <typename T>
class ConcreteFactory : public ConcreteFactoryBase {
public:
ConcreteFactory( std::string const & id ) : myid(id) {}
virtual Base* create() const {
return new T;
}
virtual std::string id() const {
return myid;
}
private:
std::string myid;
};
class Test : public Base {};
int main() {
AbstractFactory factory;
factory.registerFactory( new ConcreteFactory<Test>("Test") );
}
Optionally you could adapt the signatures so that you can pass arguments to the constructor through the different layers.
Then again, by knowing the actual constraints some other approaches might be better. The clone() approach suggested elsewhere is good (either by actually cloning or by creating an empty object of the same type). That is basically blending the factory with the objects themselves, so that each object is a factory of objects of the same type. I don't quite like mixing those two responsibilities, but it might be one of the simplest approaches with the least code to write.
You could use typeid & templates to implement the factory so you won't need strings at all.
#include <string>
#include <map>
#include <typeinfo>
//***** Base *****
class Base
{
public:
virtual ~Base(){} //needs to be virtual to make typeid work
};
//***** C1 *****
class C1 : public Base
{};
//***** Factory *****
class Factory
{
public:
template <class T>
Base& get();
private:
typedef std::map<std::string, Base*> BaseMap; // store pointers to avoid slicing derived objects
BaseMap m_Instances;
};
template <class T>
Base& Factory::get()
{
const std::string key = typeid(T).name();
BaseMap::iterator i = m_Instances.find(key);
if(i == m_Instances.end()) {
m_Instances[key] = new T(); // instances are leaked here; a destructor could clean them up
}
return *m_Instances[key];
}
//***** main *****
int main(int argc, char *argv[])
{
Factory f;
Base& c1 = f.get<C1>();
return 0;
}

Looking for a better C++ class factory

I have an application that has several objects (about 50 so far, but growing). There is only one instance of each of these objects in the app and these instances get shared among components.
What I've done is derive all of the objects from a base BrokeredObject class:
class BrokeredObject
{
public:
virtual int GetInterfaceId() = 0;
virtual ~BrokeredObject() {}
};
And each object type returns a unique ID. These IDs are maintained in a header file.
I then have an ObjectBroker "factory". When someone needs an object, they call GetObjectByID(). The broker looks in an STL map to see if the object already exists; if it does, it returns it. If not, it creates it, puts it in the map and returns it. All well and good.
BrokeredObject *GetObjectByID(int id)
{
BrokeredObject *pObject;
ObjectMap::iterator it = m_objectList.find(id);
// etc.
if(found) return pObject;
// not found, so create
switch(id)
{
case 0: pObject = new TypeA; break;
case 1: pObject = new TypeB; break;
// etc.
// I loathe this list
}
// add it to the list
return pObject;
}
What I find painful is maintaining this list of IDs and having to have each class implement it. I have at least made my consumers' lives slightly easier by having each type hold info about its own ID, like this:
class TypeA : public BrokeredObject
{
public:
static int get_InterfaceID() { return IID_TYPEA; }
int GetInterfaceId() { return get_InterfaceID(); }
};
So I can get an object like this:
GetObjectByID(TypeA::get_InterfaceID());
instead of having to actually know what the ID mapping is. But I am still not thrilled with the maintenance and the potential for errors. It seems that if I know the type, why should I also have to know the ID?
What I long for is something like this in C#:
BrokeredObject GetOrCreateObject<T>() where T : BrokeredObject
{
return new T();
}
Where the ObjectBroker would create the object based on the type passed in.
Has C# spoiled me and it's just a fact of life that C++ can't do this or is there a way to achieve this that I'm not seeing?
Yes, there is a way, and a pretty simple one even: C++ can do essentially what that C# code does (without checking for inheritance, though):
template<typename T>
BrokeredObject * GetOrCreateObject() {
return new T();
}
This will work and do the same as the C# code. It is also type-safe: If the type you pass is not inherited from BrokeredObject (or isn't that type itself), then the compiler moans at the return statement. It will however always return a new object.
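If an explicit check is wanted as well, here is a small sketch (assuming C++11's <type_traits>; with older compilers BOOST_STATIC_ASSERT and boost::is_base_of would serve the same purpose):
#include <type_traits>

template<typename T>
BrokeredObject* GetOrCreateObject() {
    static_assert(std::is_base_of<BrokeredObject, T>::value,
                  "T must derive from BrokeredObject");
    return new T();
}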
Singleton
As another guy suggested (credits to him), this all looks very much like a fine case for the singleton pattern. Just do TypeA::getInstance() to get the one and only instance stored in a static variable of that class. I suppose that would be far easier than the above way, without the need for IDs to solve it (I previously showed a way using templates to store IDs in this answer, but I found it is effectively just what a singleton is).
I've read that you want to leave open the chance to have multiple instances of the classes. One way to do that is to have a Mingleton (I made up that word :))
enum MingletonKind {
SINGLETON,
MULTITON
};
// Singleton
template<typename D, MingletonKind>
struct Mingleton {
static boost::shared_ptr<D> getOrCreate() {
static D d;
return boost::shared_ptr<D>(&d, NoopDel());
}
struct NoopDel {
void operator()(D const*) const { /* do nothing */ }
};
};
// Multiton
template<typename D>
struct Mingleton<D, MULTITON> {
static boost::shared_ptr<D> getOrCreate() {
return boost::shared_ptr<D>(new D);
}
};
class ImASingle : public Mingleton<ImASingle, SINGLETON> {
public:
void testCall() { }
// Indeed, we have to have a private constructor to prevent
// others to create instances of us.
private:
ImASingle() { /* ... */ }
friend class Mingleton<ImASingle, SINGLETON>;
};
class ImAMulti : public Mingleton<ImAMulti, MULTITON> {
public:
void testCall() { }
// ...
};
int main() {
// both do what we expect.
ImAMulti::getOrCreate()->testCall();
ImASingle::getOrCreate()->testCall();
}
Now, you just use SomeClass::getOrCreate() and it cares about the details. The custom deleter in the singleton case for shared_ptr makes deletion a no-op, because the object owned by the shared_ptr is allocated statically. However, be aware of problems of destruction order of static variables: Static initialization order fiasco
The way I would solve this problem is using what I would call the Static Registry Pattern, which in my mind is the C++ version of dependency injection.
Basically you have a static list of builder objects of a type that you use to build objects of another type.
A basic static registry implementation would look like:
template <class T>
class StaticRegistry
{
public:
typedef std::list<T*> Container;
static StaticRegistry<T>& GetInstance()
{
if (Instance == 0)
{
Instance = new StaticRegistry<T>;
}
return *Instance;
}
void Register(T* item)
{
Items.push_back(item);
}
void Deregister(T* item)
{
Items.remove(item);
if (Items.empty())
{
delete this;
Instance = 0;
}
}
typedef typename Container::const_iterator const_iterator;
const_iterator begin() const
{
return Items.begin();
}
const_iterator end() const
{
return Items.end();
}
protected:
StaticRegistry() {}
~StaticRegistry() {}
private:
Container Items;
static StaticRegistry<T>* Instance;
};
template <class T>
StaticRegistry<T>* StaticRegistry<T>::Instance = 0;
An implementation of BrokeredObjectBuilder could look like this:
class BrokeredObjectBuilderBase {
public:
BrokeredObjectBuilderBase() { StaticRegistry<BrokeredObjectBuilderBase>::GetInstance().Register(this); }
virtual ~BrokeredObjectBuilderBase() { StaticRegistry<BrokeredObjectBuilderBase>::GetInstance().Deregister(this); }
virtual int GetInterfaceId() = 0;
virtual BrokeredObject* MakeBrokeredObject() = 0;
};
template<class T>
class BrokeredObjectBuilder : public BrokeredObjectBuilderBase {
public:
BrokeredObjectBuilder(unsigned long interface_id) : m_InterfaceId(interface_id) { }
virtual int GetInterfaceId() { return m_InterfaceId; }
virtual T* MakeBrokeredObject() { return new T; }
private:
unsigned long m_InterfaceId;
};
class TypeA : public BrokeredObject
{
...
};
// Create a global variable for the builder of TypeA so that it's
// included in the BrokeredObjectBuilderRegistry
BrokeredObjectBuilder<TypeA> TypeABuilder(TypeAUserInterfaceId);
typedef StaticRegistry<BrokeredObjectBuilderBase> BrokeredObjectBuilderRegistry;
BrokeredObject *GetObjectByID(int id)
{
BrokeredObject *pObject(0);
ObjectMap::iterator it = m_objectList.find(id);
// etc.
if(found) return pObject;
// not found, so create
BrokeredObjectBuilderRegistry& registry(BrokeredObjectBuilderRegistry::GetInstance());
for(BrokeredObjectBuilderRegistry::const_iterator it = registry.begin(), e = registry.end(); it != e; ++it)
{
if((*it)->GetInterfaceId() == id)
{
pObject = (*it)->MakeBrokeredObject();
break;
}
}
if(0 == pObject)
{
// userinterface id not found, handle this here
...
}
// add it to the list
return pObject;
}
Pros:
All the code that knows about creating the types is separated out into the builders and the BrokeredObject classes don't need to know about it.
This implementation can be used in libraries and you can control on a per project level what builders are pulled into a project using a number of different techniques.
The builders can be as complex or as simple (like above) as you want them to be.
Cons:
There is a wee bit of infrastructure involved (but not too much).
The flexibility of choosing which builders to include in your project through global variable definitions does make it a little messy to work with.
I find that people find it hard to understand this pattern, I'm not sure why.
It's sometimes not easy to know what is in the static registry at any one time.
The above implementation leaks one bit of memory. (I can live with that...)
The above implementation is very simple, you can extend it in lots of different ways depending on the requirements you have.
Use a template class as the broker.
Make the instance a function-local static. It will be created on first use and automagically destroyed when the program exits.
template <class Type>
class BrokeredObject
{
public:
static Type& getInstance()
{
static Type theInstance;
return theInstance;
}
};
class TestObject
{
public:
TestObject()
{}
};
int main()
{
TestObject& obj =BrokeredObject<TestObject>::getInstance();
}
Instead of GetInterfaceId() in the BrokeredObject base class, you could define that pure virtual method:
virtual BrokeredObject& GetInstance()=0;
And in the derived classes you return the instance of the particular derived class from that method: if it's already created, return it; if not, create it first and then return it.
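A sketch of what a derived class might provide under this suggestion (assuming the base now declares only the pure virtual GetInstance(); other members omitted). Note that GetInstance() is still a non-static member, so it has to be called on some already-existing object, which is the main awkwardness of this approach:
class TypeA : public BrokeredObject
{
public:
    BrokeredObject& GetInstance()
    {
        static TypeA theInstance;  // created on first call, shared thereafter
        return theInstance;
    }
};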
It doesn't look like you need the global object to do the management, so why not move everything into the classes themselves?
template <class Type>
class BrokeredObject
{
protected:
static Type *theInstance;
public:
static Type *getOrCreate()
{
if (!theInstance) {
theInstance = new Type();
}
return theInstance;
}
static void free()
{
delete theInstance;
}
};
template <class Type>
Type *BrokeredObject<Type>::theInstance = 0; // definition of the static member
class TestObject : public BrokeredObject<TestObject>
{
public:
TestObject()
{}
};
int
main()
{
TestObject *obj = TestObject::getOrCreate();
}
If you have RTTI enabled, you can get the class name using typeid.
One question, why are you using a factory rather than using a singleton pattern for each class?
Edit: OK, so you don't want to be locked into a singleton; no problem. The wonderful thing about C++ is it gives you so much flexibility. You could have a GetSharedInstance() member function that returns a static instance of the class, but leave the constructor public so that you can still create other instances.
If you always know the type at compile time there is little point in calling BrokeredObject* p = GetObjectByID(TypeA::get_InterfaceID()) instead of TypeA* p = new TypeA or TypeA o directly.
If you on the other hand don't know the exact type at compile time, you could use some kind of type registry.
template <class T>
BrokeredObject* CreateObject()
{
return new T();
}
typedef int type_identity;
typedef std::map<type_identity, BrokeredObject* (*)()> registry;
registry r;
class TypeA : public BrokeredObject
{
public:
static const type_identity identity;
};
class TypeB : public BrokeredObject
{
public:
static const type_identity identity;
};
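// registration of the concrete types (in real code these assignments would go inside a function run at start-up):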
r[TypeA::identity] = &CreateObject<TypeA>;
r[TypeB::identity] = &CreateObject<TypeB>;
or if you have RTTI enabled you could use type_info as type_identity:
typedef const type_info* type_identity;
typedef std::map<type_identity, BrokeredObject* (*)()> registry;
registry r;
r[&typeid(TypeA)] = &CreateObject<TypeA>;
r[&typeid(TypeB)] = &CreateObject<TypeB>;
Each new class could of course, in any case, be self-registering in the registry, making the registration decentralized instead of centralized.
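A sketch of that self-registering variant, building on the snippet above (beware of static initialization order across translation units, since the registry itself is a plain global here, and note that TypeA::identity still needs an out-of-line definition):
// Each type's .cpp file defines one registrar object; its constructor runs at
// program start-up and inserts the creation function into the registry r.
struct Registrar
{
    Registrar(type_identity id, BrokeredObject* (*create)()) { r[id] = create; }
};

namespace { Registrar registerTypeA(TypeA::identity, &CreateObject<TypeA>); } // e.g. in TypeA.cpp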
You should almost certainly be using dependency injection.
Why not this?
template <typename T>
BrokeredObject* GetOrCreateObject()
{
return new T();
}
My use-case tended to get a little more complex - I needed the ability to do a little bit of object initialization and I needed to be able to load objects from different DLLs based on configuration (e.g. simulated versus actual for hardware). It started looking like COM and ATL was where I was headed, but I didn't want to add the weight of COM to the OS (this is being done in CE).
What I ended up going with was template-based (thanks litb for putting me on track) and looks like this:
class INewTransModule
{
public:
virtual bool Init() { return true; }
virtual bool Shutdown() { return true; }
};
template <typename T>
struct BrokeredObject
{
public:
inline static T* GetInstance()
{
static T t;
return &t;
}
};
template <>
struct BrokeredObject<INewTransModule>
{
public:
inline static INewTransModule* GetInstance()
{
static INewTransModule t;
// do stuff after creation
ASSERT(t.Init());
return &t;
}
};
class OBJECTBROKER_API ObjectBroker
{
public:
// these calls do configuration-based creations
static ITraceTool *GetTraceTool();
static IEeprom *GetEeprom();
// etc
};
Then to ensure that the objects (since they're templated) actually get compiled I added definitions like these:
class EepromImpl: public BrokeredObject<EepromImpl>, public CEeprom
{
};
class SimEepromImpl: public BrokeredObject<SimEepromImpl>, public CSimEeprom
{
};

How can I use covariant return types with smart pointers?

I have code like this:
class RetInterface {...};
class Ret1: public RetInterface {...};
class AInterface
{
public:
virtual boost::shared_ptr<RetInterface> get_r() const = 0;
...
};
class A1: public AInterface
{
public:
boost::shared_ptr<Ret1> get_r() const {...}
...
};
This code does not compile.
In visual studio it raises
C2555: overriding virtual function return type differs and is not
covariant
If I do not use boost::shared_ptr but return raw pointers, the code compiles (I understand this is due to covariant return types in C++). I can see the problem is because boost::shared_ptr of Ret1 is not derived from boost::shared_ptr of RetInterface. But I want to return boost::shared_ptr of Ret1 for use in other classes, else I must cast the returned value after the return.
Am I doing something wrong?
If not, why is the language like this - it should be extensible to handle conversion between smart pointers in this scenario? Is there a desirable workaround?
Firstly, this is indeed how it works in C++: the return type of a virtual function in a derived class must be the same as in the base class. There is the special exception that a function that returns a reference/pointer to some class X can be overridden by a function that returns a reference/pointer to a class that derives from X, but as you note this doesn't allow for smart pointers (such as shared_ptr), just for plain pointers.
If your interface RetInterface is sufficiently comprehensive, then you won't need to know the actual returned type in the calling code. In general it doesn't make sense anyway: the reason get_r is a virtual function in the first place is because you will be calling it through a pointer or reference to the base class AInterface, in which case you can't know what type the derived class would return. If you are calling this with an actual A1 reference, you can just create a separate get_r1 function in A1 that does what you need.
class A1: public AInterface
{
public:
boost::shared_ptr<RetInterface> get_r() const
{
return get_r1();
}
boost::shared_ptr<Ret1> get_r1() const {...}
...
};
Alternatively, you can use the visitor pattern or something like my Dynamic Double Dispatch technique to pass a callback in to the returned object which can then invoke the callback with the correct type.
There is a neat solution posted in this blog post (from Raoul Borges)
An excerpt of the bit prior to adding support for multiple inheritance and abstract methods is:
template <typename Derived, typename Base>
class clone_inherit : public Base
{
public:
std::unique_ptr<Derived> clone() const
{
return std::unique_ptr<Derived>(static_cast<Derived *>(this->clone_impl()));
}
private:
virtual clone_inherit * clone_impl() const override
{
return new Derived(*this);
}
};
class concrete: public clone_inherit<concrete, cloneable>
{
};
int main()
{
std::unique_ptr<concrete> c = std::make_unique<concrete>();
std::unique_ptr<concrete> cc = c->clone();
cloneable * p = c.get();
std::unique_ptr<cloneable> pp = p->clone();
}
I would encourage reading the full article. It's simply written and well explained.
You can't change return types (for non-pointer, non-reference return types) when overriding methods in C++. A1::get_r must return a boost::shared_ptr<RetInterface>.
Anthony Williams has a nice comprehensive answer.
What about this solution:
template<typename Derived, typename Base>
class SharedCovariant : public shared_ptr<Base>
{
public:
typedef Base BaseOf;
SharedCovariant(shared_ptr<Base> & container) :
shared_ptr<Base>(container)
{
}
shared_ptr<Derived> operator ->()
{
return boost::dynamic_pointer_cast<Derived>(*this);
}
};
e.g:
struct A {};
struct B : A {};
struct Test
{
shared_ptr<A> get() {return a_; }
shared_ptr<A> a_;
};
typedef SharedCovariant<B,A> SharedBFromA;
struct TestDerived : Test
{
SharedBFromA get() { return a_; }
};
Here is my attempt :
template<class T>
class Child : public T
{
public:
typedef T Parent;
};
template<typename _T>
class has_parent
{
private:
typedef char One;
typedef struct { char array[2]; } Two;
template<typename _C>
static One test(typename _C::Parent *);
template<typename _C>
static Two test(...);
public:
enum { value = (sizeof(test<_T>(nullptr)) == sizeof(One)) };
};
class A
{
public :
virtual void print() = 0;
};
class B : public Child<A>
{
public:
void print() override
{
printf("toto \n");
}
};
template<class T, bool hasParent = has_parent<T>::value>
class ICovariantSharedPtr;
template<class T>
class ICovariantSharedPtr<T, true> : public ICovariantSharedPtr<typename T::Parent>
{
public:
T * get() override = 0;
};
template<class T>
class ICovariantSharedPtr<T, false>
{
public:
virtual T * get() = 0;
};
template<class T>
class CovariantSharedPtr : public ICovariantSharedPtr<T>
{
public:
CovariantSharedPtr(){}
CovariantSharedPtr(std::shared_ptr<T> a_ptr) : m_ptr(std::move(a_ptr)){}
T * get() final
{
return m_ptr.get();
}
private:
std::shared_ptr<T> m_ptr;
};
And a little example :
class UseA
{
public:
virtual ICovariantSharedPtr<A> & GetPtr() = 0;
};
class UseB : public UseA
{
public:
CovariantSharedPtr<B> & GetPtr() final
{
return m_ptrB;
}
private:
CovariantSharedPtr<B> m_ptrB = std::make_shared<B>();
};
int _tmain(int argc, _TCHAR* argv[])
{
UseB b;
UseA & a = b;
a.GetPtr().get()->print();
}
Explanations :
This solution involves metaprogramming and requires modifying the classes used in covariant smart pointers.
The simple template struct Child is here to bind the type Parent and inheritance. Any class inheriting from Child<T> will inherit from T and define T as Parent. The classes used in covariant smart pointers needs this type to be defined.
The class has_parent is used to detect at compile time whether a class defines the type Parent or not. This part is not mine; I used the same code as is used to detect whether a method exists (see here).
As we want covariance with smart pointers, we want our smart pointers to mimic the existing class architecture. It's easier to explain how it works in the example.
When a CovariantSharedPtr<B> is defined, it inherits from ICovariantSharedPtr<B>, which is interpreted as ICovariantSharedPtr<B, has_parent<B>::value>. As B inherits from Child<A>, has_parent<B>::value is true, so ICovariantSharedPtr<B> is ICovariantSharedPtr<B, true> and inherits from ICovariantSharedPtr<B::Parent> which is ICovariantSharedPtr<A>. As A has no Parent defined, has_parent<A>::value is false, ICovariantSharedPtr<A> is ICovariantSharedPtr<A, false> and inherits from nothing.
The main point is that, as B inherits from A, we have ICovariantSharedPtr<B> inheriting from ICovariantSharedPtr<A>. So any method returning a pointer or a reference to ICovariantSharedPtr<A> can be overridden by a method returning the same on ICovariantSharedPtr<B>.
Mr Fooz answered part 1 of your question. Part 2, it works this way because the compiler doesn't know if it will be calling AInterface::get_r or A1::get_r at compile time - it needs to know what return value it's going to get, so it insists on both methods returning the same type. This is part of the C++ specification.
For the workaround, if A1::get_r returns a pointer to RetInterface, the virtual methods in RetInterface will still work as expected, and the proper object will be deleted when the pointer is destroyed. There's no need for different return types.
Maybe you could use an out parameter to get around covariance with returned boost shared_ptrs:
void get_r_to(boost::shared_ptr<RetInterface>& ) ...
since I suspect a caller can drop in a more refined shared_ptr type as argument.
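A small sketch of that, building on the question's classes (note the caller still receives the result through the base-typed parameter; a caller that wants a shared_ptr<Ret1> directly would need a separate, non-virtual overload):
#include <boost/shared_ptr.hpp>

class AInterface
{
public:
    virtual void get_r_to(boost::shared_ptr<RetInterface>& out) const = 0;
    virtual ~AInterface() {}
};

class A1 : public AInterface
{
public:
    void get_r_to(boost::shared_ptr<RetInterface>& out) const
    {
        out = boost::shared_ptr<Ret1>(new Ret1); // the upcast happens on assignment
    }
};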