I'm trying to figure out how I can iterate through a container (like std::vector) of objects that share a common base parent class contiguously in memory.
To demonstrate the problem let's use the following examples.
class Base
{
public:
Base();
virtual void doStuff() = 0;
};
class DerivedA : public Base
{
private:
//specific A member variables
public:
DerivedA();
virtual void doStuff();
};
class DerivedB : public Base
{
private:
//specific B member variables
public:
DerivedB();
virtual void doStuff();
};
Now, using a std::vector<Base> to iterate would keep the objects in contiguous memory, but we would experience slicing, as there isn't room for the derived members.
So we have to use a polymorphic technique with pointers, like so:
int main ()
{
std::vector<Base*> container;
container.push_back(new DerivedA());
container.push_back(new DerivedB());
for (std::vector<Base*>::iterator i = container.begin(); i!=container.end(); i++)
{
(*(*i)).doStuff();
}
}
As far as I know that should work fine given that the classes are implemented.
Problem:
Now, the vector stores its pointers in contiguous memory, but that does not mean that the objects they point to are contiguous.
So if I want to be able to delete and insert objects into the vector on the fly at any time, the objects will be spread all over the place in memory.
Question:
It seems like everyone suggests doing it the std::vector way but why isn't it considered problematic that it isn't iterable contiguously in memory (assuming we actually use the pointer)?
Am I forced to do it the copy-pasta way?
int main ()
{
std::vector<DerivedA> containerA;
DerivedA a;
containerA.push_back(a);
std::vector<DerivedB> containerB;
DerivedB b;
containerB.push_back(b);
for (std::vector<DerivedA>::iterator i = containerA.begin(); i!=containerA.end(); i++)
{
(*i).doStuff();
}
for (std::vector<DerivedB>::iterator i = containerB.begin(); i!=containerB.end(); i++)
{
(*i).doStuff();
}
}
I'm guessing there might not be a real solution to this since keeping objects of various sizes linearly in memory doesn't really make sense but if anyone can give me some advice I'd appreciate it.
Let's take the questions in order.
Q: How can I create a contiguous, heterogeneous container?
A: You can't.
Suppose you used some
placement new
shenanigans to arrange your objects in memory like this:
[B ][DA ][DB ][B ][B ][DB ][DA ]
How would the iteration mechanism know how far to advance the iteration
pointer from one object to the next? The number of bytes from the first
element to the second is different from the second to the third.
The reason contiguous arrays have to be homogeneous is so the distance
from one object to the next is the same for all elements. You might
try to embed a size in each object, but then you basically have a linked
list rather than an array (albeit one with good
locality).
This reasoning leads to the idea to use an array of pointers, about which
you have posed the next question:
Q: Why isn't it considered problematic that it isn't iterable contiguously?
A: It is not as slow as you think.
Your concern seems to be the performance of following pointers to
scattered memory locations. But the cost of following these pointers is
unlikely to be dominant. Don't get hung up on micro-optimizations like
memory layout until you have solid evidence they are needed.
Q: Am I forced to do it the copy-pasta way?
A: No!
Here, the concern seems to be maintainability rather than performance.
Maintainability is much more important in my opinion, and a good thing
to think about early.
For maintainability, you already have a fine solution: maintain a
vector of Base*.
If you really want to use multiple vectors, there is still a better way
than copy and paste: use a template function, like this (untested):
template <class T>
void doStuffToVector(std::vector<T> &vec)
{
for (typename std::vector<T>::iterator i = vec.begin(); i!=vec.end(); ++i) {
(*i).doStuff();
}
}
Then call it on each container:
doStuffToVector(containerA);
doStuffToVector(containerB);
If your only concern is maintainability, either the vector of pointers
or the template function should suffice.
Q: Any advice?
A: For starters, ignore performance, at least as far as constant
factors are concerned. Concentrate on correctness and maintainability.
Then, measure performance. Observe that this question did not begin
with a statement of current and desired speed. You don't yet have an
actual problem to solve!
After measurement, if you conclude it's too slow, use a
profiler
to find out where the slow spots are. They are almost never where you
think they will be.
Case in point: this entire question and answers have been focused on
the iteration, but no one has raised the point that the virtual function
calls to doStuff are much more likely to be the bottleneck! Virtual
function calls are expensive because they are indirect control flow,
which causes major problems for the
pipeline;
indirect data access is much less expensive because the
data cache is usually very
effective at satisfying data access requests quickly.
Q: (Implied) How would I optimize this?
A: After careful measurement, you might possibly find that this code
(the iteration itself, including virtual function dispatch; not what's
inside doStuff) is a bottleneck. That must mean it's being executed
for billions of iterations, minimum.
First, look into algorithmic improvements that will reduce the number
of iterations required.
Next, eliminate the virtual function call, for example by embedding an
explicit indicator of object type and testing it with if or switch.
That will allow the processor's
branch predictor to
be much more effective.
Finally, yes, you'd probably want to put all of the elements into one
contiguous array to improve locality and eliminate the indirect data
access. That will mean eliminating the class hierarchy too so all
objects are the same type, either combining all of the fields into a
single class and/or using a union. This will harm your program's
maintainability! That is sometimes one of the costs of writing high
performance code, but is actually necessary only very rarely.
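A minimal sketch of what those last two suggestions (an explicit type tag tested with switch, plus one flat contiguous element type) might look like; the field names here are purely illustrative, not taken from the question:

#include <vector>

enum class Kind { A, B };

// One flat type holding the union of the fields DerivedA and DerivedB would have had.
struct Element {
    Kind kind;
    int a_data;
    double b_data;
};

void doStuff(Element& e) {
    switch (e.kind) { // ordinary branch instead of a virtual call
        case Kind::A: e.a_data += 1; break;
        case Kind::B: e.b_data *= 2.0; break;
    }
}

void doStuffToAll(std::vector<Element>& v) {
    for (Element& e : v) // one contiguous array, no pointer chasing
        doStuff(e);
}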
A very simple solution is to sort your pointer array by address value. Then if you iterate your vector, they will be in memory order. Perhaps not contiguous, but in order nonetheless, which reduces cache misses.
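For instance, a sketch of that sorting step, assuming the std::vector<Base*> from the question:

#include <algorithm>
#include <functional>
#include <vector>

class Base; // the base class from the question

void sortByAddress(std::vector<Base*>& container) {
    // std::less<Base*> gives a well-defined total order on pointer values
    std::sort(container.begin(), container.end(), std::less<Base*>());
}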
The only way to truly have contiguous memory is to allocate it as such, for example have vectors of objects of derived type stored in their own container, which you then reference in your pointer vector.
It seems like everyone suggests doing it the std::vector way but why
isn't it considered problematic that it isn't iterable contiguously in
memory (assuming we actually use the pointer)?
I don't know who considers it problematic or not. As in the other answers, in a lot of cases you just don't care. Do the profiling and you'll see if you have to optimize it or not.
In most cases, people would recommend you use a std::vector<std::unique_ptr<...>>.
In many cases though, it is very important to have your objects in contiguous memory. Gaming is one of those cases. I write a lot of computational code (finite element libraries), where it is also very important. You could read about how to organize your data in a different way in order to have everything line up. For instance, it might be interesting to store all Arm objects in a std::vector rather than store each Arm in a Hero object and access the Arm objects through the Hero object.
Anyways, here is an easy way to store your objects from your example in a contiguous container.
For the base class, use alignas to fix the size of the object. Make sure it's big enough so all derived objects fit in it. In my below example, DerivedA has size 16, DerivedB has size 24. The specified align size must be a power of 2, so we choose 32.
struct alignas(32) Base
{
virtual void print() const {}
};
struct DerivedA : Base
{
void print() const final override { std::cout << "num: " << i << std::endl; }
int i = 1;
};
struct DerivedB : Base
{
void print() const final override { std::cout << "num: " << i << std::endl; }
int i = 2;
double j = 100.0;
};
Now we can write instances of DerivedA and DerivedB with placement new:
int main ()
{
std::vector<Base> v(2);
new (&v[0]) DerivedA();
new (&v[1]) DerivedB();
for (const auto& e : v)
e.print();
return 0;
}
EDIT
The problem here is that you need to manage the sizes manually. Also, as has been pointed out to me recently, alignas is designed for positioning the object in memory, not to allocate size. Maybe a better way would just be using std::variant.
int main()
{
std::vector<std::variant<DerivedA, DerivedB>> vec;
vec.emplace_back(DerivedA());
vec.emplace_back(DerivedB());
for (const auto& e : vec)
std::visit(VisitPackage(), e);
return 0;
}
where VisitPackage could be something like this:
struct VisitPackage
{
void operator()(const DerivedA& d) { d.print(); }
void operator()(const DerivedB& d) { d.print(); }
};
Below is a full and short example of how to get what you want using std::variant.
#include <iostream>
#include <vector>
#include <variant>
struct Base { virtual void print() const = 0; };
struct DerivedA : Base { void print() const final override { std::cout << "DerivedA\n"; } };
struct DerivedB : Base { void print() const final override { std::cout << "DerivedB\n"; } };
struct Print
{
template <typename T>
// note that the operator() calls print from DerivedA or DerivedB directly
void operator()(const T& obj) const { obj.print(); }
};
int main ()
{
using var_t = std::variant<DerivedA, DerivedB>;
std::vector<var_t> vec { DerivedA(), DerivedB() };
for (auto& e : vec)
std::visit(Print(), e);
return 0;
}
If we have to store objects in an array, their type must be fixed. Then we have these options:
allocate dynamically and store pointers - if pointed objects are required to be continuous in memory, use custom allocator
use polymorphic type of a fixed size, e.g. union, as a storage type
For the second option, the code could be something like this:
#include <new>
struct A {
A() {}
virtual void f() {}
};
struct B : A {
B() {}
void f() override {}
};
union U {
A a;
B b;
U() {}
};
int main() {
U u[2];
new (&u[0]) A;
new (&u[1]) B;
((A*)&u[0])->f(); // A::f
((A*)&u[1])->f(); // B::f
}
std::vector<T> iterators assume that the objects in the contiguous memory are of type T; std::vector<T>::iterator::operator++ treats sizeof(T) as invariant - that is, it doesn't consult the specific instance for size data.
In essence, you can think of vector and vector::iterator as a thin facade over a T* m_data pointer, such that iterator++ is really just a basic pointer operation.
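A rough mental model of that facade (this is not the real libstdc++/libc++ source, just an illustration):

template <class T>
struct toy_vector_iterator {
    T* p;
    toy_vector_iterator& operator++() { ++p; return *this; } // always advances by exactly sizeof(T) bytes
    T& operator*() const { return *p; }
    bool operator!=(const toy_vector_iterator& other) const { return p != other.p; }
};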
You will likely need to use a custom allocator and in-place new to prepare your data, accompanied by either indexing, linking etc. Perhaps consider something like http://www.boost.org/doc/libs/1_58_0/doc/html/intrusive/slist.html
See also boost::stable_vector
std::vector allocates its elements in contiguous memory, but the object pointers you are storing within the vector point elsewhere. This is how you iterate through the vector; the following code is written in C++14. Note that it does not solve the problem described in the question: the pointers are stored in contiguous memory, but the actual objects are not.
#include <iostream>
#include <memory>
#include <vector>
#include <algorithm>
using namespace std;
class Base
{
public:
Base() {}
virtual void doStuff() = 0;
};
class DerivedA : public Base
{
private:
//specific A member variables
public:
DerivedA() : Base() {}
virtual void doStuff() {
std::cout << "Derived Class A - Do Stuff" << std::endl;
}
};
class DerivedB : public Base
{
private:
//specific B member variables
public:
DerivedB() : Base() {}
virtual void doStuff() {
std::cout << "Derived Class B - Do Stuff" << std::endl;
}
};
int main() {
// your code goes here
std::vector<std::unique_ptr<Base> > container;
container.push_back(std::make_unique<DerivedA>());
container.push_back(std::make_unique<DerivedB>());
std::for_each(container.begin(), container.end(),[](std::unique_ptr<Base> & b) {
b->doStuff();
});
return 0;
}
I would like to make the runtime type of a local variable depend on some condition. Say we have this situation:
#include <iostream>
class Base{
public:
virtual void foo()=0;
};
class Derived1 : public Base {
virtual void foo(){
std::cout << "D1" << std::endl;
}
};
class Derived2 : public Base {
virtual void foo(){
std::cout << "D2" << std::endl;
}
};
In Java-like languages where objects are always handled through "references" the solution is simple (pseudocode):
Base x = condition ? Derived1() : Derived2();
The C++ solution will obviously involve pointers (at least behind the scenes), since there is no other way to bring two different types under the same variable (which must have a type). It cannot be simply Base as Base objects cannot be constructed (it has a pure virtual function).
The simplest way would be to use raw pointers:
Base* x = condition ? static_cast<Base*>(new Derived1()) : static_cast<Base*>(new Derived2());
(The casts are needed to make the two branches of the ternary operator have the same type)
Manual pointer handling is error-prone and old school, this situation calls for a unique_ptr.
std::unique_ptr<Base> x{condition ? static_cast<Base*>(new Derived1()) : static_cast<Base*>(new Derived2())};
Eh... Not exactly what I'd call elegant. It uses explicit new and casting. I hoped to use something like std::make_unique to hide the new but it doesn't seem possible.
Is this just one of those situations where you conclude "C++ is like that, if you need elegance use other languages (perhaps making a trade-off on other aspects)"?
Or is this whole idea just totally un-C++-ish? Am I in the wrong mindset here, trying to force ideas from different languages on C++?
Is this just one of those situations where you conclude "C++ is like that, if you need elegance use other languages (perhaps making a trade-off on other aspects)"?
Or is this whole idea just totally un-C++-ish? Am I in the wrong mindset here, trying to force ideas from different languages on C++?
It really depends on what you are going to use x for.
Variants
The C++ solution will obviously involve pointers (at least behind the scenes), since there is no other way to bring two different types under the same variable (which must have a type).
You can also use boost::variant (or boost::any, but boost::variant might be better in this case). For example, given that Derived1 is default constructible:
boost::variant<Derived1, Derived2> x;
if (!condition) x = Derived2();
This will work even if Derived1 and Derived2 don't share a base class. Then you can use the visitor pattern to operate on x. Given, for example:
struct Derived1 {
void foo1(){
std::cout << "D1" << std::endl;
}
};
struct Derived2 {
void foo2(){
std::cout << "D2" << std::endl;
}
};
then you can define the visitor as:
class some_visitor : public boost::static_visitor<void> {
public:
void operator()(Derived1& x) const {
x.foo1();
}
void operator()(Derived2& x) const {
x.foo2();
}
};
and use it as:
boost::apply_visitor(some_visitor(), x);
Polymorphic calls
If you really need to use x polymorphically, then yes, std::unique_ptr is ok. And just call your polymorphic function as x->foo():
std::unique_ptr<Base> x = condition ? std::unique_ptr<Base>(new Derived1()) : std::unique_ptr<Base>(new Derived2());
Concepts/Templates
If you just need to call a function than you might just be better off defining a concept and expressing it with templates:
template<class Type>
void my_func(Type& x) { x.foo(); }
You'll be able to define concepts explicitly in future C++ versions too.
One 'radical' possibility is to create a new kind of make_unique that will create the right typed return value
template<typename TReal, typename TOutside, typename... Args>
auto make_base_unique(Args&&... args) -> std::unique_ptr<TOutside>
{
return std::unique_ptr<TOutside>(new TReal(std::forward<Args>(args)...));
}
Then use it like:
auto x = (condition ? make_base_unique<Derived1,Base>() : make_base_unique<Derived2,Base>());
I have a number of unrelated types that all support the same operations through overloaded free functions (ad hoc polymorphism):
struct A {};
void use(int x) { std::cout << "int = " << x << std::endl; }
void use(const std::string& x) { std::cout << "string = " << x << std::endl; }
void use(const A&) { std::cout << "class A" << std::endl; }
As the title of the question implies, I want to store instances of those types in an heterogeneous container so that I can use() them no matter what concrete type they are. The container must have value semantics (ie. an assignment between two containers copies the data, it doesn't share it).
std::vector<???> items;
items.emplace_back(3);
items.emplace_back(std::string{ "hello" });
items.emplace_back(A{});
for (const auto& item: items)
use(item);
// or better yet
use(items);
And of course this must be fully extensible. Think of a library API that takes a vector<???>, and client code that adds its own types to the already known ones.
The usual solution is to store (smart) pointers to an (abstract) interface (eg. vector<unique_ptr<IUsable>>) but this has a number of drawbacks -- off the top of my head:
I have to migrate my current ad hoc polymorphic model to a class hierarchy where every single class inherits from the common interface. Oh snap! Now I have to write wrappers for int and string and what not... Not to mention the decreased reusability/composability due to the free member functions becoming intimately tied to the interface (virtual member functions).
The container loses its value semantics: a simple assignment vec1 = vec2 is impossible if we use unique_ptr (forcing me to manually perform deep copies), or both containers end up with shared state if we use shared_ptr (which has its advantages and disadvantages -- but since I want value semantics on the container, again I am forced to manually perform deep copies).
To be able to perform deep copies, the interface must support a virtual clone() function which has to be implemented in every single derived class. Can you seriously think of something more boring than that?
To sum it up: this adds a lot of unnecessary coupling and requires tons of (arguably useless) boilerplate code. This is definitely not satisfactory but so far this is the only practical solution I know of.
I have been searching for a viable alternative to subtype polymorphism (aka. interface inheritance) for ages. I play a lot with ad hoc polymorphism (aka. overloaded free functions) but I always hit the same hard wall: containers have to be homogeneous, so I always grudgingly go back to inheritance and smart pointers, with all the drawbacks already listed above (and probably more).
Ideally, I'd like to have a mere vector<IUsable> with proper value semantics, without changing anything to my current (absence of) type hierarchy, and keep ad hoc polymorphism instead of requiring subtype polymorphism.
Is this possible? If so, how?
Different alternatives
It is possible. There are several alternative approaches to your problem. Each one has different advantages and drawbacks (I will explain each one):
Create an interface and have a template class which implements this interface for different types. It should support cloning.
Use boost::variant and visitation.
Blending static and dynamic polymorphism
For the first alternative you need to create an interface like this:
class UsableInterface
{
public:
virtual ~UsableInterface() {}
virtual void use() = 0;
virtual std::unique_ptr<UsableInterface> clone() const = 0;
};
Obviously, you don't want to implement this interface by hand everytime you have a new type having the use() function. Therefore, let's have a template class which does that for you.
template <typename T> class UsableImpl : public UsableInterface
{
public:
template <typename ...Ts> UsableImpl( Ts&&...ts )
: t( std::forward<Ts>(ts)... ) {}
virtual void use() override { ::use( t ); } // qualify the call so it finds the free use(), not this member
virtual std::unique_ptr<UsableInterface> clone() const override
{
return std::make_unique<UsableImpl<T>>( t ); // This is C++14
// This is the C++11 way to do it:
// return std::unique_ptr<UsableImpl<T> >( new UsableImpl<T>(t) );
}
private:
T t;
};
Now you can actually already do everything you need with it. You can put these things in a vector:
std::vector<std::unique_ptr<UsableInterface>> usables;
// fill it
And you can copy that vector preserving the underlying types:
std::vector<std::unique_ptr<UsableInterface>> copies;
std::transform( begin(usables), end(usables), back_inserter(copies),
[]( const std::unique_ptr<UsableInterface> & p )
{ return p->clone(); } );
You probably don't want to litter your code with stuff like this. What you want to write is
copies = usables;
Well, you can get that convenience by wrapping the std::unique_ptr into a class which supports copying.
class Usable
{
public:
template <typename T> Usable( T t )
: p( std::make_unique<UsableImpl<T>>( std::move(t) ) ) {}
Usable( const Usable & other )
: p( other.clone() ) {}
Usable( Usable && other ) noexcept
: p( std::move(other.p) ) {}
void swap( Usable & other ) noexcept
{ p.swap(other.p); }
Usable & operator=( Usable other )
{ swap(other); return *this; }
void use()
{ p->use(); }
private:
std::unique_ptr<UsableInterface> p;
};
Because of the nice templated constructor you can now write stuff like
Usable u1 = 5;
Usable u2 = std::string("Hello usable!");
And you can assign values with proper value semantics:
u1 = u2;
And you can put Usables in an std::vector
std::vector<Usable> usables;
usables.emplace_back( std::string("Hello!") );
usables.emplace_back( 42 );
and copy that vector
const auto copies = usables;
You can find this idea in Sean Parent's talk Value Semantics and Concepts-based Polymorphism. He also gave a very brief version of this talk at Going Native 2013, but I think it is too fast to follow.
Moreover, you can take a more generic approach than writing your own Usable class and forwarding all the member functions (if you want to add others later). The idea is to replace the class Usable with a template class. This template class will not provide a member function use() but an operator T&() and operator const T&() const. This gives you the same functionality, but you don't need to write an extra value class every time you use this pattern.
A safe, generic, stack-based discriminated union container
The template class boost::variant is exactly that and provides something like a C style union but safe and with proper value semantics. The way to use it is this:
using Usable = boost::variant<int,std::string,A>;
Usable usable;
You can assign from objects of any of these types to a Usable.
usable = 1;
usable = "Hello variant!";
usable = A();
If all template types have value semantics, then boost::variant also has value semantics and can be put into STL containers. You can write a use() function for such an object by a pattern that is called the visitor pattern. It calls the correct use() function for the contained object depending on the internal type.
class UseVisitor : public boost::static_visitor<void>
{
public:
template <typename T>
void operator()( T && t )
{
use( std::forward<T>(t) );
}
};
void use( const Usable & u )
{
boost::apply_visitor( UseVisitor(), u );
}
Now you can write
Usable u = "Hello";
use( u );
And, as I already mentioned, you can put these thingies into STL containers.
std::vector<Usable> usables;
usables.emplace_back( 5 );
usables.emplace_back( "Hello world!" );
const auto copies = usables;
The trade-offs
You can grow the functionality in two dimensions:
Add new classes which satisfy the static interface.
Add new functions which the classes must implement.
In the first approach I presented it is easier to add new classes. The second approach makes it easier to add new functionality.
In the first approach it is impossible (or at least hard) for client code to add new functions. In the second approach it is impossible (or at least hard) for client code to add new classes to the mix. A way out is the so-called acyclic visitor pattern which makes it possible for clients to extend a class hierarchy with new classes and new functionality. The drawback here is that you have to sacrifice a certain amount of static checking at compile-time. Here's a link which describes the visitor pattern including the acyclic visitor pattern along with some other alternatives. If you have questions about this stuff, I'm willing to answer.
Both approaches are super type-safe. There is no trade-off to be made there.
The run-time costs of the first approach can be much higher, since there is a heap allocation involved for each element you create. The boost::variant approach is stack-based and is therefore probably faster. If performance is a problem with the first approach, consider switching to the second.
Credit where it's due: When I watched Sean Parent's Going Native 2013 "Inheritance Is The Base Class of Evil" talk, I realized how simple it actually was, in hindsight, to solve this problem. I can only advise you to watch it (there's much more interesting stuff packed in just 20 minutes, this Q/A barely scratches the surface of the whole talk), as well as the other Going Native 2013 talks.
Actually it's so simple it hardly needs any explanation at all, the code speaks for itself:
struct IUsable {
template<typename T>
IUsable(T value) : m_intf{ new Impl<T>(std::move(value)) } {}
IUsable(IUsable&&) noexcept = default;
IUsable(const IUsable& other) : m_intf{ other.m_intf->clone() } {}
IUsable& operator =(IUsable&&) noexcept = default;
IUsable& operator =(const IUsable& other) { m_intf = other.m_intf->clone(); return *this; }
// actual interface
friend void use(const IUsable&);
private:
struct Intf {
virtual ~Intf() = default;
virtual std::unique_ptr<Intf> clone() const = 0;
// actual interface
virtual void intf_use() const = 0;
};
template<typename T>
struct Impl : Intf {
Impl(T&& value) : m_value(std::move(value)) {}
virtual std::unique_ptr<Intf> clone() const override { return std::unique_ptr<Intf>{ new Impl<T>(*this) }; }
// actual interface
void intf_use() const override { use(m_value); }
private:
T m_value;
};
std::unique_ptr<Intf> m_intf;
};
// ad hoc polymorphic interface
void use(const IUsable& intf) { intf.m_intf->intf_use(); }
// could be further generalized for any container but, hey, you get the drift
template<typename... Args>
void use(const std::vector<IUsable, Args...>& c) {
std::cout << "vector<IUsable>" << std::endl;
for (const auto& i: c) use(i);
std::cout << "End of vector" << std::endl;
}
int main() {
std::vector<IUsable> items;
items.emplace_back(3);
items.emplace_back(std::string{ "world" });
items.emplace_back(items); // copy "items" in its current state
items[0] = std::string{ "hello" };
items[1] = 42;
items.emplace_back(A{});
use(items);
}
// vector<IUsable>
// string = hello
// int = 42
// vector<IUsable>
// int = 3
// string = world
// End of vector
// class A
// End of vector
As you can see, this is a rather simple wrapper around a unique_ptr<Interface>, with a templated constructor that instantiates a derived Implementation<T>. All the (not quite) gory details are private, the public interface couldn't be any cleaner: the wrapper itself has no member functions except construction/copy/move, the interface is provided as a free use() function that overloads the existing ones.
Obviously, the choice of unique_ptr means that we need to implement a private clone() function that is called whenever we want to make a copy of an IUsable object (which in turn requires a heap allocation). Admittedly one heap allocation per copy is quite suboptimal, but this is a requirement if any function of the public interface can mutate the underlying object (ie. if use() took non-const references and modified them): this way we ensure that every object is unique and thus can freely be mutated.
Now if, as in the question, the objects are completely immutable (not only through the exposed interface, mind you, I really mean the whole objects are always and completely immutable) then we can introduce shared state without nefarious side effects. The most straightforward way to do this is to use a shared_ptr-to-const instead of a unique_ptr:
struct IUsableImmutable {
template<typename T>
IUsableImmutable(T value) : m_intf(std::make_shared<const Impl<T>>(std::move(value))) {}
IUsableImmutable(IUsableImmutable&&) noexcept = default;
IUsableImmutable(const IUsableImmutable&) noexcept = default;
IUsableImmutable& operator =(IUsableImmutable&&) noexcept = default;
IUsableImmutable& operator =(const IUsableImmutable&) noexcept = default;
// actual interface
friend void use(const IUsableImmutable&);
private:
struct Intf {
virtual ~Intf() = default;
// actual interface
virtual void intf_use() const = 0;
};
template<typename T>
struct Impl : Intf {
Impl(T&& value) : m_value(std::move(value)) {}
// actual interface
void intf_use() const override { use(m_value); }
private:
const T m_value;
};
std::shared_ptr<const Intf> m_intf;
};
// ad hoc polymorphic interface
void use(const IUsableImmutable& intf) { intf.m_intf->intf_use(); }
// could be further generalized for any container but, hey, you get the drift
template<typename... Args>
void use(const std::vector<IUsableImmutable, Args...>& c) {
std::cout << "vector<IUsableImmutable>" << std::endl;
for (const auto& i: c) use(i);
std::cout << "End of vector" << std::endl;
}
Notice how the clone() function has disappeared (we don't need it any more, we just share the underlying object and it's no bother since it's immutable), and how copy is now noexcept thanks to shared_ptr guarantees.
The fun part is, the underlying objects have to be immutable, but you can still mutate their IUsableImmutable wrapper so it's still perfectly OK to do this:
std::vector<IUsableImmutable> items;
items.emplace_back(3);
items[0] = std::string{ "hello" };
(only the shared_ptr is mutated, not the underlying object itself so it doesn't affect the other shared references)
Maybe boost::variant?
#include <iostream>
#include <string>
#include <vector>
#include "boost/variant.hpp"
struct A {};
void use(int x) { std::cout << "int = " << x << std::endl; }
void use(const std::string& x) { std::cout << "string = " << x << std::endl; }
void use(const A&) { std::cout << "class A" << std::endl; }
typedef boost::variant<int,std::string,A> m_types;
class use_func : public boost::static_visitor<>
{
public:
template <typename T>
void operator()( T & operand ) const
{
use(operand);
}
};
int main()
{
std::vector<m_types> vec;
vec.push_back(1);
vec.push_back(2);
vec.push_back(std::string("hello"));
vec.push_back(A());
for (int i=0;i<4;++i)
boost::apply_visitor( use_func(), vec[i] );
return 0;
}
Live example: http://coliru.stacked-crooked.com/a/e4f4ccf6d7e6d9d8
The other answers earlier (use a vtabled interface base class, use boost::variant, use virtual base class inheritance tricks) are all perfectly good and valid solutions for this problem, each with a different balance of compile-time versus run-time costs. I would suggest though that instead of boost::variant, on C++ 11 and later you use eggs::variant instead, which is a reimplementation of boost::variant using C++ 11/14; it is enormously superior in design, performance, ease of use and power of abstraction, and it even provides a fairly full feature subset on VS2013 (and a full feature set on VS2015). It's also written and maintained by a lead Boost author.
If you are able to redefine the problem a bit though - specifically, if you can lose the type-erasing std::vector in favour of something much more powerful - you could use heterogeneous type containers instead. These work by returning a new container type for each modification of the container, so the pattern must be:
newtype newcontainer=oldcontainer.push_back(newitem);
These were a pain to use in C++ 03, though Boost.Fusion makes a fair fist of making them potentially useful. Real usability only arrives from C++ 11 onwards, and especially from C++ 14 onwards thanks to generic lambdas, which make working with these heterogeneous collections very straightforward to program using constexpr functional programming. Probably the current leading toolkit library for this is the proposed Boost.Hana, which ideally requires clang 3.6 or GCC 5.0.
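As a small illustration of the "new container type per modification" idea, here is a sketch using only std::tuple and C++14 generic lambdas (not Boost.Hana; the helper names are mine):

#include <cstddef>
#include <iostream>
#include <string>
#include <tuple>
#include <utility>

// "push_back" returns a container of a *new* type each time, as described above.
template <class Tuple, class T>
auto push_back(Tuple&& t, T&& v) {
    return std::tuple_cat(std::forward<Tuple>(t), std::make_tuple(std::forward<T>(v)));
}

template <class Tuple, class F, std::size_t... I>
void for_each_impl(const Tuple& t, F&& f, std::index_sequence<I...>) {
    int dummy[] = { (f(std::get<I>(t)), 0)... }; // expand f over every element
    (void)dummy;
}

template <class... Ts, class F>
void for_each(const std::tuple<Ts...>& t, F&& f) {
    for_each_impl(t, std::forward<F>(f), std::index_sequence_for<Ts...>{});
}

int main() {
    auto c0 = std::make_tuple();                   // empty heterogeneous "container"
    auto c1 = push_back(c0, 42);                   // new type: tuple<int>
    auto c2 = push_back(c1, std::string("hello")); // new type: tuple<int, std::string>
    for_each(c2, [](const auto& x) { std::cout << x << '\n'; });
}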
Heterogeneous type containers are pretty much the 99% compile time 1% run time cost solution. You'll see a lot of compiler optimiser face plants with current compiler technology e.g. I once saw clang 3.5 generate 2500 opcodes for code which should have generated two opcodes, and for the same code GCC 4.9 spat out 15 opcodes 12 of which didn't actually do anything (they loaded memory into registers and did nothing with those registers). All that said, in a few years time you will be able to achieve optimal code generation for heterogeneous type containers, at which point I would expect they'll become the next gen form of C++ metaprogramming where instead of arsing around with templates we'll be able to functionally program the C++ compiler using actual functions!!!
Here's an idea I got recently from the std::function implementation in libstdc++:
Create a Handler<T> template class with a static member function that knows how to copy, delete and perform other operations on T.
Then store a function pointer to that static function in the constructor of your Any class. Your Any class doesn't need to know about T then; it just needs this function pointer to dispatch the T-specific operations. Notice that the signature of the function is independent of T.
Roughly like so:
struct Foo { ... }
struct Bar { ... }
struct Baz { ... }
template<class T>
struct Handler
{
static void action(Ptr data, EActions eAction)
{
switch (eAction)
{
case COPY:
call T::T(...);
case DELETE:
call T::~T();
case OTHER:
call T::whatever();
}
}
}
struct Any
{
Ptr handler;
Ptr data;
template<class T>
Any(T t)
: handler(Handler<T>::action)
, data(handler(t, COPY))
{}
Any(const Any& that)
: handler(that.handler)
, data(handler(that.data, COPY))
{}
~Any()
{
handler(data, DELETE);
}
};
int main()
{
vector<Any> v;
Foo foo; Bar bar; Baz baz;
v.push_back(foo);
v.push_back(bar);
v.push_back(baz);
}
This gives you type erasure while still maintaining value semantics, and does not require modification of the contained classes (Foo, Bar, Baz), and doesn't use dynamic polymorphism at all. It's pretty cool stuff.
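A compilable sketch of that idea (the names are mine; the real libstdc++ machinery is more involved, and assignment here is written naively):

#include <string>
#include <utility>
#include <vector>

enum class EAction { Copy, Destroy };

// Handler<T>::action is the only T-specific function; note its signature does not mention T.
template <class T>
struct Handler {
    static void* action(void* data, EAction e) {
        switch (e) {
            case EAction::Copy:    return new T(*static_cast<T*>(data));
            case EAction::Destroy: delete static_cast<T*>(data); return nullptr;
        }
        return nullptr;
    }
};

struct Any {
    void* (*handler)(void*, EAction);
    void* data;

    template <class T>
    Any(T t) : handler(&Handler<T>::action), data(new T(std::move(t))) {}

    Any(const Any& that) : handler(that.handler), data(handler(that.data, EAction::Copy)) {}

    Any& operator=(const Any& that) {
        Any tmp(that);
        std::swap(handler, tmp.handler);
        std::swap(data, tmp.data);
        return *this; // tmp's destructor releases our old data
    }

    ~Any() { handler(data, EAction::Destroy); }
};

int main() {
    std::vector<Any> v;
    v.push_back(Any(1));
    v.push_back(Any(std::string("hello")));
    v.push_back(Any(3.14));
} // every element is copied and destroyed through its own Handler<T>::action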
I would like to ask for general advice. The code below fully compiles and roughly represents the structure of the code I deal with.
In a nutshell I want to pass a series of objects derived from the base class (Class1), plus some other parameters, from one place to another. More precisely: implement different child classes of the parent class, gather instances of those, and pass them for processing along with parameters.
The question is, would you recommend using a vector of objects or a vector of pointers? I don't mind going for some new stuff from C++11 (std::unique_ptr, std::shared_ptr) if this is better/safer/leaks less memory/etc. for some reason. I would really appreciate it if someone could advise, with arguments, on a container for such a case and/or provide an example using C++11.
p/s/ here UncleBens said that using pointers could lead to memory leaks if/when exceptions are thrown. So maybe I should really use smart pointers for the task? How would this look?
p/p/s/ funny enough, the real-life example gives me Bus error: 10 when I try to use those Class2 objects from std::vector< Container<d>*> / std::vector< Container<d>>. However, I'm not able to reproduce the error in a simple case...
#include <string>
#include <iostream>
#include <vector>
template<int dim>
class Class1 {
public:
Class1() {};
~Class1() {};
};
template<int dim>
class Class2 : public Class1<dim>
{
public:
Class2() :
Class1<dim>() {};
};
template <int dim>
class Container
{
public:
Container( Class1<dim> & f, int param1) : c1(f), param_(param1) {}
Class1<dim> & c1;
int param_;
};
static const int d = 2;
int main()
{
int p = 1;
Class2<d> c2;
std::vector< Container<d> *> p_list;
std::vector< Container<d> > list;
{
p_list.push_back ( new Container<d> ( c2,p ) );
}
std::cout<<"from pointers: "<<p_list[0]->param_<<std::endl;
{
list.push_back( Container<d> ( c2,p ) );
}
std::cout<<"from objects: "<<list[0].param_<<std::endl;
}
Firstly, the destructor of Class1 should be marked virtual, otherwise when an instance of a derived class (Class2 for example) is destroyed, its destructor won't be called correctly.
As for your question, the consequences of using a container of objects are:
The container might need to make copies of the objects, so you need to make sure there is a copy constructor (your class in the example gets the default one generated by the compiler). Copying objects can have a performance impact, and you need to properly define the semantics of the copy (is it deep or shallow, i.e. do you create a new copy of the class1 object, or just copy the reference).
You can't have any polymorphism, so you couldn't subclass Container and then put instances of the base and subclass in the same container.
Depending on the container, your objects will be contiguous in memory (this is the case for a vector) which can have performance benefits.
If you use a container of raw pointers, then the container only needs to copy pointers (faster) and you can add derived instances of the contained type. The downside is that you'll have to destroy the objects manually after use and as you mentioned, it's easy to leak memory.
shared_ptrs have similar benefits/downsides to raw pointers, but the key benefit is that the shared_ptr destroys the object for you when nothing is referencing it any more; this makes it less likely that you'll introduce memory leaks (but it's still not impossible to do so when exceptions are involved).
Given that you're handing these objects over for further processing, I would say a shared_ptr based approach is a good option. The consequences of using shared ptrs over and above those of raw pointers are:
There can be a performance overhead, as in order to be thread safe, most shared_ptr implementations need to check/set locks (this might involve a system call to the OS).
you can still leak memory by introducing circular references between objects (see the sketch just after this list).
you'll have to use a compiler implementing C++11 or use external libraries (most people use boost).
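As a side note on the circular-reference point above, a minimal sketch of that leak (shown with std::shared_ptr, but the same applies to boost::shared_ptr):

#include <memory>

struct Node {
    std::shared_ptr<Node> other; // a std::weak_ptr here would break the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;
    b->other = a; // each keeps the other alive: use counts never reach zero, both Nodes leak
}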
An example using shared_ptrs would look something like this (not tested).
#include <string>
#include <iostream>
#include <vector>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
template<int dim>
class Class1 {
public:
Class1() {};
virtual ~Class1() {};
};
template<int dim>
class Class2 : public Class1<dim>
{
public:
Class2() :
Class1<dim>() {};
};
template <int dim>
class Container
{
public:
Container( boost::shared_ptr<Class1<dim>> f, int param1) : c1(f), param_(param1) {}
boost::shared_ptr<Class1<dim>> c1;
int param_;
};
static const int d = 2;
int main()
{
int p = 1;
boost::shared_ptr<Class1<d>> c2 = boost::make_shared<Class2<d>>();
std::vector<boost::shared_ptr<Container<d>>> list;
list.push_back(boost::make_shared<Container<d>>(c2,p));
std::cout << "from objects: " << list[0]->param_ << std::endl;
}
In summary, if the code receiving the containers doesn't store refs to them anywhere, and you don't need polymorphism, then a container of objects is probably ok. If it is necessary for the code receiving the containers to store them somewhere, and/or you want polymorphic containers, use shared ptrs.
There's this one thing in C++ which has been making me feel uncomfortable for quite a long time, because I honestly don't know how to do it, even though it sounds simple:
How do I implement Factory Method in C++ correctly?
Goal: to make it possible to allow the client to instantiate some object using factory methods instead of the object's constructors, without unacceptable consequences and a performance hit.
By "Factory method pattern", I mean both static factory methods inside an object or methods defined in another class, or global functions. Just generally "the concept of redirecting the normal way of instantiation of class X to anywhere else than the constructor".
Let me skim through some possible answers which I have thought of.
0) Don't make factories, make constructors.
This sounds nice (and indeed often the best solution), but is not a general remedy. First of all, there are cases when object construction is a task complex enough to justify its extraction to another class. But even putting that fact aside, even for simple objects using just constructors often won't do.
The simplest example I know is a 2-D Vector class. So simple, yet tricky. I want to be able to construct it both from both Cartesian and polar coordinates. Obviously, I cannot do:
struct Vec2 {
Vec2(float x, float y);
Vec2(float angle, float magnitude); // not a valid overload!
// ...
};
My natural way of thinking is then:
struct Vec2 {
static Vec2 fromLinear(float x, float y);
static Vec2 fromPolar(float angle, float magnitude);
// ...
};
Which, instead of constructors, leads me to usage of static factory methods... which essentially means that I'm implementing the factory pattern, in some way ("the class becomes its own factory"). This looks nice (and would suit this particular case), but fails in some cases, which I'm going to describe in point 2. Do read on.
another case: trying to overload by two opaque typedefs of some API (such as GUIDs of unrelated domains, or a GUID and a bitfield), types semantically totally different (so - in theory - valid overloads) but which actually turn out to be the same thing - like unsigned ints or void pointers.
1) The Java Way
Java has it simple, as we only have dynamic-allocated objects. Making a factory is as trivial as:
class FooFactory {
public Foo createFooInSomeWay() {
// can be a static method as well,
// if we don't need the factory to provide its own object semantics
// and just serve as a group of methods
return new Foo(some, args);
}
}
In C++, this translates to:
class FooFactory {
public:
Foo* createFooInSomeWay() {
return new Foo(some, args);
}
};
Cool? Often, indeed. But then- this forces the user to only use dynamic allocation. Static allocation is what makes C++ complex, but is also what often makes it powerful. Also, I believe that there exist some targets (keyword: embedded) which don't allow for dynamic allocation. And that doesn't imply that the users of those platforms like to write clean OOP.
Anyway, philosophy aside: In the general case, I don't want to force the users of the factory to be restrained to dynamic allocation.
2) Return-by-value
OK, so we know that 1) is cool when we want dynamic allocation. Why won't we add static allocation on top of that?
class FooFactory {
public:
Foo* createFooInSomeWay() {
return new Foo(some, args);
}
Foo createFooInSomeWay() {
return Foo(some, args);
}
};
What? We can't overload by the return type? Oh, of course we can't. So let's change the method names to reflect that. And yes, I've written the invalid code example above just to stress how much I dislike the need to change the method name, for example because we cannot implement a language-agnostic factory design properly now, since we have to change names - and every user of this code will need to remember that difference of the implementation from the specification.
class FooFactory {
public:
Foo* createDynamicFooInSomeWay() {
return new Foo(some, args);
}
Foo createFooObjectInSomeWay() {
return Foo(some, args);
}
};
OK... there we have it. It's ugly, as we need to change the method name. It's imperfect, since we need to write the same code twice. But once done, it works. Right?
Well, usually. But sometimes it does not. When creating Foo, we actually depend on the compiler to do the return value optimisation for us, because the C++ standard is benevolent enough to the compiler vendors not to specify when the object will be created in-place and when it will be copied when returning a temporary object by value in C++. So if Foo is expensive to copy, this approach is risky.
And what if Foo is not copiable at all? Well, doh. (Note that in C++17 with guaranteed copy elision, not-being-copiable is no problem anymore for the code above)
Conclusion: Making a factory by returning an object is indeed a solution for some cases (such as the 2-D vector previously mentioned), but still not a general replacement for constructors.
3) Two-phase construction
Another thing that someone would probably come up with is separating the issue of object allocation and its initialisation. This usually results in code like this:
class Foo {
public:
Foo() {
// empty or almost empty
}
// ...
};
class FooFactory {
public:
void createFooInSomeWay(Foo& foo, some, args);
};
void clientCode() {
Foo staticFoo;
auto_ptr<Foo> dynamicFoo(new Foo());
FooFactory factory;
factory.createFooInSomeWay(staticFoo);
factory.createFooInSomeWay(*dynamicFoo);
// ...
}
One may think it works like a charm. The only price we pay for in our code...
Since I've written all of this and left this as the last, I must dislike it too. :) Why?
First of all... I sincerely dislike the concept of two-phase construction and I feel guilty when I use it. If I design my objects with the assertion that "if it exists, it is in valid state", I feel that my code is safer and less error-prone. I like it that way.
Having to drop that convention AND changing the design of my object just for the purpose of making factory of it is.. well, unwieldy.
I know that the above won't convince many people, so let me give some more solid arguments. Using two-phase construction, you cannot:
initialise const or reference member variables,
pass arguments to base class constructors and member object constructors.
And probably there could be some more drawbacks which I can't think of right now, and I don't even feel particularly obliged to since the above bullet points convince me already.
So: not even close to a good general solution for implementing a factory.
Conclusions:
We want to have a way of object instantiation which would:
allow for uniform instantiation regardless of allocation,
give different, meaningful names to construction methods (thus not relying on by-argument overloading),
not introduce a significant performance hit nor, preferably, significant code bloat, especially on the client side,
be general, as in: possible to be introduced for any class.
I believe I have proven that the ways I have mentioned don't fulfil those requirements.
Any hints? Please provide me with a solution, I don't want to think that this language won't allow me to properly implement such a trivial concept.
First of all, there are cases when
object construction is a task complex
enough to justify its extraction to
another class.
I believe this point is incorrect. The complexity doesn't really matter; relevance does. If an object can be constructed in one step (not like in the builder pattern), the constructor is the right place to do it. If you really need another class to perform the job, then it should be a helper class that is used from the constructor anyway.
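A minimal sketch of what "a helper class used from the constructor" might look like (the Mesh/MeshLoader names are made up for illustration):

#include <string>
#include <vector>

// Hypothetical helper that does the complex part of construction.
struct MeshLoader {
    std::vector<float> load(const std::string& /*path*/) const {
        return {0.0f, 1.0f, 2.0f}; // stand-in for real parsing work
    }
};

class Mesh {
public:
    // The constructor stays the single entry point; the helper does the heavy lifting.
    explicit Mesh(const std::string& path) : vertices_(MeshLoader().load(path)) {}
private:
    std::vector<float> vertices_;
};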
Vec2(float x, float y);
Vec2(float angle, float magnitude); // not a valid overload!
There is an easy workaround for this:
struct Cartesian {
inline Cartesian(float x, float y): x(x), y(y) {}
float x, y;
};
struct Polar {
inline Polar(float angle, float magnitude): angle(angle), magnitude(magnitude) {}
float angle, magnitude;
};
Vec2(const Cartesian &cartesian);
Vec2(const Polar &polar);
The only disadvantage is that it looks a bit verbose:
Vec2 v2(Vec2::Cartesian(3.0f, 4.0f));
But the good thing is that you can immediately see what coordinate type you're using, and at the same time you don't have to worry about copying. If you want copying, and it's expensive (as proven by profiling, of course), you may wish to use something like Qt's shared classes to avoid copying overhead.
As for the allocation type, the main reason to use the factory pattern is usually polymorphism. Constructors can't be virtual, and even if they could, it wouldn't make much sense. When using static or stack allocation, you can't create objects in a polymorphic way because the compiler needs to know the exact size. So it works only with pointers and references. And returning a reference from a factory doesn't work either, because while an object technically can be deleted by reference, it could be rather confusing and bug-prone, see Is the practice of returning a C++ reference variable, evil? for example. So pointers are the only thing that's left, and that includes smart pointers too. In other words, factories are most useful when used with dynamic allocation, so you can do things like this:
class Abstract {
public:
virtual void doIt() = 0;
};
class Factory {
public:
Abstract *create();
};
Factory f;
Abstract *a = f.create();
a->doIt();
In other cases, factories just help to solve minor problems like those with overloads you have mentioned. It would be nice if it was possible to use them in a uniform way, but it doesn't hurt much that it is probably impossible.
Simple Factory Example:
// Factory returns object and ownership
// Caller responsible for deletion.
#include <memory>
class FactoryReleaseOwnership{
public:
std::unique_ptr<Foo> createFooInSomeWay(){
return std::unique_ptr<Foo>(new Foo(some, args));
}
};
// Factory retains object ownership
// Thus returning a reference.
#include <boost/ptr_container/ptr_vector.hpp>
class FactoryRetainOwnership{
boost::ptr_vector<Foo> myFoo;
public:
Foo& createFooInSomeWay(){
// Must take care that the factory lasts longer than all references.
// Could make myFoo static so it lasts as long as the application.
myFoo.push_back(new Foo(some, args));
return myFoo.back();
}
};
Have you thought about not using a factory at all, and instead making nice use of the type system? I can think of two different approaches which do this sort of thing:
Option 1:
struct linear {
linear(float x, float y) : x_(x), y_(y){}
float x_;
float y_;
};
struct polar {
polar(float angle, float magnitude) : angle_(angle), magnitude_(magnitude) {}
float angle_;
float magnitude_;
};
struct Vec2 {
explicit Vec2(const linear &l) { /* ... */ }
explicit Vec2(const polar &p) { /* ... */ }
};
Which lets you write things like:
Vec2 v(linear(1.0, 2.0));
Option 2:
you can use "tags" like the STL does with iterators and such. For example:
struct linear_coord_tag {} linear_coord; // declare type and a global
struct polar_coord_tag {} polar_coord;
struct Vec2 {
Vec2(float x, float y, const linear_coord_tag &) { /* ... */ }
Vec2(float angle, float magnitude, const polar_coord_tag &) { /* ... */ }
};
This second approach lets you write code which looks like this:
Vec2 v(1.0, 2.0, linear_coord);
which is also nice and expressive while allowing you to have unique prototypes for each constructor.
You can read a very good solution in: http://www.codeproject.com/Articles/363338/Factory-Pattern-in-Cplusplus
The best solution is in the "comments and discussions" section; see "No need for static Create methods".
From this idea, I've done a factory. Note that I'm using Qt, but you can change QMap and QString for std equivalents.
#ifndef FACTORY_H
#define FACTORY_H
#include <QMap>
#include <QString>
#include <type_traits>
template <typename T>
class Factory
{
public:
template <typename TDerived>
void registerType(QString name)
{
static_assert(std::is_base_of<T, TDerived>::value, "Factory::registerType doesn't accept this type because it doesn't derive from the base class");
_createFuncs[name] = &createFunc<TDerived>;
}
T* create(QString name) {
typename QMap<QString,PCreateFunc>::const_iterator it = _createFuncs.find(name);
if (it != _createFuncs.end()) {
return it.value()();
}
return nullptr;
}
private:
template <typename TDerived>
static T* createFunc()
{
return new TDerived();
}
typedef T* (*PCreateFunc)();
QMap<QString,PCreateFunc> _createFuncs;
};
#endif // FACTORY_H
Sample usage:
Factory<BaseClass> f;
f.registerType<Descendant1>("Descendant1");
f.registerType<Descendant2>("Descendant2");
Descendant1* d1 = static_cast<Descendant1*>(f.create("Descendant1"));
Descendant2* d2 = static_cast<Descendant2*>(f.create("Descendant2"));
BaseClass *b1 = f.create("Descendant1");
BaseClass *b2 = f.create("Descendant2");
I mostly agree with the accepted answer, but there is a C++11 option that has not been covered in existing answers:
Return factory method results by value, and
Provide a cheap move constructor.
Example:
struct sandwich {
// Factory methods.
static sandwich ham();
static sandwich spam();
// Move constructor.
sandwich(sandwich &&);
// etc.
};
Then you can construct objects on the stack:
sandwich mine{sandwich::ham()};
As subobjects of other things:
auto lunch = std::make_pair(sandwich::spam(), apple{});
Or dynamically allocated:
auto ptr = std::make_shared<sandwich>(sandwich::ham());
When might I use this?
If, on a public constructor, it is not possible to give meaningful initialisers for all class members without some preliminary calculation, then I might convert that constructor to a static method. The static method performs the preliminary calculations, then returns a value result via a private constructor which just does a member-wise initialisation.
I say 'might' because it depends on which approach gives the clearest code without being unnecessarily inefficient.
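A minimal sketch of that pattern (the class and members are invented for illustration):

#include <cmath>

class angle_table {
public:
    // The static method does the preliminary calculation...
    static angle_table from_degrees(double start_deg, double step_deg) {
        const double pi = std::acos(-1.0);
        return angle_table(start_deg * pi / 180.0, step_deg * pi / 180.0);
    }
private:
    // ...and the private constructor just does member-wise initialisation.
    angle_table(double start, double step) : start_(start), step_(step) {}
    double start_;
    double step_;
};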
Loki has both a Factory Method and an Abstract Factory. Both are documented (extensively) in Modern C++ Design, by Andrei Alexandrescu. The factory method is probably closer to what you seem to be after, though it's still a bit different (at least if memory serves, it requires you to register a type before the factory can create objects of that type).
I don't try to answer all of my questions, as I believe it is too broad. Just a couple of notes:
there are cases when object construction is a task complex enough to justify its extraction to another class.
That class is in fact a Builder, rather than a Factory.
In the general case, I don't want to force the users of the factory to be restrained to dynamic allocation.
Then you could have your factory encapsulate it in a smart pointer. I believe this way you can have your cake and eat it too.
This also eliminates the issues related to return-by-value.
Conclusion: Making a factory by returning an object is indeed a solution for some cases (such as the 2-D vector previously mentioned), but still not a general replacement for constructors.
Indeed. All design patterns have their (language specific) constraints and drawbacks. It is recommended to use them only when they help you solve your problem, not for their own sake.
If you are after the "perfect" factory implementation, well, good luck.
This is my C++11-style solution. The parameter 'base' is the base class of all sub-classes. The creators are std::function objects that create sub-class instances; each might be a binding to your sub-class's static member function 'create(some args)'. This may not be perfect, but it works for me, and it is a kinda 'general' solution.
template <class base, class... params> class factory {
public:
factory() {}
factory(const factory &) = delete;
factory &operator=(const factory &) = delete;
auto create(const std::string name, params... args) {
auto key = your_hash_func(name.c_str(), name.size());
return std::move(create(key, args...));
}
auto create(key_t key, params... args) {
std::unique_ptr<base> obj{creators_[key](args...)};
return obj;
}
void register_creator(const std::string name,
std::function<base *(params...)> &&creator) {
auto key = your_hash_func(name.c_str(), name.size());
creators_[key] = std::move(creator);
}
protected:
std::unordered_map<key_t, std::function<base *(params...)>> creators_;
};
An example on usage.
class base {
public:
base(int val) : val_(val) {}
virtual ~base() { std::cout << "base destroyed\n"; }
protected:
int val_ = 0;
};
class foo : public base {
public:
foo(int val) : base(val) { std::cout << "foo " << val << " \n"; }
static foo *create(int val) { return new foo(val); }
virtual ~foo() { std::cout << "foo destroyed\n"; }
};
class bar : public base {
public:
bar(int val) : base(val) { std::cout << "bar " << val << "\n"; }
static bar *create(int val) { return new bar(val); }
virtual ~bar() { std::cout << "bar destroyed\n"; }
};
int main() {
common::factory<base, int> factory;
auto foo_creator = std::bind(&foo::create, std::placeholders::_1);
auto bar_creator = std::bind(&bar::create, std::placeholders::_1);
factory.register_creator("foo", foo_creator);
factory.register_creator("bar", bar_creator);
{
auto foo_obj = std::move(factory.create("foo", 80));
foo_obj.reset();
}
{
auto bar_obj = std::move(factory.create("bar", 90));
bar_obj.reset();
}
}
Factory Pattern
class Point
{
public:
static Point Cartesian(double x, double y);
private:
};
And if your compiler does not support Return Value Optimization, ditch it; it probably does not contain much optimization at all...
extern std::pair<std::string_view, Base*(*)()> const factories[2];
decltype(factories) factories{
{"blah", []() -> Base*{return new Blah;}},
{"foo", []() -> Base*{return new Foo;}}
};
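A hypothetical way to use that table: look an entry up by name and call its maker (Base, Blah and Foo as above):

#include <string_view>

// Returns a new object of the registered type, or nullptr if the name is unknown.
Base* make(std::string_view name) {
    for (const auto& entry : factories)
        if (entry.first == name)
            return entry.second();
    return nullptr;
}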
I know this question has been answered 3 years ago, but this may be what you were looking for.
A couple of weeks ago Google released a library allowing easy and flexible dynamic object allocation. Here it is: http://google-opensource.blogspot.fr/2014/01/introducing-infact-library.html
Hi! Does anyone know how I can make the line "chug(derlist);" in the code below work?
#include <iostream>
#include <list>
using namespace std;
class Base
{
public:
virtual void chug() { cout << "Base chug\n"; }
};
class Derived : public Base
{
public:
virtual void chug() { cout << "Derived chug\n"; }
void foo() { cout << "Derived foo\n"; }
};
void chug(list<Base*>& alist)
{
for (list<Base*>::iterator i = alist.begin(), z = alist.end(); i != z; ++i)
(*i)->chug();
}
int main()
{
list<Base*> baselist;
list<Derived*> derlist;
baselist.push_back(new Base);
baselist.push_back(new Base);
derlist.push_back(new Derived);
derlist.push_back(new Derived);
chug(baselist);
// chug(derlist); // How do I make this work?
return 0;
}
The reason I need this is basically, I have a container of very complex objects, which I need to pass to certain functions that only care about one or two virtual functions in those complex objects.
I know the short answer is "you can't," I'm really looking for any tricks/idioms that people use to get around this problem.
Thanks in advance.
Your question is odd; the subject asks "how do I put items in a container without losing polymorphism" - but that is begging the question; items in containers do not lose polymorphism. You just have a container of the base type and everything works.
From your sample, it looks like what you're asking is "how do I convert a container of child pointers to a container of base pointers?" - and the answer to that is, you can't. Child pointers are convertible to base pointers, but containers of child pointers are not; they are unrelated types. Although, note that a shared_ptr<Derived> is convertible to shared_ptr<Base>, but only because shared_ptr has extra magic to make that work. The containers have no such magic.
One answer would be to make chug a template function (disclaimer: I'm not on a computer with a compiler, so I haven't tried compiling this):
template<typename C>
void chug(const C& container)
{
typedef typename C::const_iterator iter;
for(iter i = container.begin(); i != container.end(); ++i)
{
(*i)->chug();
}
}
Then chug can take any container of any type, as long as it's a container of pointers and has a chug method.
Either store them by pointer (boost::shared_ptr is a popular option), or use Boost ptr_containers that store pointers internally, but offer a nice API on the outside (and of course fully automated deletion).
#include <boost/ptr_container/ptr_vector.hpp>
boost::ptr_vector<FooBase> foos;
foos.push_back(new FooDerived(...));
foos[0].memberFunc();
Polymorphic conversions of containers are simply not possible, so always pass a ptr_vector<Base>& and downcast in the function itself.
Why not make chug a template based function so it can have instances for base and derived types?
Or, for a better solution you can use std::for_each.
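For example, with the list<Base*> from the question and a C++11 lambda (just a sketch; needs <algorithm>):

std::for_each(baselist.begin(), baselist.end(),
              [](Base* b) { b->chug(); }); // calls the correct override through the base pointer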
Maybe you can make chug a template function that casts the template parameter to a Base* type, then calls chug on that.