I'm working with an API that has the form:
void setup() {
//..
}
void render() {
//..
}
void clean_up() {
//..
}
I'm trying to figure out the most elegant, thread-safe and efficient way to have a persistent class C consume instances of class B that internally refer to memory-demanding instances of class A. What I am currently doing is along these lines:
C global_c_obj;
void setup() {
auto b_obj {std::make_shared<B>()}; // b_obj is parametrised in the actual code
global_c_obj.push_back(b_obj); // so that b_obj will survive this scope
}
void render() {
// every several cycles func() is called in a new thread
auto results = global_c_obj.get_results();
// do work with results
}
void func() {
auto new_b_obj {std::make_shared<B>()}; // new object with new parameters
global_c_obj.push_back(new_b_obj);
}
With class B having the form:
class B {
private:
std::shared_ptr<A> memory_intensive_obj;
// ..
public:
// ...
};
There are two things I don't like about this approach, but I can't think of a better way at the moment:
The object of type C is a global one, and I'd rather not use globals at all
C's public interface is written to expect std::shared_ptr<B> arguments, while I'd much prefer an interface expecting B* or const B&, so that I could use the same interface in different contexts and with different APIs.
As for point 2 above: since B already holds a std::shared_ptr<A> and is therefore not a particularly large object, I could simply pass B by value to C. But B is still larger than a std::shared_ptr<B>, and I suspect it would be more expensive to copy-construct it than to pass a std::shared_ptr.
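For reference, the kind of interface I have in mind for point 2 would be something like the rough sketch below (illustrative names only, not my actual code; the mutex merely stands in for whatever synchronisation the real C uses, since func() runs on another thread):
#include <memory>
#include <mutex>
#include <vector>

class A { /* memory-intensive */ };

class B {
    std::shared_ptr<A> memory_intensive_obj;
public:
    // copying a B copies the shared_ptr, not the A it refers to
};

class C {
    std::vector<B> objects;
    std::mutex m;
public:
    void push_back(const B& b)        // no std::shared_ptr<B> in the interface
    {
        std::lock_guard<std::mutex> lock(m);
        objects.push_back(b);         // C keeps its own copy, so b survives the caller's scope
    }
};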
Any other tactics around such an architecture?
Consider three classes A, B, and C, where A is the parent of B and C with a function A::doThing().
Is there any difference between the following two methods of calling doThing() in terms of performance (assuming B and C don't override doThing())?
B b1;
C c1;
A* a1 = &b1;
A* a2 = &c1;
//Option 1:
b1.doThing();
c1.doThing();
//Option 2:
a1->doThing();
a2->doThing();
In a tutorial app I saw, they claimed that the second option was faster. I understand that if B or C overrides doThing(), the two calls could have different results, but I don't get why the second way of calling the function would be faster. The direct quote (in their example they use option 2):
We would have achieved the same result by calling the functions directly on the objects. However, it's faster and more efficient to use pointers.
Edit: Code from the app, since some have suggested I misunderstood it:
#include <iostream>
using namespace std;
class Enemy {
protected:
int attackPower;
public:
void setAttackPower(int a){
attackPower = a;
}
};
class Ninja: public Enemy {
public:
void attack() {
cout << "Ninja! - "<<attackPower<<endl;
}
};
class Monster: public Enemy {
public:
void attack() {
cout << "Monster! - "<<attackPower<<endl;
}
};
int main() {
Ninja n;
Monster m;
Enemy *e1 = &n;
Enemy *e2 = &m;
e1->setAttackPower(20);
e2->setAttackPower(80);
n.attack();
m.attack();
}
If doThing is not virtual, it's all the same, as in all cases the exact method to call is resolved at compile time.
Otherwise:
calling directly on the object should always be as fast as it gets, since the compiler is sure of the actual type of the object, so there's no extra indirection step (no vtable lookup, and inlining is possible);
when calling through a pointer to the base class, it comes down to the compiler's ability to prove the dynamic type of the object (or to realize that the method is never overridden); if everything is local to the function this may be easy, but otherwise it quickly gets difficult (it is not even trivial for the compiler to establish that nobody overrides a virtual method, because - barring LTCG and similar mechanisms - it has no knowledge of what happens in other translation units).
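For contrast, here is a hedged sketch (my own, not from the tutorial) of what the claim would even be about: if attack() were declared virtual in Enemy, the call through the base-class pointer would be resolved at run time via the vtable, which is extra work rather than a speed-up, unless the compiler can devirtualize it:
#include <iostream>
using namespace std;

class Enemy {
public:
    virtual void attack() { cout << "generic enemy" << endl; }
    virtual ~Enemy() {}
};

class Ninja : public Enemy {
public:
    void attack() { cout << "Ninja!" << endl; }
};

int main() {
    Ninja n;
    Enemy* e = &n;
    n.attack();   // static type known: resolved (and possibly inlined) at compile time
    e->attack();  // virtual dispatch: resolved at run time unless the compiler proves the dynamic type
}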
We would have achieved the same result by calling the functions directly on the objects. However, it's faster and more efficient to use pointers.
If the context is as you reported, this is complete bullshit. Throw away whatever guide you found this in; the author has no idea what he is talking about.
Basically, I have one class that owns another object:
class A
{
public:
    A() { initSystems(); }
    void initSystems();
    B b;
};
class B
{
public:
    B() { /* does stuff that requires initSystems() to have been called before */ }
};
and for 'B' to function, the init systems function needs to be called in A. Is there any 'nice' way to work around this? Like creating the 'B' object later or something?
Sounds like your classes are too tightly coupled. There are many ways to fix this, but it depends on the rest of your design.
Maybe A shouldn't own a B, since A is a dependency of B. You could inject an instance of A into each B as they get instantiated.
Maybe B shouldn't exist at all, and it should be merged into A:
class A
{
public:
    A()
    {
        initSystems();
        // does stuff that requires initSystems() to have been called before
    }
    void initSystems();
    // B's methods
};
It's my opinion that most initialization methods are code smells (that is, it suggests a bad design). Some people have given this pattern a name: "Design Smell: Temporal Coupling"
If you desire to keep the B a regular member of A, the two places you can run code before the construction of b are:
the constructor of a base class of A,
in the initializer list of a member of A, for a member declared above b (see the sketch after this list).
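A minimal sketch of the second option (my own illustration, assuming initSystems() can be turned into a free or static function so that a helper member can call it): a member declared above b is guaranteed to be constructed first.
void initSystems();                  // hypothetical free function doing the real setup

struct B
{
    B() { /* relies on initSystems() having already been called */ }
};

struct SystemsInit
{
    SystemsInit() { initSystems(); } // runs when this member is constructed
};

class A
{
    SystemsInit init_first;          // declared above b, so it is constructed first
    B b;                             // constructed only after initSystems() has run
};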
If you wish to defer construction of the B, you need to hold the object by indirection, or construct it later into raw storage and explicitly invoke its destructor when destroying it.
Most of this smells a bit; it may be beneficial to reorganize your code to avoid this kind of structure.
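As a rough illustration of the deferred-construction option (my own sketch, using std::optional from C++17 instead of raw storage and placement new):
#include <optional>

struct B
{
    B() { /* requires initSystems() to have been called */ }
};

class A
{
public:
    A()
    {
        initSystems();
        b.emplace();                 // B is constructed only at this point
    }
    void initSystems();
private:
    std::optional<B> b;              // empty until emplace(), destroyed automatically
};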
You simply need to change your design so that initSystems() is a requirement of both A and B.
If you can't do this (although you really should), there are other ways, like dynamic allocation:
#include <memory>

class A
{
public:
    A()
    {
        initSystems();
        b = std::make_unique<B>();
    }
    void initSystems();
    std::unique_ptr<B> b;
};
I agree with @nanny about decoupling the classes and merging them if possible.
But your scenario seems to be one in which B is a separate entity in your system, hence a class, and A contains a B, hence the composition.
So one way of solving this would be to keep a reference (pointer) to B in A instead of making B a direct member of A.
Alternatively, don't access the things set up by A's initSystems() in B's constructor; instead, give B a postConstruct() method that A calls after initSystems(). The following code illustrates this:
class A
{
public:
    A()
    {
        initSystems();
        b.postConstruct();
    }
    void initSystems()
    {
        // do the init-systems stuff here
    }
    B b;
};

class B
{
public:
    B() {}
    void postConstruct()
    {
        // does stuff that requires initSystems() to have been called before
    }
};
I have 3 classes
class A {
    //...
};
class B : public A {
    //...
};
And a class C:
#include "A.h"
#include "B.h"
class C
{
public:
    void method(A anObjOfA);
    void method(B anObjOfB);
};
Now if I do
B* ptr = new B();
A* ptrToB = ptr;
c.method(*ptrToB);
It calls the method for objects of type A, not for the actual derived type B. How can I make sure the right function is called for the object deepest in the inheritance tree, without actually knowing its type at compile time?
PS: I'm sure this is a noob question, but for the life of me I can't find any results on this here, as everyone is busy understanding the "virtual" keyword, which is perfectly clear to me but is not the issue here.
That's because overload resolution is done at compile time. When you call the function through an A*, the compiler only sees the static type A, even though the pointer could point to a B.
Perhaps what you want is the following:
class A
{
public:
virtual void DoWorkInC()
{
cout << "A's work";
}
virtual ~A() {}
};
class B : public A
{
public:
virtual void DoWorkInC()
{
cout << "B's work";
}
};
class C
{
public:
    void method(A& a)
    {
        a.DoWorkInC();
    }
};
Let your classes A and B have a virtual function implemented in their respective classes:
class A {
//...
public:
virtual void doTask();
};
class B : public A {
//...
public:
void doTask();
};
Let A::doTask() and B::doTask() do their respective tasks in an object-specific way, i.e. A::doTask() works on the object as an A, and B::doTask() works on the object as a B.
Now, let the call be like this:
B* ptr = new B();
A* ptrToB = ptr;
c.method(ptrToB); // pointer is passed
Within C::method(A *ptr), it may be something like:
void C::method(A* ptr) {
    ptr->doTask(); // this will actually call A::doTask() or B::doTask(), bound dynamically
}
Thanks to @texasbruce I found the answer: RTTI.
The code will look like this:
A* someAOrBPtr = ...
...
B* testBPtr = dynamic_cast<B*>(someAOrBPtr);
if (testBPtr) {
    // our suspicions are confirmed -- it really was a B
    C->method(testBPtr);
} else {
    // our suspicions were incorrect -- it is definitely not a B;
    // someAOrBPtr points to an instance of some other child class of the base A
    C->method(someAOrBPtr);
}
EDIT: In fact, I'll probably do the dynamic cast inside the C->method so there is only one
C::method(A* ptrOfBase)
and then do the appropriate thing (adding to or removing from the respective container member of C) inside that one method of C.
The compiler is not smart enough to guess which method you want to call. In your situation, you might actually want to call the first version, since you are using an A*. This is left to the programmer: be specific. If you don't want to use ptr (which would call the second version, as you wished), you need to cast explicitly:
c.method(*((B*)ptrToB));
or better, using dynamic_cast:
c.method(*dynamic_cast<B*>(ptrToB));
This can be unsafe because you are downcasting: a failed dynamic_cast on a reference throws std::bad_cast (the pointer form returns a null pointer), while a C-style cast reports nothing and simply leaves you with an invalid object and undefined behaviour. You have to be very careful.
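A small self-contained illustration of those failure modes (my own example): the pointer form of dynamic_cast returns a null pointer on failure, the reference form throws std::bad_cast, and a C-style cast fails silently.
#include <iostream>
#include <typeinfo>

struct A { virtual ~A() {} };
struct B : A {};
struct D : A {};                               // some other derived type

int main()
{
    D d;
    A* pa = &d;

    if (B* pb = dynamic_cast<B*>(pa))          // pointer form: null on failure
        std::cout << "it really is a B" << std::endl;
    else
        std::cout << "not a B, no harm done" << std::endl;

    try {
        B& rb = dynamic_cast<B&>(*pa);         // reference form: throws on failure
        (void)rb;
    } catch (const std::bad_cast&) {
        std::cout << "reference cast failed" << std::endl;
    }

    // B* pb2 = (B*)pa;  // a C-style cast would "succeed" and leave an invalid pointer: undefined behaviour if used
}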
I have never used multiple inheritance but while reading about it recently I started to think about how I could use it practically within my code. When I use polymorphism normally I usually use it by creating new derived instances declared as base class pointers such as
BaseClass* pObject = new DerivedClass();
so that I get the correct polymorphic behaviour when calling virtual functions on the derived class. In this way I can have collections of different polymorphic types that manage themselves with regards to behaviour through their virtual functions.
When considering using multiple inheritance, I was thinking about the same approach but how would I do this if I had the following hierarchy
class A {
virtual void foo() = 0;
};
class B : public A {
virtual void foo() {
// implementation
}
};
class C {
virtual void foo2() = 0;
};
class D : public C {
virtual void foo2() {
// implementation
}
};
class E : public C, public B {
virtual void foo() {
// implementation
}
virtual void foo2() {
// implementation
}
};
with this hierarchy, I could create a new instance of class E as
A* myObject = new E();
or
C* myObject = new E();
or
E* myObject = new E();
but if I declare it as an A* then I lose the polymorphism of the C and D inheritance hierarchy. Similarly, if I declare it as a C* then I lose the A and B polymorphism. If I declare it as an E* then I cannot get the polymorphic behaviour in the way I usually do, as the objects are not accessed through base-class pointers.
So my question is what is the solution to this? Does C++ provide a mechanism that can get around these problems, or must the pointer types be cast back and forth between the base classes? Surely this is quite cumbersome as I could not directly do the following
A* myA = new E();
C* myC = dynamic_cast<C*>(myA);
because the cast would return a NULL pointer.
With multiple inheritance, you have a single object that you can view any of multiple different ways. Consider, for example:
class door {
public:
    virtual void open() = 0;
    virtual void close() = 0;
};
class wood {
public:
    virtual void burn() = 0;
    virtual void warp() = 0;
};
class wooden_door : public wood, public door {
public:
    void open() { /* ... */ }
    void close() { /* ... */ }
    void burn() { /* ... */ }
    void warp() { /* ... */ }
};
Now, if we create a wooden_door object, we can pass it to a function that expects to work with (a reference or pointer to) a door object, or a function that expects to work with (again, a pointer or reference to) a wood object.
It's certainly true that multiple inheritance will not suddenly give functions that work with doors any new capability to work with wood (or vice versa) -- but we don't really expect that. What we expect is to be able to treat our wooden door as either a door that can open and close, or as a piece of wood that can burn or warp -- and that's exactly what we get.
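A small sketch of what that looks like in code, building on the classes above (the free-function names are mine): the same wooden_door object can be handed both to code that only knows about doors and to code that only knows about wood.
void walk_through(door& d) { d.open(); d.close(); }   // knows nothing about wood
void season(wood& w)       { w.warp(); }              // knows nothing about doors

int main() {
    wooden_door wd;
    walk_through(wd);   // viewed as a door
    season(wd);         // viewed as a piece of wood
}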
In this case, classes A and C are interfaces, and E implements two
interfaces. (Typically, you wouldn't have the intermediate classes B and
D in such a case.) There are several ways of dealing with this.
The most frequent is probably to define a new interface, which is a sum
of A and C:
class AandC : public A, public C {};
and have E derive from this. You'd then normally manage E through an
AandC*, passing it indifferently to functions taking an A* or a
C*. Functions that need both interfaces in the same object will deal
with AandC*.
If the interfaces A and C are somehow related, say C offers
additional facilities which some A (but not all) might want to
support, then it might make sense for A to have a getC() function,
which returns the C* (or a null pointer, if the object doesn't support
the C interface).
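A hedged sketch of that getC() idea, adapted to the A/C/E names from the question (this is my own illustration, not code from the answer):
class C {
public:
    virtual void foo2() = 0;
    virtual ~C() {}
};

class A {
public:
    virtual void foo() = 0;
    virtual C* getC() { return nullptr; }   // by default the C interface is not supported
    virtual ~A() {}
};

class E : public A, public C {
public:
    void foo() {}
    void foo2() {}
    C* getC() { return this; }              // this particular object does support C
};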
Finally, if you have mixins and multiple interfaces, the cleanest
solution is to maintain two independent hierarchies, one for the
interfaces, and another with the implementation parts:
// Interface...
class AandC : public virtual A, public virtual C {};
class B : public virtual A
{
// implement A...
};
class D : public virtual C
{
// implement C...
};
class E : public AandC, private B, private D
{
// may not need any additional implementation!
};
(I'm tempted to say that from a design point of view, inheritance of
interface should always be virtual, to allow this sort of thing in the
future, even if it isn't needed now. In practice, however, it seems
fairly rare to not be able to predict this sort of use in advance.)
If you want more information about this sort of thing, you might want to
read Barton and Nackman. Their book is fairly dated now (it describes
pre C++98), but most of the information is still valid.
This should work:
A* myA = new E();
E* myE = dynamic_cast<E*>(myA);
myE->foo2();
C can't cast to A because it isn't an A; it can only cast down to D or E.
Using an A* you can obtain an E* and through that you can always explicitly call things like foo2(), but yes, there is no way for A to implicitly call functions in C, whether or not they have overrides.
In weird cases like this, templates are often a good solution because they can allow classes to act as if they have common inheritance even if they don't. For instance, you might write a template that works with anything that can have foo2() invoked on it.
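For example, a minimal sketch of such a template (my own illustration): it compiles for any type that provides a foo2() member, whether or not that type derives from C.
template <typename T>
void do_foo2(T& obj)
{
    obj.foo2();   // works for D, E, or any unrelated type that happens to have foo2()
}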
I'm working on C++ framework and would like to apply automatic memory management to a number of core classes. So far, I have the standard approach which is
class Foo
{
public:
static
shared_ptr<Foo> init()
{
return shared_ptr<Foo>(new Foo);
}
~Foo()
{
}
protected:
Foo()
{
}
};
// Example of use
shared_ptr<Foo> f = Foo::init();
However, the above breaks when I subclass Foo, since even though init() is inherited, it still returns a shared_ptr<Foo> containing a pointer to an instance of Foo.
Can anyone think of an elegant solution to this? Should I perhaps just stick with (semi-)manually wrapping class instances in shared_ptr? This would also give the ability to expose parameterized constructors without declaring new named constructors...
Ie.
template <typename T>
shared_ptr<T> make_shared(T* ptr)
{
    return shared_ptr<T>(ptr);
}
// Example
shared_ptr<Foo>
    f1 = make_shared(new Foo()),
    f2 = make_shared(new Foo(1,2));
I would try something like this:
template<class T>
class creator
{
public:
static shared_ptr<T> init()
{
return(shared_ptr<T>(new T));
}
};
class A : public creator<A>
{
};
class B : public A, public creator<B>
{
public:
using creator<B>::init;
};
// example use
shared_ptr<A> a = A::init();
shared_ptr<B> b = B::init();
But this isn't necessarily saving you anything compared to the standalone template you proposed.
Edit: I missed previous answer, this seems to be the same idea.
I don't understand what this achieves; you don't appear to get any more memory management from this init function than from simply declaring a shared_ptr.
int main( void )
{
shared_ptr<foo> a = foo::init();
shared_ptr<foo> b( new foo );
}
What's the difference? shared_ptr provides the memory management, not anything in init.
It seems that the goal is to make it impossible for users of the classes to call the constructors directly, and only expose a routine which returns shared_ptr's.
But if you want to apply this pattern, you need to replicate it in all the subclasses. The subclasses cannot automatically "inherit" init() so that init() would still call the subclass constructor, because init() is not a virtual method and is called without an object.
I would leave the constructors exposed as usual and just use the standard
shared_ptr<X> x(new X());
This keeps cognitive burden low, is readable, and remains flexible. This is how we program in our company with reference counted objects, anyway.
How about...
template<typename Derived>
class Foo
{
public:
static shared_ptr<Derived> init()
{
return shared_ptr<Derived>(new Derived);
}
~Foo()
{
}
protected:
Foo()
{
}
};
class Bar : public Foo<Bar>
{
};
int main()
{
shared_ptr<Bar> b = Foo<Bar>::init();
return 0;
}
Why not introduce a common base with a virtual destructor, inherit all necessary classes from it and simply use new?
It's generally not a good idea to force creation of objects through shared_ptr by hiding the constructors; I'm speaking from personal experience with an internal company library that did exactly that. If you want to ensure people always wrap their allocated objects, just make sure that all arguments and members which store instances of these types expect a shared_ptr or weak_ptr instead of a naked pointer or reference. You might also want to derive these classes from enable_shared_from_this: in a system where all objects are shared, at some point you will have to pass the this pointer to another object's method, and since those methods are designed to accept only shared_ptr, you are in pretty bad shape if your object has no internal weak reference to itself to ensure it isn't destroyed.
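A brief sketch of the enable_shared_from_this point (illustrative names, not from any particular library): an object that is already owned by a shared_ptr can hand out further shared_ptrs to itself without creating a second, independent control block.
#include <memory>
#include <vector>

class Widget : public std::enable_shared_from_this<Widget>
{
public:
    void register_self();
};

std::vector<std::shared_ptr<Widget>> registry;   // some consumer that only accepts shared_ptr

void Widget::register_self()
{
    registry.push_back(shared_from_this());      // reuses the existing control block
}

int main()
{
    auto w = std::make_shared<Widget>();         // the object must already be owned by a shared_ptr
    w->register_self();
}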
You need the static factory function in every type of the entire hierarchy.
class Foo
{
public:
static shared_ptr< Foo > instantiate( /* potential arguments */ )
{
return shared_ptr< Foo >( new Foo( /* potential arguments */ ) );
}
// blah blah blah
};
class Bar : public Foo
{
public:
static shared_ptr< Bar > instantiate( /* potential arguments */ )
{
return shared_ptr< Bar >( new Bar( /* potential arguments */ ) );
}
// blah blah blah
};
If you still have any confusion, please search for CppCodeProvider on SourceForge and see how it's done there.
By the way, in large C++ frameworks it's common to hide the "automatic memory management" from the coder. This lets him write shorter and simpler code. For example, in Qt you can do this:
QPixmap foo() {
QPixmap pixmap(10, 10);
return pixmap;
}
void bar() {
QPixmap a = foo(); // no copying occurs, internal refcount incremented.
QPixmap b = a; // ditto.
QPainter p(&b);
p.drawPoint(5, 5); // data can no longer be shared, so a copy is made.
// at this point 'a' is still unchanged!
p.end();
}
Like many things in Qt, this mimics the Java object model, but it goes further by implementing copy-on-write (which it calls implicit sharing). This is intended to make the API behavior less surprising to C++ coders, who aren't used to having to call clone().
This is implemented via the d-pointer idiom, which kills two birds with one stone - you provide automatic memory management, and you insulate your implementation from the user (pimpl).
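A rough sketch of the copy-on-write d-pointer idea (not Qt's actual code, and ignoring the atomic reference counting a real implementation needs): the public class holds only a shared pointer to its data and detaches, i.e. copies, that data the first time it is about to be modified.
#include <memory>

class Pixmap
{
    struct Data { int w = 0, h = 0; /* pixel buffer, etc. */ };
    std::shared_ptr<Data> d = std::make_shared<Data>();

    void detach()                                // make sure our data is unshared before writing
    {
        if (d.use_count() > 1)
            d = std::make_shared<Data>(*d);      // deep copy only when the data is actually shared
    }

public:
    int width() const { return d->w; }           // reads never copy
    void resize(int w, int h) { detach(); d->w = w; d->h = h; }   // writes copy on demand
};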
You can look at the actual implementation of QPixmap here: qpixmap.cpp, qpixmap.h.