A lot of the examples for using std::unique_ptr to manage ownership of class dependencies look like the following:
class Parent
{
public:
Parent(Child&& child) :
_child(std::make_unique<Child>(std::move(child))){}
private:
std::unique_ptr<Child> _child;
};
My question is whether marking the _child member as const would have any unexpected side effects (aside from ensuring that reset(), release() etc. cannot be called on _child).
I ask since I have not yet seen it in an example, and I don't know whether that is intentional or just for brevity/generality.
Because of the nature of std::unique_ptr (sole ownership of an object), it is required to have no copy constructor whatsoever. The move constructor only takes non-const rvalue references, which means that if you made _child const and then tried to move it, you'd get a nice compilation error :)
Even if a custom unique_ptr accepted a const rvalue reference, it would be impossible to implement: moving has to null out the source pointer, which means modifying it.
The downsides are the same as with any const member: the assignment and move-assignment operators no longer work (they would have to overwrite _child), and moving from the parent cannot steal the _child (a performance bug). It is also uncommon to write code like this, which may confuse readers.
The gain is marginal, because _child is private and can therefore only be accessed from inside Parent; the amount of code that could break invariants by changing _child is limited to member functions and friends, which need to be able to maintain invariants anyway.
I cannot imagine a situation where the gain would outweigh the downsides, but if you do you can certainly do it your way without breakage of other parts of the program.
Yes, you can do this, and it's what I regularly do when implementing UI classes in Qt:
namespace Ui {
class MyWidget;
}
class MyWidget : public QWidget
{
Q_OBJECT
const std::unique_ptr<Ui::MyWidget> ui;
public:
explicit MyWidget(QWidget *parent = nullptr)
: QWidget(parent), ui(new Ui::MyWidget{})
{
}
~MyWidget();
// destructor defined as = default in implementation file,
// where Ui::MyWidget is complete
};
It's exactly comparable to writing Ui::MyWidget *const ui in C++03 code.
The above code creates a new object, but there's no reason not to pass one in using std::move() as in the question.
There's some code at my company that takes the following form:
class ObjectA {
public:
ObjectA(ObjectB &objectB): objectB(objectB) { }
void Initialize() { ... }
private:
ObjectB &objectB;
};
class ObjectB {
public:
ObjectB(ObjectA &objectA): objectA(objectA) { }
void Initialize() { ... }
private:
ObjectA &objectA;
};
Basically, two classes have a mutual dependency on the other.
This in itself isn't what bothers me (although it's not great design, IMO); it's that the mutual dependencies are passed through the constructor by reference in both cases.
So the only way to actually construct this pair of objects (both are required in the application) is to pass a reference to a not-yet-constructed object as a dependency. To account for this, separate Initialize methods are added as a second-stage constructor and called after both objects are created. Consequently, the constructors often do nothing except assign the constructor parameters to the object internals, and everything else is initialized in the Initialize method.
Although the code may be valid, my inclination is that this is a fundamentally flawed design for the following reasons:
The state of a dependent object can't be guaranteed in the constructor
It requires the developers to make an additional call to Initialize
It precludes the use of compiler warnings that check if member variables have been initialized
There's nothing preventing Initialize from being called multiple times, which can result in strange, hard to track down errors
My coworkers tell me that having a separate initialize method simplifies the object construction since the developer doesn't have to worry about what order objects are declared in the application root. They can be declared and constructed in completely arbitrary order since everything is guaranteed to be valid once Initialize gets called.
I've been advocating that object construction should follow the same order of the dependency graph and that these objects should be redesigned to remove the mutual dependency, but I wanted to see what the SO gurus had to say about it. Am I wrong to think this is bad practice?
Edit:
The way these classes actually get constructed is through a factory class like below:
class Factory {
public:
Factory():
objectA(objectB),
objectB(objectA) {
}
private:
ObjectA objectA;
ObjectB objectB;
};
This is bad practice, yes. Passing objectA a reference to objectB to work with while objectB is not properly initialized yet is a no-no.
It might be standard-compliant and safe code now, because ObjectA doesn't access the passed reference to ObjectB in its constructor, but that safety is hanging by a thread. If later on someone decides to initialize in the constructor, or to access anything from objectB, or changes the code in any other way such that objectB is used before it is constructed, you end up with undefined behavior.
This sounds like a use case for pointers that are reseated after both constructors have run.
I too don't like the code as posted - it is unnecessarily complicated and fragile. There is usually some concept of ownership when two classes cooperate like this, so, if objects of type A own objects of type B (which they probably should), then you would do something like:
#include <memory>
class A;
class B
{
public:
B (A& a) : a (a) { }
private:
A& a;
};
class A
{
public:
A () { b = std::make_unique <B> (*this); }
private:
std::unique_ptr <B> b;
};
I have a situation in which I am getting a crash when freeing memory in a destructor. Please see the code below.
C++ Code:
class Key {
....
};
class Object {
...
};
class E {
private:
vector<Object> m_vector;
};
class A {
public:
virtual ~A();
virtual void check() = 0;
protected:
hashmap<Key*, E*> myMap;
};
A::~A() {
EMapItr eMapItr = myMap.beginRandom();
for (; eMapItr; ++eMapItr) {
delete eMapItr.value();
}
}
E::~E() {
m_vector.clear();
}
class B : public A {
public:
virtual void check();
private:
DB* db;
};
void B::check() {
db->create(&myMap);
}
Qt Code:
class MyQtAction {
public:
void act();
private:
GUIObject* guiWindowObject;
};
void MyQtAction::act() {
A* aobj = new B();
if (!guiWindowObject) {
guiWindowObject = new GUIObject(aobj);
}
}
class GUIObject : public QWidget {
public:
GUIObject(A* obj);
~GUIObject();
private:
A* guiAobj;
};
GUIObject::GUIObject(A* obj) {
guiAobj = obj;
}
GUIObject::~GUIObject() {
}
Now, can you please tell me where I should delete the pointer to the A object, given that objects of A are created multiple times?
Since you're using Qt, I'd suggest you leverage its very powerful "Object Trees & Ownership" model that comes for free with it:
QObjects organize themselves in object trees. When you create a
QObject with another object as parent, it's added to the parent's
children() list, and is deleted when the parent is.
http://doc.qt.io/qt-5/objecttrees.html
derive your classes from QObject, and make them use/require a QObject as a parent parameter,
then, at instantiation, specify the parent object (this is why there is a "parent" parameter all over the place in Qt's constructor documentation),
no more delete: enjoy automagic deletion of children, grandchildren, great-grandchildren etc. when the parent is deleted / goes out of scope.
You can use the QObject::dumpObjectTree() to visualise the parent/child tree at run time.
http://doc.qt.io/qt-5/qobject.html#dumpObjectTree
Now, there can be some tricky situations (see the doc above) which may require a manual explicit delete, but in my experience 95% of the time it's fine - and in any case much less error-prone than handling this yourself as your application grows big and complex. If you use Qt, you definitely want to use this. It's a great feature of the framework.
EDIT:
Now, about the context of your development: "is it C++ or is it Qt"?
If you want to learn C++ the hard and good way (and just add a bit of fun with a few Qt graphical widgets), then you must understand destructors and be able to handle them yourself.
On the other hand, if you want to create a serious Qt application, then you must embrace the Qt framework as a whole, and this parent/child QObject fundamental base class, from which all other Qt classes are derived, is fundamental and oh-so-useful. There are a couple of other fundamental Qt concepts, such as "No Copy Constructor or Assignment Operator", with its very important consequence that you'll use pointers everywhere (the rationale is explained in the Qt documentation), or Qt's "no exceptions" motto, explained in the documentation for "historical and practical reasons", and which, if I'm not mistaken, was not supported at all before Qt >= 5. When doing Qt, do full Qt. The hardcore C++, though available, is buried under the surface.
You should delete it in act(). Otherwise you may lose track of it. It is not a good idea to allocate memory and then transfer the pointer to another object (unless of course the situation requires it). My advice is to move the new for A into GUIObject's constructor. When GUIObject is deleted you can safely delete A without running into a double free.
Something like this:
class GUIObject {
A* aobj;
public:
GUIObject();
~GUIObject();
};
GUIObject::GUIObject()
{
    aobj = new B();  // assign the member; a local `A* aobj` here would shadow it and leak
}
GUIObject::~GUIObject()
{
delete aobj;
}
class ClassOne : boost::noncopyable
{
};
class ClassTwo : boost::noncopyable
{
public:
ClassTwo(const ClassOne& one)
: m_one ( one ) {}
private:
const ClassOne& m_one;
};
class ClassThree : private boost::noncopyable
{
public:
ClassThree(boost::shared_ptr<const ClassOne> one)
: m_one ( one ) {}
private:
boost::shared_ptr<const ClassOne> m_one;
};
class ClassFour : private boost::noncopyable
{
public:
ClassFour(const boost::shared_ptr<const ClassOne>& one)
: m_one ( one ) {}
private:
boost::shared_ptr<const ClassOne> m_one;
};
Question> During code review, I was told that code similar to ClassTwo should be replaced by code similar to ClassThree, because storing a const reference to an outside object is NOT safe.
Is that true?
Thank you
A const reference and a shared_ptr model two similar, but different, concepts.
If you have a const reference, you "know" something: you can inspect the thing (through const methods) but you can't change it, and, even more important, it might vanish at any time: you don't own it.
On the other hand, shared_ptr models shared ownership. You own the object pointed to by the pointer. You can change it, and it won't be destructed unless every owner is destructed.
Returning const reference to a private member is safe; accepting such a reference is a different thing. You have to make sure the reference remains valid.
shared_ptr is easier to handle but it's a more expensive solution.
Regarding the exact dynamics, read the documentation of shared_ptr.
I think erenon has a good writeup.
I'd like to add a little from the pragmatic angle:
const reference members make classes non-copyable (in fact, they could be copyable, but generation of the default special members is inhibited, except for the destructor)
shared_ptr, on the other hand, make stuff inherently copyable (with shallow-clone semantics). This turns out to be really useful in functors where state will be kept/passed along. Boost Asio is a prime example, because logical threads of execution meander across threads and the lifetime is largely unpredictable.
I suggest using a shared_ptr<const T>; this adds immutability in the context of sharing. You will need to clone the pointed-to object to replace it with a changed version, and the shared object will never be modified through the shared_ptr<const T>.
Effective C++ by Scott Meyers tells in Chapter 5, Item 28 to avoid returning "handles" (pointers, references or iterators) to object internals and it definitely makes a good point.
I.e. don't do this:
class Family
{
public:
Mother& GetMother() const;
}
because it destroys encapsulation and allows callers to alter private object members.
Don't even do this:
class Family
{
public:
const Mother& GetMother() const;
}
because it can lead to "dangling handles", meaning that you keep a reference to a member of an object that is already destroyed.
Now, my question is, are there any good alternatives? Imagine Mother is heavy! If I now return a copy of Mother instead of a reference, GetMother is becoming a rather costly operation.
How do you handle such cases?
First, let me re-iterate: the biggest issue is not one of lifetime, but one of encapsulation.
Encapsulation does not only mean that nobody can modify an internal without you being aware of it, encapsulation means that nobody knows how things are implemented within your class, so that you can change the class internals at will as long as you keep the interface identical.
Now, whether the reference you return is const or not does not matter: you accidentally expose the fact that you have a Mother object inside your Family class, and now you just cannot get rid of it (even if you find a better representation), because all your clients might depend on it and would have to change their code...
The simplest solution is to return by value:
class Family {
public:
Mother mother() const { return _mother; }
void mother(Mother m) { _mother = m; }
private:
Mother _mother;
};
Because in the next iteration I can remove _mother without breaking the interface:
class Family {
public:
Mother mother() const { return Mother(_motherName, _motherBirthDate); }
void mother(Mother m) {
_motherName = m.name();
_motherBirthDate = m.birthDate();
}
private:
Name _motherName;
BirthDate _motherBirthDate;
};
See how I managed to completely remodel the internals without changing the interface one iota? Easy peasy.
Note: obviously this transformation is for effect only...
Obviously, this encapsulation comes at the cost of some performance. There is a tension here; it's your judgement call whether encapsulation or performance should be preferred each time you write a getter.
Possible solutions depend on the actual design of your classes and on what you consider "object internals".
Mother is just implementation detail of Family and could be completely hidden from Family user
Family is considered as composition of other public objects
In the first case you should completely encapsulate the subobject and provide access to it only via Family member functions (possibly duplicating the Mother public interface):
class Family
{
public:
std::string GetMotherName() const { return mommy.GetName(); }
unsigned GetMotherAge() const { return mommy.GetAge(); }
...
private:
Mother mommy;
...
};
Well, it can be tedious if the Mother interface is quite large, but possibly that is a design problem in itself (good interfaces tend to have around 3-7 members), and this will make you revisit and redesign it in some better way.
In the second case you still need to return the entire object. There are two problems:
Encapsulation breakdown (end-user code will depend on Mother definition)
Ownership problem (dangling pointers/references)
To address problem 1, use an interface instead of a specific class; to address problem 2, use shared or weak ownership:
class IMother
{
public:
virtual ~IMother() = default;
virtual std::string GetName() const = 0;
...
};
class Mother: public IMother
{
// Implementation of IMother and other stuff
...
};
class Family
{
public:
std::shared_ptr<IMother> GetMother() const { return mommy; }
std::weak_ptr<IMother> GetMotherWeakPtr() const { return mommy; }
...
private:
std::shared_ptr<Mother> mommy;
...
};
If a read-only view is what you're after, and for some reason you need to avoid dangling handles, then you can consider returning a shared_ptr<const Mother>.
That way, the Mother object can out-live the Family object. Which must also store it by shared_ptr, of course.
Part of the consideration is whether you're going to create reference loops by using too many shared_ptrs. If you are, then you can consider weak_ptr and you can also consider just accepting the possibility of dangling handles but writing the client code to avoid it. For example, nobody worries too much about the fact that std::vector::at returns a reference that becomes stale when the vector is destroyed. But then, containers are the extreme example of a class that intentionally exposes the objects it "owns".
This goes back to a fundamental OO principle:
Tell objects what to do rather than doing it for them.
You need Mother to do something useful? Ask the Family object to do it for you. Hand it any external dependencies, wrapped up in a nice interface (a class, in C++), through the parameters of the method on the Family object.
because it can lead to "dangling handles", meaning that you keep a
reference to a member of an object that is already destroyed.
Your user could also dereference null or do something equally stupid, but they're not going to, and nor are they going to do this, as long as the lifetime is clear and well-defined. There's nothing wrong with this.
It's just a matter of semantics. In your case, Mother is not part of Family's internals, not its implementation details. A Mother instance can be referenced by a Family, as well as by many other entities. Moreover, a Mother instance's lifetime may not even correlate with the Family's lifetime.
So better design would be to store in Family a shared_ptr<Mother>, and expose it in Family interface without worries.
I recently switched back from Java and Ruby to C++, and much to my surprise I have to recompile files that use the public interface when I change the signature of a private method, because the private parts are also in the .h file.
I quickly came up with a solution that is, I guess, typical for a Java programmer: interfaces (= pure virtual base classes). For example:
BananaTree.h:
class Banana;
class BananaTree
{
public:
virtual Banana* getBanana(std::string const& name) = 0;
static BananaTree* create(std::string const& name);
};
BananaTree.cpp:
class BananaTreeImpl : public BananaTree
{
private:
string name;
Banana* findBanana(string const& name)
{
return //obtain banana, somehow;
}
public:
BananaTreeImpl(string name)
: name(name)
{}
virtual Banana* getBanana(string const& name)
{
return findBanana(name);
}
};
BananaTree* BananaTree::create(string const& name)
{
return new BananaTreeImpl(name);
}
The only hassle here is that I can't use new, and must instead call BananaTree::create(). I do not think that is really a problem, especially since I expect to be using factories a lot anyway.
Now, the wise men of C++ fame, however, came up with another solution, the pImpl idiom. With that, if I understand it correctly, my code would look like:
BananaTree.h:
class Banana;
class BananaTree
{
public:
BananaTree(std::string const& name);
Banana* getBanana(std::string const& name);
private:
struct Impl;
std::shared_ptr<Impl> pimpl_;
};
BananaTree.cpp:
struct BananaTree::Impl
{
string name;
Banana* findBanana(string const& name)
{
return //obtain banana, somehow;
}
Banana* getBanana(string const& name)
{
return findBanana(name);
}
Impl(string const& name) : name(name) {}
};
BananaTree::BananaTree(string const& name)
: pimpl_(std::make_shared<Impl>(name))
{}
Banana* BananaTree::getBanana(string const& name)
{
return pimpl_->getBanana(name);
}
This would mean I have to implement a decorator-style forwarding method for every public method of BananaTree, in this case getBanana. This sounds like an added level of complexity and maintenance effort that I prefer not to require.
So, now for the question: What is wrong with the pure virtual class approach? Why is the pImpl approach so much better documented? Did I miss anything?
I can think of a few differences:
With the virtual base class you break some of the semantics people expect from well-behaved C++ classes:
I would expect (or require, even) the class to be instantiated on the stack, like this:
BananaTree myTree("somename");
otherwise, I lose RAII, and I have to manually track allocations, which leads to a lot of headaches and memory leaks.
I also expect that to copy the class, I can simply do this
BananaTree tree2 = mytree;
unless of course, copying is disallowed by marking the copy constructor private, in which case that line won't even compile.
In the above cases, we obviously have the problem that your interface class doesn't really have meaningful constructors. But if I tried to use code such as the above examples, I'd also run afoul of a lot of slicing issues.
With polymorphic objects, you're generally required to hold pointers or references to the objects, to prevent slicing. As in my first point, this is generally not desirable, and makes memory management much harder.
Will a reader of your code understand that a BananaTree basically doesn't work, that he has to use BananaTree* or BananaTree& instead?
Basically, your interface just doesn't play that well with modern C++, where we prefer to
avoid pointers as much as possible, and
stack-allocate all objects to benefit from automatic lifetime management.
By the way, your virtual base class forgot the virtual destructor. That's a clear bug.
Finally, a simpler variant of pimpl that I sometimes use to cut down on the amount of boilerplate code is to give the "outer" object access to the data members of the inner object, so you avoid duplicating the interface. Either a function on the outer object just accesses the data it needs from the inner object directly, or it calls a helper function on the inner object, which has no equivalent on the outer object.
In your example, you could remove Impl::getBanana and instead implement BananaTree::getBanana like this:
Banana* BananaTree::getBanana(string const& name)
{
return pimpl_->findBanana(name);
}
then you only have to implement one getBanana function (in the BananaTree class), and one findBanana function (in the Impl class).
Actually, this is just a design decision to make, and even if you make the "wrong" decision, it's not that hard to switch later.
pimpl is also used to provide lightweight objects on the stack, or to present "copies" that refer to the same implementation object.
The delegation-functions can be a hassle, but it's a minor issue (simple, so no real added complexity), especially with limited classes.
Interfaces in C++ are more typically used in strategy-like ways, where you expect to be able to choose between implementations, although that is not required.