I can understand defaulted constructors, since user-defined constructors disable the compiler-generated ones, making the object non-trivially-copyable and so on.
In the destructor case, apart from changing the access level, what use is there in defining a defaulted destructor, considering that no user-defined member function can disable it (you can't overload destructors anyway)?
// Which version should I choose?
struct Example
{
    // 1. ~Example() = default;
    // 2. ~Example() {}
    // 3. (no explicitly declared destructor)
};
Even in the case of virtual destructors, defaulting them would not make them trivial, so what good does defaulting do?
The point about trivial destructors concerns the derived class's destructor, not the base one: a virtual destructor is never trivial, but virtual ~Foo() = default; is still a useful construct, keeping the default destructor while virtualizing it.
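For example (a minimal sketch, names are mine):

#include <memory>

struct Foo {
    virtual ~Foo() = default;   // default behavior, but virtual
};

struct Bar : Foo {
    ~Bar() override = default;  // destroying a Bar through a Foo* is now safe
};

int main() {
    std::unique_ptr<Foo> p = std::make_unique<Bar>();
}   // ~Bar() runs correctly via the virtual ~Foo()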
One use is making the destructor protected or private while potentially keeping the class trivial: just list it after the desired access specifier.
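For instance (a minimal sketch, names are mine; note that std::is_trivially_destructible also checks accessibility, so the trait is checked on a derived class here):

#include <type_traits>

struct Mixin {
protected:
    ~Mixin() = default;   // non-public, yet still trivial in the core-language sense
};

struct User : Mixin {};   // derived classes destroy the base subobject just fine

static_assert(std::is_trivially_destructible<User>::value,
              "triviality is preserved through the hierarchy");

int main() {
    User u;       // OK
    // Mixin m;   // error: ~Mixin() is protected here
}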
Another: when writing classes, some programmers like to order the class's functions: e.g. constructors, then the destructor, then the non-const "mutating" members, then the const "accessor" members, then static functions. By being able to explicitly = default the destructor, you can list it in the expected order and the reader looking there knows there can't be another misplaced version of it. In large classes, that may have some documentary/safety value.
It also gives you something concrete around which to add comments, which can help some documentation tools realise the comments relate to destruction.
Basically, it is about communicating intent, even if it is somewhat redundant.
But if you're using std::unique_ptr to an incomplete type as a class member, you'll need to declare the destructor (but only declare it) in the header. Then you can make it use the default implementation in the source file, like so:
MyClass::~MyClass() = default;
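For completeness, a minimal sketch of the header side this refers to (MyClass and Impl are placeholder names):

// MyClass.hpp
#include <memory>

class MyClass {
public:
    MyClass();
    ~MyClass();   // declared only; = default goes in MyClass.cpp where Impl is complete
private:
    struct Impl;  // incomplete here
    std::unique_ptr<Impl> impl;
};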
Considering your options, I would use the first or the third one.
As mentioned by Nikos Athanasiou in a comment, a defaulted destructor keeps the type trivially destructible, whereas a user-defined one does not. A little code sample will show it:
#include <iostream>
#include <type_traits>

struct A { ~A() = default; };
struct B { ~B() {} };
struct C { ~C() noexcept {} };

int main() {
    std::cout
        << std::is_trivially_destructible<A>::value
        << std::is_trivially_destructible<B>::value
        << std::is_trivially_destructible<C>::value
        << std::endl;
    return 0;
}
Displays
100
As for virtual destructors, consistency with non-virtual ones and Quentin's answer are appropriate reasons. My personal advice is that you should always use the default when you can, as it is a way to stick to the most canonical behavior.
Related
In what situations do we need a default-generated destructor? It's pretty clear why we would need default-generated constructors and operator=, but I can't think of a situation where a default-generated destructor should be used.
class A
{
    ...
    ~A() = default;
    ...
};
In cases where you'd like to hide the implementation of a class inside an inner class and keep a unique_ptr to an instance of that inner class (the pimpl idiom), you need to move the defaulted destructor definition out of the class definition, since unique_ptr's destructor requires a complete type.
Example:
A.hpp (the header a user of the class will include)
#pragma once
#include <memory>
class A {
public:
    A();
    ~A();
    void foo() const;
private:
    struct A_impl;                 // just forward-declared
    std::unique_ptr<A_impl> pimpl;
};
A_impl.hpp ("hidden" - not to be included in normal usage of A)
#pragma once
#include "A.hpp"
struct A::A_impl {
    void foo() const;
};
A.cpp
#include "A_impl.hpp"
A::A() : pimpl(std::make_unique<A_impl>()) {}
A::~A() = default; // <- moved to after A_impl is fully defined
void A::foo() const { pimpl->foo(); }
A_impl.cpp
#include "A_impl.hpp"
#include <iostream>
void A::A_impl::foo() const { std::cout << "foo\n"; }
If you instead let the compiler generate A::~A() implicitly (i.e. drop the declaration from A), it will not compile. My compiler says:
unique_ptr.h:79:16: error: invalid application of ‘sizeof’ to incomplete type ‘A::A_impl’
static_assert(sizeof(_Tp)>0,
^~~~~~~~~~~
This seems to be asking when you would define the destructor for a class if the body of that destructor would be the same as the one the compiler generates.
Reasons include:
Clarity. If you have a class with copy/move constructors or copy/move assignment operators, it is typically managing some resource. Many coding guidelines would require you to define the destructor to show that it wasn't just overlooked, even if it is equivalent to the compiler-generated one.
Some aspect of the function differs from the one the compiler would generate. If you want a virtual destructor, you have to declare it yourself. Similarly, a destructor that may throw must be declared with an appropriate exception specification.
You want to control the place the destructor is generated. You can define a destructor outside of the class definition. You might need to do this for cyclically dependent classes as in one of the other answers. You may want to do this to define a stable ABI. You may want to do this to control code generation.
In all these cases, you must or want to define the destructor, even though the body is nothing special. Why would you use = default versus an empty body? Because the compiler-generated destructor is equivalent to the one you get with = default, and you only want to change the aspects of the destructor you are trying to change. An empty body is not the same as = default in C++, because a defaulted function can be defined as deleted. An empty body also rules out trivial destructibility, even if that was otherwise an option.
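As a small sketch of that last point (type names are mine), a defaulted destructor can be defined as deleted, where an empty body would be an outright error:

#include <type_traits>

struct NoDtor {
    ~NoDtor() = delete;
};

struct X {
    NoDtor m;
    ~X() = default;   // OK: the defaulted destructor is defined as deleted
};
static_assert(!std::is_destructible<X>::value, "X is not destructible");

// struct Y {
//     NoDtor m;
//     ~Y() {}        // error: the body would have to destroy m, whose destructor is deleted
// };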
C++ Core Guidelines C.21: If you define or =delete any copy, move, or destructor function, define or =delete them all
Reason
The semantics of copy, move, and destruction are closely related, so if one needs to be declared, the odds are that others need consideration too.
Declaring any copy/move/destructor function, even as =default or =delete, will suppress the implicit declaration of a move constructor and move assignment operator. Declaring a move constructor or move assignment operator, even as =default or =delete, will cause an implicitly generated copy constructor or implicitly generated copy assignment operator to be defined as deleted. So as soon as any of these are declared, the others should all be declared to avoid unwanted effects like turning all potential moves into more expensive copies, or making a class move-only.
Note
If you want a default implementation (while defining another), write =default to show you're doing so intentionally for that function. If you don't want a generated default function, suppress it with =delete.
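To see the suppression described above in action, a minimal sketch (type names are mine):

#include <utility>

// Declaring only a destructor suppresses the implicit move operations:
struct D {
    ~D() = default;           // user-declared destructor
    // no move constructor/assignment are implicitly declared anymore
};

// Declaring a move constructor deletes the implicit copy operations:
struct M {
    M() = default;
    M(M&&) = default;         // user-declared move constructor
    // M(const M&) is now implicitly deleted
};

int main() {
    D d1;
    D d2 = std::move(d1);     // compiles, but invokes the copy constructor
    M m1;
    M m2 = std::move(m1);     // OK: move
    // M m3 = m1;             // error: copy constructor is deleted
}

Here D's "move" compiles but silently copies, and M has become move-only.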
So this mainly depends on what is declared in the class.
Generally, it is about the rule of three/five/zero.
If the class needs a custom copy/move function, but nothing special for a destructor, then =default should be used on the destructor.
#include <iostream>
#include <string>

class Base {
private:
    std::string hello{ "hello world!" };
public:
    Base() = default;
    virtual ~Base() = default;

    const std::string &getHello() const {
        return hello;
    }

    void setHello(const std::string &hello) {
        Base::hello = hello;
    }
};

class Derived : public Base {
public:
    Derived() = default;
    ~Derived() override = default;
};

int main(int argc, const char *argv[]) {
    Derived d;
    std::cout << d.getHello() << std::endl;
}
Base and Derived both use the default constructor and destructor; I explicitly declared them and marked them as default. But in fact, if you don't explicitly declare them, the code still works just fine.
My confusion is whether I need to explicitly declare them. I have heard two different arguments: some people think you should declare them whether or not you use them, while others say you declare one only if you need it.
So what is the good practice?
The answer is (technically) no, you don't need to declare any of the special member functions explicitly - so long as you don't declare any other special member functions that result in suppressing other ones you might want (or need).
The rules regarding which special member functions are implicitly declared based on which ones you declare yourself are rather complex all things considered - this answer has a handy table to illustrate them. Unless you'd like to learn this by heart, you probably want to just be explicit in all cases. And even if you do know the rules by heart, somebody else reading your code might not. Core Guideline C.21 gives some more examples and goes in depth about why you'd want to do so.
However, while the guideline only suggests that you define or default the remaining special member functions if you define any of them, I would like to encourage you to always explicitly default/delete all of them. I have two reasons for this:
It makes your intent obvious.
It prevents any nasty surprises later down the line. If you don't declare special member functions explicitly and later find out that you need a custom copy constructor, for instance, you then have to remember to declare the move operations if you haven't already; otherwise moves will silently degrade to copies (or stop compiling for move-only members), and it might not be immediately obvious why.
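As an illustration of such a surprise (names are mine), adding a copy constructor later silently suppresses the implicit move operations:

#include <memory>
#include <utility>

struct Holder {
    std::unique_ptr<int> p;

    Holder() = default;
    // A custom copy constructor added later in the class's life:
    Holder(const Holder& other) : p(other.p ? new int(*other.p) : nullptr) {}
    // No move operations are declared, so none are implicitly generated anymore.
};

int main() {
    Holder a;
    Holder b = std::move(a);   // compiles, but calls the copy constructor (deep copy)
}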
Also, this is not directly related to your question, but when talking about special member functions it's always useful to remind people of the rules of three, five and zero.
So to summarize - you don't have to, but you likely should.
How can I prohibit the construction of an object? I marked all the relevant special member functions as = delete, as follows:
struct A
{
    A() = delete;
    A(A const &) = delete;
    A(A &&) = delete;
    void * operator new(std::size_t) = delete;
    void operator delete(void *) = delete;
};

A x{};
A y = {};
A * z = ::new A{};
But x, y, and *z can still be created. What can I do? I am interested in both cases: static/stack allocation and heap allocation.
One option would be to give the class a pure virtual function, and mark it final:
struct A final
{
    virtual void nonconstructible() = 0;
};
If you want to have just static members, then write namespace A rather than struct A. The ensuing code will be syntactically similar.
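A quick sketch of the parallel (names are mine):

namespace A {
    // replaces a static member function (and a static data member inside it)
    int next() { static int count = 0; return ++count; }
}

int main() { return A::next(); }   // called just like A::next() on a struct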
To prevent creation of an instance of a class, make it abstract (include one pure virtual function). But doing this introduces a vtable into your class, which you might not want.
If you want to make it impossible to instantiate the class you could just declare private constructors:
class NotInstantiable {
private:
    NotInstantiable();   // declared but never defined
public:
};
And not defining the constructor anywhere. The class now can't be instantiated: first, the constructor is private, and second, no definition for the constructor has been provided.
The second obstacle, the missing definition, rules out even instantiation from within the class itself, which would otherwise be possible and is in fact a well-known pattern:
class NotInstantiable {
private:
    NotInstantiable();
public:
    static NotInstantiable* evil_method()
    {
        return new NotInstantiable(); // fails to link: the constructor has no definition
    }
};
In general, to completely prevent client code instantiation of a class you can declare the class final and either
make the constructors non-public, or
delete the constructors and make sure that the class isn't an aggregate, or
add a pure virtual member function (e.g. make the destructor pure virtual) to make the class abstract.
Declaring the class final is necessary when the non-public constructors are protected, and in the abstract-class case, in order to prevent instantiation of a base-class sub-object of a derived class.
To partially prohibit instantiation, you can
make the destructor non-public.
This prevents automatic and static variables, but it does not prevent dynamic allocation with new.
make the class' allocation function (the operator new) non-public.
This prevents dynamic allocation via an ordinary new-expression in client code, but it does not prevent automatic and static variables, or sub-objects of other objects, and it does not prevent dynamic allocation via a ::new-expression, which uses the global allocation function. Both techniques are sketched below.
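Minimal sketches of these two partial techniques (class names are mine):

#include <cstddef>

class NoAuto {
    ~NoAuto() = default;                  // non-public destructor
public:
    static void destroy(NoAuto* p) { delete p; }
};

class NoHeap {
    static void* operator new(std::size_t) = delete;  // class allocation function deleted
public:
    // automatic and static variables are still fine
};

int main() {
    // NoAuto a;                 // error: ~NoAuto() is private here
    NoAuto* p = new NoAuto;      // dynamic allocation still works
    NoAuto::destroy(p);

    NoHeap s;                    // OK
    // NoHeap* q = new NoHeap;   // error: operator new is deleted
    NoHeap* r = ::new NoHeap;    // but the global allocation function still works
    delete r;
    (void)s;
}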
There are also other relevant techniques, such as an allocation function with extra arguments that make new-expressions inordinately complicated and impractical. I used that once to force the use of a special macro to dynamically allocate objects, e.g. for a shared-from-this class. But that was in the time before C++11 support for forwarding of arguments; nowadays an ordinary function can do the job, and such a function can be made a friend of the class.
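As a sketch of that modern alternative (names are mine), a friend factory function paired with a non-public constructor:

#include <memory>

class Session {
    explicit Session(int) {}                            // non-public constructor
    friend std::shared_ptr<Session> makeSession(int id);
public:
    // ...
};

std::shared_ptr<Session> makeSession(int id) {
    return std::shared_ptr<Session>(new Session(id));   // the friend may construct
}

int main() {
    auto s = makeSession(42);   // the only way to create a Session
}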
The fact that the code compiles with at least one version of the clang compiler with -std=gnu++1z may be due to a bug and/or a language extension in that compiler. It is unclear what the correct behavior is, because:
Although the code compiles with clang and with Visual C++ 2015, it does not compile with e.g. MinGW g++ 5.1.0, even with -std=gnu++1z.
Intuitively the delete would be meaningless if the code should compile, but many meaningless constructs are permitted in C++.
At issue is whether the class is an aggregate (in which case the new expression performs aggregate initialization), which rests on whether the deleted default constructor can be regarded as user-provided. And as user TartanLlama explains in comments, the requirements for user-provided are
C++11 §8.4.2/4:
"A special member function is user-provided if it is user-declared and not explicitly defaulted or deleted on its first declaration."
I.e. although the delete of the default constructor in this question's example declares that constructor, it's not user-provided (and ditto for the other members) and so the class is an aggregate.
The only defect report I can find about this wording is DR 1355, which however just concerns an issue with the use of the words “special member”, and proposes to drop those words. But, considering both the effect demonstrated by this question, and considering that a function can only be deleted on its first declaration, the wording is strange.
Summing up, formally, as of C++11 (I haven't checked C++14), the code should compile. But this may be a defect in the standard, with the wording not reflecting the intent. And since MinGW g++ 5.1.0 doesn't compile the code, as of October 2015 it's not a good idea to rely on the code compiling.
Essentially this compiles and is allowed because the type A is an aggregate type and the aggregate initialisation doesn't use default constructors.
What is an aggregate type?
A class type (typically a struct or union) that has:
no private or protected members
no user-provided constructors (explicitly defaulted or deleted constructors are allowed) (since C++11)
no base classes
no virtual member functions
Giving it any one of the above would make it non-aggregate, and thus aggregate initialization would not apply. Adding a private, user-provided (and even unimplemented) constructor will do:
struct A
{
    A() = delete;
    A(A const &) = delete;
    A(A &&) = delete;
    void * operator new(std::size_t) = delete;
    void operator delete(void *) = delete;
private:
    A(int);
};
As a side note: I hope this is a defect in the language specification. At first look I thought this should not compile, yet it does. One of the motivations for =delete was to avoid the C++03 "trick" of declaring constructors private to "hide" them and thus make them unusable. I would expect =delete on the default constructor to effectively prohibit creation of the class (outside of other user-defined constructors).
For easier reading and clearer intent, consider even an empty base class:
struct NonAggregate {};

struct A : private NonAggregate
{
    //...
};
Maybe the simplest fix yet is to return to the C++03 style here and make the default constructor private:
struct A
{
private:
    A(); // note: no =delete...
};
I cannot understand the rationale behind the automatic addition of default ctors. In particular, I find it very awkward that every single time I just need to add an empty virtual destructor and nothing more, I lose the move operations; but by adding those, I lose the copy and default ones, so I end up adding this whole chunk of code:
virtual ~SomeClass(){} // you are the guilty!
//virtual ~SomeClass() = default // would be the same
SomeClass(SomeClass&&) = default; // no more auto-added
SomeClass& operator=(SomeClass&&) = default; // no more auto-added
SomeClass(const SomeClass&) = default; // but with the moves defined,
SomeClass& operator=(const SomeClass&) = default; // I'm now missing the copy
SomeClass(){} // and the default as well
I'm sure there is a reason for making my classes this ugly and leaving me wishing for an evil macro; I just would like to know it so I can feel more comfortable.
Take a look at this. It explains something called the rule of five, which is essentially what the standard requires.
Generally, for most cases, the compiler creates defaults for the copy constructor, copy assignment, move constructor, move assignment, and destructor. But if a programmer defines any of these, the compiler assumes the user has encapsulated something in the class that requires special handling in, say, the destructor. Since the programmer knows a custom destructor is needed, the compiler assumes the programmer knows what's going on and does not create defaults for the rest (because, under that assumption, the default ones would be wrong and could even result in undesired behavior).
The problem is that your class is trying to do two separate things: providing a polymorphic interface (hence the need for the virtual destructor) and managing concrete data members (hence the need for the copy/move operations). It's generally a good idea to give each class a single responsibility.
I'd move the virtual destructor, and any virtual function declarations, to an empty, abstract base class. Then any concrete class(es) deriving from that will be free to auto-generate all the special member functions.
Example:
#include <iostream>
#include <utility>

struct Movable {
    Movable() {}
    Movable(Movable&& m) { std::cout << "Moving\n"; }
};

struct SomeInterface {
    virtual ~SomeInterface() {}
    // no data members, so no need for any other special member functions
};

struct SomeClass : SomeInterface {
    Movable stuff;
    // no user-declared special functions, so all are auto-generated
};

int main() {
    SomeClass c;
    SomeClass c2(std::move(c)); // uses the auto-generated move constructor
}
I'm wondering if there's any reason to explicitly write code that does the same as the default behavior of C++.
Here's some code:
class BaseClass
{
public:
    virtual ~BaseClass() {}
    virtual void f() { /* do something */ }
};

class ExplicitClass
    : public BaseClass
{
public:
    ExplicitClass()
        : BaseClass() // <-- explicit call of base class constructor
    {
        // empty function
    }

    virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor

    virtual void f() { BaseClass::f(); } // <-- function just calls base
};

class ImplicitClass
    : public BaseClass
{
};
I'm mainly curious in the realm of refactoring and a changing code base. I don't think many coders intend to write code like this, but it can end up looking like this when code changes over time.
Is there any point to leaving the code present in the ExplicitClass? I can see the bonus that it shows you what is happening, but it comes across as lint-prone and risky.
Personally, I prefer to remove any code that merely restates default behavior (as in ImplicitClass).
Is there any consensus favoring one way or the other?
There are two approaches to this issue:
Define everything, even if the compiler would generate the same;
Don't define anything that the compiler will do better.
Believers in (1) use rules like: "Always define the default c-tor, copy c-tor, assignment operator, and d-tor".
Believers in (1) think it is safer to have more than to miss something.
Unfortunately, (1) is especially loved by managers - they believe it is better to have than not to have. So rules like "always define the big four" go into the coding standard and must be followed.
I believe in (2). And at firms where such coding standards exist, I always add the comment "Do not define the copy c-tor, as the compiler does it better".
As the question is about consensus, I can't answer, but I find ildjarn's comment amusing and correct.
As for your question whether there is a reason to write it like that: there is not, as the explicit and implicit classes behave the same. People sometimes do it for 'maintenance' reasons, e.g. so that if the derived f is ever implemented differently, one remembers to call the base class. Personally, I don't find this useful.
Either way is fine, so long as you understand what really happens there, and the problems which may arise from not writing the functions yourself.
EXCEPTION-SAFETY:
The compiler will generate the functions implicitly, adding the necessary exception specifications. For an implicitly created constructor, this would be every exception allowed by the base classes and the members.
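That answer predates C++11; in today's terms the same propagation can be observed with noexcept. A small sketch (type names are mine):

#include <type_traits>

struct M {
    M() noexcept(false);   // a member whose default constructor may throw
};

struct X {
    M m;                   // X's implicit default constructor inherits that
};

static_assert(!std::is_nothrow_default_constructible<X>::value,
              "the generated constructor is potentially-throwing, like M's");

int main() {}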
ILL-FORMED CODE
There are some tricky cases where some of the automatically generated members functions will be ill-formed. Here is an example:
class Derived;

class Base
{
public:
    virtual Base& /* or Derived& */
        operator=( const Derived& ) throw( B1 );
    virtual ~Base() throw( B2 );
};

class Member
{
public:
    Member& operator=( const Member& ) throw( M1 );
    ~Member() throw( M2 );
};

class Derived : public Base
{
    Member m_;

    // Derived& Derived::operator=( const Derived& )
    //     throw( B1, M1 );   // error, ill-formed
    // Derived::~Derived()
    //     throw( B2, M2 );   // error, ill-formed
};
The operator= is ill-formed because its exception specification should be at least as restrictive as its base class's, meaning that it should throw either B1, or nothing at all. This makes sense, as a Derived object can also be seen as a Base object.
Note that it is perfectly legal to have an ill-formed function as long as you never invoke it.
I am basically rewriting GotW #69 here, so if you want more details, you can find them here
it depends on how you like to structure and read the programs. of course, there are preferences and reasons for and against each.
class ExplicitClass
: public BaseClass
{
public:
initialization is very important. not initializing a base or member can produce warnings, rightly so, and catch a bug in some cases. so this really starts to make sense if that collection of warnings is enabled, you keep the warning levels way up, and you keep the warning counts down. it also helps to demonstrate intention:
ExplicitClass()
: BaseClass() // <-- explicit call of base class constructor
{
// empty function
}
an empty virtual destructor is, IME, statistically the best place to export a virtual (of course, that definition would live elsewhere if it were visible to more than one translation unit). you want this exported because there is a ton of rtti and vtable information that could end up as unnecessary bloat in your binary. i actually define empty destructors very regularly for this reason:
virtual ~ExplicitClass() {} // <-- explicit empty virtual destructor
perhaps it is a convention in your group, or it documents that this is exactly what the implementation is designed to do. this can also be helpful (subjective) in large codebases or within complex hierarchies because it can also help remind you of the dynamic interface the type is expected to adopt. some people prefer all declarations in the subclass because they can see all the class' dynamic implementation in one place. so the locality helps them in the event the class hierarchy/interface is larger than the programmer's mental stack. like the destructor, this virtual may also be a good place to export the typeinfo:
virtual void f() { BaseClass::f(); } // <-- function just calls base
};
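to make the export point concrete, here is a rough sketch (file and class names are mine) of anchoring the vtable by defining the virtual destructor out of line in exactly one translation unit:

widget.hpp
class Widget {
public:
    virtual ~Widget();   // declared here, defined in exactly one .cpp
    virtual void f();
};

widget.cpp
#include "widget.hpp"

Widget::~Widget() {}     // most compilers emit Widget's vtable and rtti in this translation unit
void Widget::f() {}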
of course, it gets hard to follow a program or its rationale if you define these things only when there is a qualifying reason. so some codebases end up easier to follow if you just stick to conventions, because that is clearer than documenting why an empty destructor is exported at every turn.
a final reason (which swings both ways) is that explicit default definitions can increase or decrease build and link times.
fortunately, it's easier and unambiguous now to specify default and deleted methods and constructors.