Say I have a class which manages memory and thus needs user-defined special member functions (imagine vector or similar).
Consider the following implementation of the move-assignment operator:
Class& operator=(Class&& rhs)
{
    this->~Class();                   // call destructor
    new (this) Class(std::move(rhs)); // call move constructor in-place
    return *this;
}
Is it valid to implement a move-assignment operator this way? That is, does calling a destructor and constructor in this way not run afoul of any object lifetime rules in the language?
Is it a good idea to implement a move-assignment operator this way? If not, why not, and is there a better canonical way?
It's not valid in general: what if this move assignment is called as part of moving a derived object? Then you destroy the derived object (assuming a virtual destructor) and recreate a base-class object in its place.
I would say that even in a non-virtual context it's still a bad idea because you don't see the syntax very often and it may make the code harder to grok for future maintainers.
The best approach is to avoid writing your own move operations entirely (and use the defaults) by having all your class members take care of moving themselves, for example by relying on unique_ptr and the like. Failing that, implementing the move assignment in terms of swap (analogous to copy-and-swap for copy assignment) would be an easily understandable mechanism; a sketch follows below.
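For illustration, a minimal sketch of the swap-based move assignment, using a made-up Buffer class that owns a raw array:

#include <cstddef>
#include <utility>

// Hypothetical resource-owning class, used only to illustrate the swap-based
// move assignment described above.
class Buffer {
    std::size_t size_ = 0;
    int* data_ = nullptr;
public:
    Buffer() = default;
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]) {}
    ~Buffer() { delete[] data_; }

    Buffer(Buffer&& rhs) noexcept : size_(rhs.size_), data_(rhs.data_)
    {
        rhs.size_ = 0;
        rhs.data_ = nullptr;
    }

    Buffer& operator=(Buffer&& rhs) noexcept
    {
        // Move-assign by swapping; rhs's destructor eventually releases our old data.
        std::swap(size_, rhs.size_);
        std::swap(data_, rhs.data_);
        return *this;
    }
};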
It may be valid(1). To address your specific issue about dtor/ctor lifetimes, yes that is valid(2). That is how the original implementation for vector worked.
It may be a good idea (it probably isn't), but you may not want a canonical way.(3)
(1) There is controversy about whether or not moves need to be valid in the self-move case.
Arguing for self-move safety is the position that code should be safe (duh); we certainly expect self-assignment to be safe. Also, some user experience reports that for many algorithms which use move, self-move is possible and tedious to check for.
Arguing against self-move safety is the position that the whole point of move semantics is time savings, and for that reason moves should be as fast as possible. A self-move check can be expensive relative to the cost of a move. Note that the compiler will never generate self-move code, because natural (not cast) rvalues can't be self-moved. The only way to have a self-move is if a "std::move()" cast is explicitly invoked. This puts the burden on the caller of std::move() to either verify that self-move isn't involved or convince themselves that it isn't. Note also that it would be trivial to create a user-defined equivalent of "std::move" that checked for self-move and then did nothing. If you don't support self-move, you might want to document that.
(2) This is not a template so you can know if the expression "new (this) Class(std::move(rhs));" can throw. If it can, then no, this isn't valid.
(3) This code may be a puzzle to maintainers, who might expect a more traditional swap approach, but there are potential drawbacks to the swap approach. If the resources being released by the target need to be released as soon as possible (such as a mutex), then swap has the drawback that the resources are swapped into the move source object. If the move is the result of a call to "std::move()", the move source object may not be immediately disposed of. (Since this isn't a template you can know what resources are being freed. If memory is the only resource being freed, then this isn't an issue.)
A better approach might be to factor out the resource freeing code from the destructor and the resource moving code from the move constructor and then just call those (inlined) routines in this move assignment operator.
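A hedged sketch of that factoring, again with a made-up class; release() and steal_from() are invented helper names:

class Resource {
    int* data_ = nullptr;

    void release() noexcept                  // shared by the destructor and move assignment
    {
        delete data_;
        data_ = nullptr;
    }
    void steal_from(Resource& rhs) noexcept  // shared by the move constructor and move assignment
    {
        data_ = rhs.data_;
        rhs.data_ = nullptr;
    }
public:
    Resource() = default;
    explicit Resource(int v) : data_(new int(v)) {}
    ~Resource() { release(); }

    Resource(Resource&& rhs) noexcept { steal_from(rhs); }

    Resource& operator=(Resource&& rhs) noexcept
    {
        if (this != &rhs) {   // optional self-move check, see note (1)
            release();        // free our resource as early as possible
            steal_from(rhs);  // then take over rhs's resource
        }
        return *this;
    }
};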
Related
My search-fu is good, but this is a difficult one to phrase correctly to find the answer. Basically, after a move ctor/assignment is invoked, is it guaranteed that the only thing that will ever be called on the RHS will be the destructor?
The reason I ask is that I have various things that (for sanity's sake) cannot be in an invalid state. But, far and away the most efficient move scheme would be to swap some stuff into them that the dtor can accept but nothing else could. Else, I have to allocate actual data, no matter how trivial, to keep the RHS in a valid state.
If the dtor is the only thing that will ever get called, then I can get maximum efficiency.
is it guaranteed that the only thing that will ever be called on the RHS will be the destructor?
No. It is well-formed to call member functions of a moved from object. The standard doesn't guarantee that a programmer won't do that.
As the implementor of a class, you can decide that some member functions must not be called on a moved from object, and thus can avoid for example allocating memory. Or, you can decide to not have such requirement. In general, having preconditions can allow more efficient implementation, while not having preconditions makes the class easier to use.
As the user of a class, you are responsible for following the preconditions of the member functions that you call (or member access). If a precondition of a function is that class is not in a moved from state, then don't break that precondition.
As a general rule, it's probably a good design to allow the assignment operator to be called on a moved-from object. That's what all (assignable) standard library classes do.
In short: There is no such guarantee by the standard, but you can impose such requirement on the user of the class. Just make sure it is well documented.
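For illustration, a minimal sketch of both design choices, using a made-up class:

#include <cstddef>
#include <memory>

class Pool {
    std::unique_ptr<int[]> data_;
    std::size_t size_ = 0;
public:
    explicit Pool(std::size_t n) : data_(new int[n]), size_(n) {}

    Pool(Pool&&) noexcept = default;
    // No precondition here: assigning to a moved-from Pool is allowed,
    // which is what the (assignable) standard library classes do too.
    Pool& operator=(Pool&&) noexcept = default;

    // Documented precondition: *this is not in a moved-from state.
    // Imposing this lets the moved-from state be simply "data_ == nullptr",
    // so moving never has to allocate anything.
    int& operator[](std::size_t i) { return data_[i]; }
};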
There’s nothing magic about moving from an object. After a move the object is still valid and if it’s not a temporary your code can call member functions on it and pass it to functions.
Just like any other object, and this is really the point of the question, the compiler won’t do anything to an object other than destroy it at the end of its lifetime. For a temporary object, that’s the end of the full statement that creates the object. For a named object, that’s the end of the scope in which the object is created.
I do understand why sometimes it is recommended to implement our own swap() function for a given class.
For instance, if we have a class following the pimpl idiom we would likely want to define our own copy constructor, so that it performs a deep copy of the contents of the object passed as an argument instead of the shallow copy that would be performed by the default copy constructor. The same could apply to the copy assignment operator.
Since std::swap() is implemented (at least in C++03) in terms of both the copy constructor and the copy assignment operator, it would be inefficient to perform deep copies of the objects being swapped when only the pointers they contain need to be exchanged.
My question is why we should implement our swap() function as a non-throwing one.
In the case explained above, I assume it is just about semantics: since no new resources are being allocated (two existing pointers are just being swapped), it wouldn't make much sense for such a function to throw an exception.
However, there may be other reasons or scenarios I am overlooking in this reasoning.
My question is why we should implement our swap() function as a non-throwing one
Because swap is completely useless if it might throw.
Consider: you swap two instances, and the operation throws. Now, what state are they in?
The strong guarantee is that there are no side-effects if an exception was thrown, meaning both original objects are left in their original state.
If we can't meet the strong guarantee, we simply can't use swap in many cases, because there's no way to recover usefully from a failure, and there's no point in writing that version of swap at all.
Because there's no reason for it to throw.
The trivial implementation of swap (now) uses move-assignment and -construction.
There's generally no reason for a move-constructor to throw: it doesn't allocate anything new, it's just re-seating existing data. There's generally no reason for move-assignment to throw (as above), and destructors should never throw - and those are the only operations required.
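As a reference point, a generic swap written in terms of move construction and move assignment looks roughly like this (swap_via_move is a made-up name; it mirrors what std::swap does for movable types):

#include <type_traits>
#include <utility>

template <typename T>
void swap_via_move(T& a, T& b)
    noexcept(std::is_nothrow_move_constructible<T>::value &&
             std::is_nothrow_move_assignable<T>::value)
{
    T tmp(std::move(a));  // move-construct a temporary from a
    a = std::move(b);     // move-assign b into a
    b = std::move(tmp);   // move-assign the temporary into b
}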
EDIT
I originally understood the question as "why would you use the throw specifier on the swap function?" This answer might be off-topic, since it doesn't explain why swap should never throw.
I think the best answer is: why not?
Why would you not specify that a function will never throw when that function has no reason to throw?
You should always implement functions as non-throwing when they have no reason to throw exceptions: you offer a stronger guarantee for your function.
Furthermore, with some metaprogramming, you can take advantage of functions being non-throwing. Some STL classes use that to provide faster member functions when the swap/copy/move (C++11) is no-throw. (Actually, I'm not sure they take advantage of a function being no-throw in pre-C++11 code.)
For some classes such as
a class following the pimpl idiom
we know that the implementation of swap will not need to throw because
It wouldn't make much sense for such a kind of function to throw an exception.
When it doesn't make sense to throw an exception, then it is best not to throw an exception.
There can be other classes, such as those that contain complex members without a specialized swap function but with potentially throwing copy constructors / assignment. For such classes we cannot implement a swap that never throws.
Swapping your pImpls can't fail in a well-formed program (and the behaviour in an ill-formed program doesn't matter). There is nothing to throw.
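To make that concrete, a minimal sketch of a pimpl class with a non-throwing swap (Widget and its members are made up):

#include <memory>

class Widget {
    struct Impl;                  // defined in the .cpp file
    std::unique_ptr<Impl> pimpl_;
public:
    Widget();                     // defined where Impl is complete
    ~Widget();                    // ditto (unique_ptr needs the complete type there)

    // Swapping two Widgets only exchanges the two pointers; nothing here can throw.
    void swap(Widget& other) noexcept { pimpl_.swap(other.pimpl_); }
};

inline void swap(Widget& a, Widget& b) noexcept { a.swap(b); }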
I am basically trying to figure out: is the whole "move semantics" concept something brand new, or is it just making existing code simpler to implement? I am always interested in reducing the number of times I call copy constructors, but I usually pass objects around by (possibly const) reference and ensure I always use initialiser lists. With this in mind (and having looked at the whole ugly && syntax) I wonder if it is worth adopting these principles or simply coding as I already do? Is anything new being done here, or is it just "easier" syntactic sugar for what I already do?
TL;DR
This is definitely something new and it goes well beyond just being a way to avoid copying memory.
Long Answer: Why it's new and some perhaps non-obvious implications
Move semantics are just what the name implies--that is, a way to explicitly declare instructions for moving objects rather than copying. In addition to the obvious efficiency benefit, this also affords a programmer a standards-compliant way to have objects that are movable but not copyable. Objects that are movable and not copyable convey a very clear boundary of resource ownership via standard language semantics. This was possible in the past, but there was no standard/unified (or STL-compatible) way to do this.
This is a big deal because having a standard and unified semantic benefits both programmers and compilers. Programmers don't have to spend time potentially introducing bugs into a move routine that can reliably be generated by compilers (most cases); compilers can now make appropriate optimizations because the standard provides a way to inform the compiler when and where you're doing standard moves.
Move semantics is particularly interesting because it very well suits the RAII idiom, which is a long-standing cornerstone of C++ best practice. RAII encompasses much more than just this example, but my point is that move semantics is now a standard way to concisely express (among other things) movable-but-not-copyable objects.
You don't always have to explicitly define this functionality in order to prevent copying. A compiler feature known as "copy elision" will eliminate quite a lot of unnecessary copies from functions that pass by value.
Criminally-Incomplete Crash Course on RAII (for the uninitiated)
I realize you didn't ask for a code example, but here's a really simple one that might benefit a future reader who might be less familiar with the topic or the relevance of Move Semantics to RAII practices. (If you already understand this, then skip the rest of this answer)
// non-copyable class that manages lifecycle of a resource
// note: non-virtual destructor--probably not an appropriate candidate
// for serving as a base class for objects handled polymorphically.
class res_t {
    using handle_t = /* whatever */;
    handle_t* handle; // Pointer to owned resource
public:
    res_t( const res_t& src ) = delete; // no copy constructor
    res_t& operator=( const res_t& src ) = delete; // no copy-assignment
    res_t( res_t&& src ) = default; // Move constructor
    res_t& operator=( res_t&& src ) = default; // Move-assignment
    res_t(); // Default constructor
    ~res_t(); // Destructor
};
Objects of this class will allocate/provision whatever resource is needed upon construction and then free/release it upon destruction. Since the resource pointed to by the data member can never accidentally be transferred to another object, the rightful owner of a resource is never in doubt. In addition to making your code less prone to abuse or errors (and easily compatible with STL containers), your intentions will be immediately recognized by any programmer familiar with this standard practice.
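For example, assuming the declared constructors and destructor of res_t are defined somewhere, usage looks like this:

#include <utility>
#include <vector>

void demo()
{
    res_t a;                      // acquires its resource on construction
    // res_t b = a;               // does not compile: copying is deleted
    res_t b = std::move(a);       // ownership is transferred explicitly

    std::vector<res_t> pool;
    pool.push_back(std::move(b)); // move-only types work fine in containers
}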
In the Turing Tar Pit, there is nothing new under the sun. Everything that move semantics does, can be done without move semantics -- it just takes a lot more code, and is a lot more fragile.
What move semantics does is takes a particular common pattern that massively increases efficiency and safety in a number of situations, and embeds it in the language.
It increases efficiency in obvious ways. Moving, be it via swap or move construction, is much faster for many data types than copying. You could create special interfaces to indicate when things can be moved from, but honestly people didn't do that. With move semantics, it becomes relatively easy to do. Compare the cost of moving a std::vector to copying it: a move costs roughly the copying of 3 pointers, while a copy requires a heap allocation, copying every element in the container, and creating 3 pointers.
Even more so, compare reserve on a move-aware std::vector to a copy-only aware one: suppose you have a std::vector of std::vector. In C++03, that was performance suicide if you didn't know the dimensions of every component ahead of time -- in C++11, move semantics makes it as smooth as silk, because it is no longer repeatedly copying the sub-vectors whenever the outer vector resizes.
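For instance, a hedged sketch of the vector-of-vectors case (build_grid is a made-up name):

#include <utility>
#include <vector>

std::vector<std::vector<int>> build_grid()
{
    std::vector<std::vector<int>> grid;
    for (int i = 0; i < 1000; ++i) {
        std::vector<int> row(1000, i);
        grid.push_back(std::move(row)); // C++11: row's buffer is moved in, not copied
    }
    // When grid reallocates as it grows, the inner vectors are moved as well,
    // so no element data is duplicated.
    return grid;
}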
Move semantics gives every "pImpl pattern" type blazing fast performance, which means you can start having complex objects that behave like values instead of having to deal with and manage pointers to them.
On top of these performance gains, and opening up complex-class-as-value, move semantics also opens up a whole host of safety measures, and allows doing some things that were not very practical before.
std::unique_ptr is a replacement for std::auto_ptr. They both do roughly the same thing, but std::auto_ptr treated copies as moves. This made std::auto_ptr ridiculously dangerous to use in practice. Meanwhile, std::unique_ptr just works. It represents unique ownership of some resource extremely well, and transfer of ownership can happen easily and smoothly.
You know the problem whereby you take a foo* in an interface, and sometimes it means "this interface is taking ownership of the object" and sometimes it means "this interface just wants to be able to modify this object remotely", and you have to delve into API documentation and sometimes source code to figure out which?
std::unique_ptr actually solves this problem -- interfaces that want to take ownership can now take a std::unique_ptr<foo>, and the transfer of ownership is obvious at both the API level and in the code that calls the interface. std::unique_ptr is an auto_ptr that just works, and has the unsafe portions removed, and replaced with move semantics. And it does all of this with nearly perfect efficiency.
std::unique_ptr is a transferable RAII representation of resource whose value is represented by a pointer.
After you write make_unique<T>(Args&&...), unless you are writing really low level code, it is probably a good idea to never call new directly again. Move semantics basically have made new obsolete.
Other RAII representations are often non-copyable. A port, a print session, an interaction with a physical device -- all of these are resources for whom "copy" doesn't make much sense. Most every one of them can be easily modified to support move semantics, which opens up a whole host of freedom in dealing with these variables.
Move semantics also allows you to put your return values in the return part of a function. The pattern of taking return values by reference (and documenting "this one is out-only, this one is in/out", or failing to do so) can be somewhat replaced by returning your data.
So instead of void fill_vec( std::vector<foo>& ), you have std::vector<foo> get_vec(). This even works with multiple return values -- std::tuple< std::vector<A>, std::set<B>, bool > get_stuff() can be called, and you can load your data into local variables efficiently via std::tie( my_vec, my_set, my_bool ) = get_stuff().
Output parameters can be semantically output-only, with very little overhead (the above, in a worst case, costs 8 pointer and 2 bool copies, regardless of how much data we have in those containers -- and that overhead can be as little as 0 pointer and 0 bool copies with a bit more work), because of move semantics.
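A small sketch of that pattern; get_stuff follows the answer's example, with concrete element types substituted:

#include <set>
#include <string>
#include <tuple>
#include <vector>

std::tuple<std::vector<int>, std::set<std::string>, bool> get_stuff()
{
    std::vector<int> v{1, 2, 3};
    std::set<std::string> s{"a", "b"};
    return std::make_tuple(std::move(v), std::move(s), true); // moved, not copied
}

void use()
{
    std::vector<int> my_vec;
    std::set<std::string> my_set;
    bool my_bool;
    std::tie(my_vec, my_set, my_bool) = get_stuff(); // containers are move-assigned
}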
There is absolutely something new going on here. Consider unique_ptr which can be moved, but not copied because it uniquely holds ownership of a resource. That ownership can then be transferred by moving it to a new unique_ptr if needed, but copying it would be impossible (as you would then have two references to the owned object).
While many uses of moving may have positive performance implications, the movable-but-not-copyable types are a much bigger functional improvement to the language.
In short, use the new techniques where it indicates the meaning of how your class should be used, or where (significant) performance concerns can be alleviated by movement rather than copy-and-destroy.
No answer is complete without a reference to Thomas Becker's painstakingly exhaustive write up on rvalue references, perfect forwarding, reference collapsing and everything related to that.
see here: http://thbecker.net/articles/rvalue_references/section_01.html
I would say yes because a Move Constructor and Move Assignment operator are now compiler defined for objects that do not define/protect a destructor, copy constructor, or copy assignment.
This means that if you have the following code...
struct intContainer
{
    std::vector<int> v;
};

intContainer CreateContainer()
{
    intContainer c;
    c.v.push_back(3);
    return c;
}
The code above would be optimized simply by recompiling with a compiler that supports move semantics. Your container c will get compiler-defined move semantics and thus will invoke std::vector's move operations without any changes to your code.
Since move semantics only apply in the presence of rvalue references, which are declared by a new token, &&, it seems very clear that they are something new.
In principle, they are purely an optimizing technique, which means that:
1. you don't use them until the profiler says it is necessary, and
2. in theory, optimizing is the compiler's job, and move semantics aren't any more necessary than register.
Concerning 1, we may, in time, end up with a ubiquitous heuristic as to how to use them: after all, passing an argument by const reference, rather than by value, is also an optimization, but the ubiquitous convention is to pass class types by const reference, and all other types by value.
Concerning 2, compilers just aren't there yet. At least, the usual ones. The basic principles which could be used to make move semantics irrelevant are (well?) known, but to date, they tend to result in unacceptable compile times for real programs.
As a result: if you're writing a low level library, you'll probably want to consider move semantics from the start. Otherwise, they're just extra complication, and should be ignored, until the profiler says otherwise.
Copy constructors were traditionally ubiquitous in C++ programs. However, I'm doubting whether there's still a good reason for that since C++11.
Even when the program logic didn't need copying objects, copy constructors (usu. default) were often included for the sole purpose of object reallocation. Without a copy constructor, you couldn't store objects in a std::vector or even return an object from a function.
However, since C++11, move constructors have been responsible for object reallocation.
Another use case for copy constructors was, simply, making clones of objects. However, I'm quite convinced that a .copy() or .clone() method is better suited for that role than a copy constructor because...
Copying objects isn't really commonplace. Certainly it's sometimes necessary for an object's interface to contain a "make a duplicate of yourself" method, but only sometimes. And when it is the case, explicit is better than implicit.
Sometimes an object could expose several different .copy()-like methods, because in different contexts the copy might need to be created differently (e.g. shallower or deeper).
In some contexts, we'd want the .copy() methods to do non-trivial things related to program logic (increment some counter, or perhaps generate a new unique name for the copy). I wouldn't accept any code that has non-obvious logic in a copy constructor.
Last but not least, a .copy() method can be virtual if needed, allowing one to solve the problem of slicing.
The only cases where I'd actually want to use a copy constructor are:
RAII handles of copiable resources (quite obviously)
Structures that are intended to be used like built-in types, like math vectors or matrices - simply because they are copied often and vec3 b = a.copy() is too verbose.
(Side note: I've considered the fact that a copy constructor is needed for CAS (copy-and-swap), but CAS is needed for operator=(const T&), which I consider redundant based on the exact same reasoning; .copy() + operator=(T&&) = default would be preferred if you really need this.)
For me, that's quite enough incentive to use T(const T&) = delete everywhere by default and provide a .copy() method when needed. (Perhaps also a private T(const T&) = default just to be able to write copy() or virtual copy() without boilerplate.)
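A minimal sketch of that setup (Thing is a made-up name):

class Thing {
    Thing(const Thing&) = default;              // private: only copy() can invoke it
public:
    Thing() = default;
    Thing(Thing&&) = default;
    Thing& operator=(Thing&&) = default;
    Thing& operator=(const Thing&) = delete;    // copy assignment considered redundant

    Thing copy() const { return Thing(*this); } // explicit, opt-in duplication
};

// usage:
//   Thing a;
//   Thing b = a;        // error: copy constructor is private
//   Thing c = a.copy(); // fine: explicit copy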
Q: Is the above reasoning correct or am I missing any good reasons why logic objects actually need or somehow benefit from copy constructors?
Specifically, am I correct in that move constructors took over the responsibility of object reallocation in C++11 completely? I'm using "reallocation" informally for all the situations when an object needs to be moved someplace else in the memory without altering its state.
The problem is what the word "object" is referring to.
If objects are the resources that variables refer to (as in Java, or in C++ through pointers, using classical OOP paradigms), every "copy between variables" is a "sharing", and if single ownership is imposed, "sharing" becomes "moving".
If objects are the variables themselves, since each variable has to have its own history, you cannot "move" if you cannot, or don't want to, impose the destruction of one value in favor of another.
Consider, for example, std::strings:
std::string a="Aa";
std::string b=a;
...
b = "Bb";
Do you expect the value of a to change, or that code not to compile? If not, then a copy is needed.
Now consider this:
std::string a="Aa";
std::string b=std::move(a);
...
b = "Bb";
Now a is left empty, since its value (or rather, the dynamic memory that contains it) has been "moved" to b. The value of b is then changed, and the old "Aa" discarded.
In essence, move works only if explicitly called or if the right argument is "temporary", like in
a = b+c;
where the resource held by the return value of operator+ is clearly not needed after the assignment, hence moving it into a, rather than copying it into a's existing storage and then deleting it, is more effective.
Move and copy are two different things. Move is not "THE replacement for copy". It is a more efficient way to avoid copying in all the cases where an object is not required to produce a clone of itself.
Short answer
Is the above reasoning correct or am I missing any good reasons why logic objects actually need or somehow benefit from copy constructors?
Automatically generated copy constructors are a great benefit in separating resource management from program logic; classes implementing logic do not need to worry about allocating, freeing or copying resources at all.
In my opinion, any replacement would need to do the same, and doing that for named functions feels a bit weird.
Long answer
When considering copy semantics, it's useful to divide types into four categories:
Primitive types, with semantics defined by the language;
Resource management (or RAII) types, with special requirements;
Aggregate types, which simply copy each member;
Polymorphic types.
Primitive types are what they are, so they are beyond the scope of the question; I'm assuming that a radical change to the language, breaking decades of legacy code, won't happen. Polymorphic types can't be copied (while maintaining the dynamic type) without user-defined virtual functions or RTTI shenanigans, so they are also beyond the scope of the question.
So the proposal is: mandate that RAII and aggregate types implement a named function, rather than a copy constructor, if they should be copied.
This makes little difference to RAII types; they just need to declare a differently-named copy function, and users just need to be slightly more verbose.
However, in the current world, aggregate types do not need to declare an explicit copy constructor at all; one will be generated automatically to copy all the members, or deleted if any are uncopyable. This ensures that, as long as all the member types are correctly copyable, so is the aggregate.
In your world, there are two possibilities:
Either the language knows about your copy-function, and can automatically generate one (perhaps only if explicitly requested, i.e. T copy() = default;, since you want explicitness). In my opinion, automatically generating named functions based on the same named function in other types feels more like magic than the current scheme of generating "language elements" (constructors and operator overloads), but perhaps that's just my prejudice speaking.
Or it's left to the user to correctly implement copying semantics for aggregates. This is error-prone (since you could add a member and forget to update the function), and breaks the current clean separation between resource management and program logic.
And to address the points you make in favour:
Copying (non-polymorphic) objects is commonplace, although as you say it's less common now that they can be moved when possible. It's just your opinion that "explicit is better" or that T a(b); is less explicit than T a(b.copy());
Agreed, if an object doesn't have clearly defined copy semantics, then it should have named functions to cover whatever options it offers. I don't see how that affects how normal objects should be copied.
I've no idea why you think that a copy constructor shouldn't be allowed to do things that a named function could, as long as they are part of the defined copy semantics. You argue that copy constructors shouldn't be used because of artificial restrictions that you place on them yourself.
Copying polymorphic objects is an entirely different kettle of fish. Forcing all types to use named functions just because polymorphic ones must won't give the consistency you seem to be arguing for, since the return types would have to be different. Polymorphic copies will need to be dynamically allocated and returned by pointer; non-polymorphic copies should be returned by value. In my opinion, there is little value in making these different operations look similar without being interchangeable.
One case where copy constructors come in useful is when implementing the strong exception guarantees.
To illustrate the point, let's consider the resize function of std::vector. The function might be implemented roughly as follows:
void std::vector::resize(std::size_t n)
{
    if (n > capacity())
    {
        // Allocate a larger buffer, transfer the elements, then free the old one.
        T *newData = new T[n];
        for (std::size_t i = 0; i < capacity(); i++)
            newData[i] = std::move(m_data[i]);
        delete[] m_data;
        m_data = newData;
    }
    else
    { /* ... */ }
}
If the resize function were to have a strong exception guarantee we need to ensure that, if an exception is thrown, the state of the std::vector before the resize() call is preserved.
If T has no move constructor, then we will default to the copy constructor. In this case, if the copy constructor throws an exception, we can still provide strong exception guarantee: we simply delete the newData array and no harm to the std::vector has been done.
However, if we were using the move constructor of T and it threw an exception, then we have a bunch of Ts that were moved into the newData array. Rolling this operation back isn't straight-forward: if we try to move them back into the m_data array the move constructor of T may throw an exception again!
To resolve this issue we have the std::move_if_noexcept function. This function will use the move constructor of T if it is marked as noexcept, otherwise the copy constructor will be used. This allows us to implement std::vector::resize in such a way as to provide a strong exception guarantee.
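A standalone sketch of that transfer step, with transfer as a made-up helper name (std::move_if_noexcept lives in <utility>):

#include <cstddef>
#include <utility>

template <typename T>
void transfer(T* dst, T* src, std::size_t count)
{
    for (std::size_t i = 0; i < count; i++)
        dst[i] = std::move_if_noexcept(src[i]); // moves only if T's move can't throw, otherwise copies
}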
For completeness, I should mention that C++11 std::vector::resize does not provide a strong exception guarantee in all cases. According to www.cplusplus.com we have the following guarantees:
If n is less than or equal to the size of the container, the function never throws exceptions (no-throw guarantee).
If n is greater and a reallocation happens, there are no changes in the container in case of exception (strong guarantee) if the type of the elements is either copyable or no-throw moveable.
Otherwise, if an exception is thrown, the container is left with a valid state (basic guarantee).
Here's the thing. Moving is the new default- the new minimum requirement. But copying is still often a useful and convenient operation.
Nobody should bend over backwards to offer a copy constructor anymore. But it is still useful for your users to have copyability if you can offer it simply.
I would not ditch copy constructors any time soon, but I admit that for my own types, I only add them when it becomes clear I need them- not immediately. So far this is very, very few types.
For standard copy constructors and assignment operators, I always think about implementing them or deleting the defaults out of existence, if my class implements a destructor.
For the new move constructor and move operator, what is the right way to think about whether or not an implementation is necessary?
As a first pass of transitioning a system from pre-C++0x, could I just delete the default move constructor and move operator or should I leave them alone?
You don't have to worry about it, in the sense that when you user-declare a destructor (or anything else listed in 12.8/9), that blocks the default move constructor from being generated. So there's not the same risk as there is with copies, that the default is wrong.
So, as the first pass leave them alone. There may be places in your existing code where C++11 move semantics allow a move, whereas C++03 dictates a copy. Your class will continue to be copied, and if that caused no performance problems in C++03 then I can't immediately think of any reason why it would in C++11. If it did cause performance problems in C++03, then you have an opportunity to fix a bug in your code that you never got around to before, but that's an opportunity, not an obligation ;-)
If you later implement move construction and assignment, they will be moved, and in particular you'll want to do this if you think that C++11 clients of your class are less likely to use "swaptimization" to avoid copies, more likely to pass your type by value, etc, than C++03 clients were.
When writing new classes in C++11, you need to consider and implement move under the same criteria that you considered and implemented swap in C++03. A class that can be copied implements the C++11 concept of "movable" (much as a class that can be copied in C++03 can be swapped via the default implementation in std), because "movable" doesn't say what state the source is left in - in particular it's permitted to be unchanged. So a copy is valid as a move, it's just not necessarily an efficient one, and for many classes you'll find that unlike a "good" move or swap, a copy can throw.
You might find that you have to implement move for your classes in cases where you have a destructor (hence no default move constructor), and you also have a data member which is movable but not copyable (hence no default copy constructor either). That's when move becomes important semantically as well as for performance.
With C++11, you very rarely need to provide a destructor or copy semantics, due to the way the library is written. Compiler provided members pretty much always do fine (provided they are implemented correctly: MSVC forces you to implement a lot of move semantics by hand, which is very bothersome).
In case you have to implement a custom destructor, use the following approach:
Implement a move constructor, and an assignment operator taking its argument by value (using copy-and-swap; note that you cannot use std::swap since it uses the assignment operator: you have to provide a private swap yourself). Pay attention to exception guarantees (look up std::move_if_noexcept). A sketch of this recipe follows after this list.
If necessary, implement a copy constructor. Otherwise, delete it. Beware that non default copy semantics rarely make sense.
Also, a virtual destructor counts as a custom destructor: provide or delete copy + move semantics when declaring a virtual destructor.
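A sketch of that recipe, assuming a made-up Holder class with a custom destructor:

#include <utility>

class Holder {
    int* data_ = nullptr;

    void swap(Holder& other) noexcept           // private swap, as described above
    {
        std::swap(data_, other.data_);
    }
public:
    Holder() = default;
    explicit Holder(int v) : data_(new int(v)) {}
    ~Holder() { delete data_; }                 // the custom destructor

    Holder(const Holder& rhs)                   // copy, if copying makes sense
        : data_(rhs.data_ ? new int(*rhs.data_) : nullptr) {}

    Holder(Holder&& rhs) noexcept : data_(rhs.data_) { rhs.data_ = nullptr; }

    Holder& operator=(Holder rhs) noexcept      // take by value: covers copy and move
    {
        swap(rhs);                              // rhs's destructor cleans up the old state
        return *this;
    }
};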
Since they are used as an optimization, you should implement them if the optimization is applicable to your class, i.e. if you can "steal" the internal resource your class is holding from a temporary object that is about to be destroyed. std::vector is a perfect example, where the move constructor only reassigns the pointers to the internal buffer, leaving the temporary object empty (effectively stealing the elements).