Do I need to check for a thrown exception from operator=? - C++

Given the following code:
template <class T>
class A {
    T* arr;
    int size;
public:
    A(int size) : arr(new T[size]), size(size) {
    }
    //..
    A& operator=(const A& a) {
        if (this == &a) {
            return *this;
        }
        this->size = a.size;
        T* ar = new T[a.size];
        for (int i = 0; i < size; i++) {
            ar[i] = a.arr[i]; // do I need to wrap this in a try-catch?
        }
        delete[] this->arr;
        this->arr = ar;
        return *this;
    }
    //...
};
When I copy the elements from the given array, do I need to do it inside a try-catch block or not? Is it a good idea or not?

Your T copy assignment could potentially throw, due to its own allocation failure or other reasons.
On the other hand, your A copy could already throw because the new T[] failed to allocate.
As written, you would need to handle the cleanup yourself: the array you allocated is not yet owned by anything, and all the T instances already copied need to be destroyed if one of the later copies throws, perhaps due to allocation failure.
One quick way to fix that is to hold the new array in a std::unique_ptr<T[]>. It will then be destroyed automatically when the scope is exited.
Another way is to reconsider your contract on A after a failed assignment: the object must remain valid, i.e. survive being used and destroyed, but perhaps it need not guarantee to still contain its previous contents, nor all of the new ones. In that case you could destroy the existing array before allocating and assigning a new one, then copy the elements. You could even decide not to reallocate when the size has not changed and just re-assign; this would leave a mixture of new and old elements after an exception, but all of them would be valid and safe to delete.
Please also ensure that size matches the actually attached array at all times. Your existing code gets this wrong: in particular, arr should be set to null and size to 0 between the delete and the assignment, and size should only be set to the new value after the new pointer has been assigned.
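Putting those points together, here is a hedged sketch of an exception-safe assignment operator for the question's A<T>, using the unique_ptr suggestion: all throwing work happens first into a temporary, and the members are only updated by non-throwing steps. (The operator[] accessor is added here purely for illustration, and copy construction is omitted.)

```cpp
#include <memory>

template <class T>
class A {
    T* arr;
    int size;
public:
    explicit A(int size) : arr(new T[size]), size(size) {}
    ~A() { delete[] arr; }
    A(const A&) = delete; // copy construction omitted in this sketch
    T& operator[](int i) { return arr[i]; }

    A& operator=(const A& a) {
        if (this == &a) return *this;
        // Phase 1: everything that can throw. If new[] or an element copy
        // throws, the unique_ptr frees the half-built array and *this is
        // completely untouched (strong guarantee).
        std::unique_ptr<T[]> tmp(new T[a.size]);
        for (int i = 0; i < a.size; i++)
            tmp[i] = a.arr[i];
        // Phase 2: commit. Nothing here can throw, and size is updated
        // together with the pointer so the two never disagree.
        delete[] arr;
        arr = tmp.release();
        size = a.size;
        return *this;
    }
};
```

Because the members are only touched after every copy has succeeded, a failed assignment leaves the object exactly as it was.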

Related

How to implement vector::clear()? (study purpose)

I'm trying to implement my own vector class (for study purpose). Currently I am stuck at vector::clear(), how can I implement it? It can't be like this, right?
void clear() {
    for (size_t i = 0; i < _size; ++i) {
        _data[i] = 0; // can't work with a custom class
    }
}
Or
void clear() {
    delete[] _data;
    _data = new T[_cap];
    _size = 0;
} // can we make it better?
Or as simple as:
void clear() {
    _size = 0; // fastest, but the user can still access the data
}
T& operator[](size_t pos) {
    if (pos >= _size) throw; // added to prevent accessing "cleared" data
    return _data[pos];
}
Did some digging through libcxx and found out they use alloc_traits::destroy, which destroys each element in _data. Is that what clear() is supposed to do?
Here is my class attribute
T* _data;
size_t _size;
size_t _cap;
If you keep a capacity larger than the size, this means you need to account for allocated memory that doesn't hold any object. This means that the new T[_cap] approach simply doesn't work:
first and foremost, your vector won't work for objects that are not default constructible;
even for those that are, you will be creating more objects than requested, and for some types construction can be expensive;
the other problem is that when you push_back while you still have spare capacity, you will be doing assignment instead of construction for the object (because an object already exists there).
So you need to decouple memory allocation from object creation:
allocate memory with operator new. Please note this is different than the new expression you are most familiar with.
construct an object in the allocated memory with an in-place constructor (also known as placement new)
destroy an object by explicitly calling the destructor
deallocate memory with operator delete
C++17 brings some utility functions for this purpose like: std::uninitialized_default_construct, uninitialized_copy, std::destroy etc.; find more in Dynamic memory management
If you want to be more generic like std::vector you can use allocators instead.
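The four steps above can be sketched concretely (a minimal illustration, not production code; demo is a made-up name):

```cpp
#include <cstddef>
#include <memory>  // std::destroy (C++17)
#include <new>     // ::operator new / ::operator delete, placement new
#include <string>

// Manage raw storage for up to 4 strings without ever default-constructing
// objects that were not asked for.
std::string demo() {
    constexpr std::size_t cap = 4;
    // 1. allocate raw, uninitialized memory: no constructors run here
    std::string* data = static_cast<std::string*>(
        ::operator new(cap * sizeof(std::string)));
    // 2. construct objects in place, only as they are needed
    std::size_t size = 0;
    new (data + size) std::string("hello"); ++size;
    new (data + size) std::string("world"); ++size;
    std::string out = data[0] + data[1];
    // 3. destroy the constructed objects by running their destructors
    std::destroy(data, data + size);
    // 4. deallocate the raw memory
    ::operator delete(data);
    return out;
}
```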
With this in mind, now answering your specific question about clear. The behavior of std::vector::clear is: "Erases all elements from the container. After this call, size() returns zero. [...] Leaves the capacity() of the vector unchanged". If you want the same behavior, this means:
void clear() noexcept
{
    for (T* it = _data; it != _data + _size; ++it)
        it->~T();
    // or, equivalently (C++17):
    // std::destroy(_data, _data + _size);
    _size = 0;
}
As you can see implementing something like std::vector is far from trivial and requires some expert knowledge and techniques.
Another source of complications comes from the strong exception safety guarantee. Let's consider just the case of push_back as an example. A naive implementation could do this (pseudocode):
void push_back(const T& obj)
{
    if size == capacity
        // grow capacity
        new_data = allocate memory
        move objects from _data to new_data
        _data = new_data
        update _cap
    new (_data + _size) T{obj}; // in-place construct
    ++_size;
}
Now think about what will happen if the move constructor of one object throws while moving to the new, larger memory. You have a memory leak and, worse, you are left with some objects in your vector in a moved-from state. This brings your vector into an invalid internal state. That's why it is important that std::vector::push_back guarantees that either:
the operation is successful, or
if an exception is thrown, the function has no effect.
In other words, it guarantees that it never leaves the object in an "intermediary" or invalid state, like our naive implementation does.
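A hedged sketch of a push_back that fixes the naive version: the grown buffer is built completely on the side, and *this is only modified by steps that cannot throw. (This sketch copies rather than moves during reallocation, which is the simplest way to get the strong guarantee; real implementations use std::move_if_noexcept.)

```cpp
#include <cstddef>
#include <memory>   // std::destroy
#include <new>

template <typename T>
class vec {
    T* _data = nullptr;
    std::size_t _size = 0;
    std::size_t _cap = 0;
public:
    vec() = default;
    vec(const vec&) = delete; // copying omitted in this sketch
    ~vec() {
        std::destroy(_data, _data + _size);
        ::operator delete(_data);
    }
    std::size_t size() const { return _size; }
    T& operator[](std::size_t i) { return _data[i]; }

    // Strong guarantee: build the grown buffer completely, then commit.
    void push_back(const T& obj) {
        if (_size == _cap) {
            std::size_t new_cap = _cap ? 2 * _cap : 1;
            T* nd = static_cast<T*>(::operator new(new_cap * sizeof(T)));
            std::size_t i = 0;
            try {
                // Copy (don't move) the old elements: if a copy throws,
                // the old buffer is still fully intact.
                for (; i < _size; ++i) new (nd + i) T(_data[i]);
                new (nd + _size) T(obj);
            } catch (...) {
                std::destroy(nd, nd + i); // undo the partial copy
                ::operator delete(nd);
                throw;                    // *this was never touched: no effect
            }
            std::destroy(_data, _data + _size);
            ::operator delete(_data);
            _data = nd;
            _cap = new_cap;
        } else {
            new (_data + _size) T(obj); // in-place construct
        }
        ++_size;
    }
};
```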
The only responsibility of clear is to call the destructor on any objects that are in the array and set the array size to 0 (http://www.cplusplus.com/reference/vector/vector/clear/). Many implementations do not actually free the memory, but simply set the end and last pointers in the array to the beginning pointer after running through the vector and calling the destructor on each element. This is done as an optimization so that the memory is still available and the vector is ready to go if new elements are pushed onto the vector.
If you don't need that particular optimization, your version of clear() where you simply delete[] the data and then reallocate is perfectly reasonable. It all depends on what tradeoffs you want to make.

At which line in this code is it likely to find an exception/error?

This is the code:
template <typename T> class Ntuplet
{
public:
    Ntuplet(std::initializer_list<T> s);
    ~Ntuplet(void);
    Ntuplet<T>& operator=(const Ntuplet<T>& t);
private:
    size_t m_size;
    T* m_objects;
};
template<typename T>
Ntuplet<T>& Ntuplet<T>::operator=(const Ntuplet<T>& t)
{
    if (&t == this)
        return *this;
    delete [] m_objects;
    m_size = t.m_size;
    m_objects = new T[t.m_size];
    for(int i = 0; i < m_size; ++i)
        m_objects[i] = t.m_objects[i];
    return *this;
}
This is from an old exam. The question reads:
"At which line is it likely that an exception might be thrown; in which state will the object Ntuplet be at that point (initial, coherent, incoherent, undefined)? Propose a better way to implement the class in order to avoid exceptions/problems."
My guess was either at m_size = t.m_size, because I thought maybe t.m_size would have too large a value, but that can't be, because then how would the t object even exist (the error would have appeared earlier)? The only other thing that comes to mind is ++i, which might go out of range as an index?
Thanks in advance
Edit: "Coherent" state meaning the object is in a state that doesn't have contradictory attributes, but it's not in the state we want it to be.
"Incoherent" means the attributes are not what they should be. For example, if you do a++ = b but the = operator throws an error, a is in an incoherent state because it was incremented even though the rest of the code didn't get to be executed. In this state, the destructor is available.
"Undefined" is the same as the above, but with the destructor unavailable as well.
Exceptions can only be thrown by very specific constructs. Anything that is undefined behavior, like dereferencing an invalid pointer, out-of-range access to a C-style array, undefined behavior in type conversions, is not an exception.
The only things in Ntuplet<T>::operator=(const Ntuplet<T>& t) that can throw an exception are the new[] expression (std::bad_alloc if no memory can be allocated, or any exception thrown by the default constructor of T) and the copy assignment of type T used inside the loop (depending on what type T is).
If the new throws an exception, then m_objects will be a dangling pointer, because it was delete[]d beforehand.
m_size will already hold the size of the copied instance. Therefore the object will not be in any sane state.
Assuming the destructor of Ntuplet actually delete[]s the memory allocated for m_objects, then calling it later will cause undefined behavior because of a double-free.
If you are not going to replace Ntuplet or the m_objects member with a standard library implementation which takes care of exception-safety, then one solution for this particular exception would be to save the return value of the new expression to a temporary pointer T* p before modifying any member. Then the temporary can be assigned to m_objects later after the old m_objects was deleted.
In case the assignment operator in the loop throws an exception, the instance will also be in some partially assigned state. Calling the destructor afterwards should however be fine (assuming it only deletes m_objects) since m_objects is pointing to a new-allocated array. Making this exception-safe requires keeping all the old values in m_objects, so the loop should be moved directly after the new[] and it should assign to p[] instead of m_objects[].
This does however still cause a memory leak, because the new[] allocated memory will not be freed if the assignment loop throws. Therefore any exception must be intercepted to delete p:
template<typename T>
Ntuplet<T>& Ntuplet<T>::operator=(const Ntuplet<T>& t)
{
    if (&t == this)
        return *this;
    T* p = new T[t.m_size];
    try {
        for (size_t i = 0; i < t.m_size; ++i)
            p[i] = t.m_objects[i];
    } catch (...) {
        delete[] p;
        throw;
    }
    delete [] m_objects;
    m_objects = p;
    m_size = t.m_size;
    return *this;
}
I am assuming that the destructor of T does not throw any exceptions. In principle they are allowed to do that but it is unusual.
The only code here that can throw is the new (a std::bad_alloc or something thrown from T's constructor), or the value assignments.
If the new throws, then your op= doesn't complete, and your m_size will be left not matching up with the m_objects (which will be a dangling pointer to dead memory that used to hold an array of possibly some other size).
If one of the assignments throws, then your op= doesn't complete, and your m_size will be correct but some (or all) of the array elements will be default-constructed rather than having the value you want.
The way to fix this is to use a vector so that you don't have to worry about it. And I don't mean implementing Ntuplet using a vector... I mean literally replace Ntuplet with std::vector.
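A trivial illustration of that literal replacement (copy_tuple is just a made-up name for the demo):

```cpp
#include <vector>

// What used to be Ntuplet<int>::operator= becomes a one-liner: std::vector's
// assignment frees the old storage, copies the elements, and cleans up after
// itself if an element copy or an allocation throws.
std::vector<int> copy_tuple(const std::vector<int>& source) {
    std::vector<int> dest{9, 9}; // old contents, about to be overwritten
    dest = source;
    return dest;
}
```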

Simple implementation of vector class like std::vector

I'm implementing a simple vector like std::vector, and I wrote some functions without worrying about what kind of exception-safety guarantee they give. I know a little about exceptions in C++, but I have no experience writing exception-safe code.
Here is my code :
template <typename T>
class vector {
public:
    vector(int _DEFAULT_VECTOR_SIZE = 10) :
        size(0), array_(new T[_DEFAULT_VECTOR_SIZE]), capacity(_DEFAULT_VECTOR_SIZE) {}
    void push_back(T& elem)
    {
        if (size == capacity)
            resize(2 * size);
        array_[size] = elem;
        size++;
    }
    void pop_back()
    {
        --size;
    }
    void resize(int size_)
    {
        if (size_ > capacity)
        {
            T* temp = new T[size_];
            memcpy(temp, array_, size * sizeof(T));
            swap(temp, array_);
            delete[] array_;
            capacity = size_;
        }
    }
private:
    T* array_;
    int size;
    int capacity;
};
So my question is: how can I modify my code (functions) so that it gives at least the basic guarantee? And are there techniques for writing exception-safe code with the basic or strong guarantee?
Thanks
Exception safety comes in two major flavours:
if an exception happens, your program ends up in a sane state, but which one is unspecified (the basic guarantee);
if an exception happens, your program ends up in the original state (the strong guarantee).
The main challenge for you will be to deal with assignment and copy constructors throwing. As the comments already note, you shouldn't use memcpy because that fails to call copy constructors. Copying a std::string should also copy the character buffer, for instance, and a vector of strings is a perfectly normal type which you should support.
So, let's look at your vector's copy constructor. It will need to copy every element of the source vector. And each individual copy can throw. What if one of the strings is so long that a copy throws std::bad_alloc ?
Now, exception safety means that you leave the program in a sane state, so no memory leaks. Your vector's copy ctor failed, so the dtor won't run. Who cleans up T* array_ then? This must be done in your copy ctor.
When the copy fails, there won't be a new vector, so you get the second type of exception safety for free ("strong exception safety"). But let's look at the assignment operator next, v2 = v1. There's an old vector that you will overwrite. If you first do a .resize(0) and then copy over all the elements, you may encounter an exception halfway through the copy. Your original vector content is gone, and the new content is incomplete. Still, you haven't leaked any memory, nor have you copied half an element.
To make the assignment safe, there's a simple trick: first copy the source vector to a temporary vector. If this fails, no problem (see above); we didn't touch the destination yet. But if the copy succeeds, we swap the array_ pointers, size and capacity of the temporary and the destination vectors. Swapping pointers and swapping ints is safe (can't throw). Finally, we let the temporary vector go out of scope, which destroys the old vector elements that are no longer needed.
So, by doing all the dangerous operations on temporaries, we ensure that any exception doesn't touch the original state.
You'll need to check all your methods to see if these problems can occur, but the pattern is usually similar. Don't leak array if an element copy or element assignment throws, do propagate that exception to your caller.
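A sketch of that pattern against the question's class (element copies may still throw, so the copy constructor cleans up after itself; everything after the copy is non-throwing):

```cpp
#include <utility>  // std::swap

template <typename T>
class vector {
    T* array_;
    int size_;
    int capacity_;
public:
    explicit vector(int cap = 10)
        : array_(new T[cap]), size_(0), capacity_(cap) {}
    ~vector() { delete[] array_; }

    vector(const vector& other)
        : array_(new T[other.capacity_]),
          size_(other.size_), capacity_(other.capacity_) {
        try {
            for (int i = 0; i < size_; ++i)
                array_[i] = other.array_[i]; // element copy may throw
        } catch (...) {
            delete[] array_; // don't leak the half-filled buffer
            throw;
        }
    }

    // Non-throwing: just exchanges raw pointers and ints.
    void swap(vector& other) noexcept {
        std::swap(array_, other.array_);
        std::swap(size_, other.size_);
        std::swap(capacity_, other.capacity_);
    }

    // All dangerous work happens while building tmp; if it throws, *this is
    // untouched. Otherwise swap commits the result and tmp's destructor
    // releases the old storage.
    vector& operator=(const vector& other) {
        vector tmp(other);
        swap(tmp);
        return *this;
    }

    void push_back(const T& e) { array_[size_++] = e; } // growth omitted here
    int size() const { return size_; }
    T& operator[](int i) { return array_[i]; }
};
```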

Error every time the destructor is called

I get a weird message every time the destructor is called. Since one of my private variables is a dynamically allocated array (int *member;), I wrote the destructor like this:
ClassSet::~ClassSet() { delete[] member; }
Every time the destructor for ClassSet is called, I get an error message:
Windows has triggered a breakpoint in Hw1.exe.
This may be due to a corruption of the heap, which indicates a bug in Hw1.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while Hw1.exe has focus.
entire class:
class ClassSet
{
public:
    ClassSet(int n = DEFAULT_MAX_ITEMS);
    ClassSet(const ClassSet& other);
    ClassSet &operator=(const ClassSet& other);
    ~ClassSet();
private:
    int size;
    int *member;
};
ClassSet::ClassSet(int n){
    size = n;
    member = new int[n];
}
ClassSet::ClassSet(const ClassSet& other){
    int i = 0;
    this->size = other.size;
    member = new int [capacity];
    while (i < size)
    {
        this->member[i] = other.member[i];
        i++;
    }
}
Multiset& Multiset::operator=(const Multiset &other)
{
    if (&other == this) { return *this; }
    this->size = other.size;
    int i = 0;
    delete [] member;
    member = new int[size];
    while (i < other.size)
    {
        this->member[i] = other.member[i];
        i++;
    }
    return *this;
}
Any idea what's wrong with this destructor?
You failed to implement (or you have implemented incorrectly) one of ClassSet::ClassSet(const ClassSet&) or ClassSet::operator=(const ClassSet&).
In other words, you have violated the Rule of Three.
The best solution, however, is likely not to implement them, but rather to change how you allocate space for your dynamic array. Instead of using new[] and delete[], try replacing that member with a std::vector<>.
Heap corruption is often something detected after the fact. It may have to do with your destructor, but as I've seen, the corruption can just as well have happened well before the heap access where the error occurs.
Basically, "heap corruption detected" simply means that on a given access of the heap, Windows decided that the current state of the heap was inconsistent/invalid. Something went bad a while earlier.
These bugs can be really hard to track down. One common cause of heap corruption, though, is double deletion: you deleted something twice inadvertently. This can point at deeper issues with how your data is copied around your code and with your design.
This can happen, as others have said, when you don't have an appropriate copy constructor/assignment operator that copies the dynamic memory. The "copy" deletes your memory, then the original object deletes it again, causing a double delete.
If you've posted your actual code, then I think the problem is here:
ClassSet::ClassSet(const ClassSet& other){
    int i = 0;
    this->size = other.size;
    member = new int [capacity]; // <--- what is capacity?
    while (i < size)
    {
        this->member[i] = other.member[i];
        i++;
    }
}
You're sizing the copied array based on something named capacity which doesn't have any obvious relationship to other.size. If capacity is smaller than size the loop that copies elements will corrupt the heap.
Assuming that this is an academic exercise, once you solve this problem you should look into the copy-and-swap idiom that is used for classes like these to ensure exception safety.
If this isn't an academic exercise, then you should be looking at std::vector or other containers that are provided in libraries.
This problem is quite common. The default copy constructor is equivalent to
ClassSet(const ClassSet& other) {
    size = other.size;
    member = other.member;
}
The problem with this is that when an instance ClassSet is copied, both the original instance and the new instance hold a raw pointer to member. Both destructors will free member, causing the double free problem you are seeing.
For example,
{
    ClassSet a;
    ClassSet b(a); // assert(b.member == a.member)
} // At this point, both a and b will free the same pointer.
You can mitigate this by not allowing copying, or moving the pointer instead of copying.
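For reference, a deep-copying Rule-of-Three version avoids the double free (a sketch; since the elements are ints, the copies cannot throw, and the assignment allocates first and only then modifies the members):

```cpp
#include <algorithm> // std::copy

class ClassSet {
    int size;
    int* member;
public:
    explicit ClassSet(int n = 10) : size(n), member(new int[n]()) {}
    ~ClassSet() { delete[] member; }

    // Deep copy: each instance owns its own array, so each destructor
    // frees a different allocation and there is no double free.
    ClassSet(const ClassSet& other)
        : size(other.size), member(new int[other.size]) {
        std::copy(other.member, other.member + size, member);
    }
    ClassSet& operator=(const ClassSet& other) {
        if (this != &other) {
            int* tmp = new int[other.size]; // may throw; *this untouched
            std::copy(other.member, other.member + other.size, tmp);
            delete[] member;                // commit (non-throwing)
            member = tmp;
            size = other.size;
        }
        return *this;
    }
    int& operator[](int i) { return member[i]; }
    int count() const { return size; }
};
```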

Freeing in the destructor is causing a memory leak

I have to write a stack class template using arrays in C++ for my assignment.
Here is my code:
#include <iostream>
template <typename Type>
class Stack {
private:
    int top;
    Type items[];
public:
    Stack() { top = -1; };
    ~Stack() { delete[] items; };
    void push(Type);
    bool isEmpty() const;
    Type pop();
    Type peek() const;
};
int main (int argc, char *argv[]) {
    Stack<double> st;
    return 0;
}
template<typename Type>
void Stack<Type>::push(Type item) {
    top++;
    if (top == sizeof(items) / sizeof(Type)) {
        Type buff[] = new Type[top];
        std::copy(items, items + top, buff);
        delete[] items;
        items = new Type[2 * top];
        std::copy(buff, buff + top, items);
        delete[] buff;
    }
    items[top] = item;
}
template<typename Type>
bool Stack<Type>::isEmpty() const {
    return top == -1 ? true : false;
}
template<typename Type>
Type Stack<Type>::pop() {
    //TODO
    return items[top--];
}
template<typename Type>
Type Stack<Type>::peek() const {
    //TODO
    return items[top-1];
}
It compiled fine using "g++ -Wall", however when I run the program, I got this error:
*** glibc detected *** ./lab3: munmap_chunk(): invalid pointer: 0x00007fff41a3cdf8
After trying out a bit with GDB, I found out the error arose from the line delete[] items in the destructor.
I don't understand why freeing an array results in a memory leak? Any clues?
You should only delete[] what you have explicitly allocated with new[]. Your items member is not a dynamically allocated array, so you must not free it as if it were.
On the other hand, you have
Type items[];
which doesn't actually allocate any space in your object instances for the stack.
I don't think that's a memory leak! It's a crash occurring because you deleted a non-heap object.
You haven't new'd items, so you can't delete it in the destructor. Assuming you require Stack to be a dynamically sized class, the items array must be dynamically allocated, in which case the declaration for items should be
Type *items; (as hacker mentions above)
and the constructor should call items = new Type[ARRAY_SIZE] to allocate the memory, or at the very least initially assign the items pointer to NULL.
Edit based on comments -
To complete and secure the memory allocation responsibilities for this class you should also include a copy constructor and an assignment operator which allocates new memory and copies the values stored in items to the new object. This avoids the pointer simply being copied by the compiler generated copy constructor or assignment operator which would lead to multiple objects pointing to the same dynamically allocated memory. Upon destruction of the first of these objects this memory will be deleted. Further use of the now deleted memory by the other objects which share the pointer would be likely to result in a crash.
Rather than adding code here I refer you to the code in Martin's answer for this question.
items is a pointer to Type. This pointer must be initialised before it is used (it is a wild pointer as it is). That is why your program crashes. That it happens in the destructor is a coincidence; it could just as well have happened earlier, e.g. in push(), where memory is overwritten at the location that items happens to point to.
You need to allocate memory for a number of elements. See below.
You already have the logic for the dynamic growth of the array in place. But an array in C++ does not know its size (it is just a pointer of some type). You need to keep track of the current capacity/size. Thus, instead of
sizeof(items)
define a new member to hold the current capacity.
E.g.:
int capacity;
and in the constructor:
capacity = 100;
items = new Type[capacity];
The last line allocates a number of elements and is the main solution to your problem.
And in push():
if (top == capacity) {
The logic for reallocation will need to change. E.g.:
capacity *= 2;
items = new Type[capacity];
instead of
items = new Type[2*top];
I have just sketched out a solution here. You can fill in the details.
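Putting the pieces together, a hedged sketch of the fixed class: items is now a real pointer, capacity tracks the allocation, and the reallocation preserves the existing elements. (Copying is disabled here rather than implemented; see the note below about the copy constructor and assignment operator.)

```cpp
#include <algorithm> // std::copy

template <typename Type>
class Stack {
    int top;
    int capacity;
    Type* items; // a pointer, allocated with new[] in the constructor
public:
    Stack() : top(-1), capacity(100), items(new Type[capacity]) {}
    ~Stack() { delete[] items; }
    Stack(const Stack&) = delete;            // copying omitted in this sketch
    Stack& operator=(const Stack&) = delete;

    void push(Type item) {
        if (top + 1 == capacity) {
            // Grow: allocate first (may throw, items stays intact),
            // copy the old elements over, then commit.
            Type* bigger = new Type[2 * capacity];
            std::copy(items, items + capacity, bigger);
            delete[] items;
            items = bigger;
            capacity *= 2;
        }
        items[++top] = item;
    }
    bool isEmpty() const { return top == -1; }
    Type pop() { return items[top--]; }
    Type peek() const { return items[top]; }
};
```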
Note: you have only one instance of Stack in your program; body of main() is:
Stack<double> st;
If you instead were to assign one instance of Stack to another instance, e.g.
{
    Stack<double> st;
    Stack<double> st2 = st; // Copy constructor will be invoked for st2. If not defined, the compiler-generated copy constructor will do a copy of the pointer value.
    Stack<double> st3;
    st3 = st; // Assignment operator will be invoked for st3. If not defined, the compiler-generated assignment operator will do a copy of the pointer value.
}
then the copy constructor (to work for st2) and the assignment operator (to work for st3) for Stack must be defined properly (make a copy of the referenced member items). Otherwise a double or triple delete in the destructor, and undefined behavior (e.g. a crash), will be the result when st, st2 and st3 go out of scope at the closing brace.
The first thing I've seen is that you're likely to mean Type *items, not Type items[].
And then you don't want to (can't) use sizeof on the dynamically allocated data.