I'm implementing a simple vector like std::vector, and I wrote some functions without worrying about what kind of exception-safety guarantee they give. I know a little about C++ exceptions, but I have no experience writing exception-safe code.
Here is my code :
template <typename T>
class vector {
public:
    vector(int _DEFAULT_VECTOR_SIZE = 10) :
        array_(new T[_DEFAULT_VECTOR_SIZE]), size(0), capacity(_DEFAULT_VECTOR_SIZE) {}
    void push_back(T& elem)
    {
        if (size == capacity)
            resize(2 * size);
        array_[size] = elem;
        size++;
    }
    void pop_back()
    {
        --size;
    }
    void resize(int size_)
    {
        if (size_ > capacity)
        {
            T* temp = new T[size_];
            memcpy(temp, array_, size * sizeof(T));
            std::swap(temp, array_);
            delete[] temp; // temp now holds the old buffer
            capacity = size_;
        }
    }
private:
    T* array_;
    int size;
    int capacity;
};
So my question is: how can I modify my code (functions) so that it gives at least the basic guarantee? And are there general techniques for writing exception-safe code with the basic or strong guarantee?
Thanks
Exception-safety comes in two major flavours:
The basic guarantee: if an exception happens, your program ends up in a sane state. Which one is unspecified.
The strong guarantee: if an exception happens, your program ends up in the original state.
The main challenge for you will be to deal with assignment and copy constructors throwing. As the comments already note, you shouldn't use memcpy because that fails to call copy constructors. Copying a std::string should also copy the character buffer, for instance, and a vector of strings is a perfectly normal type which you should support.
So, let's look at your vector's copy constructor. It will need to copy every element of the source vector, and each individual copy can throw. What if one of the strings is so long that a copy throws std::bad_alloc?
Now, exception safety means that you leave the program in a sane state, so no memory leaks. Your vector's copy ctor failed, so the dtor won't run. Who cleans up T* array_ then? This must be done in your copy ctor.
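A sketch of what such a cleanup-on-throw copy constructor could look like, using the member names from the question (the class is renamed `vec` here to avoid clashing with std::vector, and the growth logic is omitted for brevity):

```cpp
template <typename T>
class vec {
public:
    explicit vec(int cap = 10) : array_(new T[cap]), size(0), capacity(cap) {}

    // Copy constructor: the destructor will NOT run if this throws,
    // so we must free the buffer ourselves before re-throwing.
    vec(const vec& other)
        : array_(new T[other.capacity]),  // may throw; nothing to clean up yet
          size(other.size),
          capacity(other.capacity)
    {
        try {
            for (int i = 0; i < size; ++i)
                array_[i] = other.array_[i];  // element copy-assignment may throw
        } catch (...) {
            delete[] array_;  // no leak
            throw;            // propagate the exception to the caller
        }
    }

    ~vec() { delete[] array_; }

    // Minimal helpers for demonstration; no growth logic.
    void push_back(const T& e) { array_[size++] = e; }
    int length() const { return size; }
    const T& operator[](int i) const { return array_[i]; }

private:
    T* array_;
    int size;
    int capacity;
};
```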
When the copy fails, there won't be a new vector, so you get the second type of exception safety for free. ("strong exception safety"). But let's look at the assignment operator next, v2 = v1. There's an old vector that you will overwrite. If you first do a .resize(0) and then copy over all elements, you may encounter an exception halfway through the copy. Your original vector content is gone, and the new content is incomplete. Still, you haven't leaked any memory yet nor have you copied half an element.
To make the assignment safe, there's a simple trick: first copy the source vector to a temporary vector. If this fails, no problem (see above); we didn't touch the destination yet. But if the copy succeeds, we swap the array_ pointers, sizes and capacities of the temporary and the destination vectors. Swapping pointers and swapping ints is safe (can't throw). Finally, we let the temporary vector go out of scope, which destroys the old vector elements that are no longer needed.
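Sketched in code, this copy-and-swap trick might look like the following (member names from the question, class renamed `vec` to avoid confusion with the standard one; the copy constructor is kept deliberately minimal):

```cpp
#include <utility>  // std::swap

template <typename T>
class vec {
public:
    explicit vec(int cap = 10) : array_(new T[cap]), size(0), capacity(cap) {}

    vec(const vec& other)
        : array_(new T[other.capacity]), size(other.size), capacity(other.capacity)
    {
        try {
            for (int i = 0; i < size; ++i)
                array_[i] = other.array_[i];  // may throw
        } catch (...) {
            delete[] array_;  // clean up, then let the exception escape
            throw;
        }
    }

    ~vec() { delete[] array_; }

    // Copy-and-swap: all the throwing work happens while building `tmp`.
    // If that throws, *this was never touched (strong guarantee).
    vec& operator=(const vec& other)
    {
        vec tmp(other);                 // may throw; destination untouched
        std::swap(array_, tmp.array_);  // pointer/int swaps cannot throw
        std::swap(size, tmp.size);
        std::swap(capacity, tmp.capacity);
        return *this;                   // tmp's destructor frees the old buffer
    }

    void push_back(const T& e) { array_[size++] = e; }  // no growth, for brevity
    int length() const { return size; }
    const T& operator[](int i) const { return array_[i]; }

private:
    T* array_;
    int size;
    int capacity;
};
```

Note that self-assignment needs no special case here: copying to the temporary first makes it safe automatically.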
So, by doing all the dangerous operations on temporaries, we ensure that any exception doesn't touch the original state.
You'll need to check all your methods to see if these problems can occur, but the pattern is usually similar: don't leak array_ if an element copy or element assignment throws, but do propagate that exception to your caller.
Related
I'm trying to implement my own vector class (for study purpose). Currently I am stuck at vector::clear(), how can I implement it? It can't be like this, right?
void clear() {
    for (size_t i = 0; i < _size; ++i) {
        _data[i] = 0; // can't work with a custom class
    }
}
Or
void clear() {
    delete[] _data;
    _data = new T[_cap];
    _size = 0;
} // can we make it better?
Or as simple as:
void clear() {
    _size = 0; // fastest, but user can still access the data
}
T& operator[](size_t pos) {
    if (pos >= _size) throw std::out_of_range("pos"); // prevent access to "cleared" data
    return _data[pos];
}
I did some digging through libcxx and found out that it uses alloc_traits::destroy, which destroys each element in _data?
Here are my class attributes:
T* _data;
size_t _size;
size_t _cap;
If you keep a capacity larger than the size, you need to account for allocated memory that doesn't hold any object. This means that the new T[_cap] approach simply doesn't work:
first and foremost, your vector won't work for objects that are not default-constructible;
even for those that are, you will be creating more objects than requested, and for some types construction can be expensive;
finally, when you push_back while you still have capacity, you will be doing assignment instead of construction (because an object already exists there).
So you need to decouple memory allocation from object creation:
allocate memory with operator new. Please note this is different than the new expression you are most familiar with.
construct an object in the allocated memory with an in-place constructor (also known as placement new)
destroy an object by explicitly calling the destructor
deallocate memory with operator delete
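The four steps above, sketched with std::string as the element type (std::destroy_at is the C++17 helper; calling the destructor by hand works too):

```cpp
#include <memory>  // std::destroy_at (C++17)
#include <new>     // ::operator new, placement new
#include <string>

std::string roundtrip()
{
    // 1. Allocate raw memory for two strings; no constructors run here.
    void* raw = ::operator new(2 * sizeof(std::string));
    std::string* p = static_cast<std::string*>(raw);

    // 2. Construct objects in that memory with placement new.
    new (p) std::string("hello");
    new (p + 1) std::string(" world");

    std::string result = p[0] + p[1];

    // 3. Destroy the objects by explicitly invoking their destructors.
    std::destroy_at(p + 1);
    std::destroy_at(p);

    // 4. Release the raw memory.
    ::operator delete(raw);
    return result;
}
```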
C++17 brings some utility functions for this purpose like: std::uninitialized_default_construct, uninitialized_copy, std::destroy etc.; find more in Dynamic memory management
If you want to be more generic like std::vector you can use allocators instead.
With this in mind, now answering your specific question about clear. The behavior of std::vector::clear is: "Erases all elements from the container. After this call, size() returns zero. [...] Leaves the capacity() of the vector unchanged". If you want the same behavior, this means:
void clear() noexcept
{
    for (T* it = _data; it != _data + _size; ++it)
        it->~T();
    // or, equivalently (C++17):
    std::destroy(_data, _data + _size);
    _size = 0;
}
As you can see implementing something like std::vector is far from trivial and requires some expert knowledge and techniques.
Another source of complications is the strong exception-safety guarantee. Let's consider just push_back as an example. A naive implementation could do this (pseudocode):
void push_back(const T& obj)
{
    if size == capacity
        // grow capacity
        new_data = allocate memory
        move objects from _data to new_data
        _data = new_data
        update _cap
    new (_data + _size) T{obj}; // in-place construct
    ++_size;
}
Now think about what happens if the move constructor of one object throws while moving to the new, larger memory. You have a memory leak and, worse, you are left with some objects in your vector in a moved-from state. This puts your vector in an invalid internal state. That's why it is important that std::vector::push_back guarantees that either:
the operation is successful, or
if an exception is thrown, the function has no effect.
In other words, it guarantees that it never leaves the object in an "intermediate" or invalid state, like our naive implementation does.
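One way to get that strong guarantee is to build everything in a side buffer first, copying (rather than moving) the existing elements, and only touch the members once nothing can throw any more. A sketch using the member names from the question (a destructor is added so the example is complete):

```cpp
#include <cstddef>  // std::size_t
#include <memory>   // std::destroy, std::destroy_at (C++17)
#include <new>      // ::operator new, placement new

template <typename T>
struct vec {
    T* _data = nullptr;
    std::size_t _size = 0;
    std::size_t _cap = 0;

    ~vec() {
        std::destroy(_data, _data + _size);
        ::operator delete(_data);
    }

    // Strong guarantee: on any exception, *this is left exactly as it was.
    void push_back(const T& obj) {
        if (_size == _cap) {
            std::size_t new_cap = _cap ? 2 * _cap : 1;
            T* new_data = static_cast<T*>(::operator new(new_cap * sizeof(T)));
            std::size_t i = 0;
            try {
                // Copy (not move) so the old buffer stays intact if we throw.
                for (; i < _size; ++i)
                    new (new_data + i) T(_data[i]);
                new (new_data + _size) T(obj);
            } catch (...) {
                while (i > 0) std::destroy_at(new_data + --i);
                ::operator delete(new_data);
                throw;  // the members were never touched
            }
            // Commit phase: nothing below can throw.
            std::destroy(_data, _data + _size);
            ::operator delete(_data);
            _data = new_data;
            _cap = new_cap;
        } else {
            new (_data + _size) T(obj);  // if this throws, the vector is unchanged
        }
        ++_size;
    }
};
```

(std::vector itself uses std::move_if_noexcept here, so that types with noexcept moves are moved rather than copied; the copy fallback above is the simple, always-safe version.)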
The only responsibility of clear is to call the destructor on any objects that are in the array and set the array size to 0 (http://www.cplusplus.com/reference/vector/vector/clear/). Many implementations do not actually free the memory, but simply set the end and last pointers in the array to the beginning pointer after running through the vector and calling the destructor on each element. This is done as an optimization so that the memory is still available and the vector is ready to go if new elements are pushed onto the vector.
If you don't need that particular optimization, your version of clear() where you simply delete[] the data and then reallocate is perfectly reasonable. It all depends on what tradeoffs you want to make.
This is the code:
template <typename T>
class Ntuplet
{
public:
    Ntuplet(std::initializer_list<T> s);
    ~Ntuplet(void);
    Ntuplet<T>& operator=(const Ntuplet<T>& t);
private:
    size_t m_size;
    T* m_objects;
};
template<typename T>
Ntuplet<T>& Ntuplet<T>::operator=(const Ntuplet<T>& t)
{
    if (&t == this)
        return *this;
    delete [] m_objects;
    m_size = t.m_size;
    m_objects = new T[t.m_size];
    for (int i = 0; i < m_size; ++i)
        m_objects[i] = t.m_objects[i];
    return *this;
}
This is from an old exam. The question reads:
"
At which line is it likely that an exception might be thrown; in which state will the object Ntuplet be at that point (initial, coherent, incoherent, undefined)? Propose a better way to implement the class in order to avoid exception/problems."
My guess was m_size = t.m_size, because I thought t.m_size might have too large a value; but that can't be, because then how would the t object even exist (the error would have appeared earlier)? The only other thing that comes to mind is that ++i might take the index out of range..?
Thanks in advance
Edit: "Coherent" state meaning the object is in a state that doesn't have contradictory attributes, but it's not in the state we want it to be.
"Incoherent" means the attributes are not what they should be. For example, if you do a++ = b but the = operator throws an error, a is in an incoherent state because it was incremented even though the rest of the code didn't get to be executed. In this state, the destructor is available.
"Undefined" is the same as the above, but with the destructor unavailable as well.
Exceptions can only be thrown by very specific constructs. Anything that is undefined behavior, like dereferencing an invalid pointer, out-of-range access to a C-style array, undefined behavior in type conversions, is not an exception.
The only things in Ntuplet<T>::operator=(const Ntuplet<T>& t) that can throw an exception are the new[] expression (std::bad_alloc if no memory can be allocated, or any exception from T's default constructor) and the copy assignment of type T used inside the loop (depending on what type T is).
If the new throws an exception, then m_objects will be a dangling pointer, because it was delete[]d beforehand.
m_size will already have the size of the copied instance. Therefore the object will not be in any sane state.
Assuming the destructor of Ntuplet actually delete[]s the memory allocated for m_objects, then calling it later will cause undefined behavior because of a double-free.
If you are not going to replace Ntuplet or the m_objects member with a standard library implementation which takes care of exception-safety, then one solution for this particular exception would be to save the return value of the new expression to a temporary pointer T* p before modifying any member. Then the temporary can be assigned to m_objects later after the old m_objects was deleted.
In case the assignment operator in the loop throws an exception, the instance will also be in some partially assigned state. Calling the destructor afterwards should however be fine (assuming it only deletes m_objects) since m_objects is pointing to a new-allocated array. Making this exception-safe requires keeping all the old values in m_objects, so the loop should be moved directly after the new[] and it should assign to p[] instead of m_objects[].
This does however still cause a memory leak, because the new[] allocated memory will not be freed if the assignment loop throws. Therefore any exception must be intercepted to delete p:
template<typename T>
Ntuplet<T>& Ntuplet<T>::operator=(const Ntuplet<T>& t)
{
    if (&t == this)
        return *this;
    T* p = new T[t.m_size];
    try {
        for (size_t i = 0; i < t.m_size; ++i)
            p[i] = t.m_objects[i];
    } catch (...) {
        delete[] p;
        throw;
    }
    delete [] m_objects;
    m_objects = p;
    m_size = t.m_size;
    return *this;
}
I am assuming that the destructor of T does not throw any exceptions. In principle they are allowed to do that but it is unusual.
The only code here that can throw is the new (a std::bad_alloc or something thrown from T's constructor), or the value assignments.
If the new throws, then your op= doesn't complete, and your m_size will be left not matching up with the m_objects (which will be a dangling pointer to dead memory that used to hold an array of possibly some other size).
If one of the assignments throws, then your op= doesn't complete, and your m_size will be correct but some (or all) of the array elements will be default-constructed rather than having the value you want.
The way to fix this is to use a vector so that you don't have to worry about it. And I don't mean implementing Ntuplet using a vector... I mean literally replace Ntuplet with std::vector.
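To make that last point concrete: with std::vector as the only member, the compiler-generated special members are already correct, and all the hand-written allocation and cleanup disappears. A sketch of what the class could shrink to (the accessors are illustrative additions, not part of the exam code):

```cpp
#include <cstddef>
#include <initializer_list>
#include <string>
#include <vector>

// Ntuplet re-done on top of std::vector: no destructor, no hand-written
// assignment operator; memory management and cleanup on exceptions are
// std::vector's problem, not ours.
template <typename T>
class Ntuplet {
public:
    Ntuplet(std::initializer_list<T> s) : m_objects(s) {}
    std::size_t size() const { return m_objects.size(); }
    const T& operator[](std::size_t i) const { return m_objects[i]; }
private:
    std::vector<T> m_objects;
};
```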
Given the following code:
template <class T>
class A {
    T* arr;
    int size;
public:
    A(int size) : arr(new T[size]), size(size) {
    }
    //..
    A& operator=(const A& a) {
        if (this == &a) {
            return *this;
        }
        this->size = a.size;
        T* ar = new T[a.size];
        for (int i = 0; i < size; i++) {
            ar[i] = a.arr[i]; // do I need to do this in a try-catch?
        }
        delete[] this->arr;
        this->arr = ar;
        return *this;
    }
    //...
};
When I copy the elements from the given array, do I need to do it with a try-catch clause or not? is it a good idea or not?
I can see that your T copy could potentially throw, due to its own allocation failure or other reasons.
On the other hand, your A copy could already throw because of an allocation failure of its own.
Currently you would need to handle the destruction yourself, because you have not protected the array that you allocated: all the T instances you created need to be destroyed if one of them throws, perhaps due to allocation failure.
One quick way to fix that would be to hold the array in a unique_ptr. Then it will be destroyed on exiting the scope.
Another way may be to reconsider your contract on A after the assignment has thrown: the object must be valid, i.e. survive being used, but perhaps it need not guarantee to still contain its previous contents, nor all of the new contents. So you could decide to destroy the existing array before allocating and assigning a new one, then copy the members. You could also decide not to reallocate if the size has not changed, but just re-assign; this would leave a mixture of new and old members after an exception, but they would all be valid and safe to delete.
Please ensure that size matches the actual attached array size at all times! Your existing code gets this wrong: in particular, size should be set to 0 (and the pointer to null) after the delete and before the assignment, and it should only be set to the new size after the new pointer has been assigned.
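A sketch of the unique_ptr variant suggested above (class and member names from the question; the destructor, which the question elides, is filled in minimally):

```cpp
#include <memory>  // std::unique_ptr

template <class T>
class A {
    T* arr;
    int size;
public:
    explicit A(int size) : arr(new T[size]), size(size) {}
    ~A() { delete[] arr; }

    A& operator=(const A& a) {
        if (this == &a)
            return *this;
        // The new buffer is owned by a unique_ptr while the (possibly
        // throwing) element copies run, so an exception cannot leak it,
        // and *this is still untouched at that point.
        std::unique_ptr<T[]> tmp(new T[a.size]);
        for (int i = 0; i < a.size; ++i)
            tmp[i] = a.arr[i];  // may throw; tmp cleans up automatically
        // Commit phase: nothing below can throw.
        delete[] arr;
        arr = tmp.release();
        size = a.size;
        return *this;
    }

    int length() const { return size; }
    T& operator[](int i) { return arr[i]; }
};
```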
I am trying to append a blank object to a list using the push_back method.
main.cpp
vector<FacialMemory> facial_memory;
printf("2\n");
// Add people face memories based on number of sections
for (int i = 0; i < QuadrantDict::getMaxFaceAreas(); i++)
{
    printf("i %d\n", i);
    FacialMemory n_fm;
    facial_memory.push_back(n_fm); // NOTE: Breaks here
}
At the push_back method call, the program crashes with a segmentation fault. I have looked around at similar questions and they point to the solution I have here. I have also tried to pass FacialMemory() into the push_back call but still the same issue.
The FacialMemory class is defined as such:
FacialMemory.h
class FacialMemory
{
private:
    vector<FaceData> face_memory;
public:
    FacialMemory();
    ~FacialMemory();
    void pushData(FaceData face);
    bool isEmpty();
    vector<FaceData> getFaces();
    FaceData getRecent();
};
Constructor and destructor
FacialMemory::FacialMemory()
{
}
FacialMemory::~FacialMemory()
{
    delete[] &face_memory;
}
When you push_back an item into a vector, the item is copied. Sometimes this triggers more work as the vector is resized: its current contents are copied, and the now-copied elements that used to belong to the vector are destroyed. Destruction invokes the destructor.
Unfortunately, FacialMemory's destructor contains a fatal error:
FacialMemory::~FacialMemory()
{
    delete[] &face_memory; // <<== right here
}
It tries to delete[] data that was not allocated by new[], and whatever is managing the program's memory threw a fit: the book-keeping structures that track dynamically allocated storage (memory allocated with new or with new[]) were not found, or were not correct, for the storage being returned.
Further, face_memory is a std::vector, an object designed to look after its memory for you. You can create, copy, resize, and delete a vector without any intervention in most cases. The most notable counter case is a vector of pointers where you may have to release the pointed-at data when removing the pointer from the vector.
The solution is to do nothing in the FacialMemory class destructor. In fact, the Rule of Zero recommends that you not have a destructor at all because FacialMemory has no members or resources that require special handling. The compiler will generate the destructor for you with approximately zero chance of a mistake being made.
While reading the link for the Rule of Zero, pay attention to the Rules of Three and Five because they handle the cases where a class does require special handling and outline the minimum handling you should provide.
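Applied to the class from the question, the Rule-of-Zero version might look like this (FaceData is reduced to a stand-in struct here, and const qualifiers are added where they belong):

```cpp
#include <vector>

struct FaceData { int id; };  // stand-in for the real type

// No destructor, no copy constructor, no assignment operator:
// std::vector already manages its own memory correctly, so the
// compiler-generated special members do the right thing.
class FacialMemory {
public:
    void pushData(const FaceData& face) { face_memory.push_back(face); }
    bool isEmpty() const { return face_memory.empty(); }
    const std::vector<FaceData>& getFaces() const { return face_memory; }
    const FaceData& getRecent() const { return face_memory.back(); }
private:
    std::vector<FaceData> face_memory;
};
```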
One reason a segmentation fault occurs is accessing memory that is invalid, and in your program you are deallocating memory (the delete keyword in your destructor) that was not allocated by the new keyword.
Please refer to vector-push-back.
Try adding a meaningful copy constructor to the FacialMemory class.
I've been shown that a std::string cannot be inserted into a boost::lockfree::queue.
boost::lockfree::queue is too valuable to abandon, so I think I could use very large, fixed-length chars to pass the data according to the requirements (assuming that even satisfies them; I'm having trouble learning how to satisfy these requirements), but that will eat up memory if I want large messages.
Does a dynamically-sized text object with a copy constructor, a trivial assignment operator, and a trivial destructor exist? If so, where? If not, please outline how to manifest one.
A dynamically-sized type with a trivial copy ctor/dtor is not possible. There are two solutions to your problem: use a fixed-size type, or store pointers in the queue:
boost::lockfree::queue<std::string*> queue(some_size);
// push via new
queue.push(new std::string("blah"));
// pop and delete
std::string* ptr;
if (queue.pop(ptr))
{
    delete ptr;
}
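If the bare new/delete pair feels risky, ownership can be confined to std::unique_ptr on both sides, so that an exception between push and pop cannot leak the allocation. Sketched here with std::queue standing in for boost::lockfree::queue, since the push/pop shape is the same:

```cpp
#include <memory>
#include <queue>
#include <string>

std::string send_and_receive()
{
    std::queue<std::string*> queue;  // stand-in for boost::lockfree::queue

    // Producer side: allocate via unique_ptr, release() only once the
    // pointer has been handed to the queue.
    auto msg = std::make_unique<std::string>("blah");
    queue.push(msg.get());
    msg.release();  // the queue now owns the allocation

    // Consumer side: re-wrap the raw pointer immediately so it cannot leak.
    std::unique_ptr<std::string> owned(queue.front());
    queue.pop();
    return *owned;  // the string is freed when `owned` goes out of scope
}
```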
Does a dynamically-sized text object with a copy constructor, a trivial assignment operator, and a trivial destructor exist?
Dynamically sized, no. For something to have a trivial destructor, it requires that the destructor of the object is implicit (or defaulted), and any non-static member objects also have implicit (or defaulted) destructors. Since anything that is dynamically allocated will require a delete [] somewhere along the line in a destructor, you cannot have this constraint satisfied.
To expand upon the above, consider a (very cut down) example of what happens in std::string:
namespace std
{
    // Ignoring templates and std::basic_string for simplicity
    class string
    {
    private:
        char* internal_;
        // Other fields
    public:
        string(const char* str)
            : internal_(new char[strlen(str) + 1])
        {
            strcpy(internal_, str); // copy the contents into our buffer
        }
    };
}
Consider what would happen if we left the destructor as default: it would destroy the stack-allocated char * (that is, the pointer itself, not what it points to). This would cause a memory leak, as we now have allocated space that has no references and hence can never be freed. So we need to declare a destructor:
~string()
{
    delete[] internal_;
}
However, by doing this, the destructor becomes user-defined and is therefore non-trivial.
This will be a problem with anything that is dynamically allocated. Note that we cannot fix this by using something like a shared_ptr or a vector<char> as a member variable; even though they may be stack allocated in our class, underneath, they are simply taking care of the memory management for us: somewhere along the line with these, there is a new [] and corresponding delete [], hence they will have non-trivial destructors.
To satisfy this, you'll need to use a stack allocated char array. That means no dynamic allocation, and therefore a fixed size.
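For completeness, a minimal fixed-capacity text type of the kind described above (the name and capacity are made up; the static_asserts check exactly the properties the queue requires):

```cpp
#include <cstddef>
#include <cstring>
#include <type_traits>

// Fixed-capacity, trivially copyable and trivially destructible text
// buffer: an aggregate holding one char array, so every special member
// stays trivial. Long input is truncated; the buffer is always
// NUL-terminated.
template <std::size_t N>
struct FixedString {
    char data[N];

    static FixedString from(const char* s) {
        FixedString f{};
        std::strncpy(f.data, s, N - 1);
        f.data[N - 1] = '\0';
        return f;
    }
};

// The properties boost::lockfree::queue asks for:
static_assert(std::is_trivially_copyable<FixedString<64>>::value,
              "trivial copy/assignment");
static_assert(std::is_trivially_destructible<FixedString<64>>::value,
              "trivial destructor");
```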