How to handle failure to release a resource which is contained in a smart pointer? - c++

How should an error during resource deallocation be handled, when the
object representing the resource is contained in a shared pointer?
EDIT 1:
To put this question in more concrete terms: Many C-style interfaces
have a function to allocate a resource, and one to release
it. Examples are open(2) and close(2) for file descriptors on POSIX
systems, XOpenDisplay and XCloseDisplay for a connection to an X
server, or sqlite3_open and sqlite3_close for a connection to an
SQLite database.
I like to encapsulate such interfaces in a C++ class, using the Pimpl
idiom to hide the implementation details, and providing a factory
method returning a shared pointer to ensure that the resource is
deallocated when no references to it remain.
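To make the pattern concrete, here is a minimal sketch for the SQLite case, assuming only the standard sqlite3 C API; the custom deleter simply ignores whatever sqlite3_close reports, which is exactly the problem discussed below.
#include <sqlite3.h>
#include <boost/shared_ptr.hpp>
#include <stdexcept>
#include <string>

// Sketch: a factory returning a shared pointer that closes the database
// handle when the last reference to it goes away.
boost::shared_ptr<sqlite3> openDatabase(const std::string& filename)
{
    sqlite3* db = 0;
    if (sqlite3_open(filename.c_str(), &db) != SQLITE_OK)
    {
        sqlite3_close(db); // free the partially constructed handle
        throw std::runtime_error("sqlite3_open failed");
    }
    // The deleter ignores the return value of sqlite3_close, so a failure
    // to release the resource goes unnoticed.
    return boost::shared_ptr<sqlite3>(db, sqlite3_close);
}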
But, in all the examples given above and many others, the function
used to release the resource may report an error. If this function is
called by the destructor, I cannot throw an exception because
generally destructors must not throw.
If, on the other hand, I provide a public method to release the
resource, I now have a class with two possible states: One in which
the resource is valid, and one in which the resource has already been
released. Not only does this complicate the implementation of the
class, it also opens a potential for wrong usage. This is bad, because
an interface should aim to make usage errors impossible.
I would be grateful for any help with this problem.
The original statement of the question, and thoughts about a possible
solution follow below.
EDIT 2:
There is now a bounty on this question. A solution must meet these
requirements:
The resource is released if and only if no references to it remain.
References to the resource may be destroyed explicitly. An exception is thrown if an error occurred while releasing the resource.
It is not possible to use a resource which has already been released.
Reference counting and releasing of the resource are thread-safe.
A solution should meet these requirements:
It uses the shared pointer provided by boost, the C++ Technical Report 1 (TR1), and the upcoming C++ standard, C++0x.
It is generic. Resource classes only need to implement how the resource is released.
Thank you for your time and thoughts.
EDIT 3:
Thanks to everybody who answered my question.
Alsk's answer met everything asked for in the bounty, and
was accepted. In multithreaded code, this solution would require
a separate cleanup thread.
I have added another answer where any exceptions during
cleanup are thrown by the thread that actually used the resource,
without need for a separate cleanup thread. If you are still
interested in this problem (it bothered me a lot), please
comment.
Smart pointers are a useful tool to manage resources safely. Examples
of such resources are memory, disk files, database connections, or
network connections.
// open a connection to the local HTTP port
boost::shared_ptr<Socket> socket = Socket::connect("localhost:80");
In a typical scenario, the class encapsulating the resource should be
noncopyable and polymorphic. A good way to support this is to provide
a factory method returning a shared pointer, and declare all
constructors non-public. The shared pointers can now be copied from
and assigned to freely. The object is automatically destroyed when no
reference to it remains, and the destructor then releases the
resource.
/** A TCP/IP connection. */
class Socket
{
public:
    static boost::shared_ptr<Socket> connect(const std::string& address);

    virtual ~Socket();

protected:
    Socket(const std::string& address);

private:
    // not implemented
    Socket(const Socket&);
    Socket& operator=(const Socket&);
};
But there is a problem with this approach. The destructor must not
throw, so a failure to release the resource will remain undetected.
A common way out of this problem is to add a public method to release
the resource.
class Socket
{
public:
    virtual void close(); // may throw
    // ...
};
Unfortunately, this approach introduces another problem: Our objects
may now contain resources which have already been released. This
complicates the implementation of the resource class. Even worse, it
makes it possible for clients of the class to use it incorrectly. The
following example may seem far-fetched, but it is a common pitfall in
multi-threaded code.
socket->close();
// ...
size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use!
Either we ensure that the resource is not released before the object
is destroyed, thereby losing any way to deal with a failed resource
deallocation. Or we provide a way to release the resource explicitly
during the object's lifetime, thereby making it possible to use the
resource class incorrectly.
There is a way out of this dilemma. But the solution involves using a
modified shared pointer class. These modifications are likely to be
controversial.
Typical shared pointer implementations, such as boost::shared_ptr,
require that no exception be thrown when their object's destructor is
called. Generally, no destructor should ever throw, so this is a
reasonable requirement. These implementations also allow a custom
deleter function to be specified, which is called in lieu of the
destructor when no reference to the object remains. The no-throw
requirement is extended to this custom deleter function.
The rationale for this requirement is clear: The shared pointer's
destructor must not throw. If the deleter function does not throw, nor
will the shared pointer's destructor. However, the same holds for
other member functions of the shared pointer which lead to resource
deallocation, e.g. reset(): If resource deallocation fails, no
exception can be thrown.
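For contrast, this is roughly what a conventional deleter for the Socket class above has to look like; the close() call stands in for the throwing release operation, and any failure is silently swallowed because the deleter must not throw (a sketch, not part of the original code):
// Conventional boost::shared_ptr deleter: it must not throw, so any
// error reported by close() is discarded here.
struct NoThrowSocketDeleter
{
    void operator()(Socket* socket)
    {
        try
        {
            socket->close(); // hypothetical throwing release operation
        }
        catch (...)
        {
            // nowhere to report the failure from a deleter that must not throw
        }
        delete socket;
    }
};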
The solution proposed here is to allow custom deleter functions to
throw. This means that the modified shared pointer's destructor must
catch exceptions thrown by the deleter function. On the other hand,
member functions other than the destructor, e.g. reset(), shall not
catch exceptions of the deleter function (and their implementation
becomes somewhat more complicated).
Here is the original example, using a throwing deleter function:
/** A TCP/IP connection. */
class Socket
{
public:
    static SharedPtr<Socket> connect(const std::string& address);

protected:
    Socket(const std::string& address);
    virtual ~Socket() { }

private:
    struct Deleter;

    // not implemented
    Socket(const Socket&);
    Socket& operator=(const Socket&);
};

struct Socket::Deleter
{
    void operator()(Socket* socket)
    {
        // Close the connection. If an error occurs, delete the socket
        // and throw an exception.
        delete socket;
    }
};

SharedPtr<Socket> Socket::connect(const std::string& address)
{
    return SharedPtr<Socket>(new Socket(address), Deleter());
}
We can now use reset() to free the resource explicitly. If there is
still a reference to the resource in another thread or another part of
the program, calling reset() will only decrement the reference
count. If this is the last reference to the resource, the resource is
released. If resource deallocation fails, an exception is thrown.
SharedPtr<Socket> socket = Socket::connect("localhost:80");
// ...
socket.reset();
EDIT:
Here is a complete (but platform-dependent) implementation of the deleter:
struct Socket::Deleter
{
    void operator()(Socket* socket)
    {
        if (close(socket->m_impl.fd) < 0)
        {
            int error = errno;
            delete socket;
            throw Exception::fromErrno(error);
        }
        delete socket;
    }
};

We need to store allocated resources somewhere (as it was already mentioned by DeadMG) and explicitly call some reporting/throwing function outside of any destructor. But that doesn't prevent us from taking advantage of reference counting implemented in boost::shared_ptr.
/** A TCP/IP connection. */
class Socket
{
private:
    // store internally every allocated resource here
    static std::vector<boost::shared_ptr<Socket> > pool;

public:
    static boost::shared_ptr<Socket> connect(const std::string& address)
    {
        //...
        boost::shared_ptr<Socket> socket(new Socket(address));
        pool.push_back(socket); // the socket won't actually be destroyed
                                // until we want it to be
        return socket;
    }

    virtual ~Socket();

    // call cleanupAndReport() as often as needed,
    // probably on a separate thread, or by timer
    static void cleanupAndReport()
    {
        // find resources without clients
        // (BOOST_FOREACH from <boost/foreach.hpp>)
        BOOST_FOREACH(boost::shared_ptr<Socket>& socket, pool)
        {
            if (socket.unique()) // there are no clients for this socket, i.e.
                                 // no shared_ptr's elsewhere point to this socket
            {
                // try to deallocate this resource
                if (close(socket->m_impl.fd) < 0)
                {
                    int error = errno;
                    socket.reset(); // destroys the Socket object
                    // throw an exception or handle the error in-place
                    //...
                    //throw Exception::fromErrno(error);
                }
                else
                {
                    socket.reset();
                }
            }
        } // foreach socket
    }

protected:
    Socket(const std::string& address);

private:
    // not implemented
    Socket(const Socket&);
    Socket& operator=(const Socket&);
};
The implementation of cleanupAndReport() would need to be a little more complicated: in the present version the pool ends up holding null pointers after cleanup, and if an exception is thrown the function has to be called again until it no longer throws. Still, I hope it illustrates the idea.
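For instance, the null entries left behind by reset() could be pruned at the end of cleanupAndReport(); a minimal sketch, assuming the pool member shown above and <algorithm>:
// Predicate for erase/remove: true for entries that were reset() above.
struct IsNull
{
    bool operator()(const boost::shared_ptr<Socket>& p) const { return !p; }
};

// At the end of cleanupAndReport(): drop the null entries so the pool
// does not keep growing between calls.
pool.erase(std::remove_if(pool.begin(), pool.end(), IsNull()), pool.end());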
Now, more general solution:
//forward declarations
template<class Resource>
boost::shared_ptr<Resource> make_shared_resource();

template<class Resource>
void cleanupAndReport(boost::function1<void, boost::shared_ptr<Resource> > deallocator);

//for every type of used resource there will be a template instance with a static pool
template<class Resource>
class pool_holder
{
private:
    friend boost::shared_ptr<Resource> make_shared_resource<Resource>();
    friend void cleanupAndReport<Resource>(boost::function1<void, boost::shared_ptr<Resource> >);
    static std::vector<boost::shared_ptr<Resource> > pool;
};

template<class Resource>
std::vector<boost::shared_ptr<Resource> > pool_holder<Resource>::pool;

template<class Resource>
boost::shared_ptr<Resource> make_shared_resource()
{
    boost::shared_ptr<Resource> res(new Resource);
    pool_holder<Resource>::pool.push_back(res);
    return res;
}

template<class Resource>
void cleanupAndReport(boost::function1<void, boost::shared_ptr<Resource> > deallocator)
{
    BOOST_FOREACH(boost::shared_ptr<Resource>& res, pool_holder<Resource>::pool)
    {
        if (res.unique())
        {
            deallocator(res);
        }
    } // foreach
}
//usage
{
    boost::shared_ptr<A> a = make_shared_resource<A>();
    boost::shared_ptr<A> a2 = make_shared_resource<A>();
    boost::shared_ptr<B> b = make_shared_resource<B>();
    //...
}
cleanupAndReport<A>(deallocate_A);
cleanupAndReport<B>(deallocate_B);

If releasing some resource can actually fail, then a destructor is clearly the wrong abstraction to use. Destructors are meant to clean up without fail, regardless of the circumstances. A close() method (or whatever you want to name it) is probably the only way to go.
But think closely about it. If releasing a resource actually fails, what can you do? Is such an error recoverable? If it is, which part of your code should handle it? The way to recover is probably highly application-specific and tied to other parts of the application. It is highly unlikely that you actually want recovery to happen automatically, in whatever arbitrary place in the code happened to release the resource and trigger the error. A shared pointer abstraction does not really model what you're trying to achieve, so you need to create your own abstraction which models the behavior you want. Abusing shared pointers to do something they're not supposed to do is not the right way.
Also, please read this.
EDIT:
If all you want to do is to inform the user what happened before crashing, then consider wrapping the Socket in another wrapper object that would call the deleter on its destruction, catch any exceptions thrown and handle them by showing the user a message box or whatever. Then put this wrapper object inside a boost::shared_ptr.
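A minimal sketch of that idea; the Socket::close() call here stands in for "calling the deleter", and notifyUser() is a placeholder for whatever reporting mechanism the application provides:
// Wrapper whose destructor releases the socket, reports any failure,
// and never lets an exception escape.
class SocketHolder
{
public:
    explicit SocketHolder(Socket* socket) : m_socket(socket) { }

    ~SocketHolder()
    {
        try
        {
            m_socket->close(); // may throw
        }
        catch (const std::exception& e)
        {
            notifyUser(e.what()); // message box, log entry, ...
        }
        delete m_socket;
    }

    Socket& get() { return *m_socket; }

private:
    // not implemented
    SocketHolder(const SocketHolder&);
    SocketHolder& operator=(const SocketHolder&);

    Socket* m_socket;
};

// usage
boost::shared_ptr<SocketHolder> holder(new SocketHolder(rawSocket));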

Quoting Herb Sutter, author of "Exceptional C++" (from here):
If a destructor throws an exception, Bad Things can happen. Specifically, consider code like the following:
// The problem
//
class X {
public:
    ~X() { throw 1; }
};

void f() {
    X x;
    throw 2;
} // calls X::~X (which throws), then calls terminate()
If a destructor throws an exception while another exception is already active (i.e., during stack unwinding), the program is terminated. This is usually not a good thing.
In other words, regardless of what you would want to believe is elegant in this situation, you cannot blithely throw an exception in a destructor unless you can guarantee that it will not be thrown while handling another exception.
Besides, what can you do if you can't successfully get rid of a resource? Exceptions should be thrown for things that can be handled higher up, not bugs. If you want to report odd behavior, log the release failure and simply go on. Or terminate.

As announced in the question, edit 3:
Here is another solution which, as far as I can judge, fulfills the
requirements in the question. It is similar to the solution described
in the original question, but uses boost::shared_ptr instead of a
custom smart pointer.
The central idea of this solution is to provide a release()
operation on shared_ptr. If we can make the shared_ptr give up its
ownership, we are free to call a cleanup function, delete the object,
and throw an exception in case an error occurred during cleanup.
Boost has a good reason to not provide a release() operation on shared_ptr:
shared_ptr cannot give away ownership unless it's unique() because the
other copy will still destroy the object.
Consider:
shared_ptr<int> a(new int);
shared_ptr<int> b(a); // a.use_count() == b.use_count() == 2
int * p = a.release();
// Who owns p now? b will still call delete on it in its destructor.
Furthermore, the pointer returned by release() would be difficult to
deallocate reliably, as the source shared_ptr could have been created
with a custom deleter.
The first argument against a release() operation is that, by the
nature of shared_ptr, many pointers share ownership of the object,
so no single one of them can simply release that ownership. But what
if the release() function returned a null pointer if there were
still other references left? The shared_ptr can reliably determine
this, without race conditions.
The second argument against the release() operation is that, if a
custom deleter was passed to the shared_ptr, you should use that to
deallocate the object, rather than simply deleting it. But release()
could return a function object, in addition to the raw pointer, to
enable its caller to deallocate the pointer reliably.
However, in our specific scenario, custom deleters will not be an
issue, because we do not have to deal with arbitrary custom
deleters. This will become clearer from the code given below.
Providing a release() operation on shared_ptr without modifying
its implementation is, of course, not possible without a hack. The
hack which is used in the code below relies on a thread-local variable
to prevent our custom deleter from actually deleting the object.
That said, here's the code, consisting mostly of the header
Resource.hpp, plus a small implementation file Resource.cpp. Note
that it must be linked with -lboost_thread-mt due to the
thread-local variable.
// ---------------------------------------------------------------------
// Resource.hpp
// ---------------------------------------------------------------------
#include <boost/assert.hpp>
#include <boost/ref.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread/tss.hpp>
/// Factory for a resource.
template<typename T>
struct ResourceFactory
{
/// Create a resource.
static boost::shared_ptr<T>
create()
{
return boost::shared_ptr<T>(new T, ResourceFactory());
}
template<typename A1>
static boost::shared_ptr<T>
create(const A1& a1)
{
return boost::shared_ptr<T>(new T(a1), ResourceFactory());
}
template<typename A1, typename A2>
static boost::shared_ptr<T>
create(const A1& a1, const A2& a2)
{
return boost::shared_ptr<T>(new T(a1, a2), ResourceFactory());
}
// ...
/// Destroy a resource.
static void destroy(boost::shared_ptr<T>& resource);
/// Deleter for boost::shared_ptr<T>.
void operator()(T* resource);
};
namespace impl
{
// ---------------------------------------------------------------------
/// Return the last reference to the resource, or zero. Resets the pointer.
template<typename T>
T* release(boost::shared_ptr<T>& resource);
/// Return true if the resource should be deleted (thread-local).
bool wantDelete();
// ---------------------------------------------------------------------
} // namespace impl
template<typename T>
inline
void ResourceFactory<T>::destroy(boost::shared_ptr<T>& ptr)
{
T* resource = impl::release(ptr);
if (resource != 0) // Is it the last reference?
{
try
{
resource->close();
}
catch (...)
{
delete resource;
throw;
}
delete resource;
}
}
// ---------------------------------------------------------------------
template<typename T>
inline
void ResourceFactory<T>::operator()(T* resource)
{
if (impl::wantDelete())
{
try
{
resource->close();
}
catch (...)
{
}
delete resource;
}
}
namespace impl
{
// ---------------------------------------------------------------------
/// Flag in thread-local storage.
class Flag
{
public:
~Flag()
{
m_ptr.release();
}
Flag& operator=(bool value)
{
if (value != static_cast<bool>(*this))
{
if (value)
{
m_ptr.reset(s_true); // may throw boost::thread_resource_error!
}
else
{
m_ptr.release();
}
}
return *this;
}
operator bool()
{
return m_ptr.get() == s_true;
}
private:
boost::thread_specific_ptr<char> m_ptr;
static char* s_true;
};
// ---------------------------------------------------------------------
/// Flag to prevent deletion.
extern Flag t_nodelete;
// ---------------------------------------------------------------------
/// Return the last reference to the resource, or zero.
template<typename T>
T* release(boost::shared_ptr<T>& resource)
{
try
{
BOOST_ASSERT(!t_nodelete);
t_nodelete = true; // may throw boost::thread_resource_error!
}
catch (...)
{
t_nodelete = false;
resource.reset();
throw;
}
T* rv = resource.get();
resource.reset();
return wantDelete() ? rv : 0;
}
// ---------------------------------------------------------------------
} // namespace impl
And the implementation file:
// ---------------------------------------------------------------------
// Resource.cpp
// ---------------------------------------------------------------------
#include "Resource.hpp"
namespace impl
{
// ---------------------------------------------------------------------
bool wantDelete()
{
bool rv = !t_nodelete;
t_nodelete = false;
return rv;
}
// ---------------------------------------------------------------------
Flag t_nodelete;
// ---------------------------------------------------------------------
char* Flag::s_true((char*)0x1);
// ---------------------------------------------------------------------
} // namespace impl
And here is an example of a resource class implemented using this solution:
// ---------------------------------------------------------------------
// example.cpp
// ---------------------------------------------------------------------
#include "Resource.hpp"
#include <cstdlib>
#include <string>
#include <stdexcept>
#include <iostream>
// uncomment to test failed resource allocation, usage, and deallocation
//#define TEST_CREAT_FAILURE
//#define TEST_USAGE_FAILURE
//#define TEST_CLOSE_FAILURE
// ---------------------------------------------------------------------
/// The low-level resource type.
struct foo { char c; };
// ---------------------------------------------------------------------
/// The low-level function to allocate the resource.
foo* foo_open()
{
#ifdef TEST_CREAT_FAILURE
return 0;
#else
return (foo*) std::malloc(sizeof(foo));
#endif
}
// ---------------------------------------------------------------------
/// Some low-level function using the resource.
int foo_use(foo*)
{
#ifdef TEST_USAGE_FAILURE
return -1;
#else
return 0;
#endif
}
// ---------------------------------------------------------------------
/// The low-level function to free the resource.
int foo_close(foo* foo)
{
std::free(foo);
#ifdef TEST_CLOSE_FAILURE
return -1;
#else
return 0;
#endif
}
// ---------------------------------------------------------------------
/// The C++ wrapper around the low-level resource.
class Foo
{
public:
void use()
{
if (foo_use(m_foo) < 0)
{
throw std::runtime_error("foo_use");
}
}
protected:
Foo()
: m_foo(foo_open())
{
if (m_foo == 0)
{
throw std::runtime_error("foo_open");
}
}
void close()
{
if (foo_close(m_foo) < 0)
{
throw std::runtime_error("foo_close");
}
}
private:
foo* m_foo;
friend struct ResourceFactory<Foo>;
};
// ---------------------------------------------------------------------
typedef ResourceFactory<Foo> FooFactory;
// ---------------------------------------------------------------------
/// Main function.
int main()
{
try
{
boost::shared_ptr<Foo> resource = FooFactory::create();
resource->use();
FooFactory::destroy(resource);
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
Finally, here is a small Makefile to build all that:
# Makefile
# (recipe lines must be indented with a tab)
CXXFLAGS = -g -Wall

example: example.cpp Resource.hpp Resource.o
	$(CXX) $(CXXFLAGS) -o example example.cpp Resource.o -lboost_thread-mt

Resource.o: Resource.cpp Resource.hpp
	$(CXX) $(CXXFLAGS) -c Resource.cpp -o Resource.o

clean:
	rm -f Resource.o example

Well, first off, I don't see a question here. Second off, I have to say that this is a bad idea. What will you gain in all this? When the last shared pointer to a resource is destroyed and your throwing deleter is called you will find yourself with a resource leak. You will have lost all handles to the resource that failed to release. You will never be able to try again.
Your desire to use an RAII object is a good one, but a smart pointer is simply insufficient to the task. What you need has to be even smarter: something that can recover when it fails to tear itself down completely. A destructor is insufficient for such an interface.
You do open yourself up to misuse, where someone could end up holding a handle to a resource that is no longer valid; the type of resource you're dealing with here simply lends itself to this issue. There are many ways to approach it. One is to use the handle/body idiom along with the state pattern. The implementation behind the interface can be in one of two states: connected or unconnected. The handle simply passes requests on to the internal body/state. Connected works as normal; unconnected throws exceptions/asserts on all applicable requests.
This thing would need a function other than the destructor to tear down a handle; consider a destroy() function that can throw. If you catch an error when you call it, you don't delete the handle but instead deal with the problem in whatever application-specific way you need to. If destroy() does not report an error, you let the handle go out of scope, reset it, or whatever. destroy() decrements the resource count and attempts to release the internal resource if that count reaches 0. Upon success the handle is switched to the unconnected state; upon failure it generates a catchable error that the client can attempt to handle, but leaves the handle in a connected state.
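A rough sketch of the shape such a handle could take; all names are invented, and the point is only that destroy() may throw and leaves the handle connected on failure, while the destructor itself never throws:
#include <stdexcept>

class ConnectionHandle
{
public:
    ConnectionHandle() : m_state(Connected) { }

    // Explicit, throwing teardown: on failure the handle stays connected
    // so the client can decide how to recover.
    void destroy()
    {
        if (m_state == Connected)
        {
            releaseUnderlyingResource(); // hypothetical; may throw
            m_state = Unconnected;
        }
    }

    void read()
    {
        if (m_state != Connected)
            throw std::logic_error("use of unconnected handle");
        // ... forward the request to the body ...
    }

    // Last resort: the destructor swallows any failure.
    ~ConnectionHandle()
    {
        try { destroy(); } catch (...) { }
    }

private:
    enum State { Connected, Unconnected };

    void releaseUnderlyingResource(); // decrements the count, releases at zero

    State m_state;
};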
It's not an entirely trivial thing to write but what you are wanting to do, introduce exceptions into destruction, simply will not work.

Generally speaking, if releasing a resource through a C-style API fails, that's a problem with the API rather than a problem in your code. However, what I would be tempted to do is this: if destruction fails, add the resource to a list of resources whose destruction/cleanup needs to be re-attempted later, say when the app exits, periodically, or when other similar resources are destroyed, and then try to destroy them again. If any are still left over at some point, report an error to the user and exit.
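A minimal sketch of that idea, assuming a hypothetical Resource type and a releaseResource() that reports failure through its return value:
#include <cstddef>
#include <vector>

// Resources whose release failed; retried later (periodically, at exit, ...).
static std::vector<Resource*> pendingCleanup;

void releaseOrDefer(Resource* r)
{
    if (!releaseResource(r)) // hypothetical: returns false on failure
        pendingCleanup.push_back(r);
}

void retryPendingCleanup()
{
    std::vector<Resource*> stillFailing;
    for (std::size_t i = 0; i < pendingCleanup.size(); ++i)
    {
        if (!releaseResource(pendingCleanup[i]))
            stillFailing.push_back(pendingCleanup[i]);
    }
    pendingCleanup.swap(stillFailing);
}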

Related

Using member shared_ptr from a member callback function running in different thread (ROS topic subscription)

I am not completely sure how to best title this question since I am not completely sure what the nature of the problem actually is (I guess "how fix segfault" is not a good title).
The situation is, I have written this code:
template <typename T> class LatchedSubscriber {
private:
ros::Subscriber sub;
std::shared_ptr<T> last_received_msg;
std::shared_ptr<std::mutex> mutex;
int test;
void callback(T msg) {
std::shared_ptr<std::mutex> thread_local_mutex = mutex;
std::shared_ptr<T> thread_local_msg = last_received_msg;
if (!thread_local_mutex) {
ROS_INFO("Mutex pointer is null in callback");
}
if (!thread_local_msg) {
ROS_INFO("lrm: pointer is null in callback");
}
ROS_INFO("Test is %d", test);
std::lock_guard<std::mutex> guard(*thread_local_mutex);
*thread_local_msg = msg;
}
public:
LatchedSubscriber() {
last_received_msg = std::make_shared<T>();
mutex = std::make_shared<std::mutex>();
test = 42;
if (!mutex) {
ROS_INFO("Mutex pointer is null in constructor");
}
else {
ROS_INFO("Mutex pointer is not null in constructor");
}
}
void start(ros::NodeHandle &nh, const std::string &topic) {
sub = nh.subscribe(topic, 1000, &LatchedSubscriber<T>::callback, this);
}
T get_last_msg() {
std::lock_guard<std::mutex> guard(*mutex);
return *last_received_msg;
}
};
Essentially what it is doing is subscribing to a topic (channel), meaning that a callback function is called each time a message arrives. The job of this class is to store the last received message so the user of the class can always access it.
In the constructor I allocate a shared_ptr to the message and for a mutex to synchronize access to this message. The reason for using heap memory here is so the LatchedSubscriber can be copied and the same latched message can still be read. (the Subscriber already implements this kind of behavior where copying it doesn't do anything except for the fact that the callback stops being called once the last instance goes out of scope).
The problem is basically that the code segfaults. I am pretty sure the reason for this is that my shared pointers become null in the callback function, despite not being null in the constructor.
The ROS_INFO calls print:
Mutex pointer is not null in constructor
Mutex pointer is null in callback
lrm: pointer is null in callback
Test is 42
I don't understand how this can happen. I guess I have either misunderstood something about shared pointers, ros topic subscriptions, or both.
Things I have done:
At first I had the subscribe call happening in the constructor. I think giving the this pointer to another thread before the constructor has returned can be bad, so I moved this into a start function which is called after the object has been constructed.
There are many aspects to the thread safety of shared_ptrs it seems. At first I used mutex and last_received_msg directly in the callback. Now I have copied them into local variables hoping this would help. But it doesn't seem to make a difference.
I have added a local integer variable. I can read the integer I assigned to this variable in the constructor from the callback. Just a sanity check to make sure that the callback is actually called on an instance created by my constructor.
I think I have figured out the problem.
When subscribing I am passing the this pointer to the subscribe function along with the callback. If the LatchedSubscriber is ever copied and the original destroyed, that this pointer becomes invalid, but the sub still exists, so the callback keeps being called.
I didn't think this happened anywhere in my code, but the LatchedSubscriber was stored as a member inside an object which was owned by a unique pointer. It looks like make_unique might be doing some copying internally? In any case it is wrong to use the this pointer for the callback.
I ended up doing the following instead
void start(ros::NodeHandle &nh, const std::string &topic) {
    auto l_mutex = mutex;
    auto l_last_received_msg = last_received_msg;
    boost::function<void(const T)> callback =
        [l_mutex, l_last_received_msg](const T msg) {
            std::lock_guard<std::mutex> guard(*l_mutex);
            *l_last_received_msg = msg;
        };
    sub = nh.subscribe<T>(topic, 1000, callback);
}
This way copies of the two smart pointers are used with the callback instead.
Assigning the closure to a variable of type boost::function<void(const T)> seems to be necessary, probably due to the way the subscribe function is declared.
This appears to have fixed the issue. I might also move the subscription into the constructor again and get rid of the start method.

simple thread safe vector for connections in grpc service

I'm trying to learn about concurrency, and I'm implementing a small connection pool in a grpc service that needs to make many connections to a postgres database.
I'm trying to implement a basic connectionPool to prevent creating a new connection for each request. To start, I attempted to create a thread safe std::vector. When I run the grpc server, a single transaction is made, and then the server blocks, but I can't reason out what's going on. Any help would be appreciated
class SafeVector {
std::vector<pqxx::connection*> pool_;
int size_;
int max_size_;
std::mutex m_;
std::condition_variable cv_;
public:
SafeVector(int size, int max_size) : size_(size), max_size_(max_size) {
assert(size_ <= max_size_);
for (size_t i = 0; i < size; ++i) {
pool_.push_back(new pqxx::connection("some conn string"));
}
}
SafeVector(SafeVector const&)=delete; // to be implemented
SafeVector& operator=(SafeVector const&)=delete; // no assignment keeps things simple
std::shared_ptr<pqxx::connection> borrow() {
std::unique_lock<std::mutex> l(m_);
cv_.wait(l, [this]{ return !pool_.empty(); });
std::shared_ptr<pqxx::connection> res(pool_.back());
pool_.pop_back();
return res;
}
void surrender(std::shared_ptr<pqxx::connection> connection) {
std::lock_guard<std::mutex> l(m_);
pool_.push_back(connection.get());
cv_.notify_all();
}
};
In main, I then pass a SafeVector* s = new SafeVector(4, 10); into my service ServiceA(s)
Inside ServiceA, I use the connection as follows:
std::shared_ptr<pqxx::connection> c = safeVector_->borrow();
c->perform(SomeTransactorImpl);
safeVector_->surrender(c);
I put a bunch of logging statements everywhere, and I'm pretty sure I have a fundamental misunderstanding of the core concept of either (1) shared_ptr or (2) the various locking structures.
In particular, it seems that after 4 connections are used (the maximum number of hardware threads on my machine), a seg fault (error 11) happens when attempting to return a connection in the borrow() method.
Any help would be appreciated. Thanks.
smart pointers in C++ are about object ownership.
Object ownership is about who gets to delete the object and when.
A shared pointer means that who gets to delete and when is a shared concern. Once you have said "no one bit of code is permitted to delete this object", you cannot take it back.
In your code, you try to take an object with shared ownership and claim it for your SafeVector in surrender. This is not permitted. You try it anyhow with a call to .get(), but the right to delete that object remains owned by shared pointers.
They proceed to delete it (maybe right away, maybe tomorrow) and your container has a dangling pointer to a deleted object.
Change your shared ptrs to unique ptrs. Add move as required to make it compile.
In surrender, assert the supplied unique ptr is non-empty.
And while you are in there,
cv_.notify_one();
I would also
std::vector<std::unique_ptr<pqxx::connection>> pool_;
and change:
pool_.push_back(std::move(connection));
If you don't want to update the type of pool_, then change .get() to .release() instead; unlike shared_ptr, unique_ptr can give up ownership.
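Putting those suggestions together, the pool could look roughly like this (a sketch only, keeping the original class layout and connection string):
#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <vector>
#include <pqxx/pqxx>

class SafeVector {
    std::vector<std::unique_ptr<pqxx::connection>> pool_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    explicit SafeVector(int size) {
        for (int i = 0; i < size; ++i)
            pool_.push_back(std::make_unique<pqxx::connection>("some conn string"));
    }
    SafeVector(SafeVector const&) = delete;
    SafeVector& operator=(SafeVector const&) = delete;

    std::unique_ptr<pqxx::connection> borrow() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this]{ return !pool_.empty(); });
        std::unique_ptr<pqxx::connection> res = std::move(pool_.back());
        pool_.pop_back();
        return res;
    }

    void surrender(std::unique_ptr<pqxx::connection> connection) {
        assert(connection); // a surrendered connection must be non-empty
        std::lock_guard<std::mutex> l(m_);
        pool_.push_back(std::move(connection));
        cv_.notify_one();
    }
};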

Should this be called a mutex?

I have objects that can be opened in different modes, among which read and write.
If you opened one for reading you can still call
object->upgradeOpen();
It is common practice in our code to call
object->downgradeOpen();
when you are done writing.
I usually find it easier to use the mutex-like pattern I learned from C++ essentials, where upgradeOpen and downgradeOpen are called in the constructor and destructor of a dedicated object.
class ObjectMutex{
public:
    ObjectMutex(const Object& o)
        : m_o(o)
    {
        m_o.upgradeOpen();
    }
    ~ObjectMutex(){
        m_o.downgradeOpen();
    }
private:
    Object m_o;
};
Only problem is, it doesn't really lock the object to make it thread safe, so I don't think it really is a mutex.
Is there another accepted name to call this construction?
The principle which is implemented in this class is called RAII (http://en.cppreference.com/w/cpp/language/raii).
In general such objects can be called "RAII objects".
For the name in code you can use ScopedSomething; in this particular case, for example, ScopedObjectUpgrader, or another meaningful name for the action that is performed for the scope.
Sounds to me more like an upgradable mutex.
Take a look at RAII wrappers for upgradable mutexes, for example "How to unlock boost::upgrade_to_unique_lock (made from boost::shared_mutex)?", to get a better idea of how to write one yourself.
For example you probably want to write two separate RAII wrappers
class OpenLock {
public:
    OpenLock(Object& o_in) : o{o_in} {
        this->o.open();
    }
    ~OpenLock() {
        this->o.close();
    }
private:
    Object& o;
};

class UpgradeOpenLock {
public:
    UpgradeOpenLock(Object& o_in) : o{o_in} {
        this->o.upgradeOpen();
    }
    ~UpgradeOpenLock() {
        this->o.downgradeOpen();
    }
private:
    Object& o;
};
and then use it like this
{
    OpenLock open_lck(o);
    // freely read
    {
        UpgradeOpenLock upgrade_lck(o);
        // freely read or write
    }
    // freely read again
}

std::function in combination with thread c++11 fails debug assertion in vector

I want to build a helper class that can accept a std::function (created via std::bind) so that I can call this class repeatedly from another thread:
short example:
void loopme() {
    std::cout << "yay";
}

int main() {
    LoopThread loop = { std::bind(&loopme) };
    loop.start();
    // wait 1 second
    loop.stop();
    // be happy about output
}
However, when calling stop(), my current implementation raises the following error: debug assertion failed, see image: i.stack.imgur.com/aR9hP.png.
Does anyone know why the error is thrown?
I don't even use vectors in this example.
When I don't call loopme from within the thread but output directly to std::cout, no error is thrown.
Here the full implementation of my class:
class LoopThread {
public:
LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, thread_{ nullptr }, is_running_{ false }, counter_{ 0 } {};
~LoopThread();
void start();
void stop();
bool isRunning() { return is_running_; };
private:
std::function<void(LoopThread*, uint32_t)> function_;
std::thread* thread_;
bool is_running_;
uint32_t counter_;
void executeLoop();
};
LoopThread::~LoopThread() {
if (isRunning()) {
stop();
}
}
void LoopThread::start() {
if (is_running_) {
throw std::runtime_error("Thread is already running");
}
if (thread_ != nullptr) {
throw std::runtime_error("Thread is not stopped yet");
}
is_running_ = true;
thread_ = new std::thread{ &LoopThread::executeLoop, this };
}
void LoopThread::stop() {
if (!is_running_) {
throw std::runtime_error("Thread is already stopped");
}
is_running_ = false;
thread_->detach();
}
void LoopThread::executeLoop() {
while (is_running_) {
function_(this, counter_);
++counter_;
}
if (!is_running_) {
std::cout << "end";
}
//delete thread_;
//thread_ = nullptr;
}
I used the following Googletest code for testing (however a simple main method containing the code should work):
void testfunction(pft::LoopThread*, uint32_t i) {
std::cout << i << ' ';
}
TEST(pfFiles, TestLoop)
{
pft::LoopThread loop{ std::bind(&testfunction, std::placeholders::_1, std::placeholders::_2) };
loop.start();
std::this_thread::sleep_for(std::chrono::milliseconds(500));
loop.stop();
std::this_thread::sleep_for(std::chrono::milliseconds(2500));
std::cout << "Why does this fail";
}
Your use of is_running_ is undefined behavior, because you write in one thread and read in another without a synchronization barrier.
Partly due to this, your stop() doesn't stop anything. Even without this UB (i.e., if you "fix" it by using an atomic), it just says "oy, stop at some point"; it does not even attempt to guarantee that the stop has happened by the time it returns.
Your code calls new needlessly. There is no reason to use a std::thread* here.
Your code violates the rule of 5. You wrote a destructor, then neglected copy/move operations. It is ridiculously fragile.
As stop() does nothing of consequence to stop a thread, your thread, which holds a pointer to this, outlives your LoopThread object. LoopThread goes out of scope, destroying the object that the pointer stored in your std::thread refers to. The still-running executeLoop invokes a std::function that has been destroyed, then increments a counter in invalid memory (possibly on the stack where another variable has been created).
Roughly, there is 1 fundamental error in using std threading in every 3-5 lines of your code (not counting interface declarations).
Beyond the technical errors, the design is wrong as well; using detach is almost always a horrible idea; unless you have a promise you make ready at thread exit and then wait on the completion of that promise somewhere, doing that and getting anything like a clean and dependable shutdown of your program is next to impossible.
As a guess, the vector error is because you are stomping all over stack memory and following nearly invalid pointers to find functions to execute. The test system either puts an array index in the spot you are trashing and then the debug vector catches that it is out of bounds, or a function pointer that half-makes sense for your std function execution to run, or somesuch.
Only communicate through synchronized data between threads. That means atomic data, or mutex guarded, unless you are getting ridiculously fancy. You don't understand threading enough to get fancy. You don't understand threading enough to copy someone who got fancy and properly use it. Don't get fancy.
Don't use new. Almost never, ever use new. Use make_shared or make_unique if you absolutely have to. But use those rarely.
Don't detach a thread. Period. Yes this means you might have to wait for it to finish a loop or somesuch. Deal with it, or write a thread manager that does the waiting at shutdown or somesuch.
Be extremely clear about what data is owned by what thread. Be extremely clear about when a thread is finished with data. Avoid using data shared between threads; communicate by passing values (or pointers to immutable shared data), and get information from std::futures back.
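For example, getting a result back from a worker thread without sharing any mutable state can be as small as this sketch using std::async:
#include <future>
#include <iostream>

int computeAnswer() // runs on another thread, touches no shared data
{
    return 42;
}

int main()
{
    std::future<int> answer = std::async(std::launch::async, computeAnswer);
    // ... do other work on this thread ...
    std::cout << answer.get() << '\n'; // blocks until the worker is done
}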
There are a number of hurdles in learning how to program. If you have gotten this far, you have passed a few. But you probably know people who learned along side of you that fell over at one of the earlier hurdles.
Sequence, that things happen one after another.
Flow control.
Subprocedures and functions.
Looping.
Recursion.
Pointers/references and dynamic vs automatic allocation.
Dynamic lifetime management.
Objects and Dynamic dispatch.
Complexity
Coordinate spaces
Message
Threading and Concurrency
Non-uniform address spaces, Serialization and Networking
Functional programming, meta functions, currying, partial application, Monads
This list is not complete.
The point is, each of these hurdles can cause you to crash and fail as a programmer, and getting each of these hurdles right is hard.
Threading is hard. Do it the easy way. Dynamic lifetime management is hard. Do it the easy way. In both cases, extremely smart people have mastered the "manual" way to do it, and the result is programs that exhibit random unpredictable/undefined behavior and crash a lot. Muddling through manual resource allocation and deallocation and multithreaded code can be made to work, but the result is usually someone whose small programs work accidentally (they work insofar as you fixed the bugs you noticed). And when you master it, initial mastery comes in the form of holding an entire program's "state" in your head and understanding how it works; this fails to scale to large many-developer code bases, so you usually graduate to having large programs that work accidentally.
Both make_unique style and only-immutable-shared-data based threading are composable strategies. This means if small pieces are correct, and you put them together, the resulting program is correct (with regards to resource lifetime and concurrency). That permits local mastery of small-scale threading or resource management to apply to large-scale programs in the domains where these strategies work.
After following the guide from #Yakk I decided to restructure my program:
bool is_running_ is changed to std::atomic<bool> is_running_
stop() will not only trigger the stopping, but will actively wait for the thread to stop via thread_->join()
all calls of new are replaced with std::make_unique<std::thread>(&LoopThread::executeLoop, this)
I have no experience with copy or move constructors, so I decided to forbid them. This should prevent me from using them accidentally. If I need them at some time in the future, I will have to take a deeper look at those.
thread_->detach() was replaced by thread_->join() (see 2.)
This is the end of the list.
class LoopThread {
public:
LoopThread(std::function<void(LoopThread*, uint32_t)> function) : function_{ function }, is_running_{ false }, counter_{ 0 } {};
LoopThread(LoopThread &&) = delete;
LoopThread(const LoopThread &) = delete;
LoopThread& operator=(const LoopThread&) = delete;
LoopThread& operator=(LoopThread&&) = delete;
~LoopThread();
void start();
void stop();
bool isRunning() const { return is_running_; };
private:
std::function<void(LoopThread*, uint32_t)> function_;
std::unique_ptr<std::thread> thread_;
std::atomic<bool> is_running_;
uint32_t counter_;
void executeLoop();
};
LoopThread::~LoopThread() {
if (isRunning()) {
stop();
}
}
void LoopThread::start() {
if (is_running_) {
throw std::runtime_error("Thread is already running");
}
if (thread_ != nullptr) {
throw std::runtime_error("Thread is not stopped yet");
}
is_running_ = true;
thread_ = std::make_unique<std::thread>( &LoopThread::executeLoop, this );
}
void LoopThread::stop() {
if (!is_running_) {
throw std::runtime_error("Thread is already stopped");
}
is_running_ = false;
thread_->join();
thread_ = nullptr;
}
void LoopThread::executeLoop() {
while (is_running_) {
function_(this, counter_);
++counter_;
}
}
TEST(pfThread, TestLoop)
{
pft::LoopThread loop{ std::bind(&testFunction, std::placeholders::_1, std::placeholders::_2) };
loop.start();
std::this_thread::sleep_for(std::chrono::milliseconds(50));
loop.stop();
}

Way for C++ destructor to skip work when specific exception being thrown?

I have an object on the stack for which I wish its destructor to skip some work when the destructor is being called because the stack is being unwound due to a specific exception being thrown through the scope of the object on the stack.
Now I could add a try/catch block inside the scope of the stack item, catch the exception in question, notify the stack object not to run the work to be skipped, and then rethrow the exception, as follows:
RAII_Class pending;
try {
    doSomeWorkThatMayThrowException();
} catch (exceptionToSkipPendingDtor &err) {
    pending.notifySkipResourceRelease();
    throw;
}
However, I'm hoping there is a more elegant way to do this. For example imagine:
RAII_Class::~RAII_Class() {
    if (detectExceptionToSkipPendingDtorBeingThrown()) {
        return;
    }
    releaseResource();
}
You can almost do this with std::uncaught_exception(), but not quite.
Herb Sutter explains the "almost" better than I do: http://www.gotw.ca/gotw/047.htm
There are corner cases where std::uncaught_exception() returns true when called from a destructor but the object in question isn't actually being destroyed by the stack unwinding process.
You're probably better off without RAII because it doesn't match your use case. RAII means always clean up; exception or not.
What you want is much simpler: only release the resource if an exception is not thrown, which is a simple sequence of function calls.
explicitAllocateResource();
doSomeWorkThatMayThrowException();
explicitReleaseResource(); // skipped if an exception is thrown
// by the previous function.
I would do it the other way around - explicitly tell it to do its work if no exception was thrown:
RAII_Class pending;
doSomeWorkThatMayThrowException();
pending.commit(); // do or prepare actual work
This seems to circumvent the main reason to use RAII. The point of RAII is that if an exception happens in the middle of your code, resources are still released/destructed properly.
If this isn't the semantic you want, then don't use RAII.
So instead of:
void myFunction() {
WrapperClass wc(acquireResource());
// code that may throw
}
Just do:
void myFunction() {
Resource r = acquireResource();
// code that may throw
freeResource(r);
}
If the code in the middle throws, the resource won't be freed. This is what you want, rather than keeping RAII (and keeping the name) but not implementing RAII semantics.
Looks like bool std::uncaught_exception(); does the trick if you want to have this behavior for every exception, not just special ones!
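A sketch of what that would look like, keeping in mind the caveats from the GotW article linked above:
#include <exception>

RAII_Class::~RAII_Class()
{
    if (std::uncaught_exception()) // true while the stack is being unwound
    {
        return; // skip the work for *any* in-flight exception
    }
    releaseResource();
}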
You can do without a try-catch:
RAII_Class pending;
doSomeWorkThatMayThrowException(); // intentional: don't release if throw
pending.releaseResource();
Alternatively, you can try a little harder with RAII:
struct RAII_Class {
    template<class Op>
    void execute(Op op) {
        op();
        releaseResources();
    }
private:
    void releaseResources() { /* ... */ }
};

int main(int argc, char* argv[])
{
    RAII_Class().execute(doSomeWorkThatMayThrowException);
    return 0;
}
Although it would be a kludge at best, if you own the code for the exception class you're interested in, you could add a static data member to that class (bool) that would be set to "true" in the constructor for objects of that class, and false in the destructor (might need to be an int that you increment/decrement instead). Then in the destructor of your RAII class, you can check std::uncaught_exception(), and if true, query the static data member in your exception class. If you get true (or > 0) back, you've got one of those exceptions--otherwise you ignore it.
Not very elegant, but it would probably do the trick (as long as you don't have multiple threads).
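A sketch of that kludge; all names are invented for illustration:
#include <exception>

class SkipCleanupError : public std::exception
{
public:
    SkipCleanupError() { ++s_active; }
    SkipCleanupError(const SkipCleanupError&) { ++s_active; } // exceptions get copied
    ~SkipCleanupError() throw() { --s_active; }
    static bool active() { return s_active > 0; }
private:
    static int s_active;
};
int SkipCleanupError::s_active = 0;

RAII_Class::~RAII_Class()
{
    if (std::uncaught_exception() && SkipCleanupError::active())
    {
        return; // one of "those" exceptions is in flight: skip the release
    }
    releaseResource();
}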
I found this website with an interesting discussion about std::uncaught_exception() and an alternative solution to your question that seems much more elegant and correct to me:
http://www.gotw.ca/gotw/047.htm
// Alternative right solution
//
T::Close() {
    // ... code that could throw ...
}

T::~T() /* throw() */ {
    try {
        Close();
    } catch( ... ) {
    }
}
In this way your destructor does only one thing, and you're protected against throwing an exception during an exception (which I assume is the problem you're trying to solve).