Is this an acceptable way to lock a container using C++?

I need to implement (in C++) a thread-safe container such that only one thread is ever able to add or remove items from it. I have done this kind of thing before by sharing a mutex between threads, but that leads to a lot of mutex objects littered throughout my code and makes things very messy and hard to maintain.
I was wondering if there is a neater and more object-oriented way to do this. I thought of the following simple class wrapper around the container (semi-pseudo C++ code):
class LockedList {
private:
    std::list<MyClass> m_List;
public:
    MutexObject Mutex;
};
so that locking could be done in the following way
LockedList lockableList; //create instance
lockableList.Mutex.Lock(); // Lock object
... // search and add or remove items
lockableList.Mutex.Unlock(); // Unlock object
So my question really is to ask whether this is a good approach from a design perspective. I know that allowing public access to members is frowned upon; does the above design have any serious flaws? If so, is there a better way to implement thread-safe container objects?
I have read a lot of books on design and C++ in general but there really does seem to be a shortage of literature regarding multithreaded programming and multithreaded software design.
If the above is a poor approach to solving my problem, could anyone suggest a way to improve it, or point me towards information that explains good ways to design thread-safe classes? Many thanks.

I would rather design a resource owner that locks a mutex and returns an object that can be used by the thread. Once the thread has finished with it and stops using the object, the resource is automatically returned to its owner and the lock released.
template<typename Resource>
class ResourceOwner
{
    Lock     lock;
    Resource resource;

public:
    ResourceHolder<Resource> getExclusiveAccess()
    {
        // Let the ResourceHolder lock and unlock the lock.
        // So while a thread holds a copy of this object, only it
        // can access the resource. Once the thread releases all
        // copies, the lock is released, allowing another
        // thread to call getExclusiveAccess().
        //
        // Make it behave like a form of smart pointer:
        // 1) So you can pass it around.
        // 2) So all properties of the resource are provided via ->
        // 3) So the lock is automatically released when the thread
        //    releases the object.
        return ResourceHolder<Resource>(lock, resource);
    }
};
The resource holder (I have not thought hard about this, so it can certainly be improved):
template<typename Resource>
class ResourceHolder
{
    // Use a shared_ptr to hold the scoped lock.
    // When first created it will lock the lock. When the shared_ptr
    // destroys the scoped lock (after all copies are gone),
    // this will unlock the lock, thus allowing others to use
    // getExclusiveAccess() on the owner.
    std::shared_ptr<scoped_lock> locker;
    Resource& resource; // local reference to the resource.

public:
    ResourceHolder(Lock& lock, Resource& r)
        : locker(new scoped_lock(lock))
        , resource(r)
    {}

    // Access to the resource via the -> operator,
    // thus allowing you to use all the normal functionality of
    // the resource.
    Resource* operator->() { return &resource; }
};
Now a lockable list is:
ResourceOwner<std::list<int>> lockedList;

void threadedCode()
{
    ResourceHolder<std::list<int>> list = lockedList.getExclusiveAccess();
    list->push_back(1);
}
// When `list` goes out of scope here, it is destroyed and the member
// `locker` will unlock the lock in its destructor, thus allowing the
// next thread to call getExclusiveAccess().

I would do something like this to make it more exception-safe by using RAII.
class LockedList {
private:
    std::list<MyClass> m_List;
    MutexObject Mutex;
    friend class LockableListLock;
};

class LockableListLock {
private:
    LockedList& list_;
public:
    LockableListLock(LockedList& list) : list_(list) { list_.Mutex.Lock(); }
    ~LockableListLock() { list_.Mutex.Unlock(); }
};
You would use it like this
LockedList list;
{
    LockableListLock lock(list); // The list is now locked.
    // do stuff to the list
} // The list is automatically unlocked when lock goes out of scope.
You could also make the class force you to lock the list before doing anything with it, by adding wrappers around the std::list interface to LockableListLock. Instead of accessing the list through the LockedList class, you would then access it through the LockableListLock class. For instance, you would make this wrapper around std::list::begin():
std::list<MyClass>::iterator LockableListLock::begin() {
    return list_.m_List.begin();
}
and then use it like this
LockedList list;
LockableListLock lock(list);
// list.begin(); // This is a compiler error, so you can't
//                  access the list without locking it
lock.begin(); // This gets you the beginning of the list

Okay, I'll state a little more directly what others have already implied: at least part, and quite possibly all, of this design is probably not what you want. At the very least, you want RAII-style locking.
I'd also make the locked container (or whatever you prefer to call it) a template, so you can decouple the locking from the container itself.
// C++-like pseudo-code. Not intended to compile as-is.
struct mutex {
    void lock() { /* ... */ }
    void unlock() { /* ... */ }
};

struct lock {
    lock(mutex &m) : m_(m) { m_.lock(); }
    ~lock() { m_.unlock(); }
    mutex &m_;
};

template <class container>
class locked {
    typedef typename container::value_type value_type;
    typedef typename container::reference reference_type;
    // ...

    container c;
    mutex m;
public:
    void push_back(value_type const &t) {
        lock l(m);
        c.push_back(t);
    }
    void push_front(value_type const &t) {
        lock l(m);
        c.push_front(t);
    }
    // etc.
};
This makes the code fairly easy to write and (for at least some cases) still get correct behavior -- e.g., where your single-threaded code might look like:
std::vector<int> x;
x.push_back(y);
...your thread-safe code would look like:
locked<std::vector<int> > x;
x.push_back(y);
Assuming you provide the usual begin(), end(), push_front, push_back, etc., your locked<container> will still be usable like a normal container, so it works with standard algorithms, iterators, etc.
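For the iterator-returning members, the forwarding might look like the sketch below (same pseudo-code register as above, so container, mutex, and lock are the names already defined). One caveat worth noting: a raw iterator can outlive the lock that produced it, so whole traversals still need external synchronization.
// Inside locked<container> -- a sketch, not part of the original code:
typedef typename container::iterator iterator;

iterator begin() {
    lock l(m);        // protects only this call,
    return c.begin(); // not the iteration that follows
}
iterator end() {
    lock l(m);
    return c.end();
}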

The problem with this approach is that it makes LockedList non-copyable. For details on this snag, please look at this question:
Designing a thread-safe copyable class
I have tried various things over the years, and a mutex declared beside the container declaration always turns out to be the simplest way to go (once all the bugs have been fixed after naively implementing other methods).
You do not need to 'litter' your code with mutexes. You just need one mutex, declared beside the container it guards.

It's hard to say whether coarse-grained locking is a bad design decision; we'd need to know about the system the code lives in to talk about that. It is a good starting point if you don't know that it won't work, however. Do the simplest thing that could possibly work first.
You could improve that code by making it less likely to fail when a scope is exited without unlocking, though:
struct ScopedLocker {
    ScopedLocker(MutexObject &mo_) : mo(mo_) { mo.Lock(); }
    ~ScopedLocker() { mo.Unlock(); }
    MutexObject &mo;
};
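Used with the LockedList from the question, that might look like this (a sketch, assuming the public Mutex member from the original code):
LockedList lockableList;
{
    ScopedLocker guard(lockableList.Mutex); // locked here
    // search and add or remove items
}                                           // unlocked here, even if an exception is thrown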
You could also hide the implementation from users.
class LockedList {
private:
    std::list<MyClass> m_List;
    MutexObject Mutex;

public:
    struct ScopedLocker {
        ScopedLocker(LockedList &ll);
        ~ScopedLocker();
    private:
        LockedList &list_;
    };
};
Then users just pass the locked list to it without having to worry about the details of the MutexObject.
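For example, the out-of-line definitions and a call site might look like this (a sketch; it assumes the list_ reference member added to the declaration above):
LockedList::ScopedLocker::ScopedLocker(LockedList &ll) : list_(ll) { list_.Mutex.Lock(); }
LockedList::ScopedLocker::~ScopedLocker() { list_.Mutex.Unlock(); }

void useList(LockedList &list)
{
    LockedList::ScopedLocker lock(list); // locked
    // work with the list
}                                        // unlocked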
You can also have the list handle all the locking internally, which is alright in some cases. The design issue is iteration. If the list locks internally, then operations like this are much worse than letting the user of the list decide when to lock.
void foo(LockedList &list) {
    for (size_t i = 0; i < 100000000; i++) {
        list.push_back(i); // locks and unlocks the mutex on every iteration
    }
}
Generally speaking, it's a hard topic to give advice on because of problems like this. More often than not, it's about how you use an object. There are a lot of leaky abstractions when you try to write code for multi-processor programming. That is why you see more toolkits that let people compose a solution that meets their needs.
There are books that discuss multi-processor programming, though they are few. With all the new C++11 features coming out, there should be more literature coming within the next few years.

I came up with this (which I'm sure can be improved to take more than two arguments):
template<class T1, class T2>
class combine : public T1, public T2
{
public:
    /// We always need a virtual destructor.
    virtual ~combine() { }
};
This allows you to do:
// Combine an std::mutex and std::map<std::string, std::string> into
// a single instance.
combine<std::mutex, std::map<std::string, std::string>> mapWithMutex;
// Lock the map within scope to modify the map in a thread-safe way.
{
    // Lock the map.
    std::lock_guard<std::mutex> locked(mapWithMutex);

    // Modify the map.
    mapWithMutex["Person 1"] = "Jack";
    mapWithMutex["Person 2"] = "Jill";
}
If you wish to use an std::recursive_mutex and an std::set, that would also work.
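For instance (a sketch of that variation, reusing the combine<> template above):
combine<std::recursive_mutex, std::set<std::string>> setWithMutex;

{
    std::lock_guard<std::recursive_mutex> locked(setWithMutex);
    setWithMutex.insert("Jack");
    setWithMutex.insert("Jill");
}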


Should this be called a mutex?

I have objects that can be opened in different modes, among which read and write.
If you opened it for reading you can still call
object->upgradeOpen();
It is common practice in our code to call
object->downgradeOpen();
when you are done writing.
I usually find it easier to use the mutex-like concept I learned in C++ Essentials, where the upgradeOpen and downgradeOpen calls are done in the constructor and destructor of a guard object.
class ObjectMutex {
public:
    ObjectMutex(Object& o)
        : m_o(o)
    {
        m_o.upgradeOpen();
    }
    ~ObjectMutex() {
        m_o.downgradeOpen();
    }
private:
    Object& m_o;
};
Only problem is, it doesn't really lock the object to make it thread safe, so I don't think it really is a mutex.
Is there another accepted name to call this construction?
The principle which is implemented in this class is called RAII (http://en.cppreference.com/w/cpp/language/raii).
In general such objects can be called "RAII object".
For the name in code you can use ScopedSomething. In this particular case, for example, ScopedObjectUpgrader or another meaningful name of action which is done for the scope.
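For example, applied to the class from the question, a minimal sketch might be:
// Same RAII idea, under a scope-oriented name:
class ScopedObjectUpgrader {
public:
    explicit ScopedObjectUpgrader(Object& o)
        : m_o(o)
    {
        m_o.upgradeOpen();   // upgrade when entering the scope
    }
    ~ScopedObjectUpgrader() {
        m_o.downgradeOpen(); // downgrade when leaving it
    }
private:
    Object& m_o;
};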
Sounds to me more like an upgradable mutex.
Take a look at the RAII wrappers for upgradable mutexes in "How to unlock boost::upgrade_to_unique_lock (made from boost::shared_mutex)?" to get a better idea of how to write one yourself.
For example you probably want to write two separate RAII wrappers
class OpenLock {
public:
    OpenLock(Object& o_in) : o{o_in} {
        this->o.open();
    }
    ~OpenLock() {
        this->o.close();
    }
private:
    Object& o;
};

class UpgradeOpenLock {
public:
    UpgradeOpenLock(Object& o_in) : o{o_in} {
        this->o.upgradeOpen();
    }
    ~UpgradeOpenLock() {
        this->o.downgradeOpen();
    }
private:
    Object& o;
};
and then use it like this
{
    OpenLock open_lck(o);
    // freely read
    {
        UpgradeOpenLock upgrade_lck(o);
        // freely read or write
    }
    // freely read again
}

Synchronizing method calls on shared object from multiple threads

I am thinking about how to implement a class that will contain private data that will eventually be modified by multiple threads through method calls. For synchronization (using the Windows API), I am planning on using a CRITICAL_SECTION object, since all the threads will spawn from the same process.
Given the following design, I have a few questions.
template <typename T> class Shareable
{
private:
    const LPCRITICAL_SECTION sync; // Can be read and used by multiple threads
    T *data;
public:
    Shareable(LPCRITICAL_SECTION cs, unsigned elems) : sync{cs}, data{new T[elems]} { }
    ~Shareable() { delete[] data; }

    void sharedModify(unsigned index, T &datum) // <-- Can this be validly called by multiple
                                                //     threads with synchronization being implicit?
    {
        EnterCriticalSection(sync);
        /*
            The critical section of code involving reads & writes to 'data'
        */
        LeaveCriticalSection(sync);
    }
};

// Somewhere else ...
DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    Shareable<ActualType> *ptr = static_cast<Shareable<ActualType>*>(lpParameter);
    ActualType copyable = /* initialization */;
    ptr->sharedModify(validIndex, copyable); // <-- OK, synchronized?
    return 0;
}
The way I see it, the API calls will be conducted in the context of the current thread. That is, I assume this is the same as if I had acquired the critical section object from the pointer and called the API from within ThreadProc(). However, I am worried that if the object is created and placed in the main/initial thread, there will be something funky about the API calls.
1. When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
2. Should I instead get a pointer to the critical section object and use that instead?
3. Is there some other synchronization mechanism that is better suited to this scenario?
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
It's not implicit, it's explicit. There's only one CRITICAL_SECTION, and only one thread can hold it at a time.
Should I instead get a pointer to the critical section object and use that instead?
No. There's no reason to use a pointer here.
Is there some other synchronization mechanism that is better suited to this scenario?
It's hard to say without seeing more code, but this is definitely the "default" solution. It's like a singly-linked list -- you learn it first, it always works, but it's not always the best choice.
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
Implicit from the caller's perspective, yes.
Should I instead get a pointer to the critical section object and use that instead?
No. In fact, I would suggest giving the Shareable object ownership of its own critical section instead of accepting one from the outside (and embracing RAII concepts to write safer code), e.g.:
template <typename T>
class Shareable
{
private:
    CRITICAL_SECTION sync;
    std::vector<T> data;

    struct SyncLocker
    {
        CRITICAL_SECTION &sync;
        SyncLocker(CRITICAL_SECTION &cs) : sync(cs) { EnterCriticalSection(&sync); }
        ~SyncLocker() { LeaveCriticalSection(&sync); }
    };

public:
    Shareable(unsigned elems) : data(elems)
    {
        InitializeCriticalSection(&sync);
    }

    Shareable(const Shareable&) = delete;
    Shareable(Shareable&&) = delete;

    ~Shareable()
    {
        {
            SyncLocker lock(sync);
            data.clear();
        }
        DeleteCriticalSection(&sync);
    }

    void sharedModify(unsigned index, const T &datum)
    {
        SyncLocker lock(sync);
        data[index] = datum;
    }

    Shareable& operator=(const Shareable&) = delete;
    Shareable& operator=(Shareable&&) = delete;
};
Is there some other synchronization mechanism that is better suited to this scenario?
That depends. Will multiple threads be accessing the same index at the same time? If not, then there is not really a need for the critical section at all. One thread can safely access one index while another thread accesses a different index.
If multiple threads need to access the same index at the same time, a critical section might still not be the best choice. Locking the entire array might be a big bottleneck if you only need to lock portions of the array at a time. Things like the Interlocked API, or Slim Read/Write locks, might make more sense. It really depends on your thread designs and what you are actually trying to protect.
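As a rough sketch (not from the original answer) of what the Slim Reader/Writer variant could look like, allowing concurrent readers while writers get exclusive access:
#include <windows.h>

SRWLOCK srw = SRWLOCK_INIT;
int values[256]; // hypothetical shared data

int readAt(unsigned index)
{
    AcquireSRWLockShared(&srw);    // many readers may hold the lock at once
    int v = values[index];
    ReleaseSRWLockShared(&srw);
    return v;
}

void writeAt(unsigned index, int v)
{
    AcquireSRWLockExclusive(&srw); // a writer excludes both readers and writers
    values[index] = v;
    ReleaseSRWLockExclusive(&srw);
}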

Updating cache without blocking

I currently have a program with a cache-like mechanism. I have a thread that listens for updates from another server and applies them to the cache. Here is some pseudo code:
void cache::update_cache()
{
    cache_ = new std::map<std::string, value>();
    while(true)
    {
        if(recv().compare("update") == 0)
        {
            std::map<std::string, value> *new_info = new std::map<std::string, value>();
            std::map<std::string, value> *tmp;
            //Get new info, store in new_info
            tmp = cache_;
            cache_ = new_info;
            delete tmp;
        }
    }
}

std::map<std::string, value> *cache::get_cache()
{
    return cache_;
}
cache_ is being read from many different threads concurrently. I believe that, as I have it here, I will run into undefined behavior if one of my threads calls get_cache(), then my cache updates, then the thread tries to access the stored cache.
I am looking for a way to avoid this problem. I know I could use a mutex, but I would rather not block reads from happening, as they have to be as low latency as possible; if need be, though, I can go that route.
I was wondering if this would be a good use case for a unique_ptr. Is my understanding correct that if a thread calls get_cache, and that returns a unique_ptr instead of a standard pointer, then once all threads that have the old version of the cache are finished with it (i.e. leave scope), the object will be deleted?
Is using a unique_ptr the best option for this case, or is there another option that I am not thinking of?
Any input will be greatly appreciated.
Edit:
I believe I made a mistake in my OP. I meant to use and pass a shared_ptr not a unique_ptr for cache_. And when all threads are finished with cache_ the shared_ptr should delete itself.
A little about my program: my program is a webserver that will be using this information to decide what information to return. It is fairly high throughput (thousands of req/sec). Each request queries the cache once, so telling my other threads when to update is no problem. I can tolerate slightly out-of-date information, and would prefer that over blocking all of my threads from executing if possible. The information in the cache is fairly large, and I would like to limit any copies of values because of this.
update_cache is only run once. It is run in a thread that just listens for an update command and runs the code.
I feel there are multiple issues:
1) Do not leak memory: for that, never use "delete" in your code and stick with unique_ptr (or shared_ptr in specific cases).
2) Protect accesses to shared data, either with locking (mutex) or a lock-free mechanism (std::atomic).
class Cache {
    using Map = std::map<std::string, value>;
    std::unique_ptr<Map> m_cache;
    std::mutex m_cacheLock;

public:
    void update_cache()
    {
        while(true)
        {
            if(recv().compare("update") == 0)
            {
                std::unique_ptr<Map> new_info { new Map };
                //Get new info, store in new_info
                {
                    std::lock_guard<std::mutex> lock{m_cacheLock};
                    using std::swap;
                    swap(m_cache, new_info);
                }
            }
        }
    }
};
Note: I don't like update_cache() being part of the public interface of the cache, as it contains an infinite loop. I would probably externalize the loop with the recv() and have:
void update_cache(std::unique_ptr<Map> new_info)
{
    { // This inner brace is not useless: we don't need to keep the lock during deletion
        std::lock_guard<std::mutex> lock{m_cacheLock};
        using std::swap;
        swap(m_cache, new_info);
    }
}
Now for reading from the cache, use proper encapsulation and don't let a pointer to the member map escape:
value get(const std::string &key)
{
    // lock, fetch, and return.
    // Depending on value type, you might want to allocate memory
    // before locking.
}
With this signature you have to throw an exception if the value is not present in the cache; another option is to return something like a boost::optional.
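Filled in, get() might look like the sketch below (assuming the m_cache and m_cacheLock members above); the value is copied out under the lock so no reference to shared state escapes:
value get(const std::string &key)
{
    std::lock_guard<std::mutex> lock{m_cacheLock};
    auto it = m_cache->find(key);
    if (it == m_cache->end())
        throw std::out_of_range("key not in cache");
    return it->second; // copy taken while the lock is held
}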
Overall you can keep a low latency (everything is relative, I don't know your use case) if you take care of doing costly operations (memory allocation for instance) outside of the locking section.
shared_ptr is very reasonable for this purpose, C++11 has a family of functions for handling shared_ptr atomically. If the data is immutable after creation, you won't even need any additional synchronization:
class cache {
public:
    using map_t = std::map<std::string, value>;
    void update_cache();
    std::shared_ptr<const map_t> get_cache() const;
private:
    std::shared_ptr<const map_t> cache_;
};

void cache::update_cache()
{
    while(true)
    {
        if(recv() == "update")
        {
            auto new_info = std::make_shared<map_t>();
            // Get new info, store in new_info
            // Make immutable & publish
            std::atomic_store(&cache_,
                              std::shared_ptr<const map_t>{std::move(new_info)});
        }
    }
}

auto cache::get_cache() const -> std::shared_ptr<const map_t> {
    return std::atomic_load(&cache_);
}
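A reader then takes a snapshot once per request and uses it freely; the old map stays alive for as long as any thread still holds the shared_ptr (a usage sketch; handle_request is hypothetical):
void handle_request(const cache &c, const std::string &key)
{
    auto snapshot = c.get_cache(); // atomic load of the current map
    auto it = snapshot->find(key);
    if (it != snapshot->end())
    {
        // use it->second; still valid even if update_cache()
        // publishes a new map in the meantime
    }
}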

Thread safe container

Here is an exemplary container class, in pseudo code:
class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data newData)
    {
        // addition of data
    }
    data get(size_t which)
    {
        // returning some data
    }
    void remove(size_t which)
    {
        // delete specified object
    }
private:
    data d;
};
How can this container be made thread-safe? I have heard about mutexes - where should these mutexes be placed? Should the mutex be static for the class, or maybe in global scope? What is a good library for this task in C++?
First of all, mutexes should not be static for a class as long as you are going to use more than one instance. There are many cases where you should or shouldn't use them, so without seeing your code it's hard to say. Just remember: they are used to synchronise access to shared data, so it's wise to place them inside methods that modify or rely on the object's state. In your case I would use one mutex to protect the whole object and lock it in all three methods. Like:
class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data newData)
    {
        lock_guard<Mutex> lock(mutex);
        // addition of data
    }
    data get(size_t which)
    {
        lock_guard<Mutex> lock(mutex);
        // getting copy of value
        // return that value
    }
    void remove(size_t which)
    {
        lock_guard<Mutex> lock(mutex);
        // delete specified object
    }
private:
    data d;
    Mutex mutex;
};
Intel Thread Building Blocks (TBB) provides a bunch of thread-safe container implementations for C++. It has been open sourced, you can download it from: http://threadingbuildingblocks.org/ver.php?fid=174 .
First: sharing mutable state between threads is hard. You should be using a library that has been audited and debugged.
That said, there are two different functional issues:
you want a container to provide safe atomic operations
you want a container to provide safe multiple operations
The idea of multiple operations is that multiple accesses to the same container must be executed successively, under the control of a single entity. They require the caller to "hold" the mutex for the duration of the transaction so that only it changes the state.
1. Atomic operations
This one appears simple:
add a mutex to the object
at the start of each method grab a mutex with a RAII lock
Unfortunately it's also plain wrong.
The issue is re-entrancy. It is likely that some methods will call other methods on the same object. If those once again attempt to grab the mutex, you get a deadlock.
It is possible to use re-entrant mutexes. They are a bit slower, but allow the same thread to lock a given mutex as much as it wants. The number of unlocks should match the number of locks, so once again, RAII.
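With C++11 that could look like the following sketch, using std::recursive_mutex so that methods may call each other through the public interface:
#include <mutex>

class C {
public:
    void foo() {
        std::lock_guard<std::recursive_mutex> l(_mutex); // re-locking from bar() is fine
        /* do something */
    }
    void bar() {
        std::lock_guard<std::recursive_mutex> l(_mutex);
        foo(); // same thread already holds _mutex: no deadlock
    }
private:
    std::recursive_mutex _mutex;
};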
Another approach is to use dispatching methods:
class C {
public:
    void foo() { Lock lock(_mutex); foo_impl(); }
private:
    void foo_impl() { /* do something; may freely call other *_impl methods */ }
};
The public methods are simple forwarders to the private work-methods, and they simply lock. Then one just has to ensure that the private methods never take the mutex...
Of course there are risks of accidentally calling a locking method from a work-method, in which case you deadlock. Read on to avoid this ;)
2. Multiple operations
The only way to achieve this is to have the caller hold the mutex.
The general method is simple:
add a mutex to the container
provide a handle to this mutex
cross your fingers that the caller will never forget to hold the mutex while accessing the class
I personally prefer a much saner approach.
First, I create a "bundle of data", which simply represents the class data (+ a mutex), and then I provide a Proxy in charge of grabbing the mutex. The data is locked so that only the proxy may access the state.
class ContainerData {
protected:
    friend class ContainerProxy;

    Mutex _mutex;

    void foo();
    void bar();

private:
    // some data
};

class ContainerProxy {
public:
    ContainerProxy(ContainerData& data): _data(data), _lock(data._mutex) {}

    void foo() { _data.foo(); }
    void bar() { foo(); _data.bar(); }

private:
    ContainerData& _data;
    Lock _lock;
};
Note that it is perfectly safe for the Proxy to call its own methods. The mutex will be released automatically by the destructor.
The mutex can still be reentrant if multiple Proxies are desired. But really, when multiple proxies are involved, it generally turns into a mess. In debug mode, it's also possible to add a "check" that the mutex is not already held by this thread (and assert if it is).
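Such a debug check might be sketched like this (this is not a standard mutex feature; the owner-tracking wrapper is an assumption of the sketch):
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>

class CheckedMutex {
public:
    void lock() {
        // assert instead of deadlocking if this thread already holds the mutex
        assert(owner_.load() != std::this_thread::get_id());
        m_.lock();
        owner_.store(std::this_thread::get_id());
    }
    void unlock() {
        owner_.store(std::thread::id{}); // clear before releasing
        m_.unlock();
    }
private:
    std::mutex m_;
    std::atomic<std::thread::id> owner_{};
};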
3. Reminder
Using locks is error-prone. Deadlocks are a common cause of error and occur as soon as you have two mutexes (or one and re-entrancy). When possible, prefer using higher level alternatives.
Add the mutex as an instance variable of the class. Initialize it in the constructor, lock it at the very beginning of every method (including the destructor), and unlock it at the end of the method. Adding a global mutex for all instances of the class (a static member, or one in global scope) may be a performance penalty.
There is also a very nice collection of lock-free containers (including maps) by Max Khiszinsky:
LibCDS¹ - Concurrent Data Structures
Here is the documentation page:
http://libcds.sourceforge.net/doc/index.html
It can be kind of intimidating to get started, because it is fully generic and requires you to register a chosen garbage-collection strategy and initialize it. Of course, the threading library is configurable and you need to initialize that as well :)
See the following links for some getting started info:
initialization of CDS and the threading manager
http://sourceforge.net/projects/libcds/forums/forum/1034512/topic/4600301/
the unit tests (cd build && ./build.sh ----debug-test for a debug build)
Here is a base template for 'main':
#include <cds/threading/model.h>    // threading manager
#include <cds/gc/hzp/hzp.h>         // Hazard Pointer GC

int main()
{
    // Initialize CDS library
    cds::Initialize();

    // Initialize Garbage collector(s) that you use
    cds::gc::hzp::GarbageCollector::Construct();

    // Attach main thread
    // Note: it is needed if the main thread can access libcds containers
    cds::threading::Manager::attachThread();

    // Do some useful work
    ...

    // Finish main thread - detaches internal control structures
    cds::threading::Manager::detachThread();

    // Terminate GCs
    cds::gc::hzp::GarbageCollector::Destruct();

    // Terminate CDS library
    cds::Terminate();
}
Don't forget to attach any additional threads you are using:
#include <cds/threading/model.h>

int myThreadFunc(void *)
{
    // initialize libcds thread control structures
    cds::threading::Manager::attachThread();

    // Now you can work with GCs and libcds containers
    ....

    // Finish working thread
    cds::threading::Manager::detachThread();
}
¹ (not to be confused with Google's compact data structures library)

Lightweight wrapper - is this a common problem and if yes, what is its name?

I have to use a library that makes database calls which are not thread-safe. Also I occasionally have to load larger amounts of data in a background thread.
It is hard to say which library functions actually access the DB, so I think the safest approach for me is to protect every library call with a lock.
Let's say I have a library object:
dbLib::SomeObject someObject;
Right now I can do something like this:
dbLib::ErrorCode errorCode = 0;
std::list<dbLib::Item> items;
{
    DbLock dbLock;
    errorCode = someObject.someFunction(&items);
} // dbLock goes out of scope
I would like to simplify that to something like this (or even simpler):
dbLib::ErrorCode errorCode =
protectedCall(someObject, &dbLib::SomeObject::someFunction(&items));
The main advantage of this would be that I won't have to duplicate the interface of dbLib::SomeObject in order to protect each call with a lock.
I'm pretty sure that this is a common pattern/idiom but I don't know its name or what keywords to search for. (Looking at http://www.vincehuston.org/dp/gof_intents.html, I think it's more an idiom than a pattern.)
Where do I have to look for more information?
You could make protectedCall a template function that takes a functor without arguments (meaning you'd bind the arguments at the call-site), and then creates a scoped lock, calls the functor, and returns its value. For example something like:
template <typename Ret>
Ret protectedCall(boost::function<Ret ()> func)
{
    DbLock lock;
    return func();
}
You'd then call it like this:
dbLib::ErrorCode errorCode = protectedCall<dbLib::ErrorCode>(
    boost::bind(&dbLib::SomeObject::someFunction, &someObject, &items));
EDIT. In case you're using C++0x, you can use std::function and std::bind instead of the boost equivalents.
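The C++11 version might look like this sketch (the Ret template argument is spelled out at the call site, since it cannot be deduced from the bind expression; DbLock and the dbLib names are from the question):
#include <functional>

template <typename Ret>
Ret protectedCall(std::function<Ret ()> func)
{
    DbLock lock;    // lock held for the duration of the call
    return func();
}

// Call site:
dbLib::ErrorCode errorCode = protectedCall<dbLib::ErrorCode>(
    std::bind(&dbLib::SomeObject::someFunction, &someObject, &items));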
In C++0x, you can implement some form of decorators:
template <typename F>
auto protect(F&& f) -> decltype(f())
{
    DbLock lock;
    return f();
}
usage:
dbLib::ErrorCode errorCode = protect([&]()
{
    return someObject.someFunction(&items);
});
From your description this would seem a job for Decorator Pattern.
However, especially in the case of resources, I wouldn't recommend using it.
The reason is that in general these functions tend to scale badly, require higher-level (less fine-grained) locking for consistency, or return references to internal structures that require the lock to stay locked until all information is read.
Think, e.g. about a DB function that calls a stored procedure that returns a BLOB (stream) or a ref cursor: the streams should not be read outside of the lock.
What to do?
I recommend instead to use the Facade Pattern. Instead of composing your operations directly in terms of DB calls, implement a facade that uses the DB layer; this layer could then manage the locking at exactly the required level (and optimize where needed: you could have the facade be implemented as a thread-local Singleton and use separate resources, obviating the need for locks, for example).
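A minimal sketch of that facade idea, reusing the dbLib names from the question (DbFacade and loadItems are made up for illustration):
class DbFacade {
public:
    std::list<dbLib::Item> loadItems()
    {
        DbLock lock;                      // one lock spans the whole operation
        std::list<dbLib::Item> items;
        someObject_.someFunction(&items); // further dbLib calls stay under the same lock
        return items;
    }
private:
    dbLib::SomeObject someObject_;
};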
The simplest (and still straightforward) solution might be to write a function which returns a proxy for the object. The proxy does the locking and overloads -> to allow calling the object. Here is an example:
#include <cstdio>

template<class T>
class call_proxy
{
    T &item;
public:
    call_proxy(T &t) : item(t) { puts("LOCK"); }
    T *operator -> () { return &item; }
    ~call_proxy() { puts("UNLOCK"); }
};

template<class T>
call_proxy<T> protect(T &t)
{
    return call_proxy<T>(t);
}
Here's how to use it:
class Intf
{
public:
    void function()
    {
        puts("foo");
    }
};

int main()
{
    Intf a;
    protect(a)->function();
}
The output should be:
LOCK
foo
UNLOCK
If you want the lock to happen before the evaluation of the arguments, then can use this macro:
#define PCALL(X,APPL) (protect(X), (X).APPL)
PCALL(x, function());
This evaluates x twice though.
Andrei Alexandrescu has a pretty interesting article on how to create this kind of thin wrapper and combine it with the dreaded volatile keyword for thread safety.
Mutex locking is a similar problem. I asked for feedback on it here: Need some feedback on how to make a class "thread-safe".
The solution I came up with was a wrapper class that prevents access to the protected object. Access can be obtained via an "accessor" class. The accessor will lock the mutex in its constructor and unlock it on destruction. See the "ThreadSafe" and "Locker" classes in Threading.h for more details.