Mutex that works across std implementations - C++

Related to this question, I need a mutex that works across std implementations, or a way to atomically write and read a pointer. One thread is spawned by code compiled with mingw-w64 and the other one is Visual Studio 2019 code in a static/dynamic library.

Export from your main executable (mingw-w64) to your DLL (VC++) - compiled with separate compilers - a synchronization/mutex "handle" (typically an opaque pointer, though it can also be an index into a table) and a pair of C-style functions, lock and unlock, that take that handle. (If you want, you can wrap them in classes mirroring std::mutex and std::lock_guard, exposing the same API - that would be the safest thing to do.) They can be as bare as that, or they can include additional functionality such as a timeout or try-lock - quite useful, but not required. You can also export handle_t create() and void destroy(handle_t handle) functions (delete itself isn't usable as a function name, since it's a keyword).
The point is that the sync object itself (the mutex or whatever it is) is only ever manipulated through those indirection functions, which avoids errors in usage. Depending on the compiler (easily detected with the preprocessor), these functions are backed by compiler-specific atomic intrinsics or CRT functions: the perfectly fitting InterlockedCompareExchange (it works under mingw-w64 too) and its Visual C++-specific compiler intrinsic variant, or GCC's __atomic builtins (more specifically, __atomic_compare_exchange).

#include <memory>
#include <mutex>

struct imutex {
  virtual void lock() = 0;
  virtual void unlock() = 0;
  virtual ~imutex() {}
};

template<class M>
struct imp_mutex : imutex {
  M m;
  void lock() final override { m.lock(); }
  void unlock() final override { m.unlock(); }
};

struct mymutex {
  using up = std::unique_ptr<imutex, void(*)(imutex*)>;
  mymutex( up m_in ) : m(std::move(m_in)) {}
  mymutex() : mymutex(up(new imp_mutex<std::mutex>{}, [](imutex* m){ delete m; })) {}
  void lock() { m->lock(); }
  void unlock() { m->unlock(); }
  mymutex(mymutex&&) = delete;
private:
  up m;
};
This assumes ABI compatibility of vtables and of std::unique_ptr between the two compilers, which is plausible.
If not, replace the unique_ptr with something custom, and replace the virtual methods with function pointers taking a void pointer.
The point is, the mutex is created and destroyed in one library's code.
Here is a pure function-pointer version. It relies on a struct containing two or three function pointers having the same layout under both compilers, and on the C calling convention being the same.
Whichever library makes the mymutex, both can then use it.
struct imutex_vtable {
  void (*lock)(void*) = 0;
  void (*unlock)(void*) = 0;
  void (*dtor)(void*) = 0;
};

template<class M>
imutex_vtable const* get_imutex_vtable() {
  static const imutex_vtable vtable = {
    [](void* m){ static_cast<M*>(m)->lock(); },
    [](void* m){ static_cast<M*>(m)->unlock(); },
    [](void* m){ delete static_cast<M*>(m); }
  };
  return &vtable;
}

struct mymutex {
  mymutex( imutex_vtable const* vt, void* pm ) : vtable(vt), pv(pm) {}
  template<class M>
  explicit mymutex(std::unique_ptr<M> m) : mymutex( get_imutex_vtable<M>(), m.release() ) {}
  mymutex() : mymutex(std::make_unique<std::mutex>()) {}
  void lock() { vtable->lock(pv); }
  void unlock() { vtable->unlock(pv); }
  ~mymutex() { vtable->dtor(pv); }
  mymutex(mymutex&&) = delete;
private:
  imutex_vtable const* vtable = 0;
  void* pv = 0;
};
This is basically implementing a simple case of C++ interface inheritance using a C-like implementation, then wrapping it up in classes and templates so the user won't notice.

Can a class object be created as an lvalue-only?

A well-known problem with std::lock_guard (and its relatives) is that it does not work as expected when only a temporary object is created.
For example:
std::mutex mtx;
std::lock_guard<std::mutex> {mtx};    // temporary object, does not lock the entire scope
std::lock_guard<std::mutex> lck{mtx}; // correct
I tried reference qualifiers to create a replacement that prevents a temporary object from being created (at compile time). The following code is a futile attempt:
#include <mutex>

template<typename T>
struct my_lock {
  T &mtx;
  my_lock(T &t) : mtx{t} { lock(); }
  ~my_lock() { unlock(); }
  void lock() & { mtx.lock(); }
  void unlock() & { mtx.unlock(); }
};

std::mutex mtx;

int main()
{
  my_lock<std::mutex> {mtx};    // A
  my_lock<std::mutex> lck{mtx}; // B
}
This does not work, so the question becomes:
Is it possible to write the class in such a way that the compiler rejects A and accepts B ?
If you can use C++17, you can use the [[nodiscard]] attribute with a factory function.
class [[nodiscard]] my_lock {
  my_lock() = default;
  friend my_lock lock();
};

[[nodiscard]] my_lock lock() { return {}; }

int main() {
  { lock(); }          // warning: discarded return value
  { auto l = lock(); } // OK
}
Let me reinterpret your question, instead of:
Is it possible to write the class in such a way that the compiler rejects A and accepts B ?
I'm gonna read this as
Is it possible for my compiler to reject A and accept B?
Yes, this is possible depending on the compiler without requiring to write your own classes. I'm very familiar with clang, however, similar checks will exist in other compilers or static analysers.
For clang, -Wunused-value -Werror will do the job. The first activates the warning, the second promotes it to an error.
Personally, I'm in favor of enabling all warnings and explicitly disabling those you have a reason not to comply with, documenting why.

Pointer-to-Function and Pointer-to-Object Semantics

I'm having issues with getting a partially-qualified function object to call later, with variable arguments, in another thread.
In GCC, I've been using a macro and typedef I made, but I'm finishing up my project and trying to clear up warnings.
#define Function_Cast(func_ref) (SubscriptionFunction*) func_ref
typedef void(SubscriptionFunction(void*, std::shared_ptr<void>));
Using the Function_Cast macro like below results in "warning: casting between pointer-to-function and pointer-to-object is conditionally-supported"
Subscriber* init_subscriber = new Subscriber(this, Function_Cast(&BaseLoaderStaticInit::init), false);
All I really need is a pointer that I can make a std::bind<function_type> object of. How is this usually done?
Also, this conditionally-supported thing is really annoying. I know that on x86 my code will work fine and I'm aware of the limitations of relying on that sizeof(void*) == sizeof(this*) for all this*.
Also, is there a way to make clang treat function pointers like data pointers so that my code will compile? I'm interested to see how bad it fails (if it does).
Relevant Code:
#define Function_Cast(func_ref) (SubscriptionFunction*) func_ref
typedef void(SubscriptionFunction(void*, std::shared_ptr<void>));
typedef void(CallTypeFunction(std::shared_ptr<void>));

Subscriber(void* owner, SubscriptionFunction* func, bool serialized = true) {
  this->_owner = owner;
  this->_serialized = serialized;
  this->method = func;
  call = std::bind(&Subscriber::_std_call, this, std::placeholders::_1);
}
void _std_call(std::shared_ptr<void> arg) { method(_owner, arg); }
The problem here is that you are trying to use a member-function pointer in place of a function pointer, because you know that, under-the-hood, it is often implemented as function(this, ...).
struct S {
  void f() {}
};

using fn_ptr = void(*)(S*);

void call(S* s, fn_ptr fn)
{
  fn(s);
  delete s;
}

int main() {
  call(new S, (fn_ptr)&S::f);
}
http://ideone.com/fork/LJiohQ
But there's no guarantee this will actually work, and there are obvious cases (virtual functions) where it probably won't.
Member functions are intended to be passed like this:
void call(S* s, void (S::*fn)())
and invoked like this:
(s->*fn)();
http://ideone.com/bJU5lx
How people work around this when they want to support different types is to use a trampoline, which is a non-member function. You can do this with either a static [member] function or a lambda:
auto sub = new Subscriber(this, [](auto* s){ s->init(); });
or if you'd like type safety at your call site, a templated constructor:
template<typename T>
Subscriber(T* t, void(T::*fn)(), bool x);
http://ideone.com/lECOp6
If your Subscriber constructor takes a std::function<void()> rather than a function pointer, you can pass a capturing lambda and eliminate the need to take a void*:
new Subscriber([this](){ init(); }, false);
It's normally done something like this:
#include <functional>
#include <memory>
struct subscription
{
  // RAII unsubscribe stuff in destructor here....
};

struct subscribable
{
  subscription subscribe(std::function<void()> closure, std::weak_ptr<void> sentinel)
  {
    // perform the subscription
    return subscription {
      // some id so you can unsubscribe;
    };
  }

  void notify_subscriber(std::function<void()> const& closure,
                         std::weak_ptr<void> const& sentinel)
  {
    if (auto locked = sentinel.lock())
    {
      closure();
    }
  }
};

How could I avoid this raw pointer with this OpenMP critical section?

I have a std::deque<std::reference_wrapper<MyType>> mydeque. I need a function that returns the front value (as a plain reference) and pops it from the queue. As std::deque are not thread safe, access should be protected (I'm using OpenMP).
I came up with the ugly code below. It looks very bad having such advanced structures and then falling back to a raw pointer.
MyType & retrieve() {
  MyType* b;
  #pragma omp critical(access_mydeque)
  {
    b = &(mydeque.front().get());
    mydeque.pop_front();
  }
  return *b;
}
The problem is that I cannot return within the critical section, but I also cannot declare a reference(_wrapper) before the critical section (because it must be assigned to something)... Is there a way to solve this?
Any solution I can think of involves using an omp_lock_t instead of the critical construct and a RAII class managing the omp_lock_t ownership:
class LockGuard {
public:
  explicit LockGuard(omp_lock_t& lock) : m_lock(lock) {
    omp_set_lock(&m_lock);
  }
  ~LockGuard() {
    omp_unset_lock(&m_lock);
  }
private:
  omp_lock_t& m_lock;
};
Then you can either modify the code you already have into something like:
MyType & retrieve() {
  LockGuard guard(mydeque_lock);
  auto b = mydeque.front(); // copies the reference_wrapper; the referenced MyType outlives the pop
  mydeque.pop_front();
  return b;                 // converts back to MyType&
}
or better, write your own thread-safe container that aggregates the lock and the std::deque:
template<class T>
class MtLifo {
public:
  MtLifo() {
    omp_init_lock(&m_lock);
  }
  T front_and_pop() {
    LockGuard guard(m_lock);
    T b = m_stack.front(); // copy out before popping
    m_stack.pop_front();
    return b;              // return by value to avoid a dangling reference
  }
  void push_front(const T& value) {
    LockGuard guard(m_lock);
    m_stack.push_front(value);
  }
  ~MtLifo() {
    omp_destroy_lock(&m_lock);
  }
private:
  std::deque<T> m_stack;
  omp_lock_t m_lock;
};
You could simply use TBB's parallel data structures https://software.intel.com/en-us/node/506076 (though since there is no concurrent_deque they may not be perfect for you :-( ).
They do not require that you also use TBB to describe the parallelism aspects of your code, so can be mixed into an OpenMP code. (Of course, since you're using C++ you might find TBB's approach to scalable, composable, parallelism more friendly than OpenMP's, but that's a separable decision).

Allocating memory for delayed event arguments

Here is my issue.
I have a class to create timed events. It takes in:
A function pointer of void (*func)(void* arg)
A void* to the argument
A delay
The issue is that I may want to create on-the-fly variables that I don't want to make static members of the class, or globals. If the variable is neither of those, I can't do something like:
void doStuff(void *arg)
{
  somebool = *(bool*)arg;
}

void makeIt()
{
  bool a = true;
  container->createTimedEvent(doStuff, (void*)&a, 5);
}
That won't work, because the bool gets destroyed when the function returns, so I'd have to allocate these on the heap. The issue then becomes: who allocates and who deletes? What I'd like is to be able to take in anything, copy its memory, and manage it in the timed event class. But I don't think I can do memcpy, since I don't know the type.
What would be a good way to achieve this, where the timed event is responsible for memory management?
Thanks
I do not use Boost.
class AguiTimedEvent {
  void (*onEvent)(void* arg);
  void* argument;
  AguiWidgetBase* caller;
  double timeStamp;
public:
  void call() const;
  bool expired() const;
  AguiWidgetBase* getCaller() const;

  AguiTimedEvent();
  AguiTimedEvent(void(*Timefunc)(void* arg), void* arg, double timeSec, AguiWidgetBase* caller);
};
void AguiWidgetContainer::handleTimedEvents()
{
  for(std::vector<AguiTimedEvent>::iterator it = timedEvents.begin(); it != timedEvents.end();)
  {
    if(it->expired())
    {
      it->call();
      it = timedEvents.erase(it);
    }
    else
      it++;
  }
}

void AguiWidgetBase::createTimedEvent( void (*func)(void* data), void* data, double timeInSec )
{
  if(!getWidgetContainer())
    return;
  getWidgetContainer()->addTimedEvent(AguiTimedEvent(func, data, timeInSec, this));
}

void AguiWidgetContainer::addTimedEvent( const AguiTimedEvent &timedEvent )
{
  timedEvents.push_back(timedEvent);
}
Why would you not use boost::shared_ptr?
It offers the storage duration you require, since the underlying object is destructed only when all shared_ptrs pointing to it have been destructed.
It also offers thread-safe reference counting.
Using C++0x unique_ptr is perfect for the job. It is part of a future standard, but unique_ptr is already supported under G++ and Visual Studio. For C++98 (the current standard), auto_ptr works like a harder-to-use version of unique_ptr... For C++ TR1 (implemented in Visual Studio and G++), you can use std::tr1::shared_ptr.
Basically, you need a smart pointer. Here's how unique_ptr would work:
unique_ptr<bool> makeIt() { // more commonly called a "source"
  unique_ptr<bool> a(new bool(true)); // heap-allocate so the bool outlives makeIt()
  container->createTimedEvent(doStuff, (void*)a.get(), 5);
  return a; // ownership transfers to the caller
}
When you use the code later...
void someFunction() {
  unique_ptr<bool> stuff = makeIt();
} // stuff is deleted here, because unique_ptr deletes
  // things when they leave their scope
You can also use it as a function "sink"
void sink(unique_ptr<bool> ptr) {
  // Use the pointer somehow
}

void somewhereElse() {
  unique_ptr<bool> stuff = makeIt();
  sink(std::move(stuff)); // unique_ptr is move-only, so the transfer must be explicit
  // stuff is now empty; the bool was deleted at the end of sink()
}
Aside from the strict ownership-transfer rules, you can use unique_ptr like a normal pointer. There are many smart pointers; unique_ptr is just one of them. shared_ptr is implemented in both Visual Studio and G++ and is the more commonly used one. I personally like to use unique_ptr as often as possible, however.
If you can't use boost or tr1, then what I'd do is write my own function that behaves like auto_ptr. In fact that's what I've done on a project here that doesn't have any boost or tr1 access. When all of the events who care about the data are done with it it automatically gets deleted.
You can just change your function definition to take in an extra parameter that represents the size of the object passed in. Then just pass the size down. So your new function declarations looks like this:
void (*func)(void* arg, size_t size)
void doStuff(void *arg, size_t size)
{
  somebool = *(bool*)arg;
}
// inside the timed event class, copy the argument into storage the event owns:
//   myStorage = malloc(size);
//   memcpy(myStorage, arg, size);
void makeIt()
{
  bool a = true;
  container->createTimedEvent(doStuff, (void*)&a, sizeof(bool), 5);
}
Then you can pass variables that are still on the stack and memcpy them in the timed event class. The only problem is that you don't know the type any more... but that's what happens when you cast to void*
Hope that helps.
You should re-work your class to use inheritance, not a function pointer.

class AguiEvent {
public:
  virtual void Call() = 0;
  virtual ~AguiEvent() {}
};

class AguiTimedEvent {
  std::auto_ptr<AguiEvent> event;
  double timeSec;
  AguiWidgetBase* caller;
public:
  AguiTimedEvent(std::auto_ptr<AguiEvent> ev, double time, AguiWidgetBase* base)
    : event(ev)
    , timeSec(time)
    , caller(base) {}
  void call() { event->Call(); }
  // All the rest of it
};

void MakeIt() {
  class someclass : public AguiEvent { // public inheritance, so it converts to AguiEvent*
    bool MahBool;
  public:
    someclass() { MahBool = false; }
    void Call() {
      // access to MahBool through this.
    }
  };
  something->somefunc(AguiTimedEvent(std::auto_ptr<AguiEvent>(new someclass), 5.0, 0)); // problem solved
}

Best Practice for Scoped Reference Idiom?

I just got burned by a bug that is partially due to my lack of understanding, and partially due to what I think is suboptimal design in our codebase. I'm curious as to how my 5-minute solution can be improved.
We're using ref-counted objects, where we have AddRef() and Release() on objects of these classes. One particular object is derived from the ref-count object, but a common function to get an instance of these objects (GetExisting) hides an AddRef() within itself without advertising that it is doing so. This necessitates doing a Release at the end of the functional block to free the hidden ref, but a developer who didn't inspect the implementation of GetExisting() wouldn't know that, and someone who forgets to add a Release at the end of the function (say, during a mad dash of bug-fixing crunch time) leaks objects. This, of course, was my burn.
void SomeFunction(ProgramStateInfo *P)
{
  ThreadClass *thread = ThreadClass::GetExisting( P );
  // some code goes here
  bool result = UseThreadSomehow(thread);
  // some code goes here
  thread->Release(); // Need to do this because GetExisting() calls AddRef()
}
So I wrote up a little class to avoid the need for the Release() at the end of these functions.
class ThreadContainer
{
private:
  ThreadClass *m_T;
public:
  ThreadContainer(ThreadClass *T) { m_T = T; }
  ~ThreadContainer() { if(m_T) m_T->Release(); }
  ThreadClass * Thread() const { return m_T; }
};
So that now I can just do this:
void SomeFunction(ProgramStateInfo *P)
{
  ThreadContainer ThreadC(ThreadClass::GetExisting( P ));
  // some code goes here
  bool result = UseThreadSomehow(ThreadC.Thread());
  // some code goes here
  // Automagic Release() in ThreadC Destructor!!!
}
What I don't like is that to access the thread pointer, I have to call a member function of ThreadContainer, Thread(). Is there some clever way that I can clean that up so that it's syntactically prettier, or would anything like that obscure the meaning of the container and introduce new problems for developers unfamiliar with the code?
Thanks.
Use boost::shared_ptr.
It is possible to define your own destructor function, as in the following example: http://www.boost.org/doc/libs/1_38_0/libs/smart_ptr/sp_techniques.html#com
Yes, you can implement operator ->() for the class, which will recursively call operator ->() on whatever you return:
class ThreadContainer
{
private:
  ThreadClass *m_T;
public:
  ThreadContainer(ThreadClass *T) { m_T = T; }
  ~ThreadContainer() { if(m_T) m_T->Release(); }
  ThreadClass * operator -> () const { return m_T; }
};
It's effectively using smart pointer semantics for your wrapper class:
Thread *t = new Thread();
...
ThreadContainer tc(t);
...
tc->SomeThreadFunction(); // invokes tc.m_T->SomeThreadFunction() behind the scenes...
You could also write a conversion function to enable your UseThreadSomehow(ThreadContainer tc) type calls in a similar way.
If Boost is an option, I think you can set up a shared_ptr to act as a smart reference as well.
Take a look at ScopeGuard. It allows syntax like this (shamelessly stolen from that link):
{
  FILE* topSecret = fopen("cia.txt", "r");
  ON_BLOCK_EXIT(std::fclose, topSecret);
  ... use topSecret ...
} // topSecret automagically closed
Or you could try Boost::ScopeExit:
void World::addPerson(Person const& aPerson) {
  bool commit = false;
  m_persons.push_back(aPerson);   // (1) direct action
  BOOST_SCOPE_EXIT( (&commit)(&m_persons) )
  {
    if(!commit)
      m_persons.pop_back();       // (2) rollback action
  } BOOST_SCOPE_EXIT_END
  // ...                          // (3) other operations
  commit = true;                  // (4) turn all rollback actions into no-op
}
I would recommend following bb's advice and using boost::shared_ptr<>. If Boost is not an option, you can take a look at std::auto_ptr<>, which is simple and probably addresses most of your needs. Take into consideration that std::auto_ptr has special move semantics that you probably don't want to mimic.
The approach is to provide both the * and -> operators, together with a getter for the raw pointer and a release operation in case you want to give up control of the inner object.
You can add an automatic type-cast operator to return your raw pointer. This approach is used by Microsoft's CString class to give easy access to the underlying character buffer, and I've always found it handy. There might be some unpleasant surprises to discover with this method, as with any implicit conversion, but I haven't run across any.
class ThreadContainer
{
private:
  ThreadClass *m_T;
public:
  ThreadContainer(ThreadClass *T) { m_T = T; }
  ~ThreadContainer() { if(m_T) m_T->Release(); }
  operator ThreadClass *() const { return m_T; }
};
void SomeFunction(ProgramStateInfo *P)
{
  ThreadContainer ThreadC(ThreadClass::GetExisting( P ));
  // some code goes here
  bool result = UseThreadSomehow(ThreadC);
  // some code goes here
  // Automagic Release() in ThreadC Destructor!!!
}