Does an optimistic lock-free FIFO queue implementation exist? - c++

Is there any C++ implementation (source code) of the "optimistic approach to lock-free FIFO queues" algorithm?

Herb Sutter covered just such a queue as part of his Effective Concurrency column in Dr. Dobb's Journal.
Writing Lock-Free Code: A Corrected Queue

To complete the answer given by greyfade, which is based on http://www.drdobbs.com/high-performance-computing/212201163 (the last part of the article), here is the optimized code, with some modifications to suit my naming and coding conventions:
template <typename T>
class LFQueue {
private:
    struct LFQNode {
        LFQNode( T* val ) : value(val), next(nullptr) { }
        T* value;
        AtomicPtr<LFQNode> next;
        char pad[CACHE_LINE_SIZE - sizeof(T*) - sizeof(AtomicPtr<LFQNode>)];
    };

    char pad0[CACHE_LINE_SIZE];
    LFQNode* first;                 // for one consumer at a time
    char pad1[CACHE_LINE_SIZE - sizeof(LFQNode*)];
    InterlockedFlag consumerLock;   // shared among consumers
    char pad2[CACHE_LINE_SIZE - sizeof(InterlockedFlag)];
    LFQNode* last;                  // for one producer at a time
    char pad3[CACHE_LINE_SIZE - sizeof(LFQNode*)];
    InterlockedFlag producerLock;   // shared among producers
    char pad4[CACHE_LINE_SIZE - sizeof(InterlockedFlag)];

public:
    LFQueue() {
        first = last = new LFQNode( nullptr );  // no more divider
        producerLock = consumerLock = false;
    }

    ~LFQueue() {
        while( first != nullptr ) {
            LFQNode* tmp = first;
            first = tmp->next;
            delete tmp;
        }
    }

    bool pop( T& result ) {
        while( consumerLock.set(true) )
            { }                             // acquire exclusivity
        if( first->next != nullptr ) {      // if queue is nonempty
            LFQNode* oldFirst = first;
            first = first->next;
            T* value = first->value;        // take it out
            first->value = nullptr;         // of the node
            consumerLock = false;           // release exclusivity
            result = *value;                // now copy it back
            delete value;                   // and clean up
            delete oldFirst;                // both allocations
            return true;                    // and report success
        }
        consumerLock = false;               // release exclusivity
        return false;                       // queue was empty
    }

    bool push( const T& t ) {
        LFQNode* tmp = new LFQNode( new T(t) ); // do work off to the side
                                                // (the node stores a T*, so
                                                // allocate a copy of t)
        while( producerLock.set(true) )
            { }                             // acquire exclusivity
        last->next = tmp;                   // A: publish the new item
        last = tmp;                         // B: not "last->next"
        producerLock = false;               // release exclusivity
        return true;
    }
};
Another question: how do you define CACHE_LINE_SIZE? It varies across CPUs, right?
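For what it's worth, one hedged way to define it (a sketch assuming C++17; the 64-byte fallback matches typical current x86/x64 CPUs):

#include <cstddef>
#include <new>

#ifdef __cpp_lib_hardware_interference_size
constexpr std::size_t CACHE_LINE_SIZE = std::hardware_destructive_interference_size;
#else
constexpr std::size_t CACHE_LINE_SIZE = 64;  // common on current x86/x64 parts
#endif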

Here is my implementation of a lock-free FIFO.
Make sure each item of T is a multiple of 64 bytes (the cache line size on Intel CPUs) to avoid false sharing; a padding sketch follows the example below.
This code compiles with gcc/mingw and should compile with clang. It is optimized for 64-bit, so getting it to run on 32-bit would need some refactoring.
https://github.com/vovoid/vsxu/blob/master/engine/include/vsx_fifo.h
vsx_fifo<my_struct, 512> my_fifo;
Sender:
    my_struct my_struct_inst;
    // ... fill it out ...
    while (!my_fifo.produce(my_struct_inst)) { }
Receiver:
    my_struct my_struct_recv;
    while (my_fifo.consume(my_struct_recv))
    {
        // ...do stuff...
    }
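To illustrate the multiple-of-64-bytes requirement mentioned above, a payload might be padded like this (a sketch; my_struct and its fields are hypothetical):

#include <cstdint>

// alignas(64) makes sizeof(my_struct) a multiple of 64, so consecutive
// items in the ring never share a cache line.
struct alignas(64) my_struct {
    uint64_t id;
    double   payload[6];
};
static_assert(sizeof(my_struct) % 64 == 0, "item must fill whole cache lines");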

How about this lfqueue?
It is a cross-platform, thread-safe queue with unbounded enqueue; it has been tested with multiple dequeuers, multiple mixed enqueuers/dequeuers, and multiple enqueuers, and it guarantees memory safety.
For example
int* int_data;
lfqueue_t my_queue;

if (lfqueue_init(&my_queue) == -1)
    return -1;

/** Wrap this scope in other threads **/
int_data = (int*) malloc(sizeof(int));
assert(int_data != NULL);
*int_data = i++;   /* i is a counter defined by the surrounding code */
/* Enqueue */
while (lfqueue_enq(&my_queue, int_data) == -1) {
    printf("ENQ Full ?\n");
}
/** Wrap this scope in other threads **/
/* Dequeue */
while ( (int_data = lfqueue_deq(&my_queue)) == NULL ) {
    printf("DEQ EMPTY ..\n");
}
// printf("%d\n", *(int*) int_data);
free(int_data);
/** End **/

lfqueue_destroy(&my_queue);

If you're looking for a good lock-free queue implementation, both Microsoft Visual Studio 2010 and Intel's Threading Building Blocks contain one that is similar to the paper.
Here's a link to the one in VC 2010.
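For illustration, a minimal sketch of the TBB queue in use (tbb::concurrent_queue with push and try_pop; the header path assumes classic TBB):

#include <tbb/concurrent_queue.h>

tbb::concurrent_queue<int> q;

void producer() {
    q.push(42);                  // concurrent enqueue
}

void consumer() {
    int value;
    while (q.try_pop(value)) {   // non-blocking; returns false when empty
        // process value
    }
}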


std::list and garbage Collection algorithm

I have a server that pairs up two players on request and starts a Game in a new thread.
struct GInfo { Game* game; std::thread* g_thread; };

while (true) {
    players_pair = matchPlayers();
    Game* game = new Game(players_pair);
    std::thread* game_T = new std::thread(&Game::start, game);
    GInfo ginfo = {game, game_T};
    _actives.push_back(ginfo);  // std::list
}
I am writing a "garbage collector" that runs in another thread to clean up the memory of terminated games.
void garbageCollector() {
    while (true) {
        for (std::list<GInfo>::iterator it = _actives.begin(); it != _actives.end(); ++it) {
            if (!it->game->isActive()) {
                delete it->game;     it->game = nullptr;
                it->g_thread->join();
                delete it->g_thread; it->g_thread = nullptr;
                _actives.erase(it);
            }
        }
        sleep(2);
    }
}
This generates a segfault; I suspect it is because _actives.erase(it) is called inside the iteration loop.
For troubleshooting, I made _actives an std::vector (instead of an std::list) and applied the same algorithm using indexes instead of iterators; that works fine.
Is there a way around this?
Are the algorithm and data structure fine? Is there any better way to do the garbage collection?
Help is appreciated!
If you have a look at the documentation for the erase method, you will see that it returns an iterator to the element after the one that was removed.
The way to use that is to assign the returned value to your iterator, like so:
for (std::list<GInfo>::iterator it = _actives.begin(); it != _actives.end();) {
    if (!it->game->isActive()) {
        delete it->game;     it->game = nullptr;
        it->g_thread->join();
        delete it->g_thread; it->g_thread = nullptr;
        it = _actives.erase(it);
    }
    else {
        ++it;
    }
}
Since picking up the return value from erase advances the iterator to the next element, we have to make sure not to increment the iterator when that happens.
On an unrelated note, variable names starting with an underscore are generally reserved for the implementation and should be avoided in your own code.
Any better way to do the garbage collection?
Yes: don't use new, delete, or dynamic memory altogether:
#include <list>
#include <thread>
#include <utility>

struct Players {};

struct Game {
    Game(Players&& players) {}
    void start() {}                         // stub thread entry point
    bool isActive() const { return true; }  // stub
};

struct GInfo {
    GInfo(Players&& players_pair):
        game(std::move(players_pair)), g_thread(&Game::start, &game) {}
    Game game;
    std::thread g_thread;
};

std::list<GInfo> _actives;

void someLoop()
{
    while (true) {
        // C++17: emplace_back returns a reference; matchPlayers() as in the question
        GInfo& ginfo = _actives.emplace_back(matchPlayers());
    }
}

void garbageCollector() {
    while (true) {
        // std::list has had a remove_if member since C++98; a real version
        // would also join g_thread before the element is destroyed.
        _actives.remove_if([](GInfo& i){ return !i.game.isActive(); });
        sleep(2);
    }
}
There might be a few typos, but that's the idea.

Segmentation fault bintree

I'm trying to implement a binary tree, but I have problems in the insert method.
If I add the first element the program doesn't crash, but when I insert two or more elements the program crashes.
This is the code:
template <typename T>
void Arbol<T>::insertar( T c ) {
    if( laraiz == 0 )
    {
        laraiz = new celdaArbol;
        laraiz->elemento = c;
        laraiz->padre = laraiz->hizqu = laraiz->hder = 0;
    }
    else {
        celdaArbol *com = laraiz;
        bool poner = false;
        while( poner == false ) {
            if( c > com->elemento ) {
                if( com->hder == 0 ) {
                    com->hder = new celdaArbol;
                    com->hder->elemento = c;
                    com->hder->padre = com;
                    poner = true;
                }
                else {
                    com = com->hder;
                }
            }
            else {
                if( com->hizqu == 0 ) {
                    com->hizqu = new celdaArbol;
                    com->hizqu->elemento = c;
                    com->hizqu->padre = com;
                    poner = true;
                }
                else {
                    com = com->hizqu;
                }
            }
        }
    }
}
I think that the problem is in the else branch:
else {
    com = com->hizqu;   // or: com = com->hder;
}
because I saw in the debugger that the program enters this branch several times when it should not.
According to this code:
laraiz->padre = laraiz->hizqu = laraiz->hder = 0;
You do not properly initialize the pointers hizqu and hder to nullptr in the constructor of the celdaArbol class, and you do not initialize them in either branch of if (c > com->elemento), so they hold garbage values.
Also, your code can become more readable and less error-prone if you use proper C++ constructs:
celdaArbol *com = laraiz;
while( true ) {
    celdaArbol *&ptr = c > com->elemento ? com->hder : com->hizqu;
    if( ptr ) {
        com = ptr;
        continue;
    }
    ptr = new celdaArbol;
    ptr->elemento = c;
    ptr->padre = com;
    ptr->hder = ptr->hizqu = nullptr;
    break;
}
This code is logically equivalent to yours, except that it is shorter, easier to read, avoids duplication, and fixes your bug.
For every leaf node (except the root of the tree), you never initialize the left or right child pointer to anything but an unspecified value.
You probably meant to initialize them to nullptr.
Here's one example:
if (com->hizqu == 0) {
    com->hizqu = new celdaArbol;
    com->hizqu->elemento = c;
    com->hizqu->padre = com;
    poner = true;
    // What is the value of com->hizqu->hizqu?
    // What is the value of com->hizqu->hder?
}
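One hedged fix (a sketch; the member names are taken from the question, and T is the element type of the surrounding Arbol<T> template):

// Give the node a constructor, so every new celdaArbol starts with null
// links no matter which branch allocates it.
struct celdaArbol {
    T elemento;
    celdaArbol *padre;
    celdaArbol *hizqu;
    celdaArbol *hder;
    celdaArbol() : padre(nullptr), hizqu(nullptr), hder(nullptr) { }
};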

Destruction of boost::mutex fails in class destructor

To begin: I have read many posts about the occurrence of this error (e.g. boost::mutex::~mutex(): Assertion `!pthread_mutex_destroy(&m)' failed) and as far as I can see they do not apply in my case.
Additionally, I cannot use RAII as is often suggested in those posts.
Further, I cannot give a "minimal compiling example", since this error does not occur in one.
My Problem:
I have two mutexes in a class representing a FIFO list; the mutexes are used to lock the anchor pointer and the back pointer.
At the moment the class is destroyed, the destruction of back_mutex fails after anchor_mutex has already been destroyed. This is the error message:
boost::mutex::~mutex(): Assertion `!posix::pthread_mutex_destroy(&m)' failed.
The POSIX spec says the only two cases where pthread_mutex_destroy fails are EINVAL, if the mutex is invalid, and EBUSY, if the mutex is locked or referenced.
With that knowledge, I changed my destructor to the following for testing:
template<class T>
Fifo_Emut<T>::~Fifo_Emut()
{
    this->clear();
    anchor_mutex.lock();
    back_mutex.lock();
    anchor_mutex.unlock();
    back_mutex.unlock();
}
Despite that, the error still persists at the same position.
I would expect the locking and unlocking of the mutex to fail if either EINVAL or EBUSY applied. I can also guarantee that the thread calling the destructor is the last living thread; all others joined before.
As an additional test I put a return in the first line of push_back and pop_front; then the error does not occur. It also occurs when only push_back is used.
For (a kind of) completeness, here is the code using the mutexes in the push_back and pop_front methods:
/**
 * @brief appends a new list element to the back end of the list
 * @param[in] data the data to be copied into the list element
 **/
template<class T>
void Fifo_Emut<T>::push_back(const T& data)
{
    back_mutex.lock();
    if(back == NULL)
    {
        if(!anchor_mutex.try_lock())
        {
            // if we cannot acquire anchor_mutex we have to release back_mutex
            // and try again to avoid a deadlock
            back_mutex.unlock();
            return this->push_back(data);
        }
        if(anchor == NULL)
        {
            MutexListElement<T>* tmp;
            tmp = new MutexListElement<T>(data);
            anchor = tmp;
            back = tmp;
            boost::interprocess::ipcdetail::atomic_write32(&number_of_elements, 1);
            anchor_mutex.unlock();
            back_mutex.unlock();
        }
        // else: error, normally handled
    }
    else
    {
        MutexListElement<T>* tmp;
        back->lock();
        tmp = new MutexListElement<T>(back, data);
        boost::interprocess::ipcdetail::atomic_inc32(&number_of_elements);
        back->unlock();
        back = tmp;
        back_mutex.unlock();
    }
}
/**
 * @brief erases the first element of the queue
 * @returns a copy of the data held in the erased element
 **/
template<class T>
T Fifo_Emut<T>::pop_front(void)
{
    uint32_t elements = boost::interprocess::ipcdetail::atomic_read32(&number_of_elements);
    if(elements == 0)
    {
        return NULL;
    }
    if(elements == 1)
    {
        anchor_mutex.lock();
        back_mutex.lock();
        if(elements == boost::interprocess::ipcdetail::atomic_read32(&number_of_elements))
        {
            // still the same, so we can pop
            MutexListElement<T>* erase = anchor;
            erase->lock(); // we do not have to lock next since this is the only one
            anchor = NULL;
            back = NULL;
            boost::interprocess::ipcdetail::atomic_write32(&number_of_elements, 0);
            anchor_mutex.unlock();
            back_mutex.unlock();
            T tmp = erase->getData();
            erase->unlock();
            delete erase;
            return tmp;
        }
        else
        {
            // something has changed, so we have to try again
            back_mutex.unlock();
            anchor_mutex.unlock();
            return this->pop_front();
        }
    }
    else
    {
        anchor_mutex.lock();
        if(boost::interprocess::ipcdetail::atomic_read32(&number_of_elements) > 1)
        {
            // still more than one element in the queue, so we can pop
            // without changing the back pointer
            MutexListElement<T>* erase = anchor;
            erase->lock();
            (dynamic_cast<MutexListElement<T>*>(anchor->next))->lock();
            anchor = dynamic_cast<MutexListElement<T>*>(anchor->next);
            anchor->prev = NULL;
            boost::interprocess::ipcdetail::atomic_dec32(&number_of_elements);
            anchor->unlock();
            anchor_mutex.unlock();
            T tmp = erase->getData();
            erase->unlock();
            delete erase;
            return tmp;
        }
        else
        {
            // number of elements decreased to the other case during locking
            anchor_mutex.unlock();
            return this->pop_front();
        }
    }
}
Question:
How is it possible that a 'functioning' mutex fails on being destroyed?
Or am I overlooking something here? How can I get rid of this error? What have I done wrong in my code and assumptions?

Linked list sorting function in C++ doesn't stop properly

Many thanks in advance!
I've made several attempts to get this function working; there are mistakes in it, but I cannot catch them.
It seems to me that I've missed the logic of the sorting.
Could you point me 'where to go'?
/* node */
typedef struct client {
    int number;
    int balance;
    char lastName[20];
    char firstName[20];
    char phone[11];
    char email[20];
    struct client *prev;
    struct client *next;
    struct client *tmp;
} Client;

Client *firstc, *currentc, *newc, *a, *b, *tmp; /* pointers */
/* 'firstc'   first element in the list
 * 'currentc' current node
 * 'newc'     new node
 * 'a'        temporary pointer for the Sort function
 * 'b'        temporary pointer for the Sort function
 * 'tmp'      temporary pointer for the Sort function
 */
int counter = 0;
int cnum = 0; /* cnum gives unique account numbers, avoiding mis-entries */

/*--- Sort function ------*/
void Sort()
{
    int a = 0; /* variable to store a balance */
    int b = 0; /* variable to store a balance */
    if(firstc == NULL)
        printf("Database is empty"); /* message */
    else
        currentc = firstc;
    currentc->prev = NULL;
    tmp = NULL;
    while((currentc = currentc->next) != NULL)
    {   /* 1) compare two nodes;
           2) IF balance > */
        int a = currentc->balance;
        int b = currentc->next->balance; /* debugger stopped here... */
        if (a > b)
        //if(currentc->balance > currentc->next->balance)
        {   /* swap nodes */
            /* code using three pointers */
            tmp = currentc->next;
            currentc->next->next = currentc->next;
            currentc->next->next = tmp;
        }
        /* 3) move along the list */
        else
            currentc = currentc->next;
        /* 4) repeat to the end of the list */
    }
    currentc = firstc;
    listAll();
    return;
}
int b = currentc->next->balance; /* debugger stopped here... */
When currentc is pointing to the last item in the list, currentc->next will be null, so currentc->next->balance is an access through a null pointer.
Also, practices like making assignments in conditions, as in while((currentc = currentc->next) != NULL), will eventually come back to hurt you. In this case it seems you are skipping the first item in the list.
You probably meant:
if(firstc == NULL)
    printf("Database is empty"); /* message */
else
{   /* missing braces spotted by others */
    currentc = firstc;
    currentc->prev = NULL;
    tmp = NULL;
    for( ; currentc != NULL; currentc = currentc->next)
    {
        if(currentc->next == NULL)
            /* nothing to compare */
            break;
        ...
    }
}
Furthermore, the swapping code is swapping the wrong nodes:
tmp = currentc->next;
currentc->next->next = currentc->next;
currentc->next->next = tmp;
will almost (but not quite) swap the next node (b) with the one after it, instead of with (a). You need to use the prev pointer (however, since this looks like homework, I had better not tell you exactly how to do it). Also, you are initialising prev, but you need to keep it up to date in the loop. Actually, your three lines above are equivalent to:
tmp = currentc->next;
currentc->next->next = tmp;
so I think you meant something else.
The problem is that when currentc is the last node, currentc->next is null, thus currentc->next->balance makes it crash.
Add some validation like:
if (currentc->next == NULL)
and set b to a default/predefined value, or add some logic that decides whether you swap the nodes or not.

How can I synchronize three threads?

My app consists of the main process and two threads, all running concurrently and making use of three FIFO queues:
The FIFO queues are Qmain, Q1 and Q2. Internally, each queue uses a counter that is incremented when an item is put into the queue and decremented when an item is taken from it.
The processing involves two threads:
QMaster, which gets from Q1 and Q2, and puts into Qmain,
Monitor, which puts into Q2,
and the main process, which gets from Qmain and puts into Q1.
The QMaster thread loops, consecutively checking the counts of Q1 and Q2; if there are items in the queues, it gets them and puts them into Qmain.
The Monitor thread loops, obtaining data from external sources, packaging it, and putting it into Q2.
The main process of the app also runs a loop checking the count of Qmain; if there are any items, it gets one from Qmain on each iteration and processes it further. During this processing it occasionally puts an item into Q1 to be processed later (when it is taken from Qmain in turn).
The problem:
I've implemented everything as described above; it works for a random (short) time and then hangs.
I've traced the failure to the increment/decrement of the count of a FIFO queue (it may happen in any of them).
What I've tried:
Using three mutexes, QMAIN_LOCK, Q1_LOCK and Q2_LOCK, which I lock whenever any get/put operation is done on the relevant queue. Result: the app doesn't get going, it just hangs.
The main process must keep running at all times and must not be blocked on a 'read' (named pipes fail, socketpair fails).
Any advice?
I think I'm not using the mutexes properly; how should it be done?
(Any comments on improving the above design are also welcome.)
[edit] Below are the processes and the fifo-q template.
Where and how in this should I place the mutexes to avoid the problems described above?
main-process:
    ...
    start thread QMaster
    start thread Monitor
    ...
    while (!quit)
    {
        ...
        if (Qmain.count() > 0)
        {
            X = Qmain.get();
            process(X)
            delete X;
        }
        ...
        //at some random time:
        Q2.put(Y);
        ...
    }
Monitor:
{
    while (1)
    {
        //obtain & package data
        Q2.put(data)
    }
}

QMaster:
{
    while(1)
    {
        if (Q1.count() > 0)
            Qmain.put(Q1.get());
        if (Q2.count() > 0)
            Qmain.put(Q2.get());
    }
}
fifo_q:
    template < class X* > class fifo_q
    {
        struct item
        {
            X* data;
            item *next;
            item() { data=NULL; next=NULL; }
        };
        item *head, *tail;
        int count;
    public:
        fifo_q() { head=tail=NULL; count=0; }
        ~fifo_q() { clear(); /* deletes all items */ }
        void put(X x) { item i=new item(); /* ... adds to tail ... */ count++; }
        X* get() { X *d = h.data; /* ... deletes head ... */ count--; return d; }
        clear() { ... }
    };
Here is an example of how I would adapt the design and lock the queue access the POSIX way.
Note that I would wrap the mutex to use RAII (or use Boost threading), and that I would use std::deque or std::queue as the queue, but I am staying as close as possible to your code:
main-process:
    ...
    start thread Monitor
    ...
    while (!quit)
    {
        ...
        if (Qmain.count() > 0)
        {
            X = Qmain.get();
            process(X)
            delete X;
        }
        ...
        //at some random time:
        QMain.put(Y);
        ...
    }

Monitor:
{
    while (1)
    {
        //obtain & package data
        QMain.put(data)
    }
}

fifo_q:
    template < class X* > class fifo_q
    {
        struct item
        {
            X* data;
            item *next;
            item() { data=NULL; next=NULL; }
        };
        item *head, *tail;
        int count;
        pthread_mutex_t m;
    public:
        fifo_q() { head=tail=NULL; count=0; }
        ~fifo_q() { clear(); /* deletes all items */ }
        void put(X x)
        {
            pthread_mutex_lock(&m);
            item i=new item();
            /* ... adds to tail ... */
            count++;
            pthread_mutex_unlock(&m);
        }
        X* get()
        {
            pthread_mutex_lock(&m);
            X *d = h.data;
            /* ... deletes head ... */
            count--;
            pthread_mutex_unlock(&m);
            return d;
        }
        clear() { ... }
    };
Note too that the mutex still needs to be initialized, as in the example linked here, and that count() should also use the mutex.
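For instance, a sketch of that initialization, plus a locked count(), using the standard POSIX calls (the data member would need a name distinct from the method, e.g. count_):

fifo_q()  { head = tail = NULL; count_ = 0; pthread_mutex_init(&m, NULL); }
~fifo_q() { clear(); pthread_mutex_destroy(&m); }

int count()
{
    pthread_mutex_lock(&m);
    int c = count_;              // read the counter under the same mutex
    pthread_mutex_unlock(&m);
    return c;
}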
Use the debugger. When your solution with mutexes hangs, look at what the threads are doing and you will get a good idea about the cause of the problem.
What is your platform? In Unix/Linux you can use POSIX message queues (you can also use System V message queues, sockets, FIFOs, ...) so you don't need mutexes.
Learn about condition variables. From your description it looks like your QMaster thread is busy-looping, burning CPU.
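A hedged sketch of that idea: a blocking FIFO built on a POSIX condition variable, so consumers sleep instead of spinning (the int payload and all names are illustrative, not your fifo_q):

#include <pthread.h>
#include <queue>

class blocking_q {
    std::queue<int> q;
    pthread_mutex_t m;
    pthread_cond_t  cv;
public:
    blocking_q()  { pthread_mutex_init(&m, NULL); pthread_cond_init(&cv, NULL); }
    ~blocking_q() { pthread_cond_destroy(&cv); pthread_mutex_destroy(&m); }

    void put(int x)
    {
        pthread_mutex_lock(&m);
        q.push(x);
        pthread_cond_signal(&cv);    // wake one sleeping consumer
        pthread_mutex_unlock(&m);
    }

    int get()                        // blocks until an item is available
    {
        pthread_mutex_lock(&m);
        while (q.empty())            // loop guards against spurious wakeups
            pthread_cond_wait(&cv, &m);
        int x = q.front();
        q.pop();
        pthread_mutex_unlock(&m);
        return x;
    }
};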
One of your responses suggests you are doing something like:
Q2_mutex.lock()
Qmain_mutex.lock()
Qmain.put(Q2.get())
Qmain_mutex.unlock()
Q2_mutex.unlock()
but you probably want to do it like:
Q2_mutex.lock()
X = Q2.get()
Q2_mutex.unlock()
Qmain_mutex.lock()
Qmain.put(X)
Qmain_mutex.unlock()
and as Gregory suggested above, encapsulate the logic into the get/put.
EDIT: Now that you posted your code I wonder, is this a learning exercise?
Because I see that you are coding your own FIFO queue class instead of using the C++ standard std::queue. I suppose you have tested your class really well and the problem is not there.
Also, I don't understand why you need three different queues. It seems that the Qmain queue alone would be enough, and then you would not need the QMaster thread, which is indeed busy-waiting.
About the encapsulation: you can create a synch_fifo_q class that encapsulates the fifo_q class. Add a private mutex member; then the public methods (put, get, clear, count, ...) each take the form put(X) { lock m_mutex; m_fifo_q.put(X); unlock m_mutex; } (see the sketch below).
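A minimal sketch of that wrapper, assuming Boost threading and a fifo_q<X> whose put takes an X* and whose get returns an X* (the question's template declaration is a little muddled):

#include <boost/thread/mutex.hpp>

// Same interface as fifo_q, but every operation holds the mutex.
template <class X>
class synch_fifo_q {
    fifo_q<X>    m_fifo_q;   // the unsynchronized queue from the question
    boost::mutex m_mutex;
public:
    void put(X* x) { boost::mutex::scoped_lock lk(m_mutex); m_fifo_q.put(x); }
    X*   get()     { boost::mutex::scoped_lock lk(m_mutex); return m_fifo_q.get(); }
    int  count()   { boost::mutex::scoped_lock lk(m_mutex); return m_fifo_q.count(); }
    void clear()   { boost::mutex::scoped_lock lk(m_mutex); m_fifo_q.clear(); }
};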
Question: what would happen if you had more than one reader of the queue? Is it guaranteed that after a count() > 0 you can do a get() and receive an element?
I wrote a simple application below:
#include <queue>
#include <windows.h>
#include <process.h>
using namespace std;

queue<int> QMain, Q1, Q2;
CRITICAL_SECTION csMain, cs1, cs2;

unsigned __stdcall TMaster(void*)
{
    while(1)
    {
        if( Q1.size() > 0)
        {
            ::EnterCriticalSection(&cs1);
            ::EnterCriticalSection(&csMain);
            int i1 = Q1.front();
            Q1.pop();
            //use i1;
            i1 = 2 * i1;
            //end use;
            QMain.push(i1);
            ::LeaveCriticalSection(&csMain);
            ::LeaveCriticalSection(&cs1);
        }
        if( Q2.size() > 0)
        {
            ::EnterCriticalSection(&cs2);
            ::EnterCriticalSection(&csMain);
            int i1 = Q2.front();
            Q2.pop();
            //use i1;
            i1 = 3 * i1;
            //end use;
            QMain.push(i1);
            ::LeaveCriticalSection(&csMain);
            ::LeaveCriticalSection(&cs2);
        }
    }
    return 0;
}

unsigned __stdcall TMoniter(void*)
{
    while(1)
    {
        int irand = ::rand();
        if ( irand % 6 >= 3)
        {
            ::EnterCriticalSection(&cs2);
            Q2.push(irand % 6);
            ::LeaveCriticalSection(&cs2);
        }
    }
    return 0;
}

unsigned __stdcall TMain(void)
{
    while(1)
    {
        if (QMain.size() > 0)
        {
            ::EnterCriticalSection(&cs1);
            ::EnterCriticalSection(&csMain);
            int i = QMain.front();
            QMain.pop();
            i = 4 * i;
            Q1.push(i);
            ::LeaveCriticalSection(&csMain);
            ::LeaveCriticalSection(&cs1);
        }
    }
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    ::InitializeCriticalSection(&cs1);
    ::InitializeCriticalSection(&cs2);
    ::InitializeCriticalSection(&csMain);
    unsigned threadID;
    ::_beginthreadex(NULL, 0, &TMaster, NULL, 0, &threadID);
    ::_beginthreadex(NULL, 0, &TMoniter, NULL, 0, &threadID);
    TMain();
    return 0;
}
You should not lock a second mutex while you already hold one.
Since the question is tagged C++, I suggest implementing the locking inside the get/add logic of the queue class (e.g. using Boost locks), or writing a wrapper if your queue is not a class.
This allows you to simplify the locking logic.
Regarding the sources you added: the queue size check and the following put/get should be done as one transaction, otherwise another thread can modify the queue in between.
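For example, extending the synch_fifo_q idea sketched earlier, a try_get can do the check and the get in one critical section (names are illustrative):

// Check-and-pop as a single critical section, so no other thread can
// drain the queue between the emptiness check and the get().
bool try_get(X*& out)
{
    boost::mutex::scoped_lock lk(m_mutex);
    if (m_fifo_q.count() == 0)
        return false;            // empty: the caller may retry or do other work
    out = m_fifo_q.get();
    return true;
}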
Are you acquiring multiple locks simultaneously? This is generally something you want to avoid. If you must, ensure you always acquire the locks in the same order in each thread (this restricts your concurrency further, which is why you generally want to avoid it).
Other concurrency advice: are you acquiring the lock before reading the queue sizes? If you're using a mutex to protect the queues, then your queue implementation isn't concurrent and you probably need to acquire the lock before reading the queue size.
One problem may come from this rule: "The main process must continue running all the time, must not be blocked on a 'read'." How did you implement that? What is the difference between 'get' and 'read'?
The problem seems to be in your implementation, not in the logic. And as you stated, you should not be in any deadlock, because you are not acquiring another lock while already holding one.