How to check what progress guarantee a concurrent program follows? (C++)

I have been working on some concurrent programs for the past few weeks and was wondering whether there is any tool that can automatically detect what type of progress condition a program's operations guarantee, that is, whether they are wait-free, lock-free, or obstruction-free.
I searched online and didn't find any such tool.
Can anyone tell me how to deduce the progress condition of a program?

Assume that I have a program called a wait-freedom decider that can read a concurrent program describing a data structure and detect whether it is wait-free, i.e. "one that guarantees that any process can complete any operation in a finite number of steps", à la Herlihy's "Wait-Free Synchronization". Then, given a single-threaded program P, construct the following program and feed it into the wait-freedom decider:
class DataStructure:
    def operation(self):
        # insert the body of the arbitrary single-threaded program P here
        P
        pass
Now DataStructure.operation completes in a finite number of steps if and only if P halts.
This would solve the halting problem. That's impossible, so, by contradiction, we must not be able to create a wait-freedom decider.

What exactly is the meaning of "wait-free" in boost::lockfree?

I am reading the docs for spsc_queue and even after reading a bit elsewhere I am not completely convinced about the meaning of "wait-free".
What exactly do they mean here?
bool push(T const & t);
Pushes object t to the ringbuffer.
Note: Thread-safe and wait-free
I mean there must be some overhead for synchronisation. Is
some_spscqueue.push(x);
guaranteed to take a constant amount of time? How does it compare to a non-thread safe queue?
PS: don't worry, I am going to measure, but due to my naive ignorance I just can't imagine a synchronisation mechanism that does not involve some kind of waiting, and I am quite puzzled about what "wait-free" is supposed to tell me.
A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes.
(from the abstract of the Herlihy article).
See also Wikipedia, and all the other resources you immediately find by typing "wait free" into a search engine.
So wait-free doesn't mean no delay, it means you never enter a blocking wait-state, and can't be indefinitely blocked and/or starved.
In other words, wait has a specific technical meaning here: your thread is either parked (executing no instructions until something wakes it up) or looping until some external condition is satisfied (e.g. a spinlock). If it is never woken up, or if it wakes but always finds it cannot proceed and has to wait again, or if the loop never exits, then your thread is being starved and cannot make progress.
Every operation has some latency, and wait-free doesn't say anything about what that is.
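To make that "wait" concrete, here is a minimal sketch of my own (not boost code) contrasting an increment that is wait-free with one that spins on a flag:

#include <atomic>

std::atomic<int> counter{0};
std::atomic<bool> locked{false};

// Wait-free: fetch_add completes in a bounded number of steps no
// matter what the other threads are doing.
int increment_wait_free() {
    return counter.fetch_add(1, std::memory_order_relaxed);
}

// Not wait-free: if another thread holds the flag indefinitely, this
// loop spins forever; the thread is waiting on an external condition.
int increment_with_spinlock() {
    while (locked.exchange(true, std::memory_order_acquire))
        ;  // spin until the flag is released
    int old = counter.load(std::memory_order_relaxed);
    counter.store(old + 1, std::memory_order_relaxed);
    locked.store(false, std::memory_order_release);
    return old;
}

Both versions have some latency; the difference is that the first can never be starved by another thread.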
How does it compare to a non-thread safe queue?
It'll almost certainly be more expensive than a wholly-unsynchronized container, because you're still doing the extra synchronization work (assuming you really do access the container from only a single thread).
"Pushes object to the ring buffer" refers to the type of array buffer that the queue is implemented on. This is probably implemented using a circular array to store the objects in the queue. See for example:
https://en.wikipedia.org/wiki/Circular_buffer
and
http://opendatastructures.org/ods-python/2_3_ArrayQueue_Array_Based_.html
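To see why a push on such a structure can be wait-free, here is a minimal single-producer/single-consumer sketch of my own, assuming a fixed-capacity circular array (this is illustrative, not boost's actual implementation):

#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscRing {
    T buf_[N];
    std::atomic<std::size_t> head_{0};  // advanced only by the consumer
    std::atomic<std::size_t> tail_{0};  // advanced only by the producer
public:
    // Producer side: a bounded number of steps, so it is wait-free.
    bool push(const T& t) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        std::size_t next = (tail + 1) % N;  // wrap around the circular array
        if (next == head_.load(std::memory_order_acquire))
            return false;  // full: fail immediately instead of waiting
        buf_[tail] = t;
        tail_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer side, equally bounded.
    bool pop(T& out) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        if (head == tail_.load(std::memory_order_acquire))
            return false;  // empty
        out = buf_[head];
        head_.store((head + 1) % N, std::memory_order_release);
        return true;
    }
};

Note that push never loops or blocks: when the buffer is full it simply returns false, which is what lets it keep the wait-free guarantee.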

How do I multiplex many asynchronous state machines over a fixed number of threads with boost::statechart?

Suppose I have many asynchronous state machines defined with boost::statechart. The clearly documented mechanism for running multiple asynchronous state machines is to fix one or more of them to a thread. However, for my purpose I need to run many, many asynchronous state machines, and one per thread will not do. Moreover, the amount of work done by any given state machine is unpredictable, so assigning state machines to fixed threads will lead to imbalance.
Instead, I'd like to have a thread pool where an idle thread can pick up some amount of work off of a queue. Some care needs to be taken here so that events to a given state machine are delivered in order. Presumably the place to start would be something involving implementing the Scheduler and perhaps the FifoWorker concepts to do what I want as an alternative to the fifo_scheduler and fifo_worker classes, respectively. However, I wonder if this problem has already been solved by someone else, or if I'm just asking the wrong question.
Answering my own question, now that I've had some time to think about it. This is pretty simple:
Every state machine gets its own fifo_scheduler.
When we want the state machine to start running, a function is posted to the thread pool that:
1. checks scheduler.terminated() and stops if so;
2. runs scheduler(n), where n is some implementation-dependent bound on how many events to process in one go (we need that bound to prevent one machine from starving the others);
3. posts itself back to the thread pool.
This also ensures that events are delivered in order without resorting to other means.
This isn't the greatest answer, since the service function will occupy a space in the queue and be called even when there's no work to do.
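A sketch of that service function, assuming a boost::asio::thread_pool as the pool and assuming the scheduler outlives it (the name pump and the choice of asio are mine, not part of statechart):

#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <boost/statechart/fifo_scheduler.hpp>

// Pump at most n events from one machine's scheduler, then re-post
// ourselves so other machines get a turn on the pool's threads.
void pump(boost::asio::thread_pool& pool,
          boost::statechart::fifo_scheduler<>& scheduler,
          unsigned long n = 16) {
    if (scheduler.terminated())
        return;        // the machine is done: stop re-posting
    scheduler(n);      // process up to n queued events, in order
    boost::asio::post(pool, [&pool, &scheduler, n] {
        pump(pool, scheduler, n);
    });
}

Because each machine's events go through its own fifo_scheduler, and each scheduler has exactly one pump in flight at a time, per-machine ordering is preserved.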

How can I tell whether a boost::thread has finished execution or not?

I don't necessarily want to use join() and wait for a thread to finish; rather, I want to briefly check whether a thread is still executing or not. I did think of timed_join(0), but I'm not sure whether that's safe at all. Any advice here?
You should be using an eventing/notification mechanism to have the threads signal when they are done and an overall event to wait for all of them to complete. This is known as a countdown latch (see http://msdn.microsoft.com/en-us/magazine/cc163427.aspx#S1 ). You could build one using a boost condition variable and an int counter protected by a boost mutex.
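A minimal sketch of such a latch built exactly that way (the class name is mine):

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>

class CountdownLatch {
    boost::mutex mutex_;
    boost::condition_variable cond_;
    int count_;
public:
    explicit CountdownLatch(int count) : count_(count) {}

    // Each worker thread calls this once, as its last action.
    void count_down() {
        boost::lock_guard<boost::mutex> lock(mutex_);
        if (--count_ == 0)
            cond_.notify_all();
    }

    // The waiter blocks here until every worker has counted down.
    void wait() {
        boost::unique_lock<boost::mutex> lock(mutex_);
        while (count_ > 0)
            cond_.wait(lock);
    }
};

If you only want to peek rather than block, you could add a non-blocking query that checks the counter under the lock.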
An even simpler approach is to just join all the threads sequentially in a loop, as sketched below. There's no need to try to parallelize this: in the worst case the first join takes longer than every other thread, but by that point the remaining joins take zero time (because those threads are already done).
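That is, assuming the threads live in a std::vector<boost::thread> (illustrative):

#include <vector>
#include <boost/thread/thread.hpp>

void wait_for_all(std::vector<boost::thread>& workers) {
    for (boost::thread& t : workers)
        t.join();  // total wait is bounded by the slowest thread
}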
If you have hard dependencies on which threads are allowed to finish first and complex lifetime graphs, you should consider representing these with a class/data structure to raise the level of abstraction so that the outside waiter doesn't need to care directly about these details.

Changing Thread Task?

I know you cannot kill a boost thread, but can you change its task?
Currently I have an array of 8 threads. When a button is pressed, these threads are assigned a task. The task they are assigned is completely independent of the main thread and of the other threads. None of the threads has to wait or anything like that, so an interruption point is never reached.
What I need is to be able, at any time, to change the task that each thread is doing. Is this possible? I have tried looping through the array of threads and pointing each thread object at a new one, but of course that does nothing to the old threads.
I know you can interrupt pthreads, but I cannot find a working link to download the library to check it out.
A thread is not some sort of magical object that can be made to do things. It is a separate path of execution through your code. Your code cannot be made to jump arbitrarily around its codebase unless you specifically program it to do so. And even then, it can only be done within the rules of C++ (ie: calling functions).
You cannot kill a boost::thread because killing a thread would utterly wreck some of the most fundamental assumptions a programmer makes. You now have to take into account the possibility that the next line doesn't execute for reasons that you can neither predict nor prevent.
This isn't like exception handling, where C++ specifically requires destructors to be called, and you have the ability to catch exceptions and do special cleanup. You're talking about executing one piece of code, then suddenly inserting a call to some random function in the middle of already compiled code. That's not going to work.
If you want to be able to change the "task" of a thread, then you need to build that thread with "tasks" in mind. It needs to check every so often that it hasn't been given a new task, and if it has, then it switches to doing that. You will have to define when this switching is done, and what state the world is in when switching happens.
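A sketch of what "built with tasks in mind" could look like (all names here are illustrative, and the assigned task must itself return periodically so the loop can reach its check point):

#include <atomic>
#include <functional>
#include <mutex>
#include <thread>

class Worker {
public:
    Worker() : thread_([this] { run(); }) {}
    ~Worker() {
        running_ = false;
        thread_.join();
    }

    // Replace the current task; the switch happens at the next check.
    void assign(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(mutex_);
        task_ = std::move(task);
    }

private:
    void run() {
        while (running_) {
            std::function<void()> current;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                current = task_;  // pick up whatever is assigned right now
            }
            if (current)
                current();        // must be one bounded slice of work
            else
                std::this_thread::yield();
        }
    }

    std::mutex mutex_;              // guards task_
    std::function<void()> task_;
    std::atomic<bool> running_{true};
    std::thread thread_;            // declared last: starts after the rest
};

The crucial point is the one made above: the switch only happens at the check between invocations of the task, so you decide exactly what state the world is in when it happens.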

Testing concurrent data structure

What are some methods for testing concurrent data structures to make sure they behave correctly when accessed from multiple threads?
All of the other answers have focused on actually running the code and putting it through its paces in one form or another, or on politely saying "don't do it yourself, use an existing library".
This is great and all, but IMO the most important test (practical tests matter too) is to look at the code line by line and, for every line, ask "what happens if I get interrupted by another thread here?" Imagine another thread running just about any other line/function during that interruption. Do things stay consistent? When competing for resources, do the other threads block or spin?
This is what we did in school when learning about concurrency and it is a surprisingly effective approach. Bottom line, I feel that taking the time to prove to yourself that things are consistent and work as expected in all states is the first technique you should use when dealing with this stuff.
Failures in concurrent systems are timing-dependent and often difficult to reproduce. Therefore you need to run many input/output cases, each tested over a long period (hours, days, etc.), in order to detect possible errors.
Tests for a concurrent data structure involve examining the container's state before and after expected events such as insert and delete.
Use a pre-existing, pre-tested library that meets your needs if possible.
Make sure that the code has appropriate self-consistency checks (preferably fast sanity checks), and run your code on as many different types of hardware as possible to help narrow down interesting timing problems.
Have multiple people peer review the code, preferably without a pre-explanation of how it's supposed to work. That way they have to grok the code which should help catch more bugs.
Set up a bunch of threads that do nothing but perform random operations on the data structures and check for consistency at some rate (see the sketch after this list).
Start with the assumption that your calls to access/modify data are not thread safe and use locks to ensure only a single thread can access/modify any part of the data at a time. Only after you can prove to yourself that a specific type of access is safe outside of the lock by multiple threads at once should you move that code outside of the lock.
Assume worst case scenarios, e.g. that your code will stop right in the middle of some pointer manipulation or another critical point, and that another thread will encounter that data in mid-transition. If that would have a bad result, leave it within the lock.
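A sketch of such a random-operations stress test (a mutex-guarded std::deque stands in for the structure under test; every name here is illustrative):

#include <atomic>
#include <deque>
#include <mutex>
#include <random>
#include <thread>
#include <vector>

int main() {
    std::mutex m;
    std::deque<int> q;
    std::atomic<long long> pushed{0}, popped{0};

    std::vector<std::thread> threads;
    for (int w = 0; w < 8; ++w) {
        threads.emplace_back([&] {
            std::mt19937 rng(std::random_device{}());
            for (int i = 0; i < 100000; ++i) {
                std::lock_guard<std::mutex> lock(m);
                if (rng() % 2) {
                    q.push_back(i);
                    ++pushed;
                } else if (!q.empty()) {
                    q.pop_front();
                    ++popped;
                }
            }
        });
    }
    for (auto& t : threads)
        t.join();

    // Consistency check: everything pushed was either popped or is
    // still in the queue.
    return pushed == popped + static_cast<long long>(q.size()) ? 0 : 1;
}

To test a lock-free structure, you would drop the mutex and make the final check verify the structure's own invariants instead.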
I normally test these kinds of things by injecting sleep() calls at appropriate places in the threads/processes involved.
For instance, to test a lock, put sleep(2) in all your threads at the point of contention, and spawn two threads roughly 1 second apart. The first one should obtain the lock, and the second should have to wait for it.
Most race conditions can be tested by extending this method, but if your system has too many components it may be difficult or impossible to know every possible condition that needs to be tested.
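For instance, the two-thread lock test just described might look like this self-contained illustration (using std::thread and std::mutex for brevity):

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;

void worker(const char* name) {
    std::lock_guard<std::mutex> guard(m);
    std::cout << name << " acquired the lock\n";
    std::this_thread::sleep_for(std::chrono::seconds(2));  // hold the lock
    std::cout << name << " releasing the lock\n";
}

int main() {
    std::thread first(worker, "first");
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::thread second(worker, "second");  // arrives while "first" holds the lock
    first.join();
    second.join();
}

The deliberate sleeps force the contended interleaving to happen on every run instead of once in a blue moon; if the output ever interleaves in the wrong order, the lock is broken.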
Run your concurrent threads for a day or a few days and see what happens. (It sounds strange, but tracking down race conditions is such a complex topic that simply letting the code run for a long time is often the best approach.)