Is checking current thread inside a function ok? - c++

Is it ok to check the current thread inside a function?
For example, if some non-thread-safe data structure is only altered by one thread, and there is a function which is called by multiple threads, it would be useful to have separate code paths depending on the current thread. If the current thread is the one that alters the data structure, it is OK to alter the data structure directly in the function. However, if the current thread is some other thread, the actual alteration would have to be deferred until it is safe to perform.
Or, would it be better to use a boolean parameter passed to the function to separate the different code paths?
Or do something totally different?
What do you think?
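For illustration, roughly the kind of check I mean, using std::this_thread::get_id() compared against the id of the thread that owns the structure (how to hand the deferred changes over safely is exactly the part I'm unsure about):

#include <thread>
#include <vector>

class Structure {
public:
    Structure() : owner_(std::this_thread::get_id()) {}   // constructed on the owning thread

    void alter(int value) {
        if (std::this_thread::get_id() == owner_) {
            data_.push_back(value);          // we are the altering thread: change directly
        } else {
            deferred_.push_back(value);      // some other thread: defer the change
                                             // (this hand-off itself still needs to be made safe)
        }
    }

private:
    std::thread::id owner_;
    std::vector<int> data_;
    std::vector<int> deferred_;
};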

You are not making much sense. You said a non-thread-safe data structure is only altered by one thread, but in the next sentence you talk about delaying changes made to that data structure by other threads. Make up your mind.
In general, I'd suggest wrapping the access to the data structure up with a critical section, or mutex.

It's possible to use such animals as reader/writer locks to differentiate between readers and writers of data structures, but for typical cases the performance advantage usually won't merit the additional complexity associated with their use.
From the way your question is stated, I'm guessing you're fairly new to multithreaded development. I highly suggest sticking with the simplest and most commonly used approaches for ensuring data integrity (most books/articles you read on the issue will mention the same uses for mutexes/critical sections). Multithreaded development is extremely easy to get wrong and can be difficult to debug. Also, what seems like the "optimal" solution very often doesn't buy you the huge performance benefit you might think. It's usually best to implement the simplest approach that will work, then worry about optimizing it after the fact.

There is a trick that could work if, as you said, the other threads only make changes once in a while, although it is still rather hackish:
make sure your "master" thread can't be interrupted by the other ones (higher priority, non-fair scheduling)
check which thread you are on
if it is the "master", just make the change
if it is another thread, disable scheduling (if needed, by disabling interrupts), make the change, then re-enable scheduling
test thoroughly to make sure there are no issues in your setup.
As you can see, if requirements change a little bit, this could turn out worse than using normal locks.

As mentioned, the simplest solution when two threads need access to the same data is to use some synchronization mechanism (e.g. a critical section or mutex).
If you already have synchronization in your design try to reuse it (if possible) instead of adding more. For example, if the main thread receives its work from a synchronized queue you might be able to have thread 2 queue the data structure update. The main thread will pick up the request and can update it without additional synchronization.
The queuing concept can be hidden from the rest of the design through the Active Object pattern. The active object may also be able to publish the data structure changes to other interested threads through the Observer pattern.
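A rough sketch of that queued-update idea, assuming the owning thread already runs a loop in which it can drain pending work (the class and function names here are illustrative, not from any particular library):

#include <functional>
#include <mutex>
#include <queue>

class DataOwner {
public:
    // Called from any thread: enqueue the change instead of applying it.
    void request_update(std::function<void()> update) {
        std::lock_guard<std::mutex> lock(m_);
        pending_.push(std::move(update));
    }

    // Called only on the owning thread, e.g. once per iteration of its loop.
    void drain() {
        std::queue<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lock(m_);
            local.swap(pending_);
        }
        while (!local.empty()) {      // apply the changes with no lock held
            local.front()();
            local.pop();
        }
    }

private:
    std::mutex m_;
    std::queue<std::function<void()>> pending_;
};

Thread 2 would call request_update with a lambda capturing the change; the main thread calls drain() at a point where it knows it is safe to apply updates.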

Related

Multithreading: when to call mutex.lock?

So, I have a ton of objects, each having several fields, including a C array, which are modified within their "Update()" method. Now I create several threads, each updating a section of these objects. As far as I understand, calling lock() before calling the update function would be useless, since this would essentially cause the updates to be called in a sequential order, just like they would be without multithreading. Now, these objects have pointers cross-referencing each other. Do I need to call lock every time ANY field is modified, or just before specific operations (like delete, re-initializing arrays, etc.)?
Do I need to call lock every time ANY field is modified, or just before specific operations (like delete, re-initializing arrays, etc.)?
Neither. You need to have a lock even to read, to make sure another thread isn't part way through modifying the data you're reading. You might want to use a many reader / one writer lock. I suggest you start by having a single lock (whether a simple mutex or the more elaborate multi-reader/writer lock) and get the code working so you can profile it and see whether you actually need more fine-grained locking, then you'll have a bit more experience and understanding of options and advice about how to manage that.
If you do need fine-grained locking, then the trick is to think about where the locks logically belong - for example - there could be one per object. You'll then need to learn about techniques for avoiding deadlocks. You should do some background reading too.
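As a starting point for the "single lock first" advice, here is a rough sketch using std::shared_mutex (C++17) so that, if profiling later justifies it, reads can share the lock while writes stay exclusive; Object and World are stand-ins for your own types:

#include <cstddef>
#include <shared_mutex>
#include <vector>

struct Object {
    int data[16];
    void Update() { for (int& d : data) ++d; }   // placeholder for the real update
};

class World {
public:
    explicit World(std::size_t n) : objects_(n) {}

    void update_one(std::size_t i) {
        std::unique_lock<std::shared_mutex> lock(m_);   // exclusive: one writer at a time
        objects_[i].Update();
    }

    int read_field(std::size_t i, std::size_t f) const {
        std::shared_lock<std::shared_mutex> lock(m_);   // shared: many concurrent readers
        return objects_[i].data[f];
    }

private:
    mutable std::shared_mutex m_;
    std::vector<Object> objects_;
};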
It depends on the consequences of the data changes you want to make. If each thread is, for example, changing well defined sub-blocks of data and each sub-block is entirely independent of all other sub-blocks then it might make sense to have a mutex per sub-block.
That would allow one thread to deal with one set of sub-blocks whilst another gets a different subset to process.
Having threads make changes without gaining a mutex lock first is going to lead to inconsistencies at best...
If the data and processing isn't subdivisible that way then you would probably have to start thinking about how you might handle whole objects in parallel, ie adopt a coarser granularity and one mutex per object. This is perhaps more likely to be possible - different objects are supposed to be independent of each other, so it should in theory be possible to process their data in parallel.
However the unavoidable truth is that some computer jobs require fast single thread performance. For that one starts seriously needing the right sort of supercomputer and perhaps some jolly long pipelines.

How to (unit) test if a function is lock-free?

I would like to add several unit tests to my code; also, as I load plug-ins, I don't always have access to the code I'm running.
The test I would really like to have is a check of whether the function I'm calling is lock-free.
Is there any hook, or way to test whether between a point A and a point B in my program there was a call to a non-lock-free function?
Another, less complicated, question is how to hook all calls to locking functions (like locks, system calls ...). I know how to hook calls to malloc on Windows but nothing else.
Thank you for your help
You can't.
You could substitute a different implementation of pthread_mutex_lock, but code could make direct calls to e.g. futex, and if you replace that, the code could still call it directly with syscall(SYS_futex,...). You could profile the code or use something like strace to detect all such calls, but that still wouldn't tell you whether the code implements its own custom spinlock in assembly.
I'm pretty sure you can't do that without instrumenting the locks, or something similar.
One could come up with a lot of scenarios where the call of a locking function causes different behaviour in testing [possibly only when a special "test mode" is enabled] than in production code - for example, add a sleep for 100 ms into the lock method, then try to use another function that takes the same lock and compare the time with the "no competition for the lock" case.
Or we could keep a count of calls to lock, and see if the count before and after the function is the same (or has increased by the expected amount, if the function is supposed to call lock a certain number of times).
But as for a generic way that isn't intrusive into the locking mechanism, I'm pretty sure it's impossible.
Of course, code-review and clear documentation as to what code calls locks and which doesn't would also be useful - and good reviewers that spot errors.
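As a concrete illustration of the substitution/counting idea above, here is a minimal LD_PRELOAD shim that counts calls to pthread_mutex_lock. Linux with glibc and C++11 or later is assumed; the file name and build line are just examples, and as noted above this catches nothing that bypasses pthreads (raw futex calls, hand-rolled spinlocks):

// lock_spy.cpp -- build with: g++ -std=c++11 -shared -fPIC lock_spy.cpp -o lock_spy.so -ldl
// Run the program under test with: LD_PRELOAD=./lock_spy.so ./program_under_test
#ifndef _GNU_SOURCE
#define _GNU_SOURCE              // needed for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <pthread.h>
#include <atomic>
#include <cstdio>

static std::atomic<unsigned long> g_lock_calls{0};

// Interpose pthread_mutex_lock: count the call, then forward to the real one.
// glibc declares this function noexcept in C++, so the definition must match.
extern "C" int pthread_mutex_lock(pthread_mutex_t* mutex) noexcept
{
    using real_fn = int (*)(pthread_mutex_t*);
    static real_fn real = reinterpret_cast<real_fn>(dlsym(RTLD_NEXT, "pthread_mutex_lock"));
    g_lock_calls.fetch_add(1, std::memory_order_relaxed);
    return real(mutex);
}

// Report the count when the process exits.
__attribute__((destructor))
static void report_lock_calls()
{
    std::fprintf(stderr, "pthread_mutex_lock called %lu times\n", g_lock_calls.load());
}

A test could also expose the counter through an exported function and compare it before and after calling the function under test, along the lines of the counting suggestion above.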
As the others have already answered it is not possible to test whether the algorithm is lock-free or not. However, it is possible to test that it behaves consistently in a multi-threaded environment. My experience in this area is only using a lock-free queue (which I wrote myself, but based on an academic paper) so my tests are based around a queue which may or may not be useful to you.
I used multiple threads to hammer the queue in the tests:
Thread safety: the queue must not crash under heavy load
Speed: how do the response times vary under a heavy load
Consistency: the queue mustn't lose items.
In my test, I also varied the number of readers and writers. The queue will behave differently depending on the ratio of readers to writers. More readers than writers will generally result in a nearly empty queue, whereas the inverse will result in a queue that continually expands until the writers stop writing.
Point 2 might be of interest to you, as you can generally tell whether the algorithm is lock-free based on the variance of response times under a heavy load. If response times remain fast under a heavy load, you can infer that the algorithm is lock-free. Or at least, if it isn't, it behaves as if it is.
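For what it's worth, roughly what such a hammer test can look like, assuming the queue under test exposes push(int) and try_pop(int&) (adapt to whatever interface you have); this covers points 1 and 3, and timing each pop under load would cover point 2:

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

template <typename Queue>
void hammer(Queue& q, int writers, int readers, int items_per_writer)
{
    std::atomic<long long> pushed_sum{0}, popped_sum{0};
    std::atomic<bool> done{false};
    std::vector<std::thread> writer_threads, reader_threads;

    for (int w = 0; w < writers; ++w)
        writer_threads.emplace_back([&] {
            for (int i = 1; i <= items_per_writer; ++i) {
                q.push(i);
                pushed_sum += i;
            }
        });

    for (int r = 0; r < readers; ++r)
        reader_threads.emplace_back([&] {
            int value;
            for (;;) {
                if (q.try_pop(value)) { popped_sum += value; continue; }
                if (done.load()) {
                    // no more pushes can happen: drain what is left, then stop
                    while (q.try_pop(value)) popped_sum += value;
                    break;
                }
                std::this_thread::yield();
            }
        });

    for (auto& t : writer_threads) t.join();   // all pushes are finished here
    done = true;
    for (auto& t : reader_threads) t.join();

    // Consistency check (point 3): nothing lost, nothing duplicated.
    assert(pushed_sum.load() == popped_sum.load());
}

Varying writers and readers reproduces the read/write ratios mentioned above; wrapping each try_pop in a timer and recording the distribution gives the response-time data for point 2.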

how to synchronize three dependent threads

If I have
1. mainThread: write data A,
2. Thread_1: read A and write it into a Buffer;
3. Thread_2: read from the Buffer.
how to synchronize these three threads safely, without much performance loss? Is there any existing solution to use? I use C/C++ on Linux.
IMPORTANT: the goal is to know the synchronization mechanism or algorithms for this particular case, not how mutex or semaphore works.
First, I'd consider the possibility of building this as three separate processes, using pipes to connect them. A pipe is (in essence) a small buffer with locking handled automatically by the kernel. If you do end up using threads for this, most of your time/effort will be spent on creating nearly an exact duplicate of the pipes that are already built into the kernel.
Second, if you decide to build this all on your own anyway, I'd give serious consideration to following a similar model. You don't need to be slavish about it, but I'd still think primarily in terms of a data structure to which one thread writes data and from which another reads the data. By strong preference, all the necessary thread locking would be built into that data structure, so most of the code in each thread stays quite simple: reading, processing, and writing data. The main difference from using normal Unix pipes would be that in this case you can maintain the data in a more convenient format, instead of all the reading and writing being in text.
As such, what I think you're looking for is basically a thread-safe queue. With that, nearly everything else involved borders on trivial (at least the threading part of it does -- the processing involved may not be, but at least building it with multiple threads isn't adding much to the complexity).
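A minimal sketch of such a thread-safe queue, using only standard C++11 (a condition variable plus a mutex; the names are mine, not from any particular library):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class SyncQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();
    }

    T pop() {                                   // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
};

With two such queues the three threads line up exactly like the pipe model: mainThread pushes A into the first queue, Thread_1 pops from it and pushes into the Buffer queue, and Thread_2 pops from the Buffer queue.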
It's hard to say how much experience with C/C++ threads you have. I hate to just point to a link but have you read up on pthreads?
https://computing.llnl.gov/tutorials/pthreads/
And for a shorter example with code and simple mutexes (the lock object you need to synchronize data):
http://students.cs.byu.edu/~cs460ta/cs460/labs/pthreads.html
I would suggest Boost.Thread for this purpose. It is quite a good framework with mutexes and semaphores, and it is multiplatform. Here you can find a very good tutorial about it.
How exactly to synchronize these threads is another problem and needs more information about your problem.
Edit: The simplest solution would be to use two mutexes -- one on A and a second on Buffer. You don't have to worry about deadlocks in this particular case. Just:
Enter mutex_A from MainThread; Thread1 waits for the mutex to be released.
Release mutex_A from MainThread; Thread1 enters mutex_A and mutex_Buffer, reads from A and writes it to Buffer.
Thread1 releases both mutexes. MainThread can enter mutex_A and write data, and Thread2 can enter mutex_Buffer and safely read data from Buffer.
This is obviously the simplest solution, and probably can be improved, but without more knowledge about the problem, this is the best I can come up with.
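Roughly what that scheme looks like in code (shown here with std::mutex; Boost.Thread mutexes work the same way, and A and Buffer stand in for your real data):

#include <mutex>
#include <vector>

std::mutex mutex_A, mutex_Buffer;
std::vector<int> A, Buffer;                          // placeholders for the real data

void main_thread_write(int value) {
    std::lock_guard<std::mutex> lock(mutex_A);       // Thread1 waits meanwhile
    A.push_back(value);
}

void thread1_copy() {
    std::lock_guard<std::mutex> lockA(mutex_A);      // always taken in this order,
    std::lock_guard<std::mutex> lockB(mutex_Buffer); // so no deadlock is possible
    Buffer.insert(Buffer.end(), A.begin(), A.end()); // read from A, write to Buffer
    A.clear();
}

void thread2_read(std::vector<int>& out) {
    std::lock_guard<std::mutex> lock(mutex_Buffer);
    out.swap(Buffer);                                // read from Buffer
}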

How to convert my project to become a multi threaded application

I have a project and I want to convert it into a multi-threaded application. What are the things that can be done to make it a multi-threaded application?
Please list the things to be done to convert it into a multithreaded application,
e.g. mutex locks on shared variables.
I was not able to find a question which lists all of those in a single place.
The project is in C.
A single-threaded application need not be concerned about being thread-safe.
This issue arises when you have multiple threads which are trying to access a commonly shared resource. At that time, you must be concerned.
So, no need to worry.
EDIT (after the question was edited):
You need to go through the following links.
Single threaded to multithreaded application
Single threaded to multithreaded application - What we need to consider ?
Advice - Single threaded to multithreaded application
Also good advice for converting a single-threaded to a multithreaded application. Check out:
Single threaded -> Multithreaded application :: Good advice.
The big issue is that, in general, when designing your application it is very difficult to choose single-threading and then later on add multi-threading. The choice is fundamental to the design idioms you are going to strive towards. Here's a brief but rough guide to some of the things you should be paying attention to and how to modify your code (note, none of these are set in stone; there's always a way around):
Remove all mutable global variables. I'd say this goes for single threaded applications too but that's just me.
Add "const" to as many variables as you can as a first pass to decide where there are state changes and take notes from the compilation errors. This is not to say "turn all your variables to const." It is just s simple hack to figure out where your problem areas are going to be.
For those items which are mutable and which will be shared (that is, you can't leave them as const without compilation warnings) put locks around them. Each lock should be logged.
Next, introduce your threads. You're probably about to suffer a lot of deadlocks, livelocks, race conditions, and what not as your single threaded application made assumptions about the way and order your application would run.
Start by paring away unneeded locks. That is, look to the mutable state which isn't shared amongst your threads. Those locks are superfluous and need to go.
Next, study your code. At this point, determining where your threading issues are is more art than science. Although there are decent principles about how to go about this, that's about all I can say.
If that sounds like too much effort, it's time to look towards the Actor model for concurrency. This would be akin to creating several different applications which call one another through a message passing scheme. I find that Actors are not only intuitive but also massively friendly to determining where and how you might encounter threading issues. When setting up Actors, it's almost impossible not to think about all the "what ifs."
Personally, when dealing with a single threaded to multi threaded conversion, I do as little as possible to meet project goals. It's just safer.
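To make the "put locks around shared mutable state" step concrete, here is a minimal sketch (written in C++ for brevity; in C the same idea is a pthread_mutex_t living in the same struct as the data it guards; the names are made up for illustration):

#include <mutex>
#include <string>

class SharedConfig {                  // was a mutable global in the single-threaded code
public:
    void set_name(std::string n) {
        std::lock_guard<std::mutex> lock(m_);
        name_ = std::move(n);
    }
    std::string name() const {
        std::lock_guard<std::mutex> lock(m_);
        return name_;                 // return a copy, never a reference into shared state
    }
private:
    mutable std::mutex m_;            // the lock lives right next to the data it protects
    std::string name_;
};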
This depends very heavily on exactly how you intend to use threads. What does your program do? Where do you want to use threads? What will those threads be doing?
You will need to figure out what resources these threads will be sharing, and apply appropriate locking. Since you're starting with a single-threaded application, it's a good idea to minimize the shared resources to make porting easier. For example, if you have a single GUI thread right now, and need to do some complex computations in multiple threads, spawn those threads, but don't have them directly touch any data for the GUI - instead, send an asynchronous message to the GUI thread (how you do this depends on the OS and GUI library) and have it handle any changes to GUI-thread data in a serialized fashion on the GUI thread itself.
As general advice, don't simply add threads willy-nilly. You should know exactly which variables and data structures are shared between threads, where they are accessed, and why. And you should be keeping said sharing to the minimum.
Without a much more detailed description of your application, it's nearly impossible to give you a complete answer.
It would be a good idea to give some insight into your understanding of threading as well.
However, the most important point is that each time a global variable is accessed or a pointer is used, there's a good chance you'll need to do it inside a mutex.
This wikipedia page should be a good start : http://en.wikipedia.org/wiki/Thread_safety

Testing concurrent data structure

What are some methods for testing concurrent data structures to make sure they behave correctly when accessed from multiple threads?
All of the other answers have focused on actually testing the code by putting it through its paces and actually running it in one form or another or politely saying "don't do it yourself, use an existing library".
This is great and all, but IMO the most important test (practical tests are important too) is to look at the code line by line and for every line of code ask "what happens if I get interrupted by another thread here?" Imagine another thread running just about any of the other lines/functions during this interruption. Do things still stay consistent? When competing for resources, do the other thread(s) block or spin?
This is what we did in school when learning about concurrency and it is a surprisingly effective approach. Bottom line, I feel that taking the time to prove to yourself that things are consistent and work as expected in all states is the first technique you should use when dealing with this stuff.
Concurrent systems are probabilistic and errors are often difficult to replicate. Therefore you need to run various input/output cases, each tested over time (hours, days, etc) in order to detect possible errors.
Tests for a concurrent data structure involve examining the container's state before and after expected events such as insert and delete.
Use a pre-existing, pre-tested library that meets your needs if possible.
Make sure that the code has appropriate self-consistency checks (preferably fast sanity checks), and run your code on as many different types of hardware as possible to help narrow down interesting timing problems.
Have multiple people peer review the code, preferably without a pre-explanation of how it's supposed to work. That way they have to grok the code which should help catch more bugs.
Set up a bunch of threads that do nothing but random operations on the data structures and check for consistency at some rate.
Start with the assumption that your calls to access/modify data are not thread safe and use locks to ensure only a single thread can access/modify any part of the data at a time. Only after you can prove to yourself that a specific type of access is safe outside of the lock by multiple threads at once should you move that code outside of the lock.
Assume worst case scenarios, e.g. that your code will stop right in the middle of some pointer manipulation or another critical point, and that another thread will encounter that data in mid-transition. If that would have a bad result, leave it within the lock.
I normally test these kinds of things by injecting sleep() calls at appropriate places in the distributed threads/processes.
For instance, to test a lock, put sleep(2) in all your threads at the point of contention, and spawn two threads roughly 1 second apart. The first one should obtain the lock, and the second should have to wait for it.
Most race conditions can be tested by extending this method, but if your system has too many components it may be difficult or impossible to know every possible condition that needs to be tested.
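A minimal sketch of that lock test, using standard C++11 threads (the 2 s / 1 s delays and the 500 ms threshold are arbitrary, and being timing-based the test is inherently a little fragile):

#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>

int main()
{
    std::mutex m;
    using clock = std::chrono::steady_clock;

    std::thread t1([&] {
        std::lock_guard<std::mutex> lock(m);                    // first thread takes the lock
        std::this_thread::sleep_for(std::chrono::seconds(2));   // ...and holds it for 2 s
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));       // start ~1 s later

    auto start = clock::now();
    {
        std::lock_guard<std::mutex> lock(m);                    // should block for roughly 1 s
    }
    auto waited = std::chrono::duration_cast<std::chrono::milliseconds>(clock::now() - start);

    t1.join();
    assert(waited.count() > 500);                               // the lock really made us wait
    return 0;
}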
Run your concurrent threads for one or a few days and see what happens. (Sounds strange, but finding race conditions is such a complex topic that simply trying it out is the best approach.)