What is the right way to use QuantLib from multiple threads? - c++

I haven't been able to find any documentation explicitly describing QuantLib's thread-safety properties (or the absence of them!). The QuantLib configuration documentation lists a number of compile-time options related to thread safety, from which I infer that, by default, QuantLib is not entirely thread-safe.
In particular, there are:
QL_ENABLE_SESSIONS - "If defined, singletons will return different instances for different sessions. You will have to provide and link with the library a sessionId() function in namespace QuantLib, returning a different session id for each session. Undefined by default."
QL_ENABLE_THREAD_SAFE_OBSERVER_PATTERN - "If defined, a thread-safe (but less performant) version of the observer pattern will be used. You should define it if you want to use QuantLib via the SWIG layer within the JVM or .NET eco system or any environment with an async garbage collector. Undefined by default."
QL_ENABLE_SINGLETON_THREAD_SAFE_INIT - "Define this to make Singleton initialization thread-safe. Undefined by default. Not compatible with multiple sessions."
Which options should I use, and what other steps should I take, if I want to use QuantLib:
From multiple threads, but never at the same time (eg only when holding a global lock)?
From multiple threads at the same time, but not sharing any objects between them?
From multiple threads at the same time, sharing objects between them?
The natural structure for my application is a directed acyclic graph, with a constant stream of market data entering at one end, being used to compute and update various objects, and producing a stream of estimated prices leaving at the other end. I would very much like to be able to have multiple cores working in parallel, as some calculations take a long time.
The application will mostly be written in Java, with minimal parts in C++ to interface with QuantLib. I am not planning to use the SWIG wrapper. I am happy to do memory management of QuantLib objects without help from Java's garbage collector.
EDIT! If you decide to set any of these options, then on unix, do it with the corresponding flag to ./configure:
--enable-sessions
--enable-thread-safe-observer-pattern
--enable-thread-safe-singleton-init

The answer from SmallChess is not far from the truth. There are almost no locks or safety nets in QuantLib, so most people use multiprocessing if they need to distribute calculations over processors---and with good reason.
For those who want a bit more insight, and not as an endorsement of using multi-threading in QuantLib:
whatever else you do, if possible, enable the configuration switches that give you some safety, such as the one for thread-safe initialization of singletons (with a caveat, see below);
you might have multiple threads running at once if they don't share any objects, and if they don't try to modify globals such as the evaluation date (look for classes inheriting from Singleton for the list of globals).
if you need different evaluation dates for different threads, you can use another compilation switch to build QuantLib so that the singletons are not actually singletons, but there's an instance per thread. Caveat: this switch is not compatible with thread-safe initialization of singletons. You still shouldn't share objects between threads.
if you want to share objects, you might be in for more trouble than it's worth. The problems are: (1) any change to the underlying data of, say, a curve will trigger a recalculation; and (2) the recalculations (such as the bootstrap of a curve) are not executed right away, but only when needed, i.e., when some curve method is called. This means that you must keep the various steps separate: first, set the values of any quotes and make sure that there aren't any further changes; then, go around the curves and trigger recalculation, for instance by asking a discount factor at some date; finally, pass the curves to the instruments and price them. Changing a value during the calculations will result in a bootstrap being done in the middle of them; and not triggering full construction before calculations might lead to two instruments triggering two simultaneous bootstraps, which wouldn't end well for any concerned parties.
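For concreteness, the three phases described above might look roughly like the sketch below. This is only an illustration, not an official recipe: the quotes, curve and instruments are assumed to have been built and linked together elsewhere, and ext::shared_ptr may be boost::shared_ptr depending on your QuantLib version.

#include <ql/quantlib.hpp>
#include <iostream>
#include <vector>
using namespace QuantLib;

void updateAndPrice(const std::vector<ext::shared_ptr<SimpleQuote> >& quotes,
                    const std::vector<Real>& newValues,
                    const Handle<YieldTermStructure>& curve,
                    const std::vector<ext::shared_ptr<Instrument> >& instruments) {
    // Phase 1: set all quote values; no further changes after this point.
    for (Size i = 0; i < quotes.size(); ++i)
        quotes[i]->setValue(newValues[i]);

    // Phase 2: trigger the bootstrap now, on this thread, for instance by
    // asking the curve for a discount factor at some date.
    Date today = Settings::instance().evaluationDate();
    curve->discount(today + 1*Years);

    // Phase 3: only now price the instruments (this is the part you might
    // farm out to workers, provided nothing above is touched meanwhile).
    for (Size i = 0; i < instruments.size(); ++i)
        std::cout << instruments[i]->NPV() << std::endl;
}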
As I said, it's probably more trouble than it's worth. Ideally, don't share objects between threads and don't touch the globals. Otherwise, prefer multiprocessing.

Unfortunately, QuantLib is not thread-safe. None of the options you listed will help you. QuantLib is a free project; its focus is on the actual mathematical modelling, not on computational concerns such as thread safety.
You should definitely wrap QuantLib in a process. Multithreading is not encouraged for QuantLib unless you absolutely know what you're doing and have checked the relevant source code.

Related

How to prevent a specific function or method from being called from within an interrupt routine?

A hurdle in using C++ (instead of handling C++ as "C with a few extra features") in low-level embedded development is caused by interrupts. Often there are cases where time must be measured in individual CPU cycles, and some classes have no business being used there. Implicit conversions, copy constructors and similar things can cause issues when used within interrupts, especially if they happen behind the scenes. On the other hand, some carefully designed classes could be meaningfully used in interrupts.
This raises a question: is there an elegant way to signal the compiler that a specific class, function, or variable (or just code lines) should throw a compile error when used inside an interrupt? (to not have to rely so heavily on comments stating //Never call this from an interrupt!)
(If not, would it make sense to try getting something like this into the next C++ standard? I'm thinking of something like how the constexpr specifier informs the compiler what a function or variable can or cannot be used for. Would a nointerrupt specifier make sense, or does something already exist that provides that functionality?)
Like the main system context (and task functions, if you are using an OS), interrupt service routines are just other execution contexts. The question of which code to run in which context (methods of which subset of classes) is orthogonal to the technology of the (class) libraries themselves.
RTOS libraries often impose "limits" which APIs may be called in ISRs (or, in ISRs with a given priority) - but these limits are "only" required to maintain data consistency, and not for HW technical reasons imposed by the µC infrastructure.
You want to limit the use of the existing class libraries to certain context(s) because some class implementations consume more resources (time, stack memory space?) than you can afford in ISRs, but this is a matter of your deployment decisions in SW project architecture.
Therefore, the most elegant solution from my point of view would be to go back to the SW architecture level, create a neat piece of documentation (a UML deployment diagram may already help you quite a lot), and specify in which contexts you want which code to be executed. You will probably find that in ISR contexts only a tiny fraction of your code is to be executed, so it may be feasible to mark the subset of classes (or subsets of methods of classes) that are supposed to run in ISR contexts from that top-down perspective. You can back up this procedure by starting/finishing every ISR by setting/resetting an ISR-specific flag in a dedicated shared memory area, and by adding assert statements to the critical classes/methods that detect whether the code is being run in an unwanted ISR context.
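To make that last idea concrete, here is a minimal, hypothetical sketch. The flag, the ISR name and the class are made up; on a real target you might instead query a core register (e.g. IPSR on Cortex-M) or use whatever "am I in an interrupt?" facility the vendor provides.

#include <cassert>

// Hypothetical shared flag marking "we are currently inside an ISR".
static volatile bool g_inIsr = false;

extern "C" void TIMER0_IRQHandler() {   // hypothetical ISR name
    g_inIsr = true;
    // ... only ISR-safe work here ...
    g_inIsr = false;
}

class HeavyResource {
public:
    void expensiveOperation() {
        // Never call this from an interrupt!
        assert(!g_inIsr);
        // ... slow work, heap allocation, etc. ...
    }
};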

C++ threading vs. visibility issues - what's the common engineering practice?

From my studies I know the concepts of starvation, deadlock, fairness and other concurrency issues. However, theory differs from practice, to an extent, and real engineering tasks often involve greater detail than academic blah blah...
As a C++ developer I've been concerned about threading issues for a while...
Suppose you have a shared variable x which refers to some larger portion of the program's memory. The variable is shared between two threads A and B.
Now, if we consider read/write operations on x from both A and B threads, possibly at the same time, there is a need to synchronize those operations, right? So the access to x needs some form of synchronization which can be achieved for example by using mutexes.
Now let's consider another scenario where x is initially written by thread A, then passed to thread B (somehow), and that thread only reads x. Thread B then produces a response to x, called y, and passes it back to thread A (again, somehow). My question is: what synchronization primitives should I use to make this scenario thread-safe? I've read about atomics and, more importantly, memory fences - are these the tools I should rely on?
This is not a typical scenario with a "critical section". Instead, some data is passed between threads with no possibility of concurrent writes to the same memory location. So, after being written, the data should first be "flushed" somehow, so that the other thread can see it in a valid and consistent state before reading. What is this called in the literature - is it "visibility"?
What about pthread_once and its Boost/std counterpart, i.e. call_once? Does it help if both x and y are passed between threads through a sort of "message queue" which is accessed by means of the "once" functionality? AFAIK it serves as a sort of memory fence, but I couldn't find any confirmation of this.
What about CPU caches and their coherency? What should I know about that from the engineering point of view? Does such knowledge help in the scenario mentioned above, or any other scenario commonly encountered in C++ development?
I know I might be mixing a lot of topics but I'd like to better understand what is the common engineering practice so that I could reuse the already known patterns.
This question is primarily related to the situation in C++03 as this is my daily environment at work. Since my project mainly involves Linux then I may only use pthreads and Boost, including Boost.Atomic. But I'm also interested if anything concerning such matters has changed with the advent of C++11.
I know the question is abstract and not that precise but any input could be useful.
you have a shared variable x
That's where you've gone wrong. Threading is MUCH easier if you hand off ownership of work items using some sort of threadsafe consumer-producer queue, and from the perspective of the rest of the program, including all the business logic, nothing is shared.
Message passing also helps prevent cache collisions (because there is no true sharing -- except of the producer-consumer queue itself, and that has trivial effect on performance if the unit of work is large -- and organizing the data into messages help reduce false sharing).
Parallelism scales best when you separate the problem into subproblems. Small subproblems are also much easier to reason about.
You seem to already be thinking along these lines, but no, threading primitives like atomics, mutexes, and fences are not what application code should be using directly when it passes messages. Find a real queue implementation (queue, circular ring, Disruptor - they go under different names but all meet the same need). The primitives will be used inside the queue implementation, but never by application code.
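For illustration, a minimal blocking producer-consumer queue might look like the sketch below (shown with C++11 std::mutex/std::condition_variable; Boost.Thread offers the same primitives under boost:: for a C++03 code base). Real implementations add shutdown, bounding and so on; this is only the core idea.

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class BlockingQueue {
    std::queue<T> items_;
    std::mutex mutex_;
    std::condition_variable notEmpty_;
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            items_.push(std::move(item));
        }   // release the lock before waking a consumer
        notEmpty_.notify_one();
    }
    T pop() {                      // blocks until an item is available
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [this] { return !items_.empty(); });
        T item = std::move(items_.front());
        items_.pop();
        return item;
    }
};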

The actor model: Why is Erlang/OTP special? Could you use another language?

I've been looking into learning Erlang/OTP, and as a result, have been reading (okay, skimming) about the actor model.
From what I understand, the actor model is simply a set of functions (run within lightweight threads called "processes" in Erlang/OTP), which communicate with each other only via message passing.
This seems fairly trivial to implement in C++, or any other language:
class BaseActor {
    std::queue<BaseMessage*> messages;
    CriticalSection messagecs;
    BaseMessage* Pop();
public:
    void Push(BaseMessage* message)
    {
        auto scopedlock = messagecs.AquireScopedLock();
        messages.push(message);   // push onto the queue, not the lock
    }
    virtual void ActorFn() = 0;
    virtual ~BaseActor() {}
};
With each of your processes being an instance of a derived BaseActor. Actors communicate with each other only via message-passing. (namely, pushing). Actors register themselves with a central map on initialization which allows other actors to find them, and allows a central function to run through them.
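A registry along those lines could be as simple as the following sketch. It is purely illustrative and just continues the hypothetical types from the code above (CriticalSection and AquireScopedLock are the same made-up lock interface).

#include <map>
#include <string>

class ActorRegistry {
    std::map<std::string, BaseActor*> actors;
    CriticalSection cs;
public:
    void Register(const std::string& name, BaseActor* actor) {
        auto scopedlock = cs.AquireScopedLock();
        actors[name] = actor;
    }
    BaseActor* Find(const std::string& name) {
        auto scopedlock = cs.AquireScopedLock();
        auto it = actors.find(name);
        return it == actors.end() ? nullptr : it->second;
    }
};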
Now, I understand I'm missing, or rather, glossing over one important issue here, namely:
lack of yielding means a single Actor can unfairly consume excessive time. But are cross-platform coroutines the primary thing that makes this hard in C++? (Windows for instance has fibers.)
Is there anything else I'm missing, though, or is the model really this obvious?
The C++ code does not deal with fairness, isolation, fault detection or distribution which are all things which Erlang brings as part of its actor model.
No actor is allowed to starve any other actor (fairness)
If one actor crashes, it should only affect that actor (isolation)
If one actor crashes, other actors should be able to detect and react to that crash (fault detection)
Actors should be able to communicate over a network as if they were on the same machine (distribution)
Also, the BEAM SMP emulator dynamically schedules the actors, moving them to whichever core currently has the least utilization, and it hibernates the threads on certain cores if they are no longer needed.
In addition all the libraries and tools written in Erlang can assume that this is the way the world works and be designed accordingly.
These things are not impossible to do in C++, but they get increasingly hard once you add the fact that Erlang works on almost all of the major hardware and OS configurations.
edit: Just found a description by Ulf Wiger of what he sees as Erlang-style concurrency.
I don't like to quote myself, but from Virding's First Rule of Programming
Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.
With respect to Greenspun. Joe (Armstrong) has a similar rule.
The problem is not to implement actors, that's not that difficult. The problem is to get everything working together: processes, communication, garbage collection, language primitives, error handling, etc ... For example using OS threads scales badly so you need to do it yourself. It would be like trying to "sell" an OO language where you can only have 1k objects and they are heavy to create and use. From our point of view concurrency is the basic abstraction for structuring applications.
Getting carried away so I will stop here.
This is actually an excellent question, and has received excellent answers that perhaps are yet unconvincing.
To add shade and emphasis to the other great answers already here, consider what Erlang takes away (compared to traditional general purpose languages such as C/C++) in order to achieve fault-tolerance and uptime.
First, it takes away locks. Joe Armstrong's book lays out this thought experiment: suppose your process acquires a lock and then immediately crashes (a memory glitch causes the process to crash, or the power fails to part of the system). The next time a process waits for that same lock, the system has just deadlocked. This could be an obvious lock, as in the AquireScopedLock() call in the sample code; or it could be an implicit lock acquired on your behalf by a memory manager, say when calling malloc() or free().
In any case, your process crash has now halted the entire system from making progress. Fini. End of story. Your system is dead. Unless you can guarantee that every library you use in C/C++ never calls malloc and never acquires a lock, your system is not fault tolerant. Erlang systems can and do kill processes at will when under heavy load in order to make progress, so at scale your Erlang processes must be killable (at any single point of execution) in order to maintain throughput.
There is a partial workaround: using leases everywhere instead of locks, but you have no guarantee that all the libraries you utilize also do this. And the logic and reasoning about correctness gets really hairy quickly. Moreover leases recover slowly (after the timeout expires), so your entire system just got really slow in the face of failure.
Second, Erlang takes away static typing, which in turn enables hot code swapping and running two versions of the same code simultaneously. This means you can upgrade your code at runtime without stopping the system. This is how systems stay up for nine 9's or 32 msec of downtime/year. They are simply upgraded in place. Your C++ functions will have to be manually re-linked in order to be upgraded, and running two versions at the same time is not supported. Code upgrades require system downtime, and if you have a large cluster that cannot run more than one version of code at once, you'll need to take the entire cluster down at once. Ouch. And in the telecom world, not tolerable.
In addition, Erlang takes away shared memory and shared garbage collection; each lightweight process is garbage collected independently. This is a simple extension of the first point, but it emphasizes that for true fault tolerance you need processes that are not interlocked in terms of dependencies. It also means that for big systems your GC pauses are tolerable (small, instead of pausing for half an hour while an 8 GB GC completes, as in Java).
There are actual actor libraries for C++:
http://actor-framework.org/
http://www.theron-library.com/
And a list of some libraries for other languages.
It is a lot less about the actor model and a lot more about how hard it is to properly write something analogous to OTP in C++. Also, different operating systems provide radically different debugging and system tooling, and Erlang's VM and several language constructs support a uniform way of figuring out just what all those processes are up to which would be very hard to do in a uniform way (or maybe do at all) across several platforms. (It is important to remember that Erlang/OTP predates the current buzz over the term "actor model", so in some cases these sort of discussions are comparing apples and pterodactyls; great ideas are prone to independent invention.)
All this means that while you certainly can write an "actor model" suite of programs in another language (I know, I have done this for a long time in Python, C and Guile without realizing it before I encountered Erlang, including a form of monitors and links, and before I'd ever heard the term "actor model"), understanding how the processes your code actually spawns and what is happening amongst them is extremely difficult. Erlang enforces rules that an OS simply can't without major kernel overhauls -- kernel overhauls that would probably not be beneficial overall. These rules manifest themselves as both general restrictions on the programmer (which can always be gotten around if you really need to) and basic promises the system guarantees for the programmer (which can be deliberately broken if you really need to also).
For example, it enforces that two processes cannot share state to protect you from side effects. This does not mean that every function must be "pure" in the sense that everything is referentially transparent (obviously not, though making as much of your program referentially transparent as practical is a clear design goal of most Erlang projects), but rather that two processes aren't constantly creating race conditions related to shared state or contention. (This is more what "side effects" means in the context of Erlang, by the way; knowing that may help you decipher some of the discussion questioning whether Erlang is "really functional or not" when compared with Haskell or toy "pure" languages.)
On the other hand, the Erlang runtime guarantees delivery of messages. This is something sorely missed in an environment where you must communicate purely over unmanaged ports, pipes, shared memory and common files which only the OS kernel manages (and OS kernel management of these resources is necessarily extremely minimal compared to what the Erlang runtime provides). This doesn't mean that Erlang guarantees RPC (anyway, message passing is not RPC, nor is it method invocation!), it doesn't promise that your message is addressed correctly, and it doesn't promise that a process you're trying to send a message to exists or is alive, either. It just guarantees delivery if the thing you're sending to happens to be valid at that moment.
Built on this promise is the promise that monitors and links are accurate. And based on that the Erlang runtime makes the entire concept of "network cluster" sort of melt away once you grasp what is going on with the system (and how to use erl_connect...). This permits you to hop over a set of tricky concurrency cases already, which gives one a big head start on coding for the successful case instead of getting mired in the swamp of defensive techniques required for naked concurrent programming.
So it's not really about needing Erlang, the language; it's about the runtime and OTP already existing, being expressed in a rather clean way, and implementing anything close to it in another language being extremely hard. OTP is just a hard act to follow. In the same vein, we don't really need C++ either; we could just stick to raw binary input, Brainfuck, and consider Assembler our high-level language. We also don't need trains or ships, as we all know how to walk and swim.
All that said, the VM's bytecode is well documented, and a number of alternative languages have emerged that compile to it or work with the Erlang runtime. If we break the question into a language/syntax part ("Do I have to understand Moon Runes to do concurrency?") and a platform part ("Is OTP the most mature way to do concurrency, and will it guide me around the trickiest, most common pitfalls to be found in a concurrent, distributed environment?") then the answer is ("no", "yes").
Casablanca is another new kid on the actor model block. A typical asynchronous accept looks like this:
PID replyTo;
NameQuery request;
accept_request().then([=](std::tuple<NameQuery, PID> request)
{
    if (std::get<0>(request) == FirstName)
        std::get<1>(request).send("Niklas");
    else
        std::get<1>(request).send("Gustafsson");
});
(Personally, I find that CAF does a better job at hiding the pattern matching behind a nice interface.)

performance penalty of message passing as opposed to shared data

There is a lot of buzz these days about not using locks and instead using message-passing approaches like Erlang's, or about using immutable data structures as in functional programming vs. C++/Java.
But what I am concerned with is the following:
AFAIK, Erlang does not guarantee message delivery. Messages might be lost. Won't the algorithm and code become bloated and complicated again if you have to worry about loss of messages? Whatever distributed algorithm you use must not depend on guaranteed delivery of messages.
What if the Message is a complicated object? Isn't there a huge performance penalty in copying and sending the messages vs. say keeping it in a shared location (like a DB that both processes can access)?
Can you really totally do away with shared states? I don't think so. For e.g. in a DB, you have to access and modify the same record. You cannot use message passing there. You need to have locking or assume Optimistic concurrency control mechanisms and then do rollbacks on errors. How does Mnesia work?
Also, it is not the case that you always need to worry about concurrency. Any project will also have a large piece of code that doesn't have to do anything with concurrency or transactions at all (but they do have performance and speed as a concern). A lot of these algorithms depend on shared states (that's why pass-by-reference or pointers are so useful).
Given this fact, writing programs in Erlang etc. is a pain, because you are prevented from doing any of these things. Maybe it makes programs robust, but for things like solving a linear programming problem or computing the convex hull, performance is more important, and forcing immutability etc. on an algorithm when it has nothing to do with concurrency/transactions is a poor decision. Isn't it?
That's real life : you need to account for this possibility regardless of the language / platform. In a distributed world (the real world), things fail: live with it.
Of course there is a cost: nothing is free in our universe. But shouldn't you use another medium (e.g. file, db) instead of shuttling "big objects" in communication pipes? You can always use "message" to refer to "big objects" stored somewhere.
Of course not: the idea behind functional programming / Erlang OTP is to "isolate" as much as possible the areas where "shared state" is manipulated. Furthermore, having clearly marked places where shared state is mutated helps testability & traceability.
I believe you are missing the point: there is no such thing as a silver bullet. If your application cannot be successfully built using Erlang then don't do it. You can always build some other part of the overall system in another fashion, i.e. use a different language / platform. Erlang is no different from any other language in this respect: use the right tool for the right job.
Remember: Erlang was designed to help solve concurrent, asynchronous and distributed problems. It isn't optimized for working efficiently on a shared block of memory for example... unless you count interfacing with nif functions working on shared blocks part of the game :-)
Real-world systems are always hybrids anyway: I don't believe the modern paradigms try, in practice, to get rid of mutable data and shared state.
The objective, however, is not to need concurrent access to this shared state. Programs can be divided into the concurrent and the sequential, and use message-passing and the new paradigms for the concurrent parts.
Not all code will get the same investment: There is a concern that threads are fundamentally "considered harmful". Something like Apache may need traditional concurrent threads, and a key piece of technology like that may be carefully refined over a period of years so it can blast away with fully concurrent shared state. Operating system kernels are another example where "solve the problem no matter how expensive it is" may make sense.
There is no benefit to fast-but-broken: But for new code, or code that doesn't get so much attention, it may be the case that it simply isn't thread-safe, and it will not handle true concurrency, and so the relative "efficiency" is irrelevant. One way works, and one way doesn't.
Don't forget testability: Also, what value can you place on testing? Thread-based shared-memory concurrency is simply not testable. Message-passing concurrency is. So now you have the situation where you can test one paradigm but not the other. So, what is the value in knowing that the code has been tested? The danger in not even knowing if the other code will work in every situation?
A few comments on the misunderstanding you have of Erlang:
Erlang guarantees that messages will not be lost, and that they will arrive in the order sent. A basic error situation is that machine A can not speak to machine B. When that happens process monitors and links will trigger, and system node-down messages will be sent to the processes that registered for it. Nothing will be silently dropped. Processes will "crash" and supervisors (if any) tries to restart them.
Objects cannot be mutated, so they are always copied. One way to secure immutability is by copying values to the other Erlang process's heap. Another way is to allocate objects on a shared heap, send references to them in messages, and simply not have any operations that mutate them. Erlang does the first, for performance! Real-time behaviour suffers if you need to stop all processes to garbage-collect a shared heap. Ask Java.
There is shared state in Erlang. Erlang is not proud of it, but it is pragmatic about it. One example is the local process registry which is a global map that maps a name to a process so that system processes can be restarted and claim their old name. Erlang just tries to avoid shared state if it possibly can. ETS tables that are public are another example.
Yes, sometimes Erlang is too slow. This happens to all languages. Sometimes Java is too slow. Sometimes C++ is too slow. Just because a tight loop in a game had to drop down to assembly to kick off some serious SIMD-based vector mathematics, you can't deduce that everything should be written in assembly because it is the only language that is fast when it matters. What matters is being able to write systems that have good performance, and Erlang manages quite well. See benchmarks on yaws or rabbitmq.
Your facts are not facts about Erlang. Even if you think Erlang programming is a pain, you will find other people creating some awesome software thanks to it. You should attempt writing an IRC server in Erlang, or something else very concurrent. Even if you're never going to use Erlang again, you will have learned to think about concurrency another way. But of course you will, because Erlang is awesomely easy.
Those that do not understand Erlang are doomed to re-implement it badly.
Okay, the original was about Lisp, but... it's true!
There are some implicit assumptions in your questions - you assume that all the data can fit on one machine and that the application is intrinsically localised to one place.
What happens if the application is so large it cannot fit on one machine? What happens if the application outgrows one machine?
You don't want to have one way to program an application if it fits on one machine and a completely different way of programming it as soon as it outgrows one machine.
What happens if you want to make a fault-tolerant application? To make something fault-tolerant you need at least two physically separated machines and no sharing.
When you talk about sharing and databases you omit to mention that things like MySQL Cluster achieve fault-tolerance precisely by maintaining synchronised copies of the data on physically separated machines - there is a lot of message passing and copying that you don't see on the surface - Erlang just exposes this.
The way you program should not suddenly change to accommodate fault-tolerance and scalability.
Erlang was designed primarily for building fault-tolerant applications.
Shared data on a multi-core has its own set of problems - when you access shared data you need to acquire a lock - if you use a global lock (the easiest approach) you can end up stopping all the cores while you access the shared data. Shared data access on a multicore can also be problematic due to caching: if the cores have local data caches, then accessing "far away" data (in some other processor's cache) can be very expensive.
Many problems are intrinsically distributed and the data is never available in one place at the same time - these kinds of problems fit well with the Erlang way of thinking.
In a distributed setting "guaranteeing message delivery" is impossible - the destination machine might have crashed. Erlang therefore cannot guarantee message delivery - it takes a different approach: the system will tell you if it failed to deliver a message (but only if you have used the link mechanism) - then you can write your own custom error recovery.
For pure number crunching Erlang is not appropriate - but in a hybrid system Erlang is good at managing how computations get distributed to available processors, so we see a lot of systems where Erlang manages the distribution and fault-tolerant aspects of the problem, but the problem itself is solved in a different language.
For e.g. in a DB, you have to access and modify the same record
But that is handled by the DB. As a user of the database, you simply execute your query, and the database ensures it is executed in isolation.
As for performance, one of the most important things about eliminating shared state is that it enables new optimizations. Shared state is not particularly efficient. You get cores fighting over the same cache lines, and data has to be written through to memory where it could otherwise stay in a register or in CPU cache.
Many compiler optimizations rely on absence of side effects and shared state as well.
You could say that a stricter language guaranteeing these things requires more optimizations to be performant than something like C, but it also makes these optimizations much much easier for the compiler to implement.
Many concerns similar to concurrency issues arise in single-threaded code. Modern CPUs are pipelined, execute instructions out of order, and can run 3-4 of them per cycle. So even in a single-threaded program, it is vital that the compiler and the CPU are able to determine which instructions can be interleaved and executed in parallel.
For correctness, shared is the way to go, and keep the data as normalized as possible. For immediacy, send messages to inform of changes, but always back them up with polling. Messages get dropped, duplicated, re-ordered, delayed - don't rely on them.
If speed is what you're worried about, first do it single-thread and tune the daylights out of it. Then if you've got multiple cores and know how to split up the work, use parallelism.
Erlang provides supervisors and gen_server callbacks for synchronous calls, so you will know about it if a message isn't delivered: either the gen_server call returns a timeout, or your whole node will be brought down and up if the supervisor is triggered.
Usually, if the processes are on the same node, message-passing languages optimise away the data copying, so it's almost like shared memory - except for the case where the object is changed and then used by both afterwards, which cannot safely be done with shared memory either anyway.
There is some state which is kept by processes by passing it around to themselves in recursive tail calls, and some state can of course be passed through messages. I don't use Mnesia much, but it is a transactional database, so once you have passed the operation to Mnesia (and it has returned) you are pretty much guaranteed it will go through.
Which is why it is easy to tie such applications into Erlang with the use of ports or drivers. The easiest are ports; a port is much like a unix pipe, though I think performance isn't that great... and as said, message passing usually ends up just being pointer passing anyway, as the VM/compiler optimises the memory copy out.

What are the "things to know" when diving into multi-threaded programming in C++

I'm currently working on a wireless networking application in C++ and it's coming to a point where I'm going to want to multi-thread pieces of software under one process, rather than have them all in separate processes. Theoretically, I understand multi-threading, but I've yet to dive in practically.
What should every programmer know when writing multi-threaded code in C++?
I would focus on designing the thing to be as partitioned as possible, so that you have a minimal amount of shared things across threads. If you make sure you don't have statics and other resources shared among threads (other than those that you would be sharing if you designed this with processes instead of threads) you will be fine.
Therefore, while yes, you have to have in mind concepts like locks, semaphores, etc, the best way to tackle this is to try to avoid them.
I am no expert at all in this subject. Just some rules of thumb:
Design for simplicity, bugs really are hard to find in concurrent code even in the simplest examples.
C++ offers you a very elegant paradigm for managing resources (mutex, semaphore, ...): RAII. I observed that it is much easier to work with boost::thread than to work with POSIX threads (see the sketch below).
Build your code to be thread-safe. If you don't, your program could behave strangely.
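To illustrate the RAII point above, here is a minimal sketch using C++11's std::mutex and std::lock_guard; boost::mutex and boost::lock_guard follow exactly the same pattern for pre-C++11 code bases.

#include <mutex>

std::mutex m;
int sharedCounter = 0;

void increment() {
    std::lock_guard<std::mutex> lock(m);  // locks here...
    ++sharedCounter;
}                                         // ...and unlocks automatically when
                                          // lock goes out of scope, even on exceptions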
I am exactly in this situation: I wrote a library with a global lock (many threads, but only one running at a time in the library) and am refactoring it to support concurrency.
I have read books on the subject but what I learned stands in a few points:
think parallel: imagine a crowd passing through the code. What happens when a method is called while it is already in action?
think shared: imagine many people trying to read and alter shared resources at the same time.
design: avoid the problems that points 1 and 2 can raise.
never think you can ignore edge cases, they will bite you hard.
Since you cannot proof-test a concurrent design (because thread execution interleaving is not reproducible), you have to ensure that your design is robust by carefully analyzing the code paths and documenting how the code is supposed to be used.
Once you understand how and where you should bottleneck your code, you can read the documentation on the tools used for this job:
Mutex (exclusive access to a resource)
Scoped Locks (good pattern to lock/unlock a Mutex)
Semaphores (passing information between threads)
ReadWrite Mutex (many readers, exclusive access on write - see the sketch after this list)
Signals (how to 'kill' a thread or send it an interrupt signal, how to catch these)
Parallel design patterns: boss/worker, producer/consumer, etc (see schmidt)
platform-specific tools: OpenMP, C blocks, etc.
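As an illustration of the read/write mutex item above, here is a small sketch. It is shown with C++17's std::shared_mutex; Boost's shared_mutex/shared_lock provide the same pattern for older toolchains.

#include <map>
#include <shared_mutex>
#include <string>

std::map<std::string, int> table;
std::shared_mutex tableMutex;

int lookup(const std::string& key) {
    std::shared_lock<std::shared_mutex> lock(tableMutex);  // many readers at once
    std::map<std::string, int>::const_iterator it = table.find(key);
    return it == table.end() ? -1 : it->second;
}

void update(const std::string& key, int value) {
    std::unique_lock<std::shared_mutex> lock(tableMutex);  // exclusive for writers
    table[key] = value;
}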
Good luck! Concurrency is fun, just take your time...
You should read about locks, mutexes, semaphores and condition variables.
One word of advice, if your app has any form of UI make sure you always change it from the UI thread. Most UI toolkits/frameworks will crash (or behave unexpectedly) if you access them from a background thread. Usually they provide some form of dispatching method to execute some function in the UI thread.
Never assume that external APIs are threadsafe. If it is not explicitly stated in their docs, do not call them concurrently from multiple threads. Instead, limit your use of them to a single thread or use a mutex to prevent concurrent calls (this is rather similar to the aforementioned GUI libraries).
The next point is language-related. Remember, C++ (prior to C++11) has no well-defined approach to threading. The compiler/optimizer does not know whether code might be called concurrently. The volatile keyword is useful to prevent certain optimizations (e.g. caching of memory fields in CPU registers) in multi-threaded contexts, but it is not a synchronization mechanism.
I'd recommend boost for synchronization primitives. Don't mess with platform APIs. They make your code difficult to port because they have similar functionality on all major platforms, but slightly different detail behaviour. Boost solves these problems by exposing only common functionality to the user.
Furthermore, if there's even the smallest chance that a data structure could be written to by two threads at the same time, use a synchronization primitive to protect it. Even if you think it will only happen once in a million years.
One thing I've found very useful is to make the application configurable with regard to the actual number of threads it uses for various tasks. For example, if you have multiple threads accessing a database, make the number of those threads be configurable via a command line parameter. This is extremely handy when debugging - you can exclude threading issues by setting the number to 1, or force them by setting it to a high number. It's also very handy when working out what the optimal number of threads is.
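For example, the thread count can be read from the command line in a couple of lines (sketch only; argument parsing deliberately minimal, and worker() is a stand-in for the real per-thread work):

#include <cstdlib>
#include <iostream>
#include <thread>
#include <vector>

void worker(int id) {
    // placeholder for the real per-thread work (e.g. a database client)
    std::cout << "worker " << id << " running\n";
}

int main(int argc, char** argv) {
    // e.g. "./app 1" to rule threading issues out, "./app 16" to provoke them
    int numThreads = (argc > 1) ? std::atoi(argv[1]) : 4;

    std::vector<std::thread> threads;
    for (int i = 0; i < numThreads; ++i)
        threads.emplace_back(worker, i);
    for (std::thread& t : threads)
        t.join();
}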
Make sure you test your code in a single-cpu system and a multi-cpu system.
Based on the comments:-
Single socket, single core
Single socket, two cores
Single socket, more than two cores
Two sockets, single core each
Two sockets, combination of single, dual and multi-core CPUs
Multiple sockets, combination of single, dual and multi-core CPUs
The limiting factor here is going to be cost. Ideally, concentrate on the types of system your code is going to run on.
In addition to the other things mentioned, you should learn about asynchronous message queues. They can elegantly solve the problems of data sharing and event handling. This approach works well when you have concurrent state machines that need to communicate with each other.
I'm not aware of any message passing frameworks tailored to work only at the thread level. I've only seen home-brewed solutions. Please comment if you know of any existing ones.
EDIT:
One could use the lock-free queues from Intel's TBB, either as-is, or as the basis for a more general message-passing queue.
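For instance, tbb::concurrent_bounded_queue can serve directly as a simple message channel between two threads, as in the sketch below. Message is a made-up payload type, and depending on your TBB version the header may live under tbb/ or oneapi/tbb/.

#include <tbb/concurrent_queue.h>
#include <string>
#include <thread>

struct Message { std::string payload; };          // made-up payload type

tbb::concurrent_bounded_queue<Message> channel;

void producer() {
    channel.push(Message{"work item"});           // thread-safe push
}

void consumer() {
    Message msg;
    channel.pop(msg);                             // blocks until a message arrives
    // ... handle msg ...
}

int main() {
    std::thread c(consumer), p(producer);
    p.join();
    c.join();
}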
Since you are a beginner, start simple. First make it work correctly, then worry about optimizations. I've seen people try to optimize by increasing the concurrency of a particular section of code (often using dubious tricks), without ever looking to see if there was any contention in the first place.
Second, you want to be able to work at as high a level as you can. Don't work at the level of locks and mutexes if you can use an existing master-worker queue. Intel's TBB looks promising, being slightly higher level than pure threads.
Third, multi-threaded programming is hard. Reduce the areas of your code where you have to think about it as much as possible. If you can write a class such that objects of that class are only ever operated on in a single thread, and there is no static data, it greatly reduces the things that you have to worry about in the class.
A few of the answers have touched on this, but I wanted to emphasize one point:
If you can, make sure that as much of your data as possible is only accessible from one thread at a time. Message queues are a very useful construct to use for this.
I haven't had to write much heavily-threaded code in C++, but in general, the producer-consumer pattern can be very helpful in utilizing multiple threads efficiently, while avoiding the race conditions associated with concurrent access.
If you can use someone else's already-debugged code to handle thread interaction, you're in good shape. As a beginner, there is a temptation to do things in an ad-hoc fashion - to use a "volatile" variable to synchronize between two pieces of code, for example. Avoid that as much as possible. It's very difficult to write code that's bulletproof in the presence of contending threads, so find some code you can trust, and minimize your use of the low-level primitives as much as you can.
My top tips for threading newbies:
If you possibly can, use a task-based parallelism library, Intel's TBB being the most obvious one. This insulates you from the grungy, tricky details and is more efficient than anything you'll cobble together yourself. The main downside is this model doesn't support all uses of multithreading; it's great for exploiting multicores for compute power, less good if you wanted threads for waiting on blocking I/O.
Know how to abort threads (or in the case of TBB, how to make tasks complete early when you decide you didn't want the results after all). Newbies seem to be drawn to thread kill functions like moths to a flame. Don't do it... Herb Sutter has a great short article on this.
Make sure to explicitly know what objects are shared and how they are shared.
As much as possible make your functions purely functional. That is they have inputs and outputs and no side effects. This makes it much simpler to reason about your code. With a simpler program it isn't such a big deal but as the complexity rises it will become essential. Side effects are what lead to thread-safety issues.
Play devil's advocate with your code. Look at some code and think: how could I break this with some well-timed thread interleaving? At some point this case will happen.
First learn thread-safety. Once you get that nailed down then you move onto the hard part: Concurrent performance. This is where moving away from global locks is essential. Figuring out ways to minimize and remove locks while still maintaining the thread-safety is hard.
Keep things dead simple as much as possible. It's better to have a simpler design (maintenance, less bugs) than a more complex solution that might have slightly better CPU utilization.
Avoid sharing state between threads as much as possible, this reduces the number of places that must use synchronization.
Avoid false-sharing at all costs (google this term); a short sketch appears after this list.
Use a thread pool so you're not frequently creating/destroying threads (that's expensive and slow).
Consider using OpenMP, Intel and Microsoft (possibly others) support this extension to C++.
If you are doing number crunching, consider using Intel IPP, which internally uses optimized SIMD functions (this isn't really multi-threading, but is parallelism of a related sort).
Have tons of fun.
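To make the false-sharing point concrete, here is a small sketch (64 bytes is a typical x86 cache-line size; the padding is what you would remove to see the effect):

#include <thread>

// Two counters updated by two different threads. Without the alignment/padding
// they would likely sit in the same cache line, and the cores would keep
// invalidating each other's copy even though no data is logically shared.
struct alignas(64) PaddedCounter {
    long value = 0;
};

PaddedCounter counterA, counterB;   // now on separate cache lines

void workerA() { for (int i = 0; i < 1000000; ++i) ++counterA.value; }
void workerB() { for (int i = 0; i < 1000000; ++i) ++counterB.value; }

int main() {
    std::thread a(workerA), b(workerB);
    a.join();
    b.join();
}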
Stay away from MFC and its multithreading + messaging library.
In fact if you see MFC and threads coming toward you - run for the hills (*)
(*) Unless of course if MFC is coming FROM the hills - in which case run AWAY from the hills.
The biggest "mindset" difference between single-threaded and multi-threaded programming in my opinion is in testing/verification. In single-threaded programming, people will often bash out some half-thought-out code, run it, and if it seems to work, they'll call it good, and often get away with it using it in a production environment.
In multithreaded programming, on the other hand, the program's behavior is non-deterministic, because the exact combination of timing of which threads are running for which periods of time (relative to each other) will be different every time the program runs. So just running a multithreaded program a few times (or even a few million times) and saying "it didn't crash for me, ship it!" is entirely inadequate.
Instead, when doing a multithreaded program, you always should be trying to prove (at least to your own satisfaction) that not only does the program work, but that there is no way it could possibly not work. This is much harder, because instead of verifying a single code-path, you are effectively trying to verify a near-infinite number of possible code-paths.
The only realistic way to do that without having your brain explode is to keep things as bone-headedly simple as you can possibly make them. If you can avoid using multithreading totally, do that. If you must do multithreading, share as little data between threads as possible, and use proper multithreading primitives (e.g. mutexes, thread-safe message queues, wait conditions) and don't try to get away with half-measures (e.g. trying to synchronize access to a shared piece of data using only boolean flags will never work reliably, so don't try it)
What you want to avoid is the multithreading hell scenario: the multithreaded program that runs happily for weeks on end on your test machine, but crashes randomly, about once a year, at the customer's site. That kind of race-condition bug can be nearly impossible to reproduce, and the only way to avoid it is to design your code extremely carefully to guarantee it can't happen.
Threads are strong juju. Use them sparingly.
You should have an understanding of basic systems programming, in particular:
Synchronous vs Asynchronous I/O (blocking vs. non-blocking)
Synchronization mechanisms, such as lock and mutex constructs
Thread management on your target platform
I found viewing the introductory lectures on OS and systems programming here by John Kubiatowicz at Berkeley useful.
Part of my graduate study area relates to parallelism.
I read this book and found it a good summary of approaches at the design level.
At the basic technical level, you have 2 basic options: threads or message passing. Threaded applications are the easiest to get off the ground, since pthreads, windows threads or boost threads are ready to go. However, it brings with it the complexity of shared memory.
Message-passing usability seems mostly limited at this point to the MPI API. It sets up an environment where you can run jobs and partition your program between processors. It's more for supercomputer/cluster environments where there's no intrinsic shared memory. You can achieve similar results with sockets and so forth.
At another level, you can use language type pragmas: the popular one today is OpenMP. I've not used it, but it appears to build threads in via preprocessing or a link-time library.
The classic problem is synchronization here; all the problems in multiprogramming come from the non-deterministic nature of multiprograms, which can not be avoided.
See the Lamport timing methods for a further discussion of synchronizations and timing.
Multithreading is not something that only Ph.D.s and gurus can do, but you will have to be pretty decent to do it without making insane bugs.
I'm in the same boat as you, I am just starting multi threading for the first time as part of a project and I've been looking around the net for resources. I found this blog to be very informative. Part 1 is pthreads, but I linked starting on the boost section.
I have written a multithreaded server application and a multithreaded shellsort. They were both written in C and use NT's threading functions "raw" that is without any function library in-between to muddle things. They were two quite different experiences with different conclusions to be drawn. High performance and high reliability were the main priorities although coding practices had a higher priority if one of the first two was judged to be threatened in the long term.
The server application had both a server and a client part and used iocps to manage requests and responses. When using iocps it is important never to use more threads than you have cores. Also I found that requests to the server part needed a higher priority so as not to lose any requests unnecessarily. Once they were "safe" I could use lower priority threads to create the server responses. I judged that the client part could have an even lower priority. I asked the questions "what data can't I lose?" and "what data can I allow to fail because I can always retry?" I also needed to be able to interface to the application's settings through a window and it had to be responsive. The trick was that the UI had normal priority, the incoming requests one less and so on. My reasoning behind this was that since I will use the UI so seldom it can have the highest priority so that when I use it it will respond immediately. Threading here turned out to mean that all separate parts of the program in the normal case would/could be running simultaneously but when the system was under higher load, processing power would be shifted to the vital parts due to the prioritization scheme.
I've always liked shellsort, so please spare me pointers about quicksort this or that or blablabla, or about how shellsort is ill-suited for multithreading. Having said that, the problem I had had to do with sorting a semi-large list of units in memory (for my tests I used a reverse-sorted list of one million units of forty bytes each). Using a single-threaded shellsort I could sort them at a rate of roughly one unit every two µs (microseconds). My first attempt at multithreading was with two threads (though I soon realized that I wanted to be able to specify the number of threads) and it ran at about one unit every 3.5 µs, that is to say SLOWER. Using a profiler helped a lot, and one bottleneck turned out to be the statistics logging (i.e. compares and swaps) where the threads would bump into each other. Dividing up the data between the threads in an efficient way turned out to be the biggest challenge, and there is definitely more I can do there, such as dividing the vector containing the indices to the units into cache-line-sized chunks and perhaps also comparing all indices in two cache lines before moving to the next line (at least I think there is something I can do there - the algorithms get pretty complicated). In the end, I achieved a rate of one unit every microsecond with three simultaneous threads (four threads gave about the same; I only had four cores available).
As to the original question my advice to you would be
If you have the time, learn the threading mechanism at the lowest possible level.
If performance is important learn the related mechanisms that the OS provides. Multi-threading by itself is seldom enough to achieve an application's full potential.
Use profiling to understand the quirks of multiple threads working on the same memory.
Sloppy architectural work will kill any app, regardless of how many cores and systems you have executing it and regardless of the brilliance of your programmers.
Sloppy programming will kill any app, regardless of the brilliance of the architectural foundation.
Understand that using libraries lets you reach the development goal faster, but at the price of less understanding and (usually) lower performance.
Before giving any advice on do's and dont's about multi-thread programming in C++, I would like to ask the question Is there any particular reason you want to start writing the application in C++?
There are other programming paradigms where you utilize the multi-cores without getting into multi-threaded programming. One such paradigm is functional programming. Write each piece of your code as functions without any side effects. Then it is easy to run it in multiple thread without worrying about synchronization.
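A small illustration of that idea in C++ (a sketch only): a function with no side effects can be handed to several threads without any locking.

#include <functional>
#include <future>
#include <iostream>
#include <vector>

// Pure function: the output depends only on the input, no shared state touched.
long sumOfSquares(const std::vector<int>& v) {
    long s = 0;
    for (int x : v) s += static_cast<long>(x) * x;
    return s;
}

int main() {
    std::vector<int> a{1, 2, 3}, b{4, 5, 6};
    // Safe to run concurrently precisely because sumOfSquares is side-effect free.
    auto f1 = std::async(std::launch::async, sumOfSquares, std::cref(a));
    auto f2 = std::async(std::launch::async, sumOfSquares, std::cref(b));
    std::cout << f1.get() + f2.get() << "\n";
}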
I am using Erlang for my development purposes. It has increased my productivity by at least 50%. The code may not run as fast as code written in C++, but I have noticed that for most of the back-end offline data processing, speed is not as important as distribution of work and utilizing the hardware as much as possible. Erlang provides a simple concurrency model where you can execute a single function in multiple threads without worrying about synchronization issues. Writing multi-threaded code is easy, but debugging it is time consuming. I have done multi-threaded programming in C++, but I am currently happy with the Erlang concurrency model. It is worth looking into.
Make sure you know what volatile means and its uses (which may not be obvious at first).
Also, when designing multithreaded code, it helps to imagine that an infinite number of processors is executing every single line of code in your application at once (or at least every line that is reachable according to your program's logic). It also helps to imagine that, for everything not marked volatile, the compiler performs a special optimization so that only the thread that changed it can read or set its true value, while all the other threads get garbage.