I am in the design phase of a programming language, currently thinking about the concurrency aspects. I need to figure out a consistency model, i.e. how data is handled by concurrent processes programmed in this language.
There are two important criteria:
I prefer ease-of-use over performance, as long as the consistency model allows good scaling,
I cannot use a consistency model that requires blocking or dynamic memory allocation.
My two candidates right now are non-blocking software transactional memory on the one hand, and copying message-passing semantics with no sharing, à la Erlang, on the other.
I'm particularly worried about ease-of-use, so I'll present the major arguments I have against each of these two models.
In the case of STM, the user must understand which members of a class must mutate atomically and correctly delimit atomic code sections. These sections must be written so that they can be repeated an undefined number of times: they may not perform any I/O, may not call certain foreign functions, etc. I see this as far from easy for an inexperienced programmer.
Erlang-style share-nothing concurrency is attractive, but there is a catch: real-time processes cannot copy the objects they send over, because they cannot perform any memory allocation, and so objects have to "move" from one process to the other via queues. The user must be aware that if one real-time process has two references to an object, both of those references will be cleared if the process sends the object to another process. This is a little like weak pointers that may or may not be null at any point of use: it may be surprising.
I tend towards the second model because it appears easier to understand and it naturally extends to distributed systems.
What do you recommend?
Non-blocking software transactional memory?
Erlang-style concurrency with the difficulties of real-time constraints?
Something else I haven't considered?
I have done a little with Erlang, not much, but although the share-nothing message-passing paradigm was new to me, I would say that it was easy to understand in visual and physical terms.
If your language is to be widespread, I would say that the Erlang style is at least something I can wrap my mind around without too much work. I assume others will be able to learn and apply that kind of model more easily than the STM method.
I'm not speaking from experience, but it seems like the Erlang model would be easier to implement, as it doesn't have to deal with a lot of the low-level memory operations: you just share nothing and manage the memory being passed between processes.
I don't think a single paradigm will solve all the issues, nor are these paradigms mutually incompatible. For example, one application can use message passing for some parts of the program, STM for other parts, and direct locking for other, more specific parts.
You can also take a look at Join calculus (JoCaml, Boost.Join), which can be considered as a variant of the message passing interface.
I am writing right now a multi-threaded application (game to be precise) as a hobby/research project. I have lately run into a really "simple" problem, which is making synchronization between threads (if it matters a lot to you, it's in c++).
My main issue is that I try to learn good design, and mutexing my whole model everywhere I can is (in my opinion) resource-wasteful and just plainly asking for problems in further development. I have thought about making the whole process of synchronization transaction-based, but I feel it just does not fit the performance/extensibility a game requires. I am new to concurrent programming, and I am here to learn something new about patterns specific to concurrent programming.
Some words about current design:
MVC approach
Online synchronization is being handled by a separate agent, which is identical on slave-client and master-server and is being handled separately from any server logic
The database-like structure is synced independently from the server logic and has a minor subscription/observer pattern built in to notify controllers about changes.
Notes
I do not look for documentation-specific pieces of information (if they are not directly connected to performance or design); I know my cppreference,
I do look for extensive blog posts/websites which can teach me more about concurrent design patterns,
I do want to know if I am just plainly doing things wrong (not in the wrong order, though).
EDIT
As Mike has mentioned, I did not ask the actual questions:
1) What are the best design patterns/norms which can be used in concurrent programming (Mostly usable in my case),
2) What are the biggest no-gos when it comes to concurrent programming performance?
You are starting from a bit of a mistaken idea. Parallelism is about performance; concurrency is about correctness. A concurrent system isn't necessarily the fastest solution. A good concurrent system minimizes and explicitly defines dependencies, enabling a robust, reactive system with minimal latency. In contrast, a parallel system seeks to minimize its execution time by maximizing its utilization of resources; in doing so, it might maximize latency. There is overlap, but the mindset is quite different.
There are many good concurrent languages. C++ isn't one of them. That said, you can write good concurrent systems in any language. Most concurrent languages have a strong message passing bias, but good message passing libraries are available to most languages.
Message passing is distinct from low-level synchronization mechanisms in that it is a model, a way of thinking, in and of itself. Mutexes, semaphores, etc. are not: they are tools, and should likely be ignored until the design is reasonably complete.
The design phase should be more abstract than synchronization mechanisms. Ideally, it should flesh out the operations (or transactions, if you prefer) and the necessary interactions between them. From that schema, choices about how to arrange data and code for concurrent access should be natural. If they aren't, your schema is incomplete.
From my studies I know the concepts of starvation, deadlock, fairness and other concurrency issues. However, theory differs from practice, to an extent, and real engineering tasks often involve greater detail than academic blah blah...
As a C++ developer I've been concerned about threading issues for a while...
Suppose you have a shared variable x which refers to some larger portion of the program's memory. The variable is shared between two threads A and B.
Now, if we consider read/write operations on x from both threads A and B, possibly at the same time, there is a need to synchronize those operations, right? So access to x needs some form of synchronization, which can be achieved for example by using mutexes.
Now let's consider another scenario, where x is initially written by thread A, then passed to thread B (somehow), and that thread only reads x. Thread B then produces a response to x, called y, and passes it back to thread A (again, somehow). My question is: what synchronization primitives should I use to make this scenario thread-safe? I've read about atomics and, more importantly, memory fences; are these the tools I should rely on?
This is not a typical scenario in which there is a "critical section". Instead, some data is passed between threads with no possibility of concurrent writes to the same memory location. So, after being written, the data should first be "flushed" somehow, so that the other thread can see it in a valid and consistent state before reading. What is this called in the literature? Is it "visibility"?
What about pthread_once and its Boost/std counterpart, i.e. call_once? Does it help if both x and y are passed between threads through a sort of "message queue" which is accessed by means of the "once" functionality? AFAIK it serves as a sort of memory fence, but I couldn't find any confirmation of this.
What about CPU caches and their coherency? What should I know about that from the engineering point of view? Does such knowledge help in the scenario mentioned above, or any other scenario commonly encountered in C++ development?
I know I might be mixing a lot of topics but I'd like to better understand what is the common engineering practice so that I could reuse the already known patterns.
This question is primarily related to the situation in C++03 as this is my daily environment at work. Since my project mainly involves Linux then I may only use pthreads and Boost, including Boost.Atomic. But I'm also interested if anything concerning such matters has changed with the advent of C++11.
I know the question is abstract and not that precise but any input could be useful.
you have a shared variable x
That's where you've gone wrong. Threading is MUCH easier if you hand off ownership of work items using some sort of threadsafe consumer-producer queue, and from the perspective of the rest of the program, including all the business logic, nothing is shared.
Message passing also helps prevent cache collisions (because there is no true sharing -- except of the producer-consumer queue itself, and that has trivial effect on performance if the unit of work is large -- and organizing the data into messages help reduce false sharing).
Parallelism scales best when you separate the problem into subproblems. Small subproblems are also much easier to reason about.
You seem to already be thinking along these lines, but no, threading primitives like atomics, mutexes, and fences are not very good for applications using message passing. Find a real queue implementation (queue, circular ring, Disruptor, they go under different names but all meet the same need). The primitives will be used inside the queue implementation, but never by application code.
I used to see the term "lock-free data structure" and think "ooooo that must be really complex". However, I have been reading "C++ Concurrency in Action", and it seems that to write a lock-free data structure all you do is stop using mutexes/locks and replace them with atomic code (along with possible memory-ordering barriers).
So my question is- am I missing something here? Is it really that much simpler due to C++11? Is writing a lock-free data structure just a case of replacing the locks with atomic operations?
Ooooo but that is really complex.
If you don't see the difference between a mutex and an atomic access, there is something wrong with the way you look at parallel processing, and there will soon be something wrong with the code you write.
Most likely it will run slower than the equivalent blocking version, and if you (or rather your coworkers) are really unlucky, it will spout the occasional inconsistent data and crash randomly.
Even more likely, it will propagate real-time constraints to large parts of your application, forcing your coworkers to waste a sizeable amount of their time coping with arbitrary requirements they and their software would have quite happily lived without, and to resort to various superstitious "good practices" to obfuscate their code into submission.
Oh well, as long as the template guys and the wait-free guys had their little fun...
Parallel processing, be it blocking or supposedly wait-free, is inherently resource-consuming, complex, and costly to implement. Designing a software architecture that takes real advantage of non-trivial parallel processing is a job for specialists.
A good software design should on the contrary limit the parallelism to the bare minimum, leaving most of the programmers free to implement linear, sequential code.
As for C++, I find this whole philosophy of wrapping indifferently a string, a thread and a coffee machine in the same syntactic goo a disastrous design choice.
C++ is allowing you to create a multiprocessor synchronization object out of about anything, like you would allocate a mere string, which is akin to presenting an assault rifle next to a squirt gun in the same display case.
No doubt a lot of people are making a living by selling the idea that an assault rifle and a squirt gun are, after all, not so different. But still, they are.
Two things to consider:
Only a single operation is atomic when using a C++11 std::atomic, but often you want a mutex to protect a larger region of code.
If you use std::atomic with a type that the compiler cannot map to an atomic operation in machine code, then the library has to fall back to an internal lock for that operation.
Overall you probably want to stick with using mutexes and only use lock-free code for performance critical sections, or if you were implementing your own structures to use for synchronization.
You are missing something. While lock-free data structures do use the primitives you mention, simply invoking their existence will not provide you with a lock-free queue.
Lock-free code is not simpler because of C++; without C++, operating systems often provide similar facilities for memory ordering and fencing in C/assembly.
C++ provides a better and easier-to-use interface (and, being standardized, one you can use across operating systems and machine architectures with the same code), but programming lock-free code in C++ won't be simpler than doing it without C++ if you target only one specific OS/machine architecture.
AFAIK, a major goal of multi-threaded programming is increasing performance by utilizing multiple processing cores. The point is maximizing parallel execution.
When I see thread-safe generic data structure classes, I feel some irony, because thread-safety means enforcing serial execution (a lock, an atomic operation, or whatever), so it is anti-parallel. Thread-safe classes mean that serialization is encapsulated and hidden inside the class, so we get more chances to force serial execution, losing performance. It would be better to manage those critical sections in a larger (or the largest) unit: the application logic.
So why do people want thread-safe classes? What's the real benefit of them?
P.S.
I meant that a thread-safe class is a class that has only thread-safe methods, which are safe to call from multiple threads simultaneously. Safe means it guarantees a correct read/write result. Correct means its result is equal to the result under single-threaded execution (for example, avoiding the ABA problem).
So I think the term thread-safety in my question contains serial execution by definition. And that's why I was confused about its purpose and asked this question.
I think your question has a false assumption: synchronization operations are simply not anti-parallel. There is simply no way to build a parallel mutable data structure without some form of synchronization. Yes, heavy usage of those synchronization mechanisms will detract from the ability of the code to run in parallel. But without those mechanisms it wouldn't be possible to write the code in the first place.
The one form of thread-safe data structure that doesn't require synchronization is immutable values. However, they only work for a subset of scenarios (parallel reads, data passing, etc.).
Thread safe data structures can be implemented without serialization. It's tricky to get right, but it's doable and is done. Then you have the benefits of concurrency without any bottleneck.
Thread-safe classes mean that serialization is encapsulated and hidden inside the class, so we get more chances to force serial execution, losing performance.
Making thread safety the client's responsibility can defeat encapsulation (though not always). Depending on the context/design, thread safety can be very complex, or susceptible to change over time (breaking your program when APIs change), or simply not uniform. Abstracting synchronization does not have to equate to a loss; it also has the potential for great benefits, especially because it is not a subject for novices.
It would be better to manage those critical sections in a larger (or the largest) unit: the application logic.
I'm not sure who told you that, but that is not necessarily ideal for all scenarios. Once you get down to implementing concurrent systems, you will realize that choosing the best granularity of synchronization within your designs can make a huge difference in how it operates. Note that the 'best' general design is not always the best for a given usage.
There is no hard and fast rule here. Small, short critical sections (potentially acquiring a higher number of locks) are better for many designs, whereas largest-unit locking can increase contention and result in significant blocking. It is really easy to begin an update and then spend a lot of time doing things within that update which do not require sustained synchronization of the entire structure. Locking down the whole graph on each access is not always better, and certain components of the structure may be thread-safe independently of other components. Therefore, the largest-unit approach frequently enforces serialization that hurts performance, especially as size and complexity grow.
So why do people want thread-safe classes? What's the real benefit of them?
A few good reasons come to mind:
They can be hard to implement correctly, diagnose, and test. High performance concurrent designs are not concepts learned by attending a talk or going through a few online tutorials. It takes a lot of mistakes and time-invested to understand what goes into a good design.
Some structures are very specialized. These may be non-blocking, rely on atomics, or use less typical concurrent patterns or synchronization forms. Example: By default, you may just reach for a mutex when you need a lock, but sometimes a rwlock or spinlock would be better. Sometimes immutability may be better.
Some contexts or domains are very specialized. Designing a single component is often a simple task, but designing an entire system and how components interact is a much larger challenge, and the system may need to operate under special constraints -- relying on that design's synchronization can save you a lot of headache. You may not take the time to benchmark under many different workloads, whereas the person who wrote it has invested the time to understand the implementation and its execution.
It just works. Some people don't want to spend their energy obsessing over concurrency issues. They would rather use a proven, reliable implementation and focus on other aspects of their program. In some cases, people whose software you wind up using may not understand some of these concepts well enough, and you will be grateful that they chose to use a proven (or even familiar) design.
Encapsulation. Sometimes encapsulation can result in big performance boosts in concurrent systems. Example: a member or parameter may be conditionally immutable, and that trait may be taken advantage of. In other cases, encapsulation can result in lower acquisitions or reduced blocking. Another case is that encapsulation can reduce the complexity of using the interface -- entire categories of potential threading issues may be removed (although you may be left with a smaller set of constraints).
Less to comprehend. Reuse a well known implementation and understand how it operates, and you have less to learn compared to reviewing an implementation which was hand written (e.g. by your colleague who departed last year).
There are of course downsides, but that is not your question ;)
Often this is why performance-critical multi-threaded code avoids using "thread-safe" containers. Containers like std::vector, etc., are not thread safe. If an application needs shared access to these containers amongst different threads, then the application is responsible for managing that access.
On the other hand, sometimes performance is not the driver for multi-threading. GUI programs benefit from keeping the UI thread separate from the thread that is doing the work. Other threads may be spun off for all sorts of reasons. Generally this can allow a nice separation of responsibilities in the code, and give better overall liveliness to the application. In these cases the goal often isn't high performance per se. Using thread-safe containers may be a perfectly natural choice for these applications.
Of course the best option is to have your cake and eat it too, like some lock-free queue implementations, which allow one thread to feed the queue, another to consume, with no locking (relying only on the atomic nature of certain basic operations).
It all rather depends on what the class is.
Consider a queue. Not every queue needs to be thread-safe. But there is most certainly a need in some cases for a data structure that you can push "stuff" into from one thread, and have the other thread pull "stuff" out of. This improves the parallelism of the threads, because it focuses inter-thread communication into a single location: the inter-thread queue. One side stuffs a sequence of commands in, and the other reads them and executes them when it can. If there are no commands available, it blocks or does whatever.
That demands, on some level, to have a thread-safe class. And since users will likely want to customize it with different kinds of "stuff", a generic implementation provided by the standard library is not unreasonable. Granted, no such thing exists in the C++ standard today, but it's almost certainly coming.
This is not "anti-parallel"; it improves parallelism. Without it, you would have to find some other way for the two threads to communicate. One that will more than likely force one of them to block more often.
Consider a shared_ptr. The cost of making shared_ptr's reference counter thread-safe is trivial next to the very likely possibility of someone screwing it up. It isn't free of course; an atomic increment/decrement isn't free. But it's far from "enforcing serial execution", since any moment of "serial execution" is so short as to be irrelevant in any real program.
So no, these things are not "anti-parallel".
There is a lot of buzz these days about not using locks and using Message passing approaches like Erlang. Or about using immutable datastructures like in Functional programming vs. C++/Java.
But what I am concerned with is the following:
AFAIK, Erlang does not guarantee Message delivery. Messages might be lost. Won't the algorithm and code bloat and be complicated again if you have to worry about loss of messages? Whatever distributed algorithm you use must not depend on guaranteed delivery of messages.
What if the Message is a complicated object? Isn't there a huge performance penalty in copying and sending the messages vs. say keeping it in a shared location (like a DB that both processes can access)?
Can you really totally do away with shared states? I don't think so. For e.g. in a DB, you have to access and modify the same record. You cannot use message passing there. You need to have locking or assume Optimistic concurrency control mechanisms and then do rollbacks on errors. How does Mnesia work?
Also, it is not the case that you always need to worry about concurrency. Any project will also have a large piece of code that doesn't have to do anything with concurrency or transactions at all (but they do have performance and speed as a concern). A lot of these algorithms depend on shared states (that's why pass-by-reference or pointers are so useful).
Given this fact, writing programs in Erlang etc. is a pain because you are prevented from doing any of these things. Maybe it makes programs robust, but for things like solving a linear programming problem or computing the convex hull, performance is more important, and forcing immutability etc. on an algorithm when it has nothing to do with concurrency/transactions is a poor decision. Isn't it?
That's real life: you need to account for this possibility regardless of the language/platform. In a distributed world (the real world), things fail: live with it.
Of course there is a cost: nothing is free in our universe. But shouldn't you use another medium (e.g. file, db) instead of shuttling "big objects" in communication pipes? You can always use "message" to refer to "big objects" stored somewhere.
Of course not: the idea behind functional programming / Erlang OTP is to "isolate" as much as possible the areas where "shared state" is manipulated. Furthermore, having clearly marked places where shared state is mutated helps testability & traceability.
I believe you are missing the point: there is no such thing as a silver bullet. If your application cannot be successfully built using Erlang, then don't do it. You can always build some other part of the overall system in another fashion, i.e. use a different language/platform. Erlang is no different from any other language in this respect: use the right tool for the right job.
Remember: Erlang was designed to help solve concurrent, asynchronous and distributed problems. It isn't optimized for working efficiently on a shared block of memory, for example... unless you count interfacing with NIF functions working on shared blocks as part of the game :-)
Real-world systems are always hybrids anyway: I don't believe the modern paradigms try, in practice, to get rid of mutable data and shared state.
The objective, however, is not to need concurrent access to this shared state. Programs can be divided into the concurrent and the sequential, and use message-passing and the new paradigms for the concurrent parts.
Not every code will get the same investment: There is concern that threads are fundamentally "considered harmful". Something like Apache may need traditional concurrent threads and a key piece of technology like that may be carefully refined over a period of years so it can blast away with fully concurrent shared state. Operating system kernels are another example where "solve the problem no matter how expensive it is" may make sense.
There is no benefit to fast-but-broken: But for new code, or code that doesn't get so much attention, it may be the case that it simply isn't thread-safe, and it will not handle true concurrency, and so the relative "efficiency" is irrelevant. One way works, and one way doesn't.
Don't forget testability: Also, what value can you place on testing? Thread-based shared-memory concurrency is simply not testable. Message-passing concurrency is. So now you have the situation where you can test one paradigm but not the other. So, what is the value in knowing that the code has been tested? The danger in not even knowing if the other code will work in every situation?
A few comments on the misunderstanding you have of Erlang:
Erlang guarantees that messages will not be lost, and that they will arrive in the order sent. A basic error situation is that machine A can not speak to machine B. When that happens process monitors and links will trigger, and system node-down messages will be sent to the processes that registered for it. Nothing will be silently dropped. Processes will "crash" and supervisors (if any) tries to restart them.
Objects can not be mutated, so they are always copied. One way to secure immutability is by copying values to other erlang process' heaps. Another way is to allocate objects in a shared heap, message references to them and simply not have any operations that mutate them. Erlang does the first for performance! Realtime suffers if you need to stop all processes to garbage collect a shared heap. Ask Java.
There is shared state in Erlang. Erlang is not proud of it, but it is pragmatic about it. One example is the local process registry which is a global map that maps a name to a process so that system processes can be restarted and claim their old name. Erlang just tries to avoid shared state if it possibly can. ETS tables that are public are another example.
Yes, sometimes Erlang is too slow. This happens in all languages. Sometimes Java is too slow. Sometimes C++ is too slow. Just because a tight loop in a game had to drop down to assembly to kick off some serious SIMD-based vector mathematics, you can't deduce that everything should be written in assembly because it is the only language that is fast when it matters. What matters is being able to write systems that have good performance, and Erlang manages quite well. See benchmarks on yaws or rabbitmq.
Your facts are not facts about Erlang. Even if you think Erlang programming is a pain, you will find other people creating some awesome software thanks to it. You should attempt writing an IRC server in Erlang, or something else very concurrent. Even if you're never going to use Erlang again, you will have learned to think about concurrency in another way. But of course you will, because Erlang is awesomely easy.
Those that do not understand Erlang are doomed to re-implement it badly.
Okay, the original was about Lisp, but... its true!
There are some implicit assumptions in your questions: you assume that all the data can fit on one machine and that the application is intrinsically localised to one place.
What happens if the application is so large it cannot fit on one machine? What happens if the application outgrows one machine?
You don't want to have one way to program an application if it fits on one machine and a completely different way of programming it as soon as it outgrows one machine.
What happens if you want to make a fault-tolerant application? To make something fault-tolerant you need at least two physically separated machines and no sharing.
When you talk about sharing and databases, you omit to mention that things like MySQL Cluster achieve fault-tolerance precisely by maintaining synchronised copies of the data on physically separated machines: there is a lot of message passing and copying that you don't see on the surface. Erlang just exposes this.
The way you program should not suddenly change to accommodate fault-tolerance and scalability.
Erlang was designed primarily for building fault-tolerant applications.
Shared data on a multi-core has its own set of problems. When you access shared data you need to acquire a lock; if you use a global lock (the easiest approach) you can end up stopping all the cores while you access the shared data. Shared data access on a multi-core can also be problematic due to caching: if the cores have local data caches, then accessing "far away" data (in some other processor's cache) can be very expensive.
Many problems are intrinsically distributed and the data is never available in one place at the same time, so these kinds of problems fit well with the Erlang way of thinking.
In a distributed setting, "guaranteeing message delivery" is impossible: the destination machine might have crashed. Erlang thus cannot guarantee message delivery; it takes a different approach. The system will tell you if it failed to deliver a message (but only if you have used the link mechanism), and then you can write your own custom error recovery.
For pure number crunching Erlang is not appropriate, but in a hybrid system Erlang is good at managing how computations get distributed to available processors. So we see a lot of systems where Erlang manages the distribution and fault-tolerance aspects of the problem, while the problem itself is solved in a different language.
For e.g. in a DB, you have to access and modify the same record
But that is handled by the DB. As a user of the database, you simply execute your query, and the database ensures it is executed in isolation.
As for performance, one of the most important things about eliminating shared state is that it enables new optimizations. Shared state is not particularly efficient. You get cores fighting over the same cache lines, and data has to be written through to memory where it could otherwise stay in a register or in CPU cache.
Many compiler optimizations rely on absence of side effects and shared state as well.
You could say that a stricter language guaranteeing these things requires more optimizations to be performant than something like C, but it also makes these optimizations much much easier for the compiler to implement.
Many concerns similar to concurrency issues arise in single-threaded code. Modern CPUs are pipelined, execute instructions out of order, and can run 3-4 of them per cycle. So even in a single-threaded program, it is vital that the compiler and CPU are able to determine which instructions can be interleaved and executed in parallel.
For correctness, shared is the way to go, and keep the data as normalized as possible. For immediacy, send messages to inform of changes, but always back them up with polling. Messages get dropped, duplicated, re-ordered, delayed - don't rely on them.
If speed is what you're worried about, first do it single-thread and tune the daylights out of it. Then if you've got multiple cores and know how to split up the work, use parallelism.
Erlang provides supervisors and gen_server callbacks for synchronous calls, so you will know about it if a message isn't delivered: either the gen_server call returns a timeout, or your whole node will be brought down and up if the supervisor is triggered.
Usually, if the processes are on the same node, message-passing languages optimise away the data copying, so it's almost like shared memory, except that the object cannot be changed and then used by both sides afterwards, which could not be done safely using shared memory either anyway.
There is some state which is kept by processes by passing it around to themselves in their recursive tail calls, and some state can of course be passed through messages. I don't use Mnesia much, but it is a transactional database, so once you have passed the operation to Mnesia (and it has returned) you are pretty much guaranteed it will go through.
Which is why it is easy to tie such applications into Erlang with the use of ports or drivers. The easiest are ports; a port is much like a Unix pipe, though I think the performance isn't that great. And, as said, message passing usually ends up just being pointer passing anyway, as the VM/compiler optimises the memory copy away.