Simultaneity in C++ versus Visual Basic? [closed]

I used to play with Visual Basic 5.0 as a kid. It allowed me to place lots of 'timers' on a 'form' that would seemingly run simultaneously... whereas now I'm starting to learn lower-level programming languages and everything seems to run one-thing-at-a-time.
Can someone help my mind grasp this concept of simultaneity, and why VB seemed to have it easily available, whereas in learning C++ so far I've not met anything that feels like it can replicate that simultaneous running of code?
Is most of the 'simultaneity' in simple Visual Basic programs actually an illusion that C++ code can easily recreate? Sorry for lacking the terminology.
edit: Thanks for the replies. They have clarified that it was indeed usually an illusion of simultaneity. To explain further what was in my mind, at my early stage in learning c++, I don't think I know how to write a program that, every 2 seconds, will increment the value of 'x'... while simultaneously every 5 seconds, incrementing 'y'.

This question isn't appropriate for Stack Overflow, but I really like answering people's questions, so I will anyway, even though the answer will be necessarily incomplete.
What Visual Basic was doing was providing you with a special environment in which things appeared to be running simultaneously. All those timers were just entries in a data structure that told Visual Basic when it had to go act like that timer had just expired. It used the current time to decide when it should run them.
In general, Visual Basic is what controlled the flow of your program. In between all the statements you wrote, the Visual Basic runtime was doing all kinds of things behind the scenes for you. Things like checking up on the data structures it was keeping of which timer to fire when and so on.
There are two main ways of controlling the flow of your program in such a way as to make it seem like several things are happening at the same time. What Visual Basic was doing is called an 'event loop'. That's a main loop in your program that keeps track of all the things that need doing and goes and runs the code that does them at the right times. When someone clicks on a window, your program gets an event, the event loop sees that event and runs the 'clicked on a window' code when it gets control. It sort of seems like your program just responds instantly to the click, but that's not what's really happening.
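To make that concrete with the x and y example from the question's edit, here is a minimal single-threaded sketch of an event loop in C++ (the intervals and variable names come from the question; everything else is illustrative, and a real event loop would also handle input, painting, and so on):

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        using namespace std::chrono;
        auto nextX = steady_clock::now() + seconds(2);
        auto nextY = steady_clock::now() + seconds(5);
        int x = 0, y = 0;
        for (;;) {                       // the event loop
            auto now = steady_clock::now();
            if (now >= nextX) {          // the 2-second "timer" fired
                ++x;
                nextX += seconds(2);
                std::cout << "x = " << x << '\n';
            }
            if (now >= nextY) {          // the 5-second "timer" fired
                ++y;
                nextY += seconds(5);
                std::cout << "y = " << y << '\n';
            }
            std::this_thread::sleep_for(milliseconds(10)); // don't burn the CPU while idle
        }
    }

Only one thing ever runs at a time here, yet x and y appear to tick away "simultaneously" - exactly the illusion those VB timers were giving you.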
The other way is called 'multi-threading'. In that way your program may really do several things at the same time. The operating system itself decides how to allocate CPU resources to running 'threads'. If there is a lot of competition for CPU resources, the operating system may run one thread for a little while (a thousandth of a second or so) and then switch to running a different thread, possibly in a different program (i.e. process; threads and processes are strongly related, but definitely distinct, concepts). The switching happens so fast, and can happen at any instant, that it seems like several things are happening at once. But if there are enough CPU resources available (multiple cores), the computer can actually just run several things at the same time.
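The same x and y example with real threads might look like this (a sketch using C++11 std::thread; each counter is private to its own thread, so no locking is needed in this particular case):

    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        std::thread tx([] {              // increments x every 2 seconds
            for (int x = 0; ;) {
                std::this_thread::sleep_for(std::chrono::seconds(2));
                std::cout << "x = " << ++x << '\n';
            }
        });
        std::thread ty([] {              // increments y every 5 seconds
            for (int y = 0; ;) {
                std::this_thread::sleep_for(std::chrono::seconds(5));
                std::cout << "y = " << ++y << '\n';
            }
        });
        tx.join();                       // both threads run forever in this sketch
        ty.join();
    }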
There are advantages and disadvantages to each approach. Actually dealing with two things modifying the same data at the exact same time, and making sure everything stays consistent, can be very hard. It is usually accomplished by having sections of your program in which only one thread is allowed to be running at a time, so that those modifications can't actually happen in the same instant; they are 'serialized', so that one thread's modification happens before another's. These are usually called 'critical sections', and access to these critical sections is typically controlled by something called a 'mutex' (aka mutual exclusion) lock.
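In C++11 terms, a critical section guarded by a mutex looks roughly like this (a minimal sketch; the counter is just a stand-in for any shared data):

    #include <mutex>

    std::mutex m;       // guards 'shared'
    int shared = 0;

    void increment() {
        std::lock_guard<std::mutex> lock(m); // enter the critical section
        ++shared;                            // only one thread at a time runs this
    }                                        // lock released when 'lock' is destroyed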
Event driven code doesn't have that problem since typically one event must be fully processed before the code to process the next event can start. But this can lead to under-utilization of CPU resources, and it can also sometimes introduce noticeable delays in handling events as one event can't be processed until the code to process the preceding event is finished.
In short, Visual Basic was making event driven programming really easy and clean to use, and providing you with the illusion that several things were running at the same time.
These, in general, are fairly deep computer science topics, and people get their PhDs studying some of this stuff. But a working understanding of how event-driven code and multi-threaded code work isn't that hard to acquire.
There are also other models of concurrency out there, and hybrid models like 'co-routines' that look like threads but are really events.
I put all the most useful concept handles in quotes to emphasize them.


Making C++ run at full speed [closed]

To compare C++ and Java on certain tasks I have made two similar programs, one in Java and one in C++. When I run the Java one it takes 25% CPU without fluctuation, which is what you would expect on a quad core. However, the C++ version only uses about 8% and fluctuates heavily. I run both programs on the same computer, on the same OS, with the same programs active in the background. How do I make C++ use one full core? These are two programs, neither interrupted by anything. They both ask for some info and then enter an infinite loop until you exit the program, giving feedback on how many calculations per second.
The code:
http://pastebin.com/5rNuR9wA
http://pastebin.com/gzSwgBC1
http://pastebin.com/60wpcqtn
To answer some questions:
I'm basically looping a bunch of code and seeing how often per second it loops. The problem is: it doesn't use all the CPU it can use. The whole point is to have the same processor do the same task in Java and C++ and compare the number of loops per second. But if one is using irregular amounts of CPU time and the other one is looping stably at a certain percentage, they are hard to compare. By the way, if I ask it to execute this:
while(true){}
it takes 25%, why doesn't it do that with my code?
----edit:----
After some experimenting, it seems that my code starts to use less than 25% if I use a cout statement. It isn't clear to me why a cout would cause the program to use less CPU (I guess it pauses until the statement is written, which apparently takes a while?).
With this knowledge I will reprogram both programs (to keep them comparable) and just let them report the results after 60 seconds instead of every time they complete a loop.
Thanks for all the help; some of the tips were really helpful. After I discovered the answer myself, someone also posted it as an answer, so even if I hadn't found it I would have gotten the answer. Thanks!
(Though I would still like to know why a std::cout takes so much time.)
Your main loop has a cout in it, which will call out to the OS to write the accumulated output at some point. Either OS time is not counted against your app, or it causes some disk IO or other activity that forces your program to wait.
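For example, counting iterations and printing only once per interval keeps the loop hot (a sketch; the work being benchmarked is elided, and note that even clock::now() has a small cost, so checking it only every N iterations would be cheaper still):

    #include <chrono>
    #include <iostream>

    int main() {
        using clock = std::chrono::steady_clock;
        unsigned long long loops = 0;
        auto next = clock::now() + std::chrono::seconds(1);
        for (;;) {
            // ... the calculation being benchmarked goes here ...
            ++loops;
            if (clock::now() >= next) {   // report once per second, not every iteration
                std::cout << loops << " loops/s\n";
                loops = 0;
                next += std::chrono::seconds(1);
            }
        }
    }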
It's probably not accurate to compare both of these running at the same time without considering the fact that they will compete for cpu time. The OS will automatically choose the scheduling for these two tasks which can be affected by which one started first and a multitude of other criteria.
Running them both at the same time would require some type of configuration of the scheduling so that each one is confined to one (or two) CPUs and each application uses different CPUs. This can be done by having each main function execute a separate thread that performs all the work, and then setting the CPU where this thread will run. In C++11 this can be done using a std::thread and then setting the underlying CPU affinity by getting the native_handle and setting it there.
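A sketch of the native_handle approach on a POSIX system (Linux shown; on Windows you would pass the handle to SetThreadAffinityMask instead; the benchmark function is an assumed placeholder):

    #include <pthread.h>   // pthread_setaffinity_np (Linux-specific)
    #include <sched.h>     // cpu_set_t
    #include <thread>

    void benchmark();      // the work being measured (assumed defined elsewhere)

    int main() {
        std::thread t(benchmark);
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);  // pin the benchmark thread to core 0
        pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &set);
        t.join();
    }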
I'm not sure how to do this in Java but I'm sure the process is similar.

Isn't a gas stand a lock? With regards to the following Erlang article and why the world has no locks [closed]

In the article What's all this fuss about Erlang? it is said that:
"The world IS concurrent. It IS parallel. Things happen all over the place at the same time. I could not drive my car on the highway if I did not intuitively understand the notion of concurrency; pure message-passing concurrency is what we do all the time."
I don't get it; I don't think this is correct. When I drive my car into a gas station, I wait for the person before me to finish filling his gas tank, as he is using (locking) the gas stand. Does anyone think what I'm saying is incorrect?
The article never says "The world does not need locks." The article says, "In Erlang, given that there is no shared state, Erlang programs have no need for locks." Locks are one way of achieving concurrency by sharing mutable state. Erlang achieves concurrency by passing messages instead of sharing state.
A gas stand is just a place to get gas. How people decide to make sure only one person is using it at a time is a separate matter. In a shared state language, you might have one gas stand instance that you lock when you want to use it. In a message passing language, you could send a message to the gas stand process "Is someone using you?" and the gas stand will reply yes or no. You can achieve the same basic goal either way.
You might be wondering, "That sounds like a lock to me!" The important distinction is that in Erlang there is exactly one process responsible for each piece of state, whereas with mutable locked data any number of threads can influence a piece of state. If the gas stand state gets corrupted under locking semantics, you don't know which thread broke it. In Erlang, you can see every message that comes into the process responsible for that data, and see which messages are damaging it. It might sound like a useless distinction, but believe me, it makes concurrency much easier to deal with.
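To illustrate the distinction in C++ terms (Erlang gives you this natively; this is only a rough sketch with illustrative names): one thread owns the gas stand's state, and everyone else sends it messages through a mailbox. The mailbox itself uses a lock internally, but the stand's state has exactly one owner.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct FillRequest { int carId; };       // a "message" to the gas-stand process

    std::queue<FillRequest> mailbox;
    std::mutex mailboxMutex;                 // guards only the mailbox, not the stand
    std::condition_variable mailboxCv;

    void gasStandProcess() {                 // the one thread that owns the stand's state
        for (;;) {
            std::unique_lock<std::mutex> lk(mailboxMutex);
            mailboxCv.wait(lk, [] { return !mailbox.empty(); });
            FillRequest r = mailbox.front();
            mailbox.pop();
            lk.unlock();
            std::cout << "filling car " << r.carId << '\n'; // nobody else touches the stand
        }
    }

    void sendFill(int carId) {               // any thread can send a message
        { std::lock_guard<std::mutex> lk(mailboxMutex); mailbox.push({carId}); }
        mailboxCv.notify_one();
    }

    int main() {
        std::thread stand(gasStandProcess);
        sendFill(1);
        sendFill(2);
        stand.join();                        // this sketch runs forever
    }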
The simplest answer is that it's just an analogy. That particular paragraph is really about why concurrency matters, and why it's not as unintuitive as one might at first think, coming from a procedural programming world (see? I said 'world', but I really meant something like 'background' or 'context'; it's easy to mix metaphors).
Anyway, I wouldn't read too far into that statement, I don't think it's meant to imply (nor does it explicitly say) that the world itself is lockless, just that the world is concurrent. That's where the analogy starts to veer left; just like you and I do not share state (which is mentioned), we also are not immutable. You can change your opinion, change your shirt, etc. without forking and creating a new entity with a new shirt. As mentioned elsewhere in the article, Erlang gets around some problems inherent in maintaining concurrent state by making everything immutable. We solve things by politely waiting for the guy in front of us at the gas station.
Gases etc. are fluids only if you don't examine them too closely. You need a large sample if you want to describe things with continuous functions. If you look too closely the fluid approximation breaks down. Going the other way, if you zoom out far enough, you can treat (e.g.) grain as a fluid.
The fact that these things are in fact made up of indivisible units with quantised, deterministic behaviour does not stop fluid dynamics equations from describing them accurately on a macro level.
Are they fluids, or not? The answer is "Yes, they are fluids. Or not, depending."
There are conditions under which a model applies, and other conditions under which it does not. Failure to apprehend this leads to belief in silver bullets, and dashed hopes.

Testing approach for multi-threaded software

I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out?
What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but haven't had any worthwhile results.
The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a time-out that fires a warning if the time-out is reached. I avoid locking where possible, using resource copies instead.
What other tests would you carry out?
Edit: I have cross-posted this question on a number of different testing and programming forums, as I'm keen to see how the different mindsets and schools of thought would approach this issue. So apologies if you see it cross-posted elsewhere. I'll provide a summary of links to responses after a week or so.
Some suggestions:
Utilize the law of large numbers and perform the operation under test not only once, but many times.
Stress-test your code by exaggerating the scenarios. E.g. to test your mutex-holding class, use scenarios where the mutex-protected code:
is very short and fast (a single instruction)
is time-consuming (Sleep with a large value)
contains explicit context switches (Sleep (0))
Run your test on various different architectures. (Even if your software is Windows-only, test it on single- and multicore processors with and without hyperthreading, and a wide range of clock speeds)
Try to design your code such that most of it is not exposed to multithreading issues. E.g. instead of accessing shared data (which requires locking or very carefully designed lock-avoidance techniques), let your worker threads operate on copies of the data, and communicate with them using queues. Then you only have to test your queue class for thread-safety (a minimal sketch of such a queue follows this list)
Run your tests when the system is idle as well as when it is under load from other tasks (e.g. our build server frequently runs multiple builds in parallel. This alone revealed many multithreading bugs that happened when the system was under load.)
Avoid asserting on timeouts. If such an assert fails, you don't know whether the code is broken or whether the timeout was too short. Instead, use a very generous timeout (just to ensure that the test eventually fails). If you want to test that an operation doesn't take longer than a certain time, measure the duration, but don't use a timeout for this.
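A minimal sketch of the kind of thread-safe queue mentioned above (assuming C++11; a production version would also need a way to signal shutdown):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename T>
    class ThreadSafeQueue {
    public:
        void push(T value) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(value)); }
            cv_.notify_one();
        }
        T pop() {                            // blocks until an item is available
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            T value = std::move(q_.front());
            q_.pop();
            return value;
        }
    private:
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
    };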
Whilst I agree with rstevens' answer in that there's currently no way to unit test threading issues with 100% certainty, there are some things that I've found useful.
Firstly, whatever tests you have, make sure you run them on lots of boxes with different specs. I have several build machines, all different: multi-core, single-core, fast, slow, etc. The good thing about how diverse they are is that different ones will throw up different threading issues. I've regularly been surprised to add a new build machine to my farm and suddenly have a new threading bug exposed; and I'm talking about a new bug being exposed in code that has run tens of thousands of times on the other build machines and which shows up 1 in 10 on the new one...
Secondly, most of the unit testing that you do on your code needn't involve threading at all. The threading is, generally, orthogonal. So step one is to tease the code apart so that you can test the actual code that does the work without worrying too much about the threaded nature. This usually means creating an interface that the threading code uses to drive the real code. You can then test the real code in isolation.
Thirdly, you can test where the threaded code interacts with the main body of code. This means writing a mock for the interface that you developed to separate the two blocks of code. By now the threading code is likely much simpler, and you can then often place synchronisation objects in the mock that you've made so that you can control the code under test. So, you'd spin up your thread and wait for it to set an event by calling into your mock, and then have it block on another event which your test code controls. The test code can then step the threaded code from one point in your interface to the next.
Finally (if you've decoupled things enough that you can do the earlier stuff then this is easy) you can then run larger pieces of the multi-threaded parts of the app under test and make sure you get the results that you expect; you can play with the priority of the threads and maybe even add a couple of test threads that simply eat CPU to stir things up a bit.
Now you run all of these tests many many times on different hardware...
I've also found that running the tests (or the app) under something like DevPartner BoundsChecker can help a lot as it messes with the thread scheduling such that it sometimes shakes out hard to find bugs. I also wrote a deadlock detection tool which checks for lock inversions during program execution but I only use that rarely.
You can see an example of how I test multi-threaded C++ code here: http://www.lenholgate.com/blog/2004/05/practical-testing.html
Not really an answer:
Testing multithreaded bugs is very difficult. Most bugs only show up if two (or more) threads go to specific places in code in a specific order.
Whether and when this condition is met may depend on the timing of the running process. This timing may change due to one of the following preconditions:
Type of processor
Processor speed
Number of processors/cores
Optimization level
Running inside or outside the debugger
Operating system
There are surely more preconditions that I have forgotten.
Because MT bugs depend so highly on the exact timing of the running code, Heisenberg's "uncertainty principle" comes in here: if you want to test for MT bugs, your "measurements" change the timing, which may prevent the bug from occurring...
The timing thing is what makes MT bugs so highly non-deterministic.
In other words: You may have a software that runs for months and then crashes some day and after that may run for years. If you don't have some debug logs/core dumps etc. you may never know why it crashes.
So my conclusion is: there is no really good way to unit-test for thread-safety. You always have to keep your eyes open when programming.
To make this clear I will give you a (simplified) example from real life (I encountered this when changing employers and looking at the existing code there):
Imagine you have a class. You want instances of that class to be automatically deleted when no one uses them anymore. So you build a reference counter into that class:
(I know it is bad style to delete an instance of a class in one of its methods. This is a simplification of the real code, which uses a Ref class to handle counted references.)
class A {
private:
    int refcount;
public:
    A() : refcount(0) {
    }
    void Ref() {
        refcount++;
    }
    void Release() {
        refcount--;
        if (refcount == 0) {
            delete this;
        }
    }
};
This seems pretty simple and nothing to worry about. But it is not thread-safe!
That's because "refcount++" and "refcount--" are not atomic operations; each of them is three operations:
read refcount from memory to register
increment/decrement register
write refcount from register to memory
Each of those operations can be interrupted, and another thread may manipulate the same refcount at the same time. So if, for example, two threads want to increment refcount, the following COULD happen:
Thread A: read refcount from memory to register (refcount: 8)
Thread A: increment register
CONTEXT CHANGE -
Thread B: read refcount from memory to register (refcount: 8)
Thread B: increment register
Thread B: write refcount from register to memory (refcount: 9)
CONTEXT CHANGE -
Thread A: write refcount from register to memory (refcount: 9)
So the result is: refcount = 9 but it should have been 10!
This can only be solved by using atomic operations (e.g. InterlockedIncrement() and InterlockedDecrement() on Windows).
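In portable C++11 the fix is std::atomic; the same class rewritten (a sketch, keeping the original's simplified delete-this style):

    #include <atomic>

    class A {
    private:
        std::atomic<int> refcount;
    public:
        A() : refcount(0) {
        }
        void Ref() {
            refcount.fetch_add(1, std::memory_order_relaxed);   // atomic increment
        }
        void Release() {
            // fetch_sub returns the value *before* the decrement
            if (refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                delete this;
            }
        }
    };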
This bug is simply untestable! The reason is that it is highly unlikely that two threads will try to modify the refcount of the same instance at the same time, with a context switch in between, in exactly that code.
But it can happen! (The probability increases if you have a multi-processor or multi-core system because there is no context switch needed to make it happen).
It will happen in some days, weeks or months!
Looks like you are using Microsoft tools. There's a group at Microsoft Research that has been working on a tool specifically designed to shake out concurrency bugs. Check out CHESS. Other research projects, in their early stages, are Cuzz and Featherlite.
VS2010 includes a very good looking concurrency profiler, video is available here.
As Len Holgate mentions, I would suggest refactoring (if needed) and creating interfaces for the parts of the code where different threads interact with objects carrying a state. These parts of the code can then be tested separate from the code containing the actual functionality. To verify such a unit test, I would consider using a code coverage tool (I use gcov and lcov for this) to verify that everything in the thread safe interface is covered.
I think this is a pretty convenient way of verifying that new code is covered in the tests.
The next step is then to follow the advice of the other answers regarding how to run the tests.
Firstly, many thanks for the responses. For the responses posted across the different forums, see:
http://www.sqaforums.com/showflat.php?Cat=0&Number=617621&an=0&page=0#Post617621
Testing approach for multi-threaded software
http://www.softwaretestingclub.com/forum/topics/testing-approach-for?xg_source=activity
and the following mailing list; software-testing#yahoogroups.com
The testing took significantly longer than expected, hence this late reply, leading me to the conclusion that adding multi-threading to existing apps is liable to be very expensive in terms of testing, even if the coding is quite straightforward. This could prove interesting for the SQA community, as there is increasingly more multi-threaded development going on out there.
As per Joe Strazzere's advice, I found the most effective way of hitting bugs was via automation with varied input. I ended up doing this on three PCs, which ran a bank of tests repeatedly with varied input over about six weeks. Initially, I was seeing crashes once or twice per PC per day. As I tracked these down, it ended up at one or two per week between the three PCs, and we haven't had any further problems for the last two weeks. For the last two weeks we have also had a version with users beta testing, and we are using the software in-house.
In addition to varying the input under automation, I also got good results from the following;
Adding a test option that allowed mutex time-outs to be read from a configuration file, which in turn could be controlled by my automation.
Extending mutex time-outs beyond the typical time expected to execute a section of thread code, and firing a debug exception on time-out.
Running the automation in conjunction with a debugger (VS2008) such that when a problem occurred there was a better chance of tracking it down.
Running without a debugger to ensure that the debugger was not hiding other timing related bugs.
Running the automation against normal release, debug, and fully optimised build. FWIW, the optimised build threw up errors not reproducible in the other builds.
The type of bugs uncovered tended to be serious in nature, e.g. dereferencing invalid pointers, and even under the debugger they took quite a bit of tracking down. As has been discussed elsewhere, the SuspendThread and ResumeThread functions ended up being major culprits, and all uses of these functions were replaced by mutexes. Similarly, all critical sections were removed due to their lack of time-outs. Closing documents and exiting the program were also a bug source, where in one instance a document was destroyed with a worker thread still active. To overcome this, a single mutex was added per thread to control the life of the thread, and acquired by the document destructor to ensure the thread had terminated as expected.
Once again, many thanks for the all the detailed and varied responses. Next time I take on this type of activity, I'll be better prepared.

Advice for converting a large monolithic singlethreaded application to a multithreaded architecture?

My company's main product is a large monolithic C++ application, used for scientific data processing and visualisation. Its codebase goes back maybe 12 or 13 years, and while we have put work into upgrading and maintaining it (use of STL and Boost - when I joined most containers were custom, for example - fully upgraded to Unicode and the 2010 VCL, etc) there's one remaining, very significant problem: it's fully singlethreaded. Given it's a data processing and visualisation program, this is becoming more and more of a handicap.
I'm both a developer and the project manager for the next release where we want to tackle this, and this is going to be a difficult job in both areas. I'm seeking concrete, practical, and architectural advice on how to tackle the problem.
The program's data flow might go something like this:
a window needs to draw data
In the paint method, it will call a GetData method, often hundreds of times for hundreds of bits of data in one paint operation
This will go and calculate or read from file or whatever else is required (often quite a complex data flow - think of this as data flowing through a complex graph, each node of which performs operations)
I.e., the paint message handler will block while processing is done, and if the data hasn't already been calculated and cached, this can take a long time. Sometimes this is minutes. Similar paths occur for other parts of the program that perform lengthy processing operations - the program is unresponsive for the entire time, sometimes hours.
I'm seeking advice on how to approach changing this. Practical ideas. Perhaps things like:
design patterns for asynchronously requesting data?
storing large collections of objects such that threads can read and write safely?
handling invalidation of data sets while something is trying to read it?
are there patterns and techniques for this sort of problem?
what should I be asking that I haven't thought of?
I haven't done any multithreaded programming since my Uni days a few years ago, and I think the rest of my team is in a similar position. What I knew was academic, not practical, and is nowhere near enough to have confidence approaching this.
The ultimate objective is to have a fully responsive program, where all calculations and data generation is done in other threads and the UI is always responsive. We might not get there in a single development cycle :)
Edit: I thought I should add a couple more details about the app:
It's a 32-bit desktop application for Windows. Each copy is licensed. We plan to keep it a desktop, locally-running app
We use Embarcadero (formerly Borland) C++ Builder 2010 for development. This affects the parallel libraries we can use, since most seem (?) to be written for GCC or MSVC only. Luckily they're actively developing it and its C++ standards support is much better than it used to be. The compiler supports these Boost components.
Its architecture is not as clean as it should be and components are often too tightly coupled. This is another problem :)
Edit #2: Thanks for the replies so far!
I'm surprised so many people have recommended a multi-process architecture (it's the top-voted answer at the moment), not multithreading. My impression is that's a very Unix-ish program structure, and I don't know anything about how it's designed or works. Are there good resources available about it, on Windows? Is it really that common on Windows?
In terms of concrete approaches to some of the multithreading suggestions: are there design patterns for asynchronous requesting and consuming of data, or thread-aware or asynchronous MVP systems, or guidance on how to design a task-oriented system, or articles, books, and post-release deconstructions illustrating things that work and things that don't? We can develop all this architecture ourselves, of course, but it's good to work from what others have done before and know what mistakes and pitfalls to avoid.
One aspect that isn't touched on in any of the answers is project-managing this. My impression is that estimating how long this will take, and keeping good control of the project when doing something as uncertain as this, may be hard. That's one reason I'm after recipes or practical coding advice, I guess: to guide and restrict the coding direction as much as possible.
I haven't yet marked an answer for this question - this is not because of the quality of the answers, which is great (thank you!), but simply because, given the scope of this, I'm hoping for more answers or discussion. Thank you to those who have already replied!
You have a big challenge ahead of you. I had a similar challenge ahead of me -- 15 year old monolithic single threaded code base, not taking advantage of multicore, etc. We expended a great deal of effort in trying to find a design and solution that was workable and would work.
Bad news first. It will be somewhere between impractical and impossible to make your single-threaded app multithreaded. A single-threaded app relies on its single-threadedness in ways both subtle and gross. One example is if the computation portion requires input from the GUI portion. The GUI must run in the main thread. If you try to get this data directly from the computation engine, you will likely run into deadlocks and race conditions that will require major redesigns to fix. Many of these reliances will not crop up during the design phase, or even during the development phase, but only after a release build is put in a harsh environment.
More bad news. Programming multithreaded applications is exceptionally hard. It might seem fairly straightforward to just lock stuff and do what you have to do, but it is not. First of all, if you lock everything in sight you end up serializing your application, negating every benefit of multithreading in the first place while still adding in all the complexity. Even if you get beyond this, writing a defect-free MP application is hard enough, but writing a highly-performant MP application is that much more difficult. You could learn on the job in a kind of baptism by fire. But if you are doing this with production code, especially legacy production code, you put your business at risk.
Now the good news. You do have options that don't involve refactoring your whole app and will give you most of what you seek. One option in particular is easy to implement (in relative terms), and much less prone to defects than making your app fully MP.
You could instantiate multiple copies of your application. Make one of them visible, and all the others invisible. Use the visible application as the presentation layer, but don't do the computational work there. Instead, send messages (perhaps via sockets) to the invisible copies of your application which do the work and send the results back to the presentation layer.
This might seem like a hack. And maybe it is. But it will get you what you need without putting the stability and performance of your system at such great risk. Plus there are hidden benefits. One is that the invisible engine copies of your app will have access to their own virtual memory space, making it easier to leverage all the resources of the system. It also scales nicely. If you are running on a 2-core box, you could spin off 2 copies of your engine. 32 cores? 32 copies. You get the idea.
So, there's a hint in your description of the algorithm as to how to proceed:
often quite a complex data flow - think of this as data flowing through a complex graph, each node of which performs operations
I'd look into making that data-flow graph be literally the structure that does the work. The links in the graph can be thread-safe queues, the algorithms at each node can stay pretty much unchanged, except wrapped in a thread that picks up work items from a queue and deposits results on one. You could go a step further and use sockets and processes rather than queues and threads; this will let you spread across multiple machines if there is a performance benefit in doing this.
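A sketch of one such node (assuming a blocking thread-safe queue type with push() and pop(), like the ThreadSafeQueue sketched earlier on this page; the names are illustrative):

    #include <thread>

    // In and Out are the link types; Fn is the node's unchanged algorithm.
    template <typename In, typename Out, typename Fn>
    std::thread makeNode(ThreadSafeQueue<In>& in, ThreadSafeQueue<Out>& out, Fn algorithm) {
        return std::thread([&in, &out, algorithm] {
            for (;;) {
                In item = in.pop();            // block until a work item arrives
                out.push(algorithm(item));     // run the node, pass the result downstream
            }
        });
    }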
Then your paint and other GUI methods need to be split in two: one half to queue the work, and the other half to draw or use the results as they come out of the pipeline.
This may not be practical if the app presumes that data is global. But if it is well contained in classes, as your description suggests it may be, then this could be the simplest way to get it parallelised.
Don't attempt to multithread everything in the old app. Multithreading for the sake of saying it's multithreaded is a waste of time and money. You're building an app that does something, not a monument to yourself.
Profile and study your execution flows to figure out where the app spends most of its time. A profiler is a great tool for this, but so is just stepping through the code in the debugger. You find the most interesting things in random walks.
Decouple the UI from long-running computations. Use cross-thread communications techniques to send updates to the UI from the computation thread.
As a side-effect of #3: think carefully about reentrancy: now that the compute is running in the background and the user can smurf around in the UI, what things in the UI should be disabled to prevent conflicts with the background operation? Allowing the user to delete a dataset while a computation is running on that data is probably a bad idea. (Mitigation: computation makes a local snapshot of the data) Does it make sense for the user to spool up multiple compute operations concurrently? If handled well, this could be a new feature and help rationalize the app rework effort. If ignored, it will be a disaster.
Identify specific operations that are candidates to be shoved into a background thread. The ideal candidate is usually a single function or class that does a lot of work (requires a "lot of time" to complete - more than a few seconds) with well defined inputs and outputs, that makes use of no global resources, and does not touch the UI directly. Evaluate and prioritize candidates based on how much work would be required to retrofit to this ideal.
In terms of project management, take things one step at a time. If you have multiple operations that are strong candidates to be moved to a background thread, and they have no interaction with each other, these might be implemented in parallel by multiple developers. However, it would be a good exercise to have everybody participate in one conversion first so that everyone understands what to look for and to establish your patterns for UI interaction, etc. Hold an extended whiteboard meeting to discuss the design and process of extracting the one function into a background thread. Go implement that (together or dole out pieces to individuals), then reconvene to put it all together and discuss discoveries and pain points.
Multithreading is a headache and requires more careful thought than straight up coding, but splitting the app into multiple processes creates far more headaches, IMO. Threading support and available primitives are good in Windows, perhaps better than some other platforms. Use them.
In general, don't do any more than what is needed. It's easy to severely over-implement and over-complicate an issue by throwing more patterns and standard libraries at it.
If nobody on your team has done multithreading work before, budget time to make an expert or funds to hire one as a consultant.
The main thing you have to do is to disconnect your UI from your data set. I'd suggest that the way to do that is to put a layer in between.
You will need to design a data structure of data cooked-for-display. This will most likely contain copies of some of your back-end data, but "cooked" to be easy to draw from. The key idea here is that this is quick and easy to paint from. You may even have this data structure contain calculated screen positions of bits of data so that it's quick to draw from.
Whenever you get a WM_PAINT message you should get the most recent complete version of this structure and draw from it. If you do this properly, you should be able to handle multiple WM_PAINT messages per second, because the paint code never refers to your back-end data at all. It's just spinning through the cooked structure. The idea here is that it's better to paint stale data quickly than to hang your UI.
Meanwhile...
You should have 2 complete copies of this cooked-for-display structure. One is what the WM_PAINT message looks at. (call it cfd_A) The other is what you hand to your CookDataForDisplay() function. (call it cfd_B). Your CookDataForDisplay() function runs in a separate thread, and works on building/updating cfd_B in the background. This function can take as long as it wants because it isn't interacting with the display in any way. Once the call returns cfd_B will be the most up-to-date version of the structure.
Now swap cfd_A and cfd_B and InvalidateRect on your application window.
A simplistic way to do this is to have your cooked-for-display structure be a bitmap, and that might be a good way to go to get the ball rolling, but I'm sure with a bit of thought you can do a much better job with a more sophisticated structure.
So, referring back to your example.
In the paint method, it will call a GetData method, often hundreds of times for hundreds of bits of data in one paint operation
This is now 2 threads, the paint method refers to cfd_A and runs on the UI thread. Meanwhile cfd_B is being built by a background thread using GetData calls.
The quick-and-dirty way to do this is
Take your current WM_PAINT code, stick it into a function called PaintIntoBitmap().
Create a bitmap and a Memory DC, this is cfd_B.
Create a thread and pass it cfd_B and have it call PaintIntoBitmap()
When this thread completes, swap cfd_B and cfd_A
Now your new WM_PAINT method just takes the pre-rendered bitmap in cfd_A and draws it to the screen. Your UI is now disconnected from your back-end GetData() function.
Now the real work begins, because the quick-and-dirty way doesn't handle window resizing very well. You can go from there and refine what your cfd_A and cfd_B structures are, a little at a time, until you reach a point where you are satisfied with the result.
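A sketch of the buffer swap itself (hedged: CookedData and the function names are placeholders for whatever cooked-for-display structure you end up designing):

    #include <mutex>
    #include <utility>

    struct CookedData;           // the cooked-for-display structure (placeholder)

    CookedData* cfd_A;           // what WM_PAINT draws from
    CookedData* cfd_B;           // what the background cook thread fills in
    std::mutex swapMutex;        // held only for the swap and the draw, never while cooking

    void onCookFinished() {      // called by the cook thread when cfd_B is complete
        std::lock_guard<std::mutex> lock(swapMutex);
        std::swap(cfd_A, cfd_B);
        // ...then invalidate the window so WM_PAINT runs again
    }

    void onPaint() {             // WM_PAINT handler
        std::lock_guard<std::mutex> lock(swapMutex); // brief: just while drawing
        // ...draw from *cfd_A...
    }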
You might just start out by breaking the UI and the work task into separate threads.
In your paint method, instead of calling getData() directly, put the request in a thread-safe queue. getData() runs in another thread that reads its requests from the queue. When the getData thread is done, it signals the main thread to redraw the visualisation area with its result data, using thread synchronization to pass the data.
While all this is going on you of course have a progress bar saying reticulating splines so the user knows something is going on.
This would keep your UI snappy without the significant pain of multithreading your work routines (which can be akin to a total rewrite)
It sounds like you have several different issues that parallelism can address, but in different ways.
Performance increases through utilizing multi-core CPU architectures
You're not taking advantage of the multi-core CPU architectures that are becoming so common. Parallelization allows you to divide work amongst multiple cores. You can write that code through standard C++ divide-and-conquer techniques using a "functional" style of programming where you pass work to separate threads at the divide stage. Google's MapReduce pattern is an example of that technique. Intel has the new Cilk library to give you C++ compiler support for such techniques.
Greater GUI responsiveness through asynchronous document-view
By separating the GUI operations from the document operations and placing them on different threads, you can increase the apparent responsiveness of your application. The standard Model-View-Controller or Model-View-Presenter design patterns are a good place to start. You need to parallelize them by having the model inform the view of updates rather than have the view provide the thread on which the document computes itself. The View would call a method on the model asking it to compute a particular view of the data, and the model would inform the presenter/controller as information is changed or new data becomes available, which would get passed to the view to update itself.
Opportunistic caching and pre-calculation
It sounds like your application has a fixed base of data, but many possible compute-intensive views on the data. If you did a statistical analysis on which views were most commonly requested in what situations, you could create background worker threads to pre-calculate the likely-requested values. It may be useful to put these operations on low-priority threads so that they don't interfere with the main application processing.
Obviously, you'll need to use mutexes (or critical sections), events, and probably semaphores to implement this. You may find some of the new synchronization objects in Vista useful, like the slim reader-writer lock, condition variables, or the new thread pool API. See Joe Duffy's book on concurrency for how to use these basic techniques.
There is something that no-one has talked about yet, but which is quite interesting.
It's called futures. A future is the promise of a result... let's see with an example.
future<int> leftVal = computeLeftValue(treeNode); // [1]
int rightVal = computeRightValue(treeNode);       // [2]
int result = leftVal.get() + rightVal;            // [3] get() blocks until leftVal is ready
It's pretty simple:
You spin off a thread that starts computing leftVal, taking it from a pool for example to avoid the initialization problem.
While leftVal is being computed, you compute rightVal.
You add the two; the get() may block if leftVal is not computed yet, waiting for the computation to end.
The great benefit here is that it's straightforward: each time you have one computation followed by another that is independent and you then join the result, you can use this pattern.
See Herb Sutter's article on futures; they will be available in the upcoming C++0x, but there are already libraries available today, even if the syntax is perhaps not as pretty as I would make you believe ;)
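In today's terms (the answer predates C++11) the pattern maps directly onto std::async; a sketch, with TreeNode and the compute functions assumed from the example above:

    #include <future>

    struct TreeNode;                       // placeholder from the example above

    int computeLeftValue(TreeNode*);       // assumed long-running
    int computeRightValue(TreeNode*);

    int compute(TreeNode* treeNode) {
        auto leftVal = std::async(std::launch::async, computeLeftValue, treeNode); // [1]
        int rightVal = computeRightValue(treeNode);                                // [2]
        return leftVal.get() + rightVal;                                           // [3] may block
    }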
If it was my development dollars I was spending, I would start with the big picture:
What do I hope to accomplish, how much will I spend to accomplish it, and how will I be further ahead? (If the answer is: my app will run 10% better on quad-core PCs, I could have achieved the same result by spending $1000 more per customer PC, and I could have spent $100,000 less this year on R&D, then I would skip the whole effort.)
Why am I doing multi-threaded instead of massively parallel distributed? Do I really think threads are better than processes? Multi-core systems also run distributed apps pretty well. And there are some advantages to message-passing, process-based systems that go beyond the benefits (and the costs!) of threading. Should I consider a process-based approach? Should I consider a background running entirely as a service, and a foreground GUI? Since my product is node-locked and licensed, I think services would suit me (the vendor) quite well. Also, separating stuff into two processes (background service and foreground) just might force the kind of rewrite and rearchitecting to occur that I might not otherwise be forced to do, if I were just to add threading into my mix.
This is just to get you thinking: What if you were to rewrite it as a service (background app) and a GUI, because that would actually be easier than adding threading, without also adding crashes, deadlocks, and race conditions?
Consider the idea that for your needs, perhaps threading is evil. Develop your religion, and stick with that. Unless you have a real good reason to go the other way. For many years, I religiously avoided threading. Because one thread per process is good enough for me.
I don't see any really solid reasons in your list why you need threading, except ones that could be more inexpensively solved by more expensive target computer hardware. If your app is "too slow" adding in threads might not even speed it up.
I use threads for background serial communications, but I would not consider threading merely for computationally heavy applications, unless my algorithms were so inherently parallel as to make the benefits clear, and the drawbacks minimal.
I wonder if the "design" problems that this C++ Builder app has are like my Delphi "RAD Spaghetti" application disease. I have found that a wholesale refactor/rewrite (over a year per major app that I have done this to) was the minimum amount of time for me to get a handle on the application's "accidental complexity". And that was without throwing in a "threads where possible" idea. I tend to write my apps with threads for serial communication and network socket handling only. And maybe the odd "worker-thread-queue".
If there is a place in your app you can add ONE thread, to test the waters, I would look for the main "work queue" and I would create an experimental version control branch, and I would learn about how my code works by breaking it in the experimental branch. Add that thread. And see where you spend your first day of debugging. Then I might just abandon that branch and go back to my trunk until the pain in my temporal lobe subsides.
Warren
Here's what I would do...
I would start by profiling your app and seeing:
1) what is slow and what the hot paths are
2) which calls are reentrant or deeply nested
you can use 1) to determine where the opportunity is for speedups and where to start looking for parallelization.
you can use 2) to find out where the shared state is likely to be and get a deeper sense of how much things are tangled up.
I would use a good system profiler and a good sampling profiler (like the Windows Performance Toolkit or the concurrency views of the profiler in Visual Studio 2010 Beta 2 - these are both 'free' right now).
Then I would figure out what the goal is and how to separate things gradually towards a cleaner design that is more responsive (moving work off the UI thread) and more performant (parallelizing computationally intensive portions). I would focus on the highest-priority and most noticeable items first.
If you don't have a good refactoring tool like Visual Assist, invest in one - it's worth it. If you're not familiar with Michael Feathers' or Kent Beck's refactoring books, consider borrowing them. I would ensure my refactorings are well covered by unit tests.
Since you can't move to VS (where I would recommend the products I work on, the Asynchronous Agents Library & Parallel Pattern Library), you can also use TBB or OpenMP.
In Boost, I would look carefully at boost::thread, the asio library, and the signals library.
I would ask for help / guidance / a listening ear when I got stuck.
-Rick
You can also look at this article from Herb Sutter: You have a mass of existing code and want to add concurrency. Where do you start?
Well, I think you're expecting a lot based on your comments here. You're not going to go from minutes to milliseconds by multithreading. The most you can hope for is the current amount of time divided by the number of cores. That being said, you're in a bit of luck with C++. I've written high-performance multiprocessor scientific apps, and what you want to look for is the most embarrassingly parallel loop you can find. In my scientific code, the heaviest piece is calculating somewhere between 100 and 1000 data points. However, all of the data points can be calculated independently of the others. You can then split the loop using OpenMP. This is the easiest and most efficient way to go. If your compiler doesn't support OpenMP, then you will have a very hard time porting existing code. With OpenMP (if you're lucky), you may only have to add a couple of #pragmas to get 4-8x the performance. Here's an example: StochFit.
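For instance, when each data point is independent, the change can be as small as one pragma (a sketch; computeDataPoint is a hypothetical pure per-point function, and you must compile with the OpenMP flag, e.g. -fopenmp or /openmp):

    #include <vector>

    double computeDataPoint(int i);        // assumed: depends only on i

    std::vector<double> computeAll(int numPoints) {
        std::vector<double> results(numPoints);
        #pragma omp parallel for
        for (int i = 0; i < numPoints; ++i)
            results[i] = computeDataPoint(i);  // each iteration writes only results[i]
        return results;
    }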
I hope this will help you understand and convert your monolithic single-threaded app to multi-threaded more easily. Sorry, it is for another programming language, but nevertheless the principles explained are the same all over.
http://www.freevbcode.com/ShowCode.Asp?ID=1287
Hope this helps.
The first thing you must do is to separate your GUI from your data, the second is to create a multithreaded class.
STEP 1 - Responsive GUI
We can assume that the image you are producing is contained in the canvas of a TImage. You can put a simple TTimer in your form and write code like this:
if (CurrentData.LastUpdate > CurrentUpdate)
{
    Image1->Canvas->Draw(0, 0, CurrentData.Bitmap);
    CurrentUpdate = Now();
}
OK, I know, it's a little bit dirty, but it's fast and simple. The point is that:
You need an object that is created in the main thread
The object is copied into the form you need, only when needed and in a safe way (OK, better protection for the Bitmap may be needed, but for simplicity...)
The object CurrentData is your actual project, single-threaded, that produces an image
Now you have a fast and responsive GUI. If your algorithm is slow, the refresh is slow, but your user will never think that your program is frozen.
STEP 2 - Multithread
I suggest you implement a class like the following:
SimpleThread.h
typedef void (__closure *TThreadFunction)(void* Data);

class TSimpleThread : public TThread
{
public:
    TSimpleThread( TThreadFunction _Action, void* _Data = NULL, bool RunNow = true );
    void AbortThread();
    __property Terminated;
protected:
    TThreadFunction ThreadFunction;
    void* Data;
private:
    virtual void __fastcall Execute() { ThreadFunction(Data); };
};
SimpleThread.c
TSimpleThread::TSimpleThread( TThreadFunction _Action, void* _Data, bool RunNow)
    : TThread(true), // initialize suspended
      ThreadFunction(_Action), Data(_Data)
{
    FreeOnTerminate = false;
    if (RunNow) Resume();
}

void TSimpleThread::AbortThread()
{
    Suspend(); // Can't kill a running thread
    Free();    // Kills thread
}
Let's explain. Now, in your single-threaded class, you can create an object like this:
TSimpleThread *ST;
ST = new TSimpleThread( RefreshFunction, NULL, true );
// (With RunNow = true the constructor already calls Resume(), so no extra Resume() is needed.)
Let's explain better: now, in your own monolithic class, you have created a thread. More: you have brought a function (i.e. RefreshFunction) into a separate thread. The scope of your function is the same, the class is the same, the execution is separate.
My number one suggestion, although it's very late (sorry for reviving old thread, it's interesting!) is seek out homogeneous transform loops where each iteration of the loop is mutating a completely independent piece of data from the other iterations.
Instead of thinking about how to turn this old codebase into an asynchronous one running all kinds of operations in parallel (which could invite all kinds of trouble, from worse-than-single-threaded performance due to poor locking patterns to, exponentially worse, race conditions and deadlocks from retrofitting code you can't fully comprehend), stick to the sequential mindset for the overall application design for now, but identify or extract simple, homogeneous transform loops. Don't start from intrusive, broad, design-level multithreading and then try to drill into details. Work from non-intrusive multithreading of fine implementation details and specific hotspots first.
What I mean by homogeneous loops is basically one that transforms data in a very straightforward way, like:
for each pixel in image:
make it brighter
That is very simple to reason about and you can safely parallelize this loop without any problems whatsoever using OMP or TBB or whatever and without getting tangled up in thread synchronization. It only takes one glance at this code to fully comprehend its side effects.
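That pixel loop, parallelized (a sketch with OpenMP; TBB's parallel_for would look much the same):

    #include <algorithm>

    void brighten(unsigned char* pixels, int pixelCount, int delta) {
        #pragma omp parallel for
        for (int i = 0; i < pixelCount; ++i)
            pixels[i] = (unsigned char)std::min(255, pixels[i] + delta); // touches only pixels[i]
    }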
Try to find as many hotspots as you can which fit this type of simple homogeneous transform loop and if you have complex loops which update many different types of data with complex control flows that trigger complex side effects, then seek to refactor towards these homogeneous loops. Often a complex loop which causes 3 disparate side effects to 3 different types of data can be turned into 3 simple homogeneous loops which each trigger just one kind of side effect to one type of data with a simpler control flow. Doing multiple loops instead of one might seem a tad wasteful, but the loops become simpler, the homogeneity will often lead to more cache-friendly sequential memory access patterns vs. sporadic random-access patterns, and you then tend to find much more opportunities to safely parallelize (as well as vectorize) the code in a straightforward way.
First you have to thoroughly understand the side effects of any code you attempt to parallelize (and I mean thoroughly!!!), so seeking out these homogeneous loops gives you isolated areas of the codebase you can easily reason about in terms of the side effects to the point where you can confidently and safely parallelize those hotspots. It'll also improve the maintainability of the code by making it very easy to reason about the state changes going on in that particular piece of code. Save the dream of the uber multithreaded application running everything in parallel for later. For now, focus on identifying/extracting performance-critical, homogeneous loops with simple control flows and simple side effects. Those are your priority targets for parallelization with simple parallelized loops.
Now admittedly I somewhat dodged your questions, but most of them don't apply if you do what I suggest, at least until you've worked your way out to the point where you're thinking more about multithreading designs as opposed to simply parallelizing implementation details. And you might not even need to go that far to have a very competitive product in terms of performance. If you have beefy work to do in a single loop, you can devote the hardware resources to making that loop go faster instead of making many operations run simultaneously. If you have to resort to more async methods, e.g. if your hotspots are more I/O-bound, seek an async/wait approach where you fire off an async task, do some things in the meantime, and then wait on the async task(s) to complete. Even if that's not absolutely necessary, the idea is to section off isolated areas of your codebase where you can, with 100% confidence (or at least 99.9999999%), say that the multithreaded code is correct.
You don't ever want to gamble with race conditions. There's nothing more demoralizing than finding some obscure race condition that only occurs once in a full moon on some random user's machine, while your entire QA team is unable to reproduce it, only to run into it yourself 3 months later, except during that one time you ran a release build without debugging info available; you then toss and turn in your sleep knowing your codebase can flake out at any given moment, but in ways that no one will ever be able to consistently reproduce. So take it easy with multithreading legacy codebases, at least for now, and stick to multithreading isolated but critical sections of the codebase where the side effects are dead simple to reason about. And test the crap out of it -- ideally apply a TDD approach where you write a test for the code you're going to multithread to ensure it gives the correct output after you finish... though race conditions are the types of things that easily fly under the radar of unit and integration testing, so again you absolutely need to be able to comprehend the entirety of the side effects that go on in a given piece of code before you attempt to multithread it. The best way to do that is to make the side effects as easy to comprehend as possible, with the simplest control flows causing just one type of side effect for an entire loop.
It is hard to give you proper guidelines. But...
The easiest way out, in my view, is to convert your application to an ActiveX EXE: since COM has support for threading and the like built right into it, your program will automatically become a multithreaded application. Of course you will have to make quite a few changes to your code, but this is the shortest and safest way to go.
I am not sure, but the RichClient Toolset library may do the trick for you. On the site the author writes:
It also offers registration-free loading/instancing capabilities for ActiveX DLLs and a new, easy-to-use threading approach, which works with named pipes under the hood and therefore also works cross-process.
Please check it out. Who knows, it may be the right solution for your requirements.
As for project management, I think you can continue using what is provided in your IDE of choice by integrating it with SVN through plugins.
I forgot to mention that we have completed an application for the share market that automatically trades (buys and sells based on lows and highs) the scrips in a user's portfolio, based on an algorithm that we have developed.
While developing this software we were facing the same kind of problem you have illustrated here. To solve it, we converted our application into an ActiveX EXE and converted all the parts that needed to execute in parallel into ActiveX DLLs. We did not use any third-party libs for this!
HTH

What are the "things to know" when diving into multi-threaded programming in C++

I'm currently working on a wireless networking application in C++ and it's coming to a point where I'm going to want to multi-thread pieces of software under one process, rather than have them all in separate processes. Theoretically, I understand multi-threading, but I've yet to dive in practically.
What should every programmer know when writing multi-threaded code in C++?
I would focus on designing the thing to be as partitioned as possible, so that you have the minimal amount of shared state across threads. If you make sure you don't have statics and other resources shared among threads (other than those you would share if you had designed this with processes instead of threads), you will be fine.
Therefore, while yes, you have to keep in mind concepts like locks, semaphores, etc., the best way to tackle this is to try to avoid them.
I am no expert at all in this subject. Just some rules of thumb:
Design for simplicity; bugs really are hard to find in concurrent code, even in the simplest examples.
C++ offers you a very elegant paradigm to manage resources (mutexes, semaphores, ...): RAII. I have found it much easier to work with boost::thread than with POSIX threads (a minimal sketch of RAII locking follows this list).
Build your code to be thread-safe from the start; if you don't, your program could behave strangely.
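Here is the minimal RAII-locking sketch promised above, using the standard library's std::lock_guard; boost::mutex::scoped_lock works the same way:

    #include <mutex>

    std::mutex m;
    int shared_counter = 0;

    void increment() {
        // The lock is acquired here and released automatically when
        // 'guard' goes out of scope -- even if an exception is thrown.
        std::lock_guard<std::mutex> guard(m);
        ++shared_counter;
    }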
I am exactly in this situation: I wrote a library with a global lock (many threads, but only one running at a time in the library) and am refactoring it to support concurrency.
I have read books on the subject but what I learned stands in a few points:
think parallel: imagine a crowd passing through the code. What happens when a method is called while it is already executing?
think shared: imagine many people trying to read and alter shared resources at the same time.
design: avoid the problems that points 1 and 2 can raise.
never think you can ignore edge cases, they will bite you hard.
Since you cannot proof-test a concurrent design (because thread execution interleaving is not reproducible), you have to ensure that your design is robust by carefully analyzing the code paths and documenting how the code is supposed to be used.
Once you understand how and where you need to serialize (bottleneck) your code, you can read the documentation on the tools used for this job:
Mutex (exclusive access to a resource)
Scoped locks (a good pattern to lock/unlock a mutex)
Semaphores (passing information between threads)
Read-write mutex (many readers, exclusive access on write; sketched below)
Signals (how to 'kill' a thread or send it an interrupt signal, and how to catch these)
Parallel design patterns: boss/worker, producer/consumer, etc. (see Schmidt)
Platform-specific tools: OpenMP, C blocks, etc.
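To make the read-write mutex entry concrete, a small sketch using C++17's std::shared_mutex (Boost's shared_mutex follows the same many-readers/one-writer pattern; the map and function names are illustrative):

    #include <map>
    #include <mutex>
    #include <shared_mutex>
    #include <string>

    std::shared_mutex rw;
    std::map<std::string, int> table;

    int read_value(const std::string& key) {
        std::shared_lock<std::shared_mutex> lock(rw);  // many readers at once
        auto it = table.find(key);
        return it == table.end() ? 0 : it->second;
    }

    void write_value(const std::string& key, int v) {
        std::unique_lock<std::shared_mutex> lock(rw);  // writers are exclusive
        table[key] = v;
    }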
Good luck ! Concurrency is fun, just take your time...
You should read about locks, mutexes, semaphores and condition variables.
One word of advice, if your app has any form of UI make sure you always change it from the UI thread. Most UI toolkits/frameworks will crash (or behave unexpectedly) if you access them from a background thread. Usually they provide some form of dispatching method to execute some function in the UI thread.
Never assume that external APIs are threadsafe. If it is not explicitly stated in their docs, do not call them concurrently from multiple threads. Instead, limit your use of them to a single thread or use a mutex to prevent concurrent calls (this is rather similar to the aforementioned GUI libraries).
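A minimal sketch of that funnel approach; legacy_api_call is a hypothetical stand-in for any function whose documentation is silent on thread safety:

    #include <mutex>

    // Hypothetical stand-in for a third-party function whose documentation
    // says nothing about thread safety -- assume the worst.
    void legacy_api_call(int arg) { (void)arg; /* imagine real work here */ }

    std::mutex api_mutex;

    // Funnel every call through one mutex so the library never sees two
    // concurrent invocations.
    void safe_api_call(int arg) {
        std::lock_guard<std::mutex> guard(api_mutex);
        legacy_api_call(arg);
    }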
The next point is language-related. Remember, C++ has (currently) no well-defined approach to threading. The compiler/optimizer does not know if code might be called concurrently. The volatile keyword is useful to prevent certain optimizations (e.g. caching of memory fields in CPU registers) in multi-threaded contexts, but it is not a synchronization mechanism.
I'd recommend Boost for synchronization primitives. Don't mess with platform APIs: they make your code difficult to port, because they have similar functionality on all major platforms but slightly different detailed behaviour. Boost solves these problems by exposing only the common functionality to the user.
Furthermore, if there's even the smallest chance that a data structure could be written to by two threads at the same time, use a synchronization primitive to protect it. Even if you think it will only happen once in a million years.
One thing I've found very useful is to make the application configurable with regard to the actual number of threads it uses for various tasks. For example, if you have multiple threads accessing a database, make the number of those threads be configurable via a command line parameter. This is extremely handy when debugging - you can exclude threading issues by setting the number to 1, or force them by setting it to a high number. It's also very handy when working out what the optimal number of threads is.
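A bare-bones sketch of that idea (the worker body is hypothetical):

    #include <cstdlib>
    #include <iostream>
    #include <thread>
    #include <vector>

    void worker(int id) {
        // ... hypothetical per-thread work (e.g. database queries) ...
        std::cout << "worker " << id << " running\n";
    }

    int main(int argc, char* argv[]) {
        // Thread count comes from the command line, defaulting to 1.
        int n = (argc > 1) ? std::atoi(argv[1]) : 1;

        std::vector<std::thread> pool;
        for (int i = 0; i < n; ++i) pool.emplace_back(worker, i);
        for (auto& t : pool) t.join();
    }

Running it with an argument of 1 effectively rules threading in or out as the culprit; a large argument stresses the scheduler.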
Make sure you test your code on a single-CPU system and on a multi-CPU system.
Based on the comments:
Single socket, single core
Single socket, two cores
Single socket, more than two cores
Two sockets, single core each
Two sockets, a combination of single-, dual- and multi-core CPUs
Multiple sockets, a combination of single-, dual- and multi-core CPUs
The limiting factor here is going to be cost. Ideally, concentrate on the types of system your code is going to run on.
In addition to the other things mentioned, you should learn about asynchronous message queues. They can elegantly solve the problems of data sharing and event handling. This approach works well when you have concurrent state machines that need to communicate with each other.
I'm not aware of any message passing frameworks tailored to work only at the thread level. I've only seen home-brewed solutions. Please comment if you know of any existing ones.
EDIT:
One could use the lock-free queues from Intel's TBB, either as-is, or as the basis for a more general message-passing queue.
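For purely thread-level use, a home-brewed queue needs surprisingly little; here is a minimal blocking sketch built on a mutex and a condition variable (names are illustrative):

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    // A minimal blocking message queue: producers push, consumers pop.
    // Each message is handed from one thread to another, so the payload
    // itself never needs separate locking.
    template <typename T>
    class MessageQueue {
    public:
        void push(T msg) {
            {
                std::lock_guard<std::mutex> lock(m_);
                q_.push(std::move(msg));
            }
            cv_.notify_one();
        }

        T pop() {  // blocks until a message is available
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            T msg = std::move(q_.front());
            q_.pop();
            return msg;
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<T> q_;
    };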
Since you are a beginner, start simple. First make it work correctly, then worry about optimizations. I've seen people try to optimize by increasing the concurrency of a particular section of code (often using dubious tricks), without ever looking to see if there was any contention in the first place.
Second, you want to be able to work at as high a level as you can. Don't work at the level of locks and mutexes if you can use an existing master-worker queue instead. Intel's TBB looks promising, being slightly higher-level than pure threads.
Third, multi-threaded programming is hard. Reduce the areas of your code where you have to think about it as much as possible. If you can write a class such that objects of that class are only ever operated on in a single thread, and there is no static data, it greatly reduces the things that you have to worry about in the class.
A few of the answers have touched on this, but I wanted to emphasize one point:
If you can, make sure that as much of your data as possible is only accessible from one thread at a time. Message queues are a very useful construct to use for this.
I haven't had to write much heavily-threaded code in C++, but in general, the producer-consumer pattern can be very helpful in utilizing multiple threads efficiently, while avoiding the race conditions associated with concurrent access.
If you can use someone else's already-debugged code to handle thread interaction, you're in good shape. As a beginner, there is a temptation to do things in an ad-hoc fashion - to use a "volatile" variable to synchronize between two pieces of code, for example. Avoid that as much as possible. It's very difficult to write code that's bulletproof in the presence of contending threads, so find some code you can trust, and minimize your use of the low-level primitives as much as you can.
My top tips for threading newbies:
If you possibly can, use a task-based parallelism library, Intel's TBB being the most obvious one. This insulates you from the grungy, tricky details and is more efficient than anything you'll cobble together yourself. The main downside is this model doesn't support all uses of multithreading; it's great for exploiting multicores for compute power, less good if you wanted threads for waiting on blocking I/O.
Know how to abort threads (or in the case of TBB, how to make tasks complete early when you decide you didn't want the results after all). Newbies seem to be drawn to thread kill functions like moths to a flame. Don't do it... Herb Sutter has a great short article on this.
Make sure to explicitly know what objects are shared and how they are shared.
As much as possible, make your functions purely functional: they take inputs, produce outputs, and have no side effects. This makes it much simpler to reason about your code. With a simple program it isn't such a big deal, but as the complexity rises it becomes essential. Side effects are what lead to thread-safety issues (see the sketch after this list).
Play devil's advocate with your code. Look at some code and think: how could I break this with some well-timed thread interleaving? At some point, that case will happen.
First learn thread-safety. Once you have that nailed down, move on to the hard part: concurrent performance. This is where moving away from global locks is essential. Figuring out ways to minimize and remove locks while still maintaining thread-safety is hard.
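As a small illustration of why pure functions pay off: because transform() below touches no shared state, two threads can split the data with no locks at all (the function and names are invented for the example):

    #include <cmath>
    #include <thread>
    #include <vector>

    // A pure function: output depends only on the input, no side effects.
    double transform(double x) { return std::sqrt(x) * 2.0 + 1.0; }

    // Each thread writes a disjoint range of the vector, so there is no
    // data race and no synchronization beyond the final join().
    void parallel_map(std::vector<double>& data) {
        auto run = [&](std::size_t begin, std::size_t end) {
            for (std::size_t i = begin; i < end; ++i)
                data[i] = transform(data[i]);
        };
        std::size_t mid = data.size() / 2;
        std::thread t(run, 0, mid);  // first half on a second thread
        run(mid, data.size());       // second half on this thread
        t.join();
    }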
Keep things dead simple as much as possible. It's better to have a simpler design (maintenance, less bugs) than a more complex solution that might have slightly better CPU utilization.
Avoid sharing state between threads as much as possible, this reduces the number of places that must use synchronization.
Avoid false sharing at all costs (google this term; a sketch follows this list).
Use a thread pool so you're not frequently creating/destroying threads (that's expensive and slow).
Consider using OpenMP; Intel and Microsoft (possibly others) support this extension to C++.
If you are doing number crunching, consider using Intel IPP, which internally uses optimized SIMD functions (this isn't really multi-threading, but is parallelism of a related sort).
Have tons of fun.
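Regarding the false-sharing point in the list above, a minimal sketch of the usual fix -- padding hot per-thread data onto separate cache lines:

    #include <atomic>

    // Two counters updated from two different threads. Without the
    // alignment they would likely sit in the same cache line, and every
    // write by one thread would invalidate the other thread's cached copy
    // ("false sharing"), killing throughput. alignas(64) gives each
    // counter its own line (64 bytes is a common cache-line size; C++17
    // also provides std::hardware_destructive_interference_size in <new>).
    struct Counters {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };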
Stay away from MFC and its multithreading + messaging library.
In fact if you see MFC and threads coming toward you - run for the hills (*)
(*) Unless of course if MFC is coming FROM the hills - in which case run AWAY from the hills.
The biggest "mindset" difference between single-threaded and multi-threaded programming in my opinion is in testing/verification. In single-threaded programming, people will often bash out some half-thought-out code, run it, and if it seems to work, they'll call it good, and often get away with it using it in a production environment.
In multithreaded programming, on the other hand, the program's behavior is non-deterministic, because the exact combination of timing of which threads are running for which periods of time (relative to each other) will be different every time the program runs. So just running a multithreaded program a few times (or even a few million times) and saying "it didn't crash for me, ship it!" is entirely inadequate.
Instead, when writing a multithreaded program, you should always be trying to prove (at least to your own satisfaction) not only that the program works, but that there is no way it could possibly not work. This is much harder, because instead of verifying a single code-path, you are effectively trying to verify a near-infinite number of possible code-paths.
The only realistic way to do that without having your brain explode is to keep things as bone-headedly simple as you can possibly make them. If you can avoid using multithreading totally, do that. If you must use multithreading, share as little data between threads as possible, use proper multithreading primitives (e.g. mutexes, thread-safe message queues, condition variables), and don't try to get away with half-measures (e.g. trying to synchronize access to a shared piece of data using only boolean flags will never work reliably, so don't try it).
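To make the boolean-flag warning concrete: in modern C++, the minimum acceptable replacement for a plain bool signal is an atomic (and anything bigger than a flag calls for a mutex or a message queue). A sketch:

    #include <atomic>
    #include <thread>

    // A plain 'bool ready;' here would be a data race: the compiler may
    // hoist the load out of the loop, and the store may never become
    // visible to the other thread. std::atomic gives defined behavior.
    std::atomic<bool> ready{false};

    void waiter() {
        while (!ready.load())
            std::this_thread::yield();  // spin politely until signaled
        // ... safe to proceed once the flag is observed as true ...
    }

    void setter() {
        ready.store(true);  // release the waiter
    }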
What you want to avoid is the multithreading hell scenario: the multithreaded program that runs happily for weeks on end on your test machine, but crashes randomly, about once a year, at the customer's site. That kind of race-condition bug can be nearly impossible to reproduce, and the only way to avoid it is to design your code extremely carefully to guarantee it can't happen.
Threads are strong juju. Use them sparingly.
You should have an understanding of basic systems programming, in particular:
Synchronous vs Asynchronous I/O (blocking vs. non-blocking)
Synchronization mechanisms, such as lock and mutex constructs
Thread management on your target platform
I found viewing the introductory lectures on OS and systems programming here by John Kubiatowicz at Berkeley useful.
Part of my graduate study area relates to parallelism.
I read this book and found it a good summary of approaches at the design level.
At the basic technical level, you have 2 basic options: threads or message passing. Threaded applications are the easiest to get off the ground, since pthreads, windows threads or boost threads are ready to go. However, it brings with it the complexity of shared memory.
Message-passing usability seems mostly limited at this point to the MPI API. It sets up an environment where you can run jobs and partition your program between processors. It's more for supercomputer/cluster environments where there's no intrinsic shared memory. You can achieve similar results with sockets and so forth.
At another level, you can use language-level pragmas: the popular one today is OpenMP. I've not used it, but it appears to build in threads via preprocessing or a link-time library.
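For a sense of what that looks like, a minimal OpenMP sketch -- the pragma is the only change to otherwise ordinary serial code:

    // Compile with e.g. -fopenmp (GCC/Clang) or /openmp (MSVC).
    #include <vector>

    void scale(std::vector<double>& v, double k) {
        // OpenMP splits the iterations across a team of threads created
        // behind the scenes -- no explicit thread management in the code.
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(v.size()); ++i)
            v[i] *= k;
    }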
The classic problem here is synchronization; all the problems in multiprogramming come from the non-deterministic nature of multiprograms, which cannot be avoided.
See the Lamport timing methods for a further discussion of synchronization and timing.
Multithreading is not something that only Ph.D.s and gurus can do, but you will have to be pretty decent to do it without making insane bugs.
I'm in the same boat as you: I am just starting multithreading for the first time as part of a project, and I've been looking around the net for resources. I found this blog to be very informative; Part 1 is pthreads, but I linked starting at the boost section.
I have written a multithreaded server application and a multithreaded shellsort. They were both written in C and use NT's threading functions "raw", that is, without any function library in between to muddle things. They were two quite different experiences, with different conclusions to be drawn. High performance and high reliability were the main priorities, although good coding practice took a higher priority if either of the first two was judged to be threatened in the long term.
The server application had both a server and a client part and used I/O completion ports (IOCPs) to manage requests and responses. When using IOCPs it is important never to use more threads than you have cores. Also, I found that requests to the server part needed a higher priority so as not to lose any requests unnecessarily; once they were "safe" I could use lower-priority threads to create the server responses. I judged that the client part could have an even lower priority. I asked the questions "what data can't I lose?" and "what data can I allow to fail, because I can always retry?" I also needed to be able to interface to the application's settings through a window, and it had to be responsive. The trick was that the UI had normal priority, the incoming requests one less, and so on. My reasoning behind this was that since I will use the UI so seldom, it can have the highest priority, so that when I do use it, it responds immediately. Threading here turned out to mean that all the separate parts of the program would/could normally run simultaneously, but when the system was under higher load, processing power would be shifted to the vital parts thanks to the prioritization scheme.
I've always liked shellsort, so please spare me from pointers about quicksort this or that or blablabla, or about how shellsort is ill-suited for multithreading. Having said that, the problem I had had to do with sorting a semi-large list of units in memory (for my tests I used a reverse-sorted list of one million units of forty bytes each). Using a single-threaded shellsort I could sort them at a rate of roughly one unit every two microseconds. My first attempt at multithreading was with two threads (though I soon realized that I wanted to be able to specify the number of threads), and it ran at about one unit every 3.5 microseconds, that is to say, SLOWER. Using a profiler helped a lot, and one bottleneck turned out to be the statistics logging (i.e. compares and swaps), where the threads would bump into each other. Dividing up the data between the threads in an efficient way turned out to be the biggest challenge, and there is definitely more I can do there, such as dividing the vector containing the indices to the units into cache-line-sized chunks, and perhaps also comparing all indices in two cache lines before moving to the next line (at least I think there is something I can do there -- the algorithms get pretty complicated). In the end, I achieved a rate of one unit every microsecond with three simultaneous threads (four threads performed about the same; I only had four cores available).
As to the original question, my advice to you would be:
If you have the time, learn the threading mechanism at the lowest possible level.
If performance is important learn the related mechanisms that the OS provides. Multi-threading by itself is seldom enough to achieve an application's full potential.
Use profiling to understand the quirks of multiple threads working on the same memory.
Sloppy architectural work will kill any app, regardless of how many cores and systems you have executing it and regardless of the brilliance of your programmers.
Sloppy programming will kill any app, regardless of the brilliance of the architectural foundation.
Understand that using libraries lets you reach the development goal faster, but at the price of less understanding and (usually) lower performance.
Before giving any advice on dos and don'ts about multithreaded programming in C++, I would like to ask: is there any particular reason you want to start writing the application in C++?
There are other programming paradigms where you can utilize multiple cores without getting into multithreaded programming. One such paradigm is functional programming: write each piece of your code as functions without any side effects, and then it is easy to run them in multiple threads without worrying about synchronization.
I am using Erlang for my development purposes. It has increased my productivity by at least 50%. The code may not run as fast as code written in C++, but I have noticed that for most back-end offline data processing, speed is not as important as distributing the work and utilizing the hardware as much as possible. Erlang provides a simple concurrency model where you can execute a single function in multiple threads without worrying about synchronization issues. Writing multithreaded code is easy, but debugging it is time-consuming. I have done multithreaded programming in C++, but I am currently happy with the Erlang concurrency model. It is worth looking into.
Make sure you know what volatile means and its uses (which may not be obvious at first).
Also, when designing multithreaded code, it helps to imagine that an infinite number of processors is executing every single line of code in your application at once (er, every single line of code that is reachable according to the logic in your code) -- and that for everything not marked volatile, the compiler performs a special optimization so that only the thread that changed it can read/set its true value, while all the other threads get garbage.