What are the recommended C++ parallelization libraries for large data processing?

Can someone recommend approaches to parallelization in C++ when the data to be acted upon is huge? I have been reading about OpenMP and Intel's TBB for parallelization in C++, but have not experimented with them yet. Which of these is better for parallel data processing? Are there other libraries or approaches?

"large" and "data processing" cover a lot of ground here, and it's hard to give a sensible answer without more information.
If the data processing is "embarrassingly parallel" -- if it involves doing lots and lots of calculations that are completely independent of each other -- then there's a million things that will work and it's just a matter of finding something that matches your code and background.
If it isn't embarrassingly parallel, but nearly so -- the computations take a big chunk of data but just distill it into a handful of numbers -- there are fewer options, but still lots.
If the calculation is more tightly coupled than this -- where you need the processors to work in tandem on big chunks of data -- then you're probably stuck with the standbys: the OpenMP features of your compiler if it will work on a single machine (there's TBB, too, but usually for number crunching OpenMP is faster and easier), or MPI if it needs several machines simultaneously. You mentioned C++; Boost has a very nice MPI layer.
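For the "nearly embarrassingly parallel" case, a minimal OpenMP sketch (assuming the data fits on one machine and each element can be processed independently) might look like the following; compile with your compiler's OpenMP flag, e.g. g++ -fopenmp. The array contents here are stand-ins for real data.

```cpp
// A hedged sketch: distill a big array into one number with an OpenMP reduction.
#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> data(10000000, 1.0);  // stand-in for the real data

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < (long)data.size(); ++i) {
        sum += data[i] * data[i];  // any independent per-element work
    }

    std::printf("result = %f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```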
But thinking about which library to use for parallelization is probably thinking about the wrong end of the problem first. In many cases, you don't necessarily need to deal with these layers directly. If the number crunching involves lots of linear algebra (for instance), then PLASMA (for multicore machines - http://icl.cs.utk.edu/plasma/ ) or PETSc, which has support for distributed-memory machines, e.g. multiple computers ( http://www.mcs.anl.gov/petsc/petsc-as/ ), are good choices which can completely hide the actual details of the parallel implementation from you. Other sorts of techniques have other libraries, too. It's probably best to think about what sort of analysis you need to do and look to see whether existing toolkits have the amount of parallelization you need. Only once you've determined the answer is no should you start to worry about how to roll your own.

Both OpenMP and Intel TBB are for use on a single machine: they help you write multithreaded applications.
If you have truly huge datasets, you may need to split load over several machines -- and then libraries like Open MPI for parallel programming with MPI come into play. Open MPI has a C++ interface, but you now also face a networking component and some administrative issues you do not have with a single computer.

MPI is also useful on a single local machine. It will run a job across multiple cores/CPUs; while this is probably overkill compared to threading, it does mean you can move the job to a cluster with no changes. Most MPI implementations also optimize a local job to use shared memory instead of TCP for data connections.
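As a rough illustration (not tied to any particular MPI implementation), the same small C++ program can sum across ranks on one multicore box or on a cluster; the build and run commands in the comment are typical but vary by installation.

```cpp
// Hedged sketch: an MPI "sum across ranks" that runs unchanged on one
// multicore machine or on a cluster.
// Typical build/run: mpic++ sum.cpp -o sum && mpirun -np 4 ./sum
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank computes a partial result on its own slice of the work.
    long local = 0;
    for (long i = rank; i < 1000000; i += size) local += i;

    // Combine partial results on rank 0; on a single machine most MPI
    // implementations do this through shared memory rather than TCP.
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) std::cout << "sum = " << total << "\n";

    MPI_Finalize();
    return 0;
}
```

On a laptop this is overkill compared to threads, but the identical source moves to a cluster unchanged, which is the point made above.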

Can an OCaml program use more than one processor core?

Isn't it important to be able to do that in order to attain maximal speed?
Edit:
Clojure, for example, has pmap, which uses more than one core.
Dr. Harrop wrote (Jan. 9, 2011):
The new features being added to the language, such as first-class modules in OCaml 3.12, are nowhere near as valuable as multicore capability would have been.
Yes it can; for this, you should use a multi-processing model, where your program spawns multiple processes to do computations independently and then merges the results.
The simplest way to do this is to use the Unix.fork system call to fork your program into two processes. This is described, for example, in the online book Unix system programming in OCaml. If the computation you want to split across cores has a simple structure (iteration, mapping over a pool of inputs), Parmap is a library that will let you benefit from parallelism rather easily, just by changing some function calls in your application (if it is well structured already). If you want to do more sophisticated things (direct access to shared memory structures, message boxes...), the Ocaml-net project supports a lot of convenient features through the Netmulticore library.
If you want to do distributed programming (programs that run on a cluster of several machines), the OcamlMPI library provides support for the well-known distributed message passing framework MPI. There is also the more experimental and high-level JoCaml extension, that uses an interesting, more researchy approach to concurrent communication.
Note that if you have no specific performance constraints, or if your application is inherently sequential, it makes no sense to bother trying to parallelize the computation (at the cost of the extra book-keeping overhead of synchronization); in the latter case this follows from Amdahl's law.
If your parallel code generates a large amount of data then there is no easy way to get it back efficiently in OCaml. The traditional workaround is to fork processes and marshal results back to the parent process but your parent process then deserializes all of the data on a single core, reallocating everything on its own heap. That is very inefficient on a multicore and means that OCaml cannot express efficient implementations of most parallel algorithms including pmap.

OpenMP & MPI explanation

A few minutes ago I stumbled upon some text which reminded me of something that has been wandering through my mind for a while, but I had nowhere to ask.
So, in the hope that this may be the place where people have hands-on experience with both, I was wondering if someone could explain what the difference is between OpenMP and MPI?
I've read the Wikipedia articles in whole, understood them in segments, but am still pondering; for a Fortran programmer who wishes one day to enter the world of parallelism (just learning the basics of OpenMP now), what is the more future-proof way to go?
I would be grateful for all your comments.
OpenMP is primarily for tightly coupled multiprocessing -- i.e., multiple processors on the same machine. It's mostly for things like spinning up a number of threads to execute a loop in parallel.
MPI is primarily for loosely coupled multiprocessing -- i.e., a cluster of computers talking to each other via a network. It can be used on a single machine as kind of a degenerate form of a network, but it does relatively little to take advantage of its being a single machine (e.g., having extremely high-bandwidth communication between the "nodes").
Edit (in response to comment): for a cluster of 24 machines, MPI becomes the obvious choice. As noted above (and similar to @Mark's comments) OpenMP is primarily for multiple processors that share memory. When you don't have shared memory, MPI becomes the clear choice.
At the same time, assuming you're going to be using multiprocessor machines (is there anything else anymore?) you might want to use OpenMP to spread the load in each machine across all its processors.
Keep in mind, however, that OpenMP is generally quite a lot quicker/easier to put into use than MPI. Depending on how much speedup you need, scaling up instead of out (i.e. fewer machines with more processors each) can make the software development enough quicker/cheaper that it can be worthwhile even though it rarely gives the lowest price per core.
Another view, not inconsistent with what @Jerry has already written, is that OpenMP is for shared-memory parallelisation and MPI is for distributed-memory parallelisation. Emulating shared memory on distributed systems is rarely convincing or successful, but it's a perfectly reasonable approach to use MPI on a shared-memory system.
Of course, all (?) multicore PCs and servers are shared-memory systems these days so the execution model for OpenMP is widely applicable. MPI tends to come into its own on clusters on which processors communicate with each other over a network (which is sometimes called an interconnect and is often of a higher-spec than office Ethernet).
In terms of applications I would estimate that a large proportion of parallel programs can be successfully implemented with either OpenMP or MPI and that your choice between the two is probably best driven by the availability of hardware. Most of us (parallel-ists) would regard OpenMP as easier to get into than MPI, and it is certainly (I assert) easier to incrementally parallelise an existing program with OpenMP than with MPI.
However, if you need to use more processors than you can get in one box (and how many processors that is is increasing steadily) then MPI is your better choice. You may also stumble across the idea of hybrid programming -- for example if you have a cluster of multicore PCs you might use MPI between PCs, and OpenMP within a PC. I've not seen any evidence that the additional complexity of programming is rewarded by improved performance, and I've seen some evidence that it is definitely not worth the effort.
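To make the hybrid idea concrete (the question is about Fortran, but sketched here in C++ to match the rest of this page), a hedged MPI+OpenMP skeleton might look like the following, built with something like mpic++ -fopenmp. It is illustrative, not tuned, and assumes the global size divides evenly across ranks.

```cpp
// Hedged hybrid sketch: MPI splits the domain across nodes, OpenMP splits
// each node's slice across its cores.
#include <mpi.h>
#include <omp.h>
#include <vector>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    // Each MPI rank owns one slice of the global domain (remainder ignored
    // here for brevity).
    const long global_n = 10000000;
    const long local_n  = global_n / nranks;
    std::vector<double> chunk(local_n, 1.0);

    // Within the rank, OpenMP threads share the slice.
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < local_n; ++i) local_sum += chunk[i];

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("sum = %f (%d ranks x up to %d threads)\n",
                    global_sum, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```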
And, as one of the comments has already stated, I think that Fortran is future-proof enough in the domain of parallel, high-performance, scientific and engineering applications. The latest (2008) edition of the standard incorporates co-arrays (i.e. arrays which themselves are distributed across a memory system with non-local and local access) right into the language. There are even one or two early implementations of this feature. I don't yet have any experience of them and expect that there will be teething issues for a few years.
EDIT to pick up on a number of points in OP's comments ...
No, I don't think that it's a bad idea to approach parallel computing via OpenMP. I think that OpenMP and MPI (or, more accurately, the models of parallel computing that they implement) are complementary. I certainly use both, and I suspect that most professional parallel programmers do too. I hadn't done much OpenMP since leaving university about 6 years ago until about 2 years ago when multicores really started popping up everywhere. Now I probably do about equal amounts of both.
In terms of your further (self-)education I think that the book Using OpenMP by Chapman et al is better than the one by Chandra, if only because it is much more up to date. I think that the Chandra book pre-dates OpenMP 2, and the Chapman book pre-dates OpenMP 3 which is worth learning.
On the MPI side the books by Gropp et al, Using MPI and Using MPI-2, are indispensable; this is perhaps because they are (as far as I have found) the only tutorial introductions to MPI rather than because of their excellence. I don't think that they are bad, mind you, but they don't have a lot of competition. I like Parallel Scientific Computing in C++ and MPI by Karniadakis and Kirby too; depending on your level of scientific computing knowledge though you may find much of the material too basic.
But what I think the field lacks entirely (hope someone can prove me wrong here ?) is a good textbook (or handful of textbooks) on the design of programs for parallel execution, something to help the experienced Fortran (in our case) programmer make the jump from serial to parallel program design. Lots of info on how to parallelise a loop or nest of loops, not so much on options for parallelising computations on structured positive semi-definite matrices (or whatever). For that level of information we have to dig quite hard into the research papers (ACM and IEEE digital libraries are well worth the modest annual costs -- if you are at an academic institution your library probably has subscriptions to these and a lot more, I'm lucky in that my employers pay for my professional society memberships and add-ons, but if they didn't I would pay myself).
As to your plans for a new lab with, say, 24 processors (CPUs ? or cores ?, doesn't really matter, just asking) then the route you take should depend on the depths of your pocket. If you can afford it I'd suggest:
-- Consider a shared-memory computer, certainly a year ago Sun, SGI and IBM could all supply a shared-memory system with that sort of number of cores, I'm not sure of the current state of the market but since you have until Feb to decide it's worth looking into. A shared-memory system gives you the shared-memory parallelism option, which a cluster doesn't, and message-passing on a shared-memory platform should run at light speed. (By the way, if you go this route, benchmark this aspect of the system, there have been some bad MPI implementations on shared-memory computers.) A good MPI implementation on a shared-memory computer (my last experience of this was on a 512 processor SGI Altix) doesn't send any messages, it just moves a few pointers around and is, consequently, blisteringly fast. Trouble with the Altix was that beyond 128 processors the memory bus tended to get overwhelmed by all the traffic; that was the time to switch to MPI on a cluster or an MPP box.
-- Again, if you can afford it, I'd recommend having a system integrator deliver you a working system, and avoid building a cluster (or whatever) yourself. If, like me, you are a programmer first and a reluctant system integrator way second, this is an easier approach and will deliver you a working system on which you can start programming far sooner.
If you can't afford the expensive options, then I'd go for as many rack-mounted servers with 4 or 8 cores per box as your budget allows (the choice is price dependent, and maybe even 16 cores per box is worth considering today) and, today, I'd be planning for at least 4GB RAM per core. Then you need the fastest interconnect you can afford; GB Ethernet is fine, but Infiniband (or the other one whose name I forget) is finer, though the jump in price is noticeable. And you'll need a PC to act as head node for your new cluster, running the job management system and other stuff. There's a ton of excellent material on the Internet on building and running clusters, often under the heading of Beowulf, which was the name of what is considered to have been the first 'home-brew' cluster.
Now, since you have until February to get your lab up and running, fire 2 of your colleagues and turn their PCs into a mini-Beowulf. Download and install a likely-looking MPI installation (OpenMPI is good but there are others to consider and your o/s might dictate another choice). Now you can start getting ready for when the lab is ready.
PS You don't have to fire 2 people if you can scavenge 2 PCs some other way. And the PCs can be old and inadequate for desktop use; they are just going to be a training platform for you and your colleagues (if you have any left). The more nearly identical they are the better.
As said above, OpenMP is certainly the easier way to program compared to MPI because of incremental parallelization. OpenMP has been used mostly for fine-grain parallelism (loop level), while MPI is used more for coarse-grain parallelism (domain decomposition). Both are good ways to obtain parallel performance.
We have OpenMP and MPI versions of our software (Fortran) and customers use both depending on their needs.
With the current trends in multi-core architecture, hybrid OpenMP-MPI is another viable approach.

how to use quad core CPU in application

To use all the cores of a quad-core processor, what do I need to change in my code? Is it about adding support for multithreading, or is that taken care of by the OS itself? I am on FreeBSD and the language I am using is C++. I want to give my application complete CPU cycles, at least 90%.
You need some form of parallelism. Multi-threading or multi-processing would be fine.
Usually, multiple threads are easier to handle (since they can access shared data) than multiple processes. However, usually, multiple threads are harder to handle (since they access shared data) than multiple processes.
And, yes, I wrote this deliberately.
If you have a SIMD scenario, Ninefingers' suggestion to look at OpenMP is also very good. (If you don't know what SIMD means, see Ninefingers' helpful comment below.)
For multi-threaded applications in C++, may I suggest Boost.Thread, which should help you access the full potential of your quad-core machine.
As for changing your code, you might want to consider making things as immutable as possible. State transitions between threads are much more difficult to debug. There are a plethora of things that could potentially happen in unexpected ways. See this SO thread.
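As a minimal illustration (not a prescription for how to structure your program), a Boost.Thread sketch that splits a sum across hardware_concurrency() threads, with each thread writing to its own result slot so no locking is needed, might look like this:

```cpp
// Hedged Boost.Thread sketch: one worker per core, no shared mutable state.
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <vector>
#include <iostream>

void sum_range(const std::vector<int>& data, std::size_t begin,
               std::size_t end, long& out) {
    long s = 0;
    for (std::size_t i = begin; i < end; ++i) s += data[i];
    out = s;  // each thread owns its own output slot
}

int main() {
    std::vector<int> data(1000000, 1);
    unsigned n = boost::thread::hardware_concurrency();
    if (n == 0) n = 4;  // fall back if the runtime can't tell us

    std::vector<long> results(n, 0);
    boost::thread_group threads;
    std::size_t chunk = data.size() / n;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == n) ? data.size() : begin + chunk;
        threads.create_thread(
            boost::bind(sum_range, boost::cref(data), begin, end,
                        boost::ref(results[t])));
    }
    threads.join_all();

    long total = 0;
    for (unsigned t = 0; t < n; ++t) total += results[t];
    std::cout << "total = " << total << std::endl;
    return 0;
}
```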
Another option not mentioned here, threading aside, is the use of OpenMP, available via the -fopenmp compiler flag and the libgomp library, both of which I have installed on my FreeBSD 8 system.
These give you #pragma directives to parallelise certain loops, while statements, etc., i.e. the bits you can parallelise. It takes care of threading and CPU affinity for you. Note it is a general solution and therefore might not be the optimum way to parallelise, but it will allow you to parallelise certain routines.
Take a look at this: https://computing.llnl.gov/tutorials/openMP/
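For a rough idea of what those directives look like (assuming the loop iterations are independent; compiled with gcc/g++'s -fopenmp flag so libgomp supplies the runtime):

```cpp
// Hedged example of an OpenMP #pragma parallelising a simple loop.
#include <omp.h>
#include <cstdio>

int main() {
    const int n = 16;
    int results[n];

    // The iterations must be independent for this to be safe.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        results[i] = i * i;  // placeholder for real per-item work
    }

    for (int i = 0; i < n; ++i) std::printf("%d ", results[i]);
    std::printf("\n");
    return 0;
}
```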
As for using threads/processes themselves, certain routines and ways of working lend themselves to it. Can you break tasks up in such a way? Does it make sense to fork() your process or create a thread? If so, do so, but if not, don't try to force your application to be multi-threaded just because. An example I usually give is the greatest common divisor algorithm - in the traditional implementation it relies on the result of the previous step at every stage, and is therefore difficult to make parallel.
Also note it is well known that for certain algorithms, parallelisation is actually slower for small values of whatever you are doing in parallel, because although the jobs complete more quickly, the associated time cost of forking and joining (be that threads or processes) actually pushes the time above that of a serial implementation.
I think your only option is to run several threads. If your application is single-threaded, then it will only run on one of the cores (at a time), but if you have more threads, they can run simultaneously.
You need to add support to your application for parallelism through the use of Threading.
Once you have support for parallelism, it's up to the OS to assign your threads to CPU cores.
The first thing I think you should look at is whether your application and its algorithms are suited to being executed in parallel (or possibly as a set of serial tasks that can be processed independently). If this is not the case, it will be difficult to multithread it or break it up into parallel processes, and you may need to look into modifying the way it works.
Once you have established that you will be able to benefit from parallel processing you have the option to either use several processes or threads. The choice depends a lot on the nature of your application and how independent the parallel processes can be. It is easier to coordinate and share data between threads since they are in the same process, but also quite a bit more challenging to develop and debug.
Boost.Thread is a good library if you decide to go down the multi-threaded route.
I want to give complete CPU cycles to my application at least 90%.
Why? Your chip's not hot enough?
Seriously, it takes world experts dozens if not hundreds of hours to parallelize and load-balance an application so that it uses 90% of all four cores. Your CPU is already paid for and it costs the same whether you use it or not. (Actually, it costs slightly less to run, electrically speaking, if you don't use it.) How much is your time worth? How many hours are you willing to invest in order to make more effective use of a resource that may have cost you $300 and is probably sitting idle most of the time anyway?
It's possible to get speedups through parallelism, but it's expensive in human time. You need a good reason to justify it. (Learning how is a good enough reason.)
All the good books I know on parallel programming are for languages other than C++, and for good reason. If you want interesting stuff on parallelism check out Implicit Parallel Programming in pH or Concurrent Programming in ML or the Fortress Project.

Threading Building Blocks versus MPI, which one fits my need better?

Now I have a serial solver in C++ for solving optimization problems, and I am supposed to parallelize my solver with different parameters to see whether this can help improve its performance. Now I am not sure whether I should use TBB or MPI. From a TBB book I read, I feel TBB is more suitable for looping or fine-grained code. Since I do not have much experience with TBB, I feel it is difficult to divide my code into small parts in order to realize the parallelization. In addition, from the literature, I find many authors have used MPI to run several solvers in parallel and make them cooperate. I guess maybe MPI fits my need better. Since I do not have much knowledge of either TBB or MPI, can anyone tell me whether my feeling is right? Will MPI fit me better? If so, what material is good for starting to learn MPI? I have no experience with MPI, and I use Windows and C++. Thanks a lot.
The basic thing you need to have in mind is to choose between shared-memory and distributed-memory.
Shared-memory is when you have more than one process (normally more than one thread within a process) that can access a common memory. This can be quite fine-grained and it is normally simpler to adapt a single-threaded program to have several threads. You will need to design the program in a way that the threads work most of the time in separate parts of the memory (exploit data parallelism) and that the shared part is protected against concurrent accesses using locks.
Distributed-memory means that you have different processes that might be executed on one or several distributed computers, but these processes share a common goal and exchange data through message passing (data communication). There is no common memory space, and all the data one process needs from another process will require communication.
It is a more general approach but, because of communication requirements, it requires coarse grains.
TBB is a library supporting thread-based shared-memory parallelism, while MPI is a library for distributed-memory parallelism (it has simple primitives for communication, and also scripts for launching several processes on different nodes).
The most important thing is for you to identify the parallelisms within your solver and then choose the best solution. Do you have data parallelism (different thread/processes could be working in parallel in different chunks of data without the need of communication or sharing parts of this data)? Task parallelism (different threads/processes could be performing a different transformation to your data or a different step in the data processing in a pipeline or graph fashion)?
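To make the shared-memory, data-parallel case concrete, a hedged TBB sketch (using tbb::parallel_for over a blocked_range; the details of your solver will obviously differ) could look like this:

```cpp
// Hedged TBB sketch of data parallelism: parallel_for hands each task a
// sub-range of the data and the scheduler balances the work across cores.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> data(1000000, 2.0);  // stand-in for the solver's data

    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, data.size()),
        [&data](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                data[i] = data[i] * data[i];  // independent per-element work
        });

    std::printf("data[0] = %f\n", data[0]);
    return 0;
}
```

By contrast, the coarse-grained "several cooperating solvers" pattern from the literature maps more naturally onto MPI ranks exchanging results, as in the MPI example earlier on this page.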

How are you taking advantage of Multicore?

As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few.
My questions are:
How does this affect your software roadmap?
I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc).
What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else?
What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores?
If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.
Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too.
Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?
My research work includes work on compilers and on spam filtering. I also do a lot of 'personal productivity' Unix stuff. Plus I write and use software to administer classes that I teach, which includes grading, testing student code, tracking grades, and myriad other trivia.
Multicore affects me not at all except as a research problem for compilers to support other applications. But those problems lie primarily in the run-time system, not the compiler.
At great trouble and expense, Dave Wortman showed around 1990 that you could parallelize a compiler to keep four processors busy. Nobody I know has ever repeated the experiment. Most compilers are fast enough to run single-threaded. And it's much easier to run your sequential compiler on several different source files in parallel than it is to make your compiler itself parallel. For spam filtering, learning is an inherently sequential process. And even an older machine can learn hundreds of messages a second, so even a large corpus can be learned in under a minute. Again, training is fast enough.
The only significant way I have of exploiting parallel machines is using parallel make. It is a great boon, and big builds are easy to parallelize. Make does almost all the work automatically. The only other thing I can remember is using parallelism to time long-running student code by farming it out to a bunch of lab machines, which I could do in good conscience because I was only clobbering a single core per machine, so using only 1/4 of CPU resources. Oh, and I wrote a Lua script that will use all 4 cores when ripping MP3 files with lame. That script was a lot of work to get right.
I will ignore tens, hundreds, and thousands of cores. The first time I was told "parallel machines are coming; you must get ready" was 1984. It was true then and is true today that parallel programming is a domain for highly skilled specialists. The only thing that has changed is that today manufacturers are forcing us to pay for parallel hardware whether we want it or not. But just because the hardware is paid for doesn't mean it's free to use. The programming models are awful, and making the thread/mutex model work, let alone perform well, is an expensive job even if the hardware is free. I expect most programmers to ignore parallelism and quietly get on about their business. When a skilled specialist comes along with a parallel make or a great computer game, I will quietly applaud and make use of their efforts. If I want performance for my own apps I will concentrate on reducing memory allocations and ignore parallelism.
Parallelism is really hard. Most domains are hard to parallelize. A widely reusable exception like parallel make is cause for much rejoicing.
Summary (which I heard from a keynote speaker who works for a leading CPU manufacturer): the industry backed into multicore because they couldn't keep making machines run faster and hotter and they didn't know what to do with the extra transistors. Now they're desperate to find a way to make multicore profitable because if they don't have profits, they can't build the next generation of fab lines. The gravy train is over, and we might actually have to start paying attention to software costs.
Many people who are serious about parallelism are ignoring these toy 4-core or even 32-core machines in favor of GPUs with 128 processors or more. My guess is that the real action is going to be there.
For web applications it's very, very easy: ignore it. Unless you've got some code that really begs to be done in parallel you can simply write old-style single-threaded code and be happy.
You usually have a lot more requests to handle at any given moment than you have cores. And since each one is handled in its own Thread (or even process, depending on your technology) this is already working in parallel.
The only place you need to be careful is when accessing some kind of global state that requires synchronization. Keep that to a minimum to avoid introducing artificial bottlenecks to an otherwise (almost) perfectly scalable world.
So for me multi-core basically boils down to these items:
My servers have fewer "CPUs" while each one sports more cores (not much of a difference to me)
The same number of CPUs can sustain a larger number of concurrent users
When there seems to be a performance bottleneck that's not the result of the CPU being 100% loaded, that's an indication that I'm doing some bad synchronization somewhere.
At the moment - doesn't affect it that much, to be honest. I'm more in 'preparation stage', learning about the technologies and language features that make this possible.
I don't have one particular domain, but I've encountered domains like math (where multi-core is essential), data sort/search (where divide & conquer on multi-core is helpful) and multi-computer requirements (e.g., a requirement that a back-up station's processing power is used for something).
This depends on what language I'm working in. Obviously in C#, my hands are tied with a not-yet-ready implementation of Parallel Extensions that does seem to boost performance, until you start comparing the same algorithms with OpenMP (perhaps not a fair comparison). So on .NET it's going to be an easy ride with some for → Parallel.For refactorings and the like. Where things get really interesting is with C++, because the performance you can squeeze out of things like OpenMP is staggering compared to .NET. In fact, OpenMP surprised me a lot, because I didn't expect it to work so efficiently. Well, I guess its developers have had a lot of time to polish it. I also like that it is available in Visual Studio out-of-the-box, unlike TBB for which you have to pay. As for MPI, I use PureMPI.net for little home projects (I have a LAN) to fool around with computations that one machine can't quite take. I've never used MPI commercially, but I do know that MKL has some MPI-optimized functions, which might be interesting to look at for anyone needing them.
I plan to do 'frivolous computing', i.e. use extra cores for precomputation of results that might or might not be needed - RAM permitting, of course. I also intend to delve into costly algorithms and approaches that most end users' machines right now cannot handle.
As for domains not benefiting from parallelisation... well, one can always find something. One thing I am concerned about is decent support in .NET, though regrettably I have given up hope that speeds similar to C++ can be attained.
I work in medical imaging and image processing.
We're handling multiple cores in much the same way we handled single cores-- we have multiple threads already in the applications we write in order to have a responsive UI.
However, because we can now, we're taking strong looks at implementing most of our image processing operations in either CUDA or OpenMP. The Intel Compiler provides a lot of good sample code for OpenMP, and OpenMP is just a much more mature product than CUDA, with a much larger installed base, so we're probably going to go with that.
What we tend to do for expensive (i.e., more than a second) operations is to fork that operation off into another process, if we can. That way, the main UI remains responsive. If we can't, or it's just far too inconvenient or slow to move that much memory around, the operation is still in a thread, and then that operation can itself spawn multiple threads.
The key for us is to make sure that we don't hit concurrency bottlenecks. We develop in .NET, which means that UI updates have to be done from an Invoke call to the UI in order to have the main thread update the UI.
Maybe I'm lazy, but really, I don't want to have to spend too much time figuring a lot of this stuff out when it comes to parallelizing things like matrix inversions and the like. A lot of really smart people have spent a lot of time making that stuff fast like nitrous, and I just want to take what they've done and call it. Something like CUDA has an interesting interface for image processing (of course, that's what it's defined for), but it's still too immature for that kind of plug-and-play programming. If I or another developer get a lot of spare time, we might give it a try. So instead, we'll just go with OpenMP to make our processing faster (and that's definitely on the development roadmap for the next few months).
So far, nothing more than more efficient compilation with make:
gmake -j
the -j option allows tasks that don't depend on one another to run in parallel.
I'm developing ASP.NET web applications. There is little possibility to use multicore directly in my code, however IIS already scales well for multiple cores/CPU's by spawning multiple worker threads/processes when under load.
We're having a lot of success with task parallelism in .NET 4 using F#. Our customers are crying out for multicore support because they don't want their n-1 cores idle!
I'm in image processing. We're taking advantage of multicore where possible by processing images in slices doled out to different threads.
I said some of this in answer to a different question (hope this is OK!): there is a concept/methodology called Flow-Based Programming (FBP) that has been around for over 30 years, and is being used to handle most of the batch processing at a major Canadian bank. It has thread-based implementations in Java and C#, although earlier implementations were fiber-based (C++ and mainframe Assembler). Most approaches to the problem of taking advantage of multicore involve trying to take a conventional single-threaded program and figure out which parts can run in parallel. FBP takes a different approach: the application is designed from the start in terms of multiple "black-box" components running asynchronously (think of a manufacturing assembly line). Since the interface between components is data streams, FBP is essentially language-independent, and therefore supports mixed-language applications, and domain-specific languages. Applications written this way have been found to be much more maintainable than conventional, single-threaded applications, and often take less elapsed time, even on single-core machines.
My graduate work is in developing concepts for doing bare-metal multicore work & teaching same in embedded systems.
I'm also working a bit with F# to bring my high-level multiprocess-able language facilities up to speed.
We create the VivaMP code analyzer for error detecting in parallel OpenMP programs.
VivaMP is a lint-like static C/C++ code analyzer meant to indicate errors in parallel programs based on OpenMP technology. The VivaMP static analyzer adds much to the abilities of existing compilers, diagnosing parallel code that contains errors or is a potential source of such errors. The analyzer is integrated into the Visual Studio 2005/2008 development environment.
VivaMP – a tool for OpenMP
32 OpenMP Traps For C++ Developers
I believe that "Cycles are an engineers' best friend".
My company provides a commercial tool for analyzing and transforming very large software systems in many computer languages. "Large" means 10-30 million lines of code. The tool is the DMS Software Reengineering Toolkit (DMS for short).
Analyses (and even transformations) on such huge systems take a long time: our points-to analyzer for C code takes 90 CPU hours on an x86-64 with 16 GB RAM. Engineers want answers faster than that.
Consequently, we implemented DMS in PARLANSE, a parallel programming language of our own design, intended to harness small-scale multicore shared-memory systems.
The key ideas behind PARLANSE are: a) let the programmer expose parallelism, b) let the compiler choose which part it can realize, c) keep the context switching to an absolute minimum. Static partial orders over computations are an easy way to help achieve all three: easy to say, relatively easy to measure costs, easy for the compiler to schedule computations. (Writing a parallel quicksort with this is trivial; a rough sketch follows at the end of this answer.)
Unfortunately, we did this in 1996 :-( The last few years have finally been a vindication; I can now get 8-core machines at Fry's for under $1K and 24-core machines for about the same price as a small car (and likely to drop rapidly).
The good news is that DMS is now fairly mature, and there are a number of key internal mechanisms in DMS which take advantage of this, notably an entire class of analyzers called "attribute grammars", which we write using a domain-specific language which is NOT PARLANSE. DMS compiles these attribute grammars into PARLANSE and then they are executed in parallel. Our C++ front end uses attribute grammars and is about 100K SLOC; it is compiled into 800K SLOC of parallel PARLANSE code that actually works reliably.
Now (June 2009), we are pretty busy making DMS useful, and don't always have enough time to harness the parallelism well. Thus the 90-hour points-to analysis. We are working on parallelizing that, and have reasonable hope of a 10-20x speedup.
We believe that in the long run, harnessing SMP well will make workstations far more friendly to engineers asking hard questions. As well they should.
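PARLANSE itself isn't shown in the answer; purely as a hedged analogue of "programmer-exposed parallelism" in this page's main language, the parallel quicksort mentioned above could be sketched in C++ with std::async forking one recursive half as an independent task. The cutoff and input sizes are illustrative only.

```cpp
// Hedged C++ analogue (not PARLANSE): parallel quicksort where the programmer
// exposes the parallelism by running the two recursive halves as tasks.
#include <algorithm>
#include <future>
#include <vector>

void parallel_quicksort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 10000) {                       // small ranges: stay serial
        std::sort(v.begin() + lo, v.begin() + hi);
        return;
    }
    int pivot = v[lo + (hi - lo) / 2];
    // Three-way partition guarantees both recursive ranges shrink.
    auto mid1 = std::partition(v.begin() + lo, v.begin() + hi,
                               [pivot](int x) { return x < pivot; });
    auto mid2 = std::partition(mid1, v.begin() + hi,
                               [pivot](int x) { return x == pivot; });
    std::size_t m1 = mid1 - v.begin();
    std::size_t m2 = mid2 - v.begin();

    // Expose parallelism: the left half runs as an independent task.
    auto left = std::async(std::launch::async,
                           [&v, lo, m1] { parallel_quicksort(v, lo, m1); });
    parallel_quicksort(v, m2, hi);
    left.get();
}

int main() {
    std::vector<int> v(1000000);
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = static_cast<int>((i * 2654435761u) % 100000);
    parallel_quicksort(v, 0, v.size());
    return std::is_sorted(v.begin(), v.end()) ? 0 : 1;
}
```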
Our domain logic is based heavily on a workflow engine and each workflow instance runs off the ThreadPool.
That's good enough for us.
I can now separate my main operating system from my development / install-whatever-I-like OS using virtualisation setups with Virtual PC or VMware.
Dual core means that one CPU runs my host OS, the other runs my development OS with a decent level of performance.
Learning a functional programming language might use multiple cores... costly.
I think it's not really hard to use extra cores. There are some trivial cases, such as web apps, that do not need any extra care, as the web server does its work running the queries in parallel. The questions are for long-running algorithms (long is what you call long). These need to be split over smaller domains that do not depend on each other, or the dependencies have to be synchronized. A lot of algorithms can do this, but sometimes horribly different implementations are needed (costs again).
So, no silver bullet as long as you are using imperative programming languages, sorry. Either you need skilled programmers (costly) or you need to turn to another programming language (costly). Or you may simply have luck (web).
I'm using and programming on a Mac. Grand Central Dispatch for the win. The Ars Technica review of Snow Leopard has a lot of interesting things to say about multicore programming and where people (or at least Apple) are going with it.
I've decided to take advantage of multiple cores in an implementation of the DEFLATE algorithm. Mark Adler did something similar in C code with PIGZ (parallel gzip). I've delivered the philosophical equivalent, but in a managed code library, in DotNetZip v1.9. This is not a port of PIGZ, but a similar idea, implemented independently.
The idea behind DEFLATE is to scan a block of data, look for repeated sequences, build a "dictionary" that maps a short "code" to each of those repeated sequences, then emit a byte stream where each instance of one of the repeated sequences is replaced by a "code" from the dictionary.
Because building the dictionary is CPU intensive, DEFLATE is a perfect candidate for parallelization. I've taken a Map+Reduce type approach, where I divide the incoming uncompressed byte stream into a set of smaller blocks (map), say 64k each, and then compress those independently. Then I concatenate the resulting blocks together (reduce). Each 64k block is compressed independently, on its own thread, without regard for the other blocks.
On a dual-core machine, this approach compresses in about 54% of the time of the traditional serial approach. On server-class machines, with more cores available, it can potentially deliver even better results; I don't have a server machine, so I haven't tested it personally, but people tell me it's fast.
There's runtime (CPU) overhead associated with the management of multiple threads, runtime memory overhead associated with the buffers for each thread, and data overhead associated with concatenating the blocks. So this approach pays off only for larger byte streams. In my tests, above 512k, it can pay off. Below that, it is better to use a serial approach.
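That answer describes a C# library, but the Map+Reduce shape is easy to sketch in this page's main language too. In the hedged C++ sketch below, compress_block is a stand-in for a real per-block compressor (e.g. a zlib call), the copies are left in for clarity, and the block size is illustrative only.

```cpp
// Hedged sketch of the block-splitting idea (not DotNetZip's code):
// split the input into blocks, compress each block on its own task,
// then concatenate the results in their original order.
#include <algorithm>
#include <future>
#include <vector>
#include <cstddef>

// Stand-in for a real per-block compressor; here it just copies the block
// so the sketch is self-contained.
std::vector<unsigned char> compress_block(const std::vector<unsigned char>& block) {
    return block;
}

std::vector<unsigned char> parallel_compress(const std::vector<unsigned char>& input) {
    const std::size_t block_size = 64 * 1024;
    std::vector<std::future<std::vector<unsigned char>>> tasks;

    // Map: compress each block independently on its own task.
    for (std::size_t off = 0; off < input.size(); off += block_size) {
        std::size_t len = std::min(block_size, input.size() - off);
        std::vector<unsigned char> block(input.begin() + off, input.begin() + off + len);
        tasks.push_back(std::async(std::launch::async,
                                   [block] { return compress_block(block); }));
    }

    // Reduce: concatenate the compressed blocks in order.
    std::vector<unsigned char> out;
    for (auto& t : tasks) {
        std::vector<unsigned char> part = t.get();
        out.insert(out.end(), part.begin(), part.end());
    }
    return out;
}
```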
DotNetZip is delivered as a library. My goal was to make all of this transparent. So the library automatically uses the extra threads when the buffer is above 512kb. There's nothing the application has to do in order to use threads. It just works, and when threads are used, it's magically faster. I think this is a reasonable approach to take for most libraries being consumed by applications.
It would be nice for the computer to be smart about automatically and dynamically exploiting resources for parallelizable algorithms, but the reality today is that app designers have to explicitly code the parallelization in.
I work in C# with .Net Threads.
You can combine object-oriented encapsulation with Thread management.
I've read some posts from Peter talking about a new book from Packt Publishing and I've found the following article in Packt Publishing web page:
http://www.packtpub.com/article/simplifying-parallelism-complexity-c-sharp
I've read Concurrent Programming on Windows, Joe Duffy's book. Now, I am waiting for "C# 2008 and 2005 Threaded Programming", Hillar's book - http://www.amazon.com/2008-2005-Threaded-Programming-Beginners/dp/1847197108/ref=pd_rhf_p_t_2
I agree with Szundi "No silver bullet"!
You say "For web applications it's very, very easy: ignore it. Unless you've got some code that really begs to be done in parallel you can simply write old-style single-threaded code and be happy."
I am working with Web applications and I do need to take full advantage of parallelism.
I understand your point. However, we must prepare for the multicore revolution. Ignoring it is the same as ignoring the GUI revolution in the '90s.
We are not still developing for DOS, are we? We must tackle multicore or we'll be dead in a few years.
I think this trend will first persuade some developers, and then most of them will see that parallelization is a really complex task.
I expect some design patterns to come along to take care of this complexity. Not low-level ones, but architectural patterns which will make it hard to do something wrong.
For example I expect messaging patterns to gain popularity, because it's inherently asynchronous, but you don't think about deadlock or mutex or whatever.
How does this affect your software roadmap?
It doesn't. Our (as with almost all other) business related apps run perfectly well on a single core. So long as adding more cores doesn't significantly reduce the performance of single threaded apps, we're happy
...real stories...
Like everyone else, parallel builds are the main benefit we get. The Visual Studio 2008 C# compiler doesn't seem to use more than one core though, which really sucks
What are you doing with your existing code to take advantage of multicore machines
We may look into using the .NET parallel extensions if we ever have a long-running algorithm that can be parallelized, but the odds of this actually occurring are slim. The most likely answer is that some of the developers will play around with it for interest's sake, but not much else
how will you deal with hundreds or thousands of cores?
Head -> Sand.
If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.
The client app mostly pushes data around, the server app mostly relies on SQL server to do the heavy lifting
I'm taking advantage of multicore using C, PThreads, and a home brew implementation of Communicating Sequential Processes on an OpenVPX platform with Linux using the PREEMPT_RT patch set's scheduler. It all adds up to nearly 100% CPU utilisation across multiple OS instances with no CPU time used for data exchange between processor cards in the OpenVPX chassis, and very low latency too. Also using sFPDP to join multiple OpenVPX chassis together into a single machine. Am not using Xeon's internal DMA so as to relieve memory pressure inside CPUs (DMA still uses memory bandwidth at the expense of the CPU cores). Instead we're leaving data in place and passing ownership of it around in a CSP way (so not unlike the philosophy of .NET's task parallel data flow library).
1) Software Roadmap - we have pressure to maximise the use real estate and available power. Making the very most of the latest hardware is essential
2) Software domain - effectively Scientific Computing
3) What we're doing with existing code? Constantly breaking it apart and redistributing parts of it across threads so that each core is maxed out doing the most it possibly can without breaking out real time requirement. New hardware means quite a lot of re-thinking (faster cores can do more in the given time, don't want them to be under utilised). Not as bad as it sounds - the core routines are very modular so easily assembled into thread-sized lumps. Although we planned on taking control of thread affinity away from Linux, we've not yet managed to extract significant extra performance by doing so. Linux is pretty good at getting data and code in more or less the same place.
4) In effect already there - total machine already adds up to thousands of cores
5) Parallel computing is essential - it's a MISD system.
If that sounds like a lot of work, it is. Some jobs require going whole hog on making the absolute most of available hardware and eschewing almost everything that is high level. We're finding that the total machine performance is a function of CPU memory bandwidth, not CPU core speed or L1/L2/L3 cache size.