Is ITC faster than IPC of the same kind? - c++

I am trying to find out if there is an inherent speed difference between inter-thread and inter-process communication.
I know that when using threads the threads share the same memory, can use the same global variables, and such, while processes have to use other tricks, which basically means queues.
But take the following case:
An application is comprised of several completely separate .exe files. When all are run they form a producer/consumer (or publisher/subscriber) architecture, with some processes producing some values and other processes reading and using those values and maybe producing some other values.
This communication is done with conventional ways of IPC.
My question is: if I were to move the code around so that it's one process with multiple threads (assuming no conflicts with variable names and such), but keep the communication methods the same, queues with all the locks and semaphores behind them, will the thread-based application be faster than the process-based one?
The startup costs of processes vs. threads are not important because the application is meant to run for a long time (hours) so a few milliseconds will not be important.
Google has yielded no conclusive answers to this.
To clarify some aspects of the question:
The factor I want to maximize is throughput.
Some external factor (an Arduino sensor, for example) produces an input for one of the nodes, and the entire network takes some time while all the nodes consume and produce values. Then a new input can be processed. I would like to be able to process more inputs per minute/second.
The data being passed back and forth are mostly numbers or small arrays of numbers.
The entire network can have, let's say, between 5 and 25 nodes.
As for platform (if it is relevant) I would like answers for both Linux and Windows.
The specific use-case is too large to be described here so consider the use-case provided above. This is as much, if not more, an educational question for my own knowledge as it is a question about a specific problem.
Please ask for any other relevant information that I have not included here.

If I were to move the code around so that it's one process with multiple threads (assuming no conflicts with variable names and the such), but keep the communication methods the same, queues with all the locks and semaphores behind them, will the thread-based application be faster than the process-based one?
Keeping the communication methods truly identical is not possible. The multiple-thread version will take advantage of the shared memory space, and the multi-process version cannot do so. For example, you can take an ordinary object in one thread and access it through a pointer in another thread, and all the referenced sub-objects will "just work". Anything not modified can be accessed just as easily in one thread as another with no special effort.
This simply won't work across processes at all since they don't share an address space.
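To make that point concrete, here is a minimal sketch (not from the original answer; the queue and names are illustrative): the producer heap-allocates a large object and only a pointer travels through the queue, whereas across processes the pointed-to data would have to be serialized and copied.

#include <iostream>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<std::shared_ptr<std::vector<int>>> q;
std::mutex m;

int main() {
    std::thread producer([] {
        auto data = std::make_shared<std::vector<int>>(1000, 42);
        std::lock_guard<std::mutex> lock(m);
        q.push(std::move(data));        // only a pointer changes hands
    });
    producer.join();                    // joined before reading, so no race on q

    std::cout << q.front()->size() << " values visible with zero copying\n";
}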

Related

10 threads in a single program or 1 thread program ran 10 times (C++)?

I am wondering whether there is any difference in performance between running a single program (exe) with 10 different threads, or running the program with a single thread 10 times in parallel (starting it from a .bat file), assuming the work done is the same and only the number of threads spawned by the program changes.
I am developing a client/server communication program and want to test it for throughput. I'm currently learning about parallel programming and threading, as I wasn't sure how Windows would handle the above scenario. Will the scheduler schedule work the same way for both scenarios? Will there be a performance difference?
The machine the program is running on has 4 hardware threads.
Threads are slightly lighter weight than processes, as there are many things a process gets its own copy of. This is especially visible when you compare the time it takes to start a new thread versus a new process (from scratch; fork, where available, also avoids a lot of the cost). In either case, you can generally get even better performance by using a worker pool where possible, rather than starting and stopping fresh processes/threads.
The other major difference is that threads by default all share the same memory while processes get their own and need to communicate through more explicit means (which may include blocks of shared memory). This might make it easier for a threaded solution to avoid copying data, but this is also one of the dangers of multithreaded programming when care is not taken in how they use the shared memory/objects.
There may also be APIs that are more oriented around a single process. For example, on Windows there are I/O Completion Ports, which basically work on the idea of having many in-progress IO operations for different files, sockets, etc., with multiple threads (but generally far fewer than the number of files/sockets) handling the results as they become available through a GetQueuedCompletionStatus loop.

Benefits of a multi thread program in a unicore system [duplicate]

My professor casually mentioned that we should write multi-threaded programs even if we are using a unicore processor; however, because of a lack of time, he did not elaborate on it.
I would like to know: what are the benefits of a multi-threaded program on a unicore processor?
It won't be as significant as on a multi-core system, but it can still provide some benefits.
Mainly, the benefits you are going to get relate to the context switch that happens when the currently executing thread stalls. The executing thread may be waiting for anything: a hardware resource, a branch misprediction, or a data transfer after a cache miss.
At that point a waiting thread can be executed to make use of this "waiting time". Of course the context switch itself takes some time, and managing threads in the code, rather than doing sequential computation, can add some extra complexity to your program. And, as has been said, some applications need to be multi-threaded, so there is no escaping the context switch in some cases.
Some applications need to be multi-threaded. Multi-threading isn't just about improving performance by using more cores, it's also about performing multiple tasks at once.
Take Skype for example - the GUI needs to be able to accept the text you're entering, display it on the screen, listen for new messages coming from the user you're talking to, and display them. This wouldn't be a trivial task in a single-threaded application.
Even if there's only one core available, the OS thread scheduler will give you the illusion of parallelism.
Usually it is about not blocking. Running many threads on a single core still gives the illusion of concurrency. So you can have, say, a thread doing IO while another one does user interactions. The user interaction thread is not blocked while the other does IO, so the user is free to carry on interacting.
The benefits can take different forms.
One of the widely used examples is an application with a GUI which is supposed to perform some kind of computation. If you have a single thread, the user has to wait for the result before doing anything else with the application; but if you run the computation in a separate thread, the user interface stays available to the user during the computation. So a multi-threaded program can emulate a multi-tasking environment even on a unicore system. That's one of the points.
As others have already mentioned, not blocking is one application. Another one is separation of logic for unrelated tasks that are to be executed simultaneously. Using threads for that leaves handling of scheduling these tasks to the OS.
However, note that it may also be possible to implement similar behavior using asynchronous operations in a single thread. std::future and boost::asio provide ways of doing non-blocking work without necessarily resorting to multiple threads.
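As a minimal, hedged sketch of the boost::asio style (illustrative only; a timer stands in for real IO): a single thread registers a completion handler and keeps doing other work until the event fires.

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
    boost::asio::io_context io;
    boost::asio::steady_timer timer(io, std::chrono::seconds(1));

    // Register a completion handler; nothing blocks here.
    timer.async_wait([](const boost::system::error_code&) {
        std::cout << "timer fired\n";
    });

    std::cout << "doing other work while the timer is pending\n";
    io.run();   // a single thread dispatches the handler when it's ready
}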
I think it depends a bit on how exactly you design your threads and which logic is actually in the thread. Some benefits you can even get on a single core:
A thread can wrap a blocking/long-running call you can't circumvent otherwise. For some operations there are polling mechanisms, but not for all.
A thread can wrap an almost standalone part of your application that has virtually no interaction with other code. For example background polling for updates, monitoring some resource (e.g. free storage), checking internet connectivity. If you keep them in a separate thread you can keep the code relatively simple in its own 'runtime' without caring too much about the impact on the main program, the sole communication with the main logic is usually a single 'event'.
In some environments you might get more processing time. This mainly depends on how your OS scheduling system works, but if it allocates time per thread, then the more threads you have, the more often your app will be scheduled.
Some benefits long-term:
Where it's not hard to do, you benefit if your hardware evolves. You never know what's going to happen; today your app runs on a single-core embedded device, tomorrow that embedded device gets a quad core. Programming with threads from the beginning improves your future scalability.
One example is an environment where you can deterministically assign work to a thread, e.g. based on some hash, so that all related operations end up in the same thread. The advantage on a single core is small, but it's not hard to do, since you need few synchronization primitives, so the overhead stays small.
That said, I think there are situations where it's ill advised:
As soon as your required synchronization mechanism with other threads becomes complex (e.g. multiple locks, lots of critical sections, ...). Multi-threading may still give you a benefit once you effectively move to multiple CPUs, but the overhead is huge, both for your single core and for your programming time.
For instance, think about operations that block because of slow peripheral devices (hard disk access etc.). While these are waiting, even a single core can do other things asynchronously.
In a lot of applications the bottleneck is not CPU processing power. So when the program flow is waiting for completion of IO requests (user input, network/disk IO), critical resources to become available, or any sort of asynchronously triggered events, the CPU can be scheduled to do other work instead of just blocking.
In this case you don't necessarily need multiple threads that can actually run in parallel. Cooperative multi-tasking concepts like asynchronous IO, coroutines, or fibers come to mind.
If however the application's bottleneck is CPU processing power (constantly 100% CPU usage), then it makes sense to increase the number of CPUs available to the application. At that point it is easier to scale the application up to use more CPUs if it was designed to run in parallel upfront.
As far as I can see, one answer was not yet given:
You will have to write multithreaded applications in the future!
The average number of cores will double every 18 months in the future. People have learned single-threaded programming for 50 years now, and now they are confronted with devices that have multiple cores. The programming style in a multi-threaded environment differs significantly from single-threaded programming. This refers to low-level aspects like avoiding race conditions and proper synchronization, as well as the high-level aspects like the general algorithm design.
So in addition to the points already mentioned, it's also about writing future-proof software, scalability and the development of the skills that are required to achieve these goals.

C++ Server - To Thread or not to Thread?

I'm working on a game server, written in C++, and I'm trying to decide how many threads to use and what tasks to thread. The basic server skeleton consists of keyboard I/O and output to a console, accepting incoming connections, making outgoing connections, and doing the game "stuff".
What I'd like to know is which things should be given a separate thread. Should each connection have its own thread? I know this varies; it depends on the project and so on, but I would like it to support a pretty decent number of players (somewhere in the hundreds if possible).
The standard answer should always be: Try it the simplest way first, and only look for ways to improve performance if the simple way isn't good enough. However, re-architecting a large C++ program can be a painful experience, so some guesses about performance in advance may be appropriate.
Theoretically, hundreds of threads are probably OK on modern machines. The NPTL implementation for Linux was tested with tens of thousands of threads, as I recall. If that's the easiest way for you to implement, it may be the right answer.
However, high-performance web servers and similar typically use event-driven models instead. Consider a library like libevent. I'm sure there are C++ libraries for the same purpose.
I personally believe that languages without first-class continuations, or at least coroutines, are poor choices for this kind of work, but the C language family is how we get work done today, so off we go. :-)
A good solution could be to use a thread pool.
The idea is to let the main thread dispatch all connections evenly across a fixed number of threads.
With a good design, you can easily set the number of threads at runtime.
You can find more information here.
Creating more threads than you have CPU cores is not productive, and adding too many threads decreases performance due to the time taken to switch between threads.
For example, when compiling a large project (not exactly the same thing, but the reasoning is valid for both cases), it's often recommended to use no more threads than the number of CPU cores + 1.
A very common technique is to have the game server run one thread that monitors several connections (i.e. sockets) by using select across them. When data is available, grab the data and enqueue it in a producer/consumer type model for the game engine to pick up.
This is by no means the be-all-end-all implementation, but it should be enough to get you started. Sounds like a cool project. Good luck!
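As a rough illustration of that loop (a hedged fragment only; listen_fd and client_fds are hypothetical names, and error handling is omitted):

#include <sys/select.h>
#include <unistd.h>
#include <vector>

void poll_connections(int listen_fd, std::vector<int>& client_fds) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(listen_fd, &readfds);
    int maxfd = listen_fd;
    for (int fd : client_fds) {
        FD_SET(fd, &readfds);
        if (fd > maxfd) maxfd = fd;
    }
    // Block until at least one socket has data (or a new connection arrives).
    if (select(maxfd + 1, &readfds, nullptr, nullptr, nullptr) > 0) {
        if (FD_ISSET(listen_fd, &readfds)) {
            // accept() the new connection and add it to client_fds ...
        }
        for (int fd : client_fds) {
            if (FD_ISSET(fd, &readfds)) {
                char buf[512];
                ssize_t n = read(fd, buf, sizeof buf);
                (void)n;  // enqueue {fd, buf, n} for the game engine here
            }
        }
    }
}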
If you set up the connections and use them in a way that causes the thread to block waiting on IO, then you should be able to service all of the connections and the keyboard on one thread. You may not want to put the console output on that same thread, as I've seen cases (on Windows at least) where the speed of writing to the console is actually a bottleneck (i.e. if the console window is minimized the process runs considerably faster).
If the work of your game engine parallelizes well, then you probably want to use as many threads as there are CPUs, less one (for the OS and the other two threads). If you expect the client to run on the same machine, the server will want to detect that and scale back the number of threads it uses.

What is process and thread?

Yes, I have read many materials related to operating systems, and I am still reading. But it seems all of them describe the process and thread in an "abstract" way, with a lot of high-level elaboration on their behavior and logical organization. I am wondering: what are they physically? In my opinion, they are just some in-memory "data structures" which are maintained and used by the kernel code to facilitate the execution of programs. For example, the operating system uses a process data structure (the PCB) to describe the aspects of the process assigned to a certain program, such as its priority, its address space and so on. Is this right?
The first thing you need to know to understand the difference between a process and a thread is the fact that processes do not run; threads do.
So, what is a thread? The closest I can get to explaining it is: an execution state, as in a combination of CPU registers, stack, the lot. You can see proof of that by breaking in a debugger at any given moment. What do you see? A call stack, a set of registers. That's pretty much it. That's the thread.
Now, then, what is a process? Well, it's like an abstract "container" entity for running threads. As far as the OS is concerned, to a first approximation, it's an entity the OS allocates some VM to, assigns some system resources to (like file handles and network sockets), &c.
How do they work together? The OS creates a "process" by reserving some resources for it and starting a "main" thread. That thread can then spawn more threads. Those are the threads in one process. They can more or less share those resources one way or another (say, locking might be needed so they don't spoil the fun for others, &c). From there on, the OS is normally responsible for maintaining those threads "inside" that VM (detecting and preventing attempts to access memory which doesn't "belong" to that process), and for providing some type of scheduling of those threads, so that they can run "one-after-another-and-not-just-one-all-the-time".
Normally, when you run an executable like notepad.exe, a single process is created. That process could spawn other processes, but in most cases there is a single process for each executable that you run. Within the process, there can be many threads. Usually at first there is one thread, which starts at the program's "entry point", usually the main function. Instructions are executed one by one, in order; like a person who only has one hand, a thread can only do one thing at a time before moving on to the next.
That first thread can create additional threads. Each additional thread has its own entry point, which is usually defined with a function. The process is like a container for all the threads that have been spawned within it.
That is a pretty simplistic explanation. I could go into more detail but probably would overlap with what you will find in your textbooks.
EDIT: You'll notice there are lots of "usually"s in my explanation, as there are occasionally rare programs that do things drastically differently.
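A bare-bones C++ sketch of that description (illustrative only, not part of the original answer): the process starts in main(), the first thread, which then spawns a second thread whose entry point is another function.

#include <iostream>
#include <thread>

void second_entry() {                  // entry point of the new thread
    std::cout << "hello from the second thread\n";
}

int main() {                           // entry point of the first thread
    std::thread t(second_entry);
    t.join();                          // wait for the second thread to finish
    std::cout << "back in the first thread\n";
}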
One of the reasons why it is pretty much impossible to describe threads and processes in a non-abstract way is that they are abstractions.
Their concrete implementations differ tremendously.
Compare for example an Erlang Process and a Windows Process: an Erlang Process is very lightweight, often less than 400 Bytes. You can start 10 million processes on a not very recent laptop without any problems. They start up very quickly, they die very quickly and you are expected to be able to use them for very short tasks. Every Erlang Process has its own Garbage Collector associated with it. Erlang Processes can never share memory, ever.
Windows Processes are very heavy, sometimes hundreds of MiBytes. You can start maybe a couple of thousand of them on a beefy server, if you are lucky. They start up and die pretty slowly. Windows Processes are the units of Applications such as IDEs or Text Editors or Word Processors, so they are usually expected to live quite a long time (at least several minutes). They have their own Address Space, but no Garbage Collector. Windows Processes can share memory, although by default they don't.
Threads are a similar matter: an NPTL Linux Thread on x86 can be as small as 4 KiByte and with some tricks you can start 800000+ on a 32 Bit x86 machine. The machine will certainly be useable with thousands, maybe tens of thousands of threads. A .NET CLR Thread has a minimum size of about 1 MiByte, which means that just 4000 of those will eat up your entire address space on a 32 Bit machine. So, while 4000 NPTL Linux Threads is generally not a problem, you can't even start 4000 .NET CLR Threads because you will run out of memory before that.
OS Processes and OS Threads are also implemented very differently between different Operating Systems. The main two approaches are: the kernel knows only about processes. Threads are implemented by a Userspace Library, without any knowledge of the kernel at all. In this case, there are again two approaches: 1:1 (every Thread maps to one Kernel Process) or m:n (m Threads map to n Processes, where usually m > n and often n == #CPUs). This was the early approach taken on many Operating Systems after Threads were invented. However, it is usually deemed inefficient and has been replaced on almost all systems by the second approach: Threads are implemented (at least partially) in the kernel, so that the kernel now knows about two distinct entities, Threads and Processes.
One Operating System that goes a third route, is Linux. In Linux, Threads are neither implemented in Userspace nor in the Kernel. Instead, the Kernel provides an abstraction of both a Thread and a Process (and indeed a couple of more things), called a Task. A Task is a Kernel Scheduled Entity, that carries with it a set of flags that determine which resources it shares with its siblings and which ones are private.
Depending on how you set those flags, you get either a Thread (share pretty much everything) or a Process (share all system resources like the system clock, the filesystem namespace, the networking namespace, the user ID namespace, the process ID namespace, but do not share the Address Space). But you can also get some other pretty interesting things. You can trivially get BSD-style jails (basically the same flags as a Process, but without sharing the filesystem or the networking namespace). Or you can get what other OSs call a Virtualization Container or Zone (like a jail, but without sharing the UID and PID namespaces and system clock). For a couple of years now, via a technology called KVM (Kernel Virtual Machine), you can even get a full-blown Virtual Machine (share nothing, not even the processor's Page Tables). [The cool thing about this is that you get to reuse the highly-tuned, mature Task Scheduler in the kernel for all of these things. One of the things the Xen Virtual Machine has often been criticized for is the poor performance of its scheduler. The KVM developers have a much better scheduler than Xen, and the best thing is they didn't even have to write a single line of code for it!]
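A hedged sketch of what that looks like at the syscall level (Linux-specific; the flags and stack size are chosen for illustration, and CLONE_THREAD is left out so the parent can simply waitpid() on the child):

#include <sched.h>      // clone() and the CLONE_* flags (glibc, _GNU_SOURCE)
#include <sys/wait.h>
#include <csignal>
#include <cstdio>

static int task_main(void*) {
    std::printf("child task running in the shared address space\n");
    return 0;
}

int main() {
    constexpr int kStackSize = 64 * 1024;
    char* stack = new char[kStackSize];  // the new task needs its own stack

    // Sharing the VM, filesystem info, file descriptors and signal handlers
    // makes the new task thread-like (pthreads adds CLONE_THREAD on top);
    // dropping these sharing flags would make it fork()-like instead.
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t child = clone(task_main, stack + kStackSize, flags, nullptr);
    if (child == -1) { std::perror("clone"); return 1; }

    waitpid(child, nullptr, 0);          // reap it like a process
    delete[] stack;
    return 0;
}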
So, on Linux, the performance of Threads and Processes is much closer than on Windows and many other systems, because on Linux, they are actually the same thing. Which means that the usage patterns are very different: on Windows, you typically decide between using a Thread and a Process based on their weight: can I afford a Process or should I use a Thread, even though I actually don't want to share state? On Linux (and usually Unix in general), you decide based on their semantics: do I actually want to share state or not?
One reason why Processes tend to be lighter on Unix than on Windows, is different usage: on Unix, Processes are the basic unit of both concurrency and functionality. If you want to use concurrency, you use multiple Processes. If your application can be broken down into multiple independent pieces, you use multiple Processes. Every Process does exactly one thing and only that one thing. Even a simple one-line shell script often involves dozens or hundreds of Processes. Applications usually consist of many, often short-lived Processes.
On Windows, Threads are the basic units of concurrency and COM components or .NET objects are the basic units of functionality. Applications usually consist of a single long-running Process.
Again, they are used for very different purposes and have very different design goals. It's not that one or the other is better or worse, it's just that they are so different that the common characteristics can only be described very abstractly.
Pretty much the only few things you can say about Threads and Processes are that:
Threads belong to Processes
Threads are lighter than Processes
Threads share most state with each other
Processes share significantly less state than Threads (in particular, they generally share no memory, unless specifically requested)
I would say that:
A process has a memory space, opened files,..., and one or more threads.
A thread is an instruction stream that can be scheduled by the system on a processor.
Have a look at the detailed answer I gave previously here on SO. It gives an insight into a toy kernel structure responsible for maintaining processes and the threads...
We have discussed this very issue a number of times here. Perhaps you will find some helpful information here:
What is the difference between a process and a thread
Process vs Thread
Thread and Process
A process is a container for a set of resources used while executing a program.
A process includes the following:
Private virtual address space
A program.
A list of handles.
An access token.
A unique process ID.
At least one thread.
A pointer to the parent process, whether or not the parent still exists.
That being said, a process can contain multiple threads.
Processes themselves can be grouped into jobs, which are containers for processes and are executed as single units.
A thread is what Windows uses to schedule execution of instructions on the CPU. Every process has at least one.
I have a couple of pages on my wiki you could take a look at:
Process
Thread
Threads are memory structures in the scheduler of the operating system, as you say. A thread points to the start of some instructions in memory, which are executed when the scheduler decides they should be. While the thread is executing, the hardware timer runs. Once it hits the desired time, an interrupt is invoked. After this, the hardware stops execution of the current program and invokes the registered interrupt handler function, which is part of the scheduler, to signal that the current thread's turn at execution has finished.
Physically:
A Process is a structure that maintains the owning credentials, the thread list, an open handle list, and the set of PTEs describing what pages are mapped into the process's Virtual Address space.
A Thread is a structure containing a context (i.e. a saved register set plus a location to execute) and an owner.
This is of course an extremely simplified explanation, but it gets the important bits. The fundamental unit of execution on both Linux and Windows is the Thread - the kernel scheduler doesn't care about processes (much). This is why, on Linux, a thread is just a process that happens to share page tables with another process.
A process is an area in memory managed by the OS to run an application. A thread is a smaller area of memory within a process used to run a dedicated task.
Processes and threads are abstractions - there is nothing physical about them, or about any other part of an operating system for that matter. That is why we call it software.
If you view a computer in physical terms you end up with a jumble of electronics that emulates what a Turing Machine does. Trying to do anything useful with a raw Turing Machine would turn your brain to Jell-O in five minutes flat. To avoid that unpleasant experience, computer folks developed a set of abstractions to compartmentalize various aspects of computing. This lets you focus on the level of abstraction that interests you without having to worry about all the other stuff supporting it.
Some things have been cast into circuitry (e.g. adders and the like), which makes them physical, but the vast majority of what we work with is based on a set of abstractions. As a general rule, the abstractions we use have some form of mathematical underpinning to them. This is why stacks, queues and "state" play such an important role in computing - there is a well-founded body of mathematics around these abstractions that lets us build upon them and reason about their manipulation.
The key is realizing that software is always based on a composite of abstract models of "things". Those "things" don't always relate to anything physical; more likely they relate to some other abstraction. This is why you cannot find a satisfactory "physical" basis for processes and threads anywhere in your textbooks.
Several other people have posted links to and explanations about what threads and processes are; none of them point to anything "physical" though. As you guessed, they are really just a set of data structures and rules that live within the larger context of an operating system (which in turn is just more data structures and rules...).
Software is like an onion, layers on layers on layers; once you peel all the layers (abstractions) away, nothing much is left! But the onion is still very real.
It's kind of hard to give a short answer that does this question justice.
At the risk of getting this horribly wrong and simplifying things, you can say threads and processes are an operating-system/platform concept, and under the hood you can define a single-threaded process by:
Low-level CPU instructions (aka, the program).
State of execution--meaning instruction pointer (really, a special register), register values, and stack
The heap (aka, general purpose memory).
In modern operating systems, each process has its own memory space. Aside from shared memory (only some OSs support this), the operating system forbids one process from writing into the memory space of another. In Windows, you'll see a general protection fault if a process tries.
So you can say a multi-threaded process is the whole package. And each thread is basically nothing more than state of execution.
So when a thread is pre-empted for another (say, on a uni-processor system), all the operating system has to do in principle is save the state of execution of the thread (not sure if it has to do anything special for the stack) and load in another.
Pre-empting an entire process, on the other hand, is more expensive as you can imagine.
Edit: The ideas apply in abstracted platforms like Java as well.
They are not physical pieces of string, if that's what you're asking. ;)
As I understand it, pretty much everything inside the operating system is just data. Modern operating systems depend on a few hardware requirements: virtual memory address translation, interrupts, and memory protection (There's a lot of fuzzy hardware/software magic that happens during boot, but I'm not very familiar with that process). Once those physical requirements are in place, everything else is up to the operating system designer. It's all just chunks of data.
The reason they are only mentioned in an abstract way is that they are concepts; while they will be implemented as data structures, there is no universal rule for how they have to be implemented.
This is at least true for the threads/processes on their own; they won't do much good without a scheduler and an interrupt timer.
The scheduler is the algorithm by which the operating system chooses the next thread to run for a limited amount of time and the interrupt timer is a piece of hardware which periodically interrupts the execution of the current thread and hands control back to the scheduler.
Forgot something: the above is not true if you only have cooperative threading. Cooperative threads have to actively yield control to the next thread, which can get ugly, with one thread polling for the results of another thread which is waiting for the first to yield.
These are even more lightweight than other threads as they don't require support of the underlying operating system to work.
I have seen many of the answers, but most of them are not clear enough for an OS beginner.
In any modern day operating system, one process has a virtual CPU, virtual Memory, Virtual I/O.
Virtual CPU: if you have multiple cores, the process might be assigned one or more of the cores for processing by the scheduler.
Virtual I/O: I/O might be shared between various processes. For example, the keyboard can be shared by multiple processes: while you type in Notepad and see the text changing, a key logger running as a daemon is storing all the keystrokes. The processes are sharing an I/O resource.
Virtual Memory : http://en.wikipedia.org/wiki/Virtual_memory you can go through the link.
So when a process is taken out of execution by the scheduler, its state - the values stored in the registers, its stack, its heap and much more - is saved into a data structure.
So now, when we compare a process with a thread: threads started by a process share the virtual I/O and virtual memory assigned to the process which started them, but not the virtual CPU.
So there might be multiple threads started by a process, all sharing the same virtual memory and virtual I/O, but each having a different virtual CPU.
Hence the need for locking the resources of a process, be they statically allocated (stack) or dynamically allocated (heap), as the virtual memory space is shared between the threads of a process.
Also, since each thread has its own virtual CPU, threads can run in parallel on different cores and significantly reduce the completion time of a process (the reduction will be observable only if you have managed memory wisely and there are multiple cores).
A thread is controlled by a process; a process is controlled by the operating system.
Processes don't share memory with each other, since they work in a so-called "protected flat model"; threads, on the other hand, share the same memory.
With Windows, at least once you get past Win 3.1, the operating system (OS) contains multiple processes, each with its own memory space, which can't interact with other processes without going through the OS.
Each process has one or more threads, which share the same memory space and do not need the OS to interact with each other.
A process is a container of threads.
Well, I haven't yet seen an answer to "What are they physically?", so I'll give it a try.
Processes and threads are nothing physical. They are a feature of the operating system. Usually no physical component of a computer knows about them. The CPU only processes a sequential stream of opcodes. These opcodes might belong to a thread. The OS then uses traps and interrupts to regain control, decides which code to execute, and switches to another thread.
A process is one complete entity, e.g. an exe file or one JVM. There can be a child process of a parent process, where the exe file is run again in a separate space. A thread is a separate path of execution in the same process, where the process controls which thread executes, halts, etc.
Trying to answer this question from the Java world's perspective.
A process is an execution of a program but a thread is a single execution sequence within the process. A process can contain multiple threads. A thread is sometimes called a lightweight process.
For example:
Example 1:
A JVM runs in a single process and threads in a JVM share the heap belonging to that process. That is why several threads may access the same object. Threads share the heap and have their own stack space. This is how one thread’s invocation of a method and its local variables are kept thread safe from other threads. But the heap is not thread-safe and must be synchronized for thread safety.
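The same point can be sketched in C++ terms (a hedged illustration, not part of the original answer): two threads share one heap-allocated counter, which therefore needs a lock, while each thread's loop variable lives on its own stack and needs none.

#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

int main() {
    auto counter = std::make_shared<int>(0); // lives on the shared heap
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {   // i lives on each thread's own stack
            std::lock_guard<std::mutex> lock(m);
            ++*counter;                      // shared state, so it is guarded
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << *counter << "\n";           // always 200000 thanks to the lock
}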
Example 2:
A single-threaded program might not be able to draw pictures while reading keystrokes. The program must give its full attention to the keyboard input, and lacking the ability to handle more than one event at a time will lead to trouble. The ideal solution to this problem is the seamless execution of two or more sections of a program at the same time. Threads allow us to do this. Here, drawing the picture is one task and reading keystrokes is another, each running as a thread within the same process.

How do I tell a multi-core / multi-CPU machine to process function calls in a loop in parallel?

I am currently designing an application that has one module which will load large amounts of data from a database and reduce it to a much smaller set by various calculations depending on the circumstances.
Many of the more intensive operations behave deterministically and would lend themselves to parallel processing.
Provided I have a loop that iterates over a large number of data chunks arriving from the db and for each one calls a deterministic function without side effects, how would I make it so that the program does not wait for the function to return but rather sets the next calls going, so they can be processed in parallel? A naive approach to demonstrate the principle would do me for now.
I have read Google's MapReduce paper and while I could use the overall principle in a number of places, I won't, for now, target large clusters, rather it's going to be a single multi-core or multi-CPU machine for version 1.0. So currently, I'm not sure if I can actually use the library or would have to roll a dumbed-down basic version myself.
I am at an early stage of the design process and so far I am targeting C-something (for the speed critical bits) and Python (for the productivity critical bits) as my languages. If there are compelling reasons, I might switch, but so far I am contented with my choice.
Please note that I'm aware of the fact that it might take longer to retrieve the next chunk from the database than to process the current one, in which case the whole process would be I/O-bound. I will, however, assume for now that it isn't, and in practice use a db cluster, memory caching, or something else to avoid being I/O-bound at this point.
Well, if .NET is an option, Microsoft has put a lot of effort into Parallel Computing.
If you still plan on using Python, you might want to have a look at Processing. It uses processes rather than threads for parallel computing (due to the Python GIL) and provides classes for distributing "work items" onto several processes. Using the pool class, you can write code like the following:
import processing

def worker(i):
    return i * i

num_workers = 2
pool = processing.Pool(num_workers)
result = pool.imap(worker, range(100000))
This is a parallel version of itertools.imap, which distributes calls over to processes. You can also use the apply_async methods of the pool and store lazy result objects in a list:
results = []
for i in range(10000):
    results.append(pool.apply_async(worker, (i,)))  # args must be a tuple
output = [r.get() for r in results]                 # collect the lazy results
For further reference, see the documentation of the Pool class.
Gotchas:
processing uses fork(), so you have to be careful on Win32
objects transferred between processes need to be pickleable
if the workers are relatively fast, you can tweak chunksize, i.e. the number of work items sent to a worker process in one batch
processing.Pool uses a background thread
You can implement the algorithm from Google's MapReduce without having physically separate machines. Just consider each of those "machines" to be "threads." Threads are automatically distributed on multi-core machines.
I might be missing something here, but this seems fairly straightforward using pthreads.
Set up a small threadpool with N threads in it and have one thread to control them all.
The master thread simply sits in a loop doing something like:
Get data chunk from DB
Find the next free thread; if no thread is free, then wait
Hand over chunk to worker thread
Go back and get next chunk from DB
In the meantime the worker threads sit and do:
Mark myself as free
Wait for the master thread to give me a chunk of data
Process the chunk of data
Mark myself as free again
The method by which you implement this can be as simple as two mutex-controlled arrays. One has the worker threads in it (the threadpool) and the other indicates whether each corresponding thread is free or busy (see the sketch below).
Tweak N to your liking ...
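Here is a rough, hedged sketch of that scheme in modern C++, swapping the two mutex-controlled arrays for a condition-variable work queue (the same master/worker idea; Chunk is a placeholder for whatever the DB returns):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Chunk { /* data from the DB */ };

std::mutex m;
std::condition_variable cv;
std::queue<Chunk> work;
bool done = false;

void worker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return done || !work.empty(); });
        if (work.empty()) return;          // done and the queue is drained
        Chunk c = work.front();
        work.pop();
        lock.unlock();
        // ... process the chunk ...
    }
}

int main() {
    const int N = 4;                       // tweak N to your liking
    std::vector<std::thread> pool;
    for (int i = 0; i < N; ++i) pool.emplace_back(worker);

    // Master loop: get chunks from the DB and hand them to the pool.
    for (int i = 0; i < 100; ++i) {
        { std::lock_guard<std::mutex> lock(m); work.push(Chunk{}); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_all();
    for (auto& t : pool) t.join();
}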
If you're working with a compiler that supports it, I would suggest taking a look at http://www.openmp.org for a way of annotating your code so that certain loops are parallelized.
It does a lot more as well, and you might find it very helpful.
Their web page reports that gcc 4.2 will support OpenMP, for example.
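As a minimal illustration of that style of annotation (a sketch; process() is a made-up side-effect-free function, and you'd compile with something like g++ -fopenmp):

#include <vector>

double process(int i) { return i * 0.5; }   // stand-in pure function

int main() {
    const int n = 1000000;
    std::vector<double> out(n);

    // Ask the compiler to split the loop iterations across threads.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        out[i] = process(i);
    }
}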
A similar thread pool is used in Java. In some Java frameworks, however, the tasks submitted to the pool are serializable, so they can be sent to other computers and deserialized to run there.
I have developed a MapReduce library for multi-threaded/multi-core use on a single server. Everything is taken care of by the library; the user just has to implement Map and Reduce. It is positioned as a Boost library, but has not yet been accepted as a formal lib. Check out http://www.craighenderson.co.uk/mapreduce
You may be interested in examining the code of libdispatch, which is the open source implementation of Apple's Grand Central Dispatch.
Intel's TBB or boost::mpi might be of interest to you also.