Parallel thread execution to achieve performance - C++

I am a little bit confused about multithreading. Normally we create multiple threads to break the main task into subtasks, to achieve responsiveness and to eliminate waiting time.
But here I have a situation where I have to execute the same task in parallel using multiple threads.
My processor can execute 4 threads in parallel, so will it improve performance if I create more than 4 threads (10 or more)? When I put this question to my colleague, he said nothing bad will happen: we are already running many threads in other applications (browser threads, kernel threads, etc.), so he advised creating multiple threads for the same task.
But if I create more than 4 threads that are supposed to execute in parallel, won't that cause more context switches and decrease performance?
Or, even though we create multiple threads to execute in parallel, will they just execute one after the other, so the performance stays the same?
So what should I do in the above situation, and which of these views is correct?
Edit:
1 thread: time to process was 120 seconds.
2 threads: time to process was about 60 seconds.
3 threads: time to process was still about 60 seconds (no change from 2 threads).
Is this because my hardware can only run 2 threads in parallel (being dual-core)?
Software thread = a piece of code to execute.
Hardware thread = a core (processor) that runs a software thread.
So my CPU supports only 2 concurrent threads; if I purchase an AMD CPU with 8 or 12 cores, can I achieve higher performance?
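You can ask the standard library how many threads the hardware can run concurrently; a minimal sketch using std::thread::hardware_concurrency (note the returned value is only a hint and may be 0 if unknown):

```cpp
#include <iostream>
#include <thread>

int main() {
    // Number of threads the hardware can run concurrently.
    // This is only a hint; 0 means the value is not computable.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << "Hardware concurrency: " << n << '\n';
}
```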

Multi-tasking is pretty complex, and performance gains usually depend a lot on the problem itself:
Only a part of the application can be done in parallel (there is always a first part that splits the work up into multiple tasks). So the first question is: how much of the work can be done in parallel, and how much of it needs to be synchronized? (In some cases you can stop here, because so little can be done in parallel that the whole effort isn't worth it.)
Multiple tasks may depend on each other (one task may need the result of another task). Such tasks cannot be executed in parallel.
Multiple tasks may work on the same data/resources (read/write situation). Here we need to synchronize access to those data/resources. If all tasks need write access to the same object during the WHOLE process, then we cannot work in parallel at all.
Basically this means that without an exact definition of the problem (dependencies between tasks, dependencies on data, number of parallel tasks, ...) it's very hard to tell how much performance you'll gain by using multiple threads (and whether it's really worth it).

http://en.wikipedia.org/wiki/Amdahl%27s_law
Amdahl's law states, in a nutshell, that the performance boost you receive from parallel execution is limited by the portion of your code that must run sequentially.
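As a rough illustration (the fractions below are made-up numbers, not taken from the question), the maximum speedup for a parallel fraction p on n cores is 1 / ((1 - p) + p / n):

```cpp
#include <iostream>

// Amdahl's law: maximum speedup for a parallel fraction p on n cores.
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Example: 90% of the work parallelizes. Even with unlimited cores,
    // the speedup can never exceed 1 / (1 - 0.9) = 10x.
    std::cout << "4 cores:  " << amdahl_speedup(0.9, 4)  << "x\n"; // ~3.08x
    std::cout << "64 cores: " << amdahl_speedup(0.9, 64) << "x\n"; // ~8.77x
}
```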
Without knowing your problem space, here are some general things you should look at:
Refactor to eliminate mutexes/locks. By definition they force code to run sequentially.
Reduce context-switch overhead by pinning threads to physical cores (see the sketch after this list). This becomes more complicated when threads must wait for work (i.e. blocking on I/O), but in general you want to keep each core as busy as possible running your program, not switching threads in and out.
Unless you absolutely need to use raw threads and synchronization primitives, try to use a task scheduler or parallel-algorithms library to parallelize your work. Examples would be Intel TBB, Thrust, or Apple's libDispatch.
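A hedged sketch of pinning, using the Linux/glibc-specific pthread_setaffinity_np (not portable; on Windows you would use SetThreadAffinityMask instead):

```cpp
#include <pthread.h>
#include <sched.h>
#include <thread>

// Pin a std::thread to one CPU core. Linux/glibc only;
// native_handle() yields the underlying pthread_t on POSIX systems.
bool pin_to_core(std::thread& t, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(t.native_handle(), sizeof(set), &set) == 0;
}
```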

Related

How does incorporating multi-threading into C++ benefit the performance, and why?

I've learnt that multi-threading is when two or more parts of a program can run concurrently, but what is the actual purpose of using multi-threading? Why and how does it benefit the performance of our program?
Suppose I have a list of, say, 10000 items that each require an operation. If I use a single thread for this operation on all 10000 items, let's say it takes 8 seconds. If I use multithreading on a 4-core CPU, I can divide the operation among 4 threads, so each thread handles one fourth of the items, i.e. 2500. The time taken will now be approximately 2 seconds, since each thread runs independently on its 2500 of the 10000 items. In this way, multithreading can speed up your computation. (I am not taking into consideration the cost of setting up the threads.)
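A minimal sketch of that partitioning with std::thread (process_item here is a hypothetical stand-in for the per-item operation):

```cpp
#include <thread>
#include <vector>

void process_item(int& item) { item *= 2; }  // hypothetical per-item work

void process_all(std::vector<int>& items, unsigned num_threads) {
    std::vector<std::thread> workers;
    const size_t chunk = items.size() / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        size_t begin = t * chunk;
        // The last thread also takes any remainder.
        size_t end = (t == num_threads - 1) ? items.size() : begin + chunk;
        workers.emplace_back([&items, begin, end] {
            for (size_t i = begin; i < end; ++i) process_item(items[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```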
Another use of multithreading is to avoid blocking calls that take a long time to return, for example accept and connect in TCP socket programming. You can spin up a new thread to handle those, not for the sake of speed but to ensure that the main thread is not blocked.
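A hedged sketch of that pattern, with blocking_accept_loop as a hypothetical stand-in for whatever blocking call you need to isolate:

```cpp
#include <thread>

// Hypothetical: imagine this blocks on accept() in a loop.
void blocking_accept_loop() { /* ... */ }

int main() {
    // Run the blocking work on its own thread so main stays responsive.
    std::thread acceptor(blocking_accept_loop);
    // ... main thread continues with other work here ...
    acceptor.join();  // a real program would also need a shutdown signal
}
```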
The actual purpose of multithreading is nothing more than what you said: It is a way to let two or more parts of a program run concurrently.
There are many reasons why you might want two or more parts of the same program to run concurrently. One of those reasons is, it's a way to exploit the several CPUs of a multi-processor computer system: Different concurrent threads can execute in parallel with each other if they are assigned to different CPUs.

10 threads in a single program or 1 thread program ran 10 times (C++)?

I am wondering whether there is any difference in performance between running a single program (exe) with 10 different threads and running the program with a single thread 10 times in parallel (starting it from a .bat file), assuming the work done is the same and only the number of threads spawned by the program changes.
I am developing a client/server communication program and want to test it for throughput. I'm currently learning about parallel programming and threading and wasn't sure how Windows would handle the above scenario. Will the scheduler schedule work the same way for both scenarios? Will there be a performance difference?
The machine the program is running on supports 4 hardware threads.
Threads are slightly lighter weight than processes, as there are many things a process gets its own copy of. This is especially true when you compare the time it takes to start a new thread versus a new process from scratch (fork, where available, also avoids a lot of that cost). In either case, though, you can generally get even better performance by using a worker pool where possible, rather than starting and stopping fresh processes/threads.
The other major difference is that threads by default all share the same memory, while processes each get their own and need to communicate through more explicit means (which may include blocks of shared memory). This can make it easier for a threaded solution to avoid copying data, but it is also one of the dangers of multithreaded programming when care is not taken with how threads use the shared memory/objects.
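A classic illustration of that danger, and the lock that prevents it (a minimal sketch; without the mutex the two threads' increments would race and the final count would be unpredictable):

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;            // shared by both threads
std::mutex counter_mutex;   // guards counter

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lk(counter_mutex);
        ++counter;  // without the lock, this increment would be a data race
    }
}

int main() {
    std::thread a(increment_many), b(increment_many);
    a.join();
    b.join();
    std::cout << counter << '\n';  // reliably 200000 with the lock
}
```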
There may also be APIs that are more oriented around a single process. For example, on Windows there are I/O Completion Ports, which basically work on the idea of having many in-progress I/O operations for different files, sockets, etc., with multiple threads (though generally far fewer than the number of files/sockets) handling the results as they become available through a GetQueuedCompletionStatus loop.
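A hedged sketch of such a worker loop (Windows-only; handle_io is a hypothetical handler, and real code must also associate handles with the port via CreateIoCompletionPort and distinguish shutdown from ordinary I/O errors):

```cpp
#include <windows.h>

// Hypothetical: process one completed I/O operation.
void handle_io(ULONG_PTR key, DWORD bytes, OVERLAPPED* ov) { /* ... */ }

DWORD WINAPI worker(LPVOID port) {
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        // Blocks until some in-progress I/O on the port completes.
        if (!GetQueuedCompletionStatus((HANDLE)port, &bytes, &key, &ov, INFINITE))
            break;  // simplified: treat any failure as shutdown
        handle_io(key, bytes, ov);
    }
    return 0;
}
```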

Handling of Creation / Reuse of threads in C++

I am working on a C++ hobby project that requires lots of processing several times a second. Splitting the work up into multiple threads could improve completion speed. When the threads are done, should I keep them around until I have more work for them, or should I throw the threads away and just make new ones when I need them again?
If it's just several times a second (e.g. 10 times a second) then keep it simple and throw the thread away when it's done.
When you get to hundreds or thousands of threads, then you should start thinking about a thread pool.
All that is assuming you're working on a typical machine and not a weak CPU like a microcontroller.
When the threads are done, should I keep them around until I have more work for them, or should I throw the threads away and just make new ones when I need them again?
It makes little sense paying for thread creation many times if you can pay the price just once (Greta would tell you "how dare you?!"). Idle threads in a (blocking) thread pool do not consume any CPU time and are ready to execute your tasks with the shortest possible delay and overhead because all the necessary resources for the thread were allocated when the thread was spawned.
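For illustration, a minimal blocking thread-pool sketch (deliberately simplified: no exception handling and no task results; a real pool or a library like TBB handles much more):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                // Idle workers block here without consuming CPU time.
                cv_.wait(lk, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};
```

Usage would look like ThreadPool pool(4); pool.submit([]{ /* work */ });, paying the thread-creation price once at construction instead of per task.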
I would recommend using the Intel TBB task scheduler, see a tutorial. It allows for an efficient, modern programming paradigm where you partition your problem into stages/tasks, some of which can be executed in parallel. I cannot recommend enough watching "Plain Threads are the GOTO of today's computing" - Hartmut Kaiser - Keynote, Meeting C++ 2014.

What is the point of being able to spawn dozens of processes efficiently if only very few of them can be executed in parallel?

Erlang is very efficient at spawning new processes, but what is the point if the CPU can only execute, e.g., 4 of them in parallel?
The rest have to wait for the Erlang "context switch".
Do you get more done, faster, with for example 10k processes than you would using Java/C#/C++?
There are many reasons:
Conceptually, processes are easy to reason about. Asynchronous callbacks and promises in languages like JavaScript are harder to reason about because the code in the callbacks can change the values of variables used by other code in the thread.
Processes provide isolation for the code running inside them. A process can only affect other processes by placing messages in their mailboxes. A process cannot meddle with the state of other processes.
Processes are granular. This means:
If you have 400 processes on a 4-core machine, the scheduler will make sure to distribute them across the threads in such a way as to fully utilize the 4 cores. One core is always going to be handling OS stuff, so the scheduler would likely end up giving the thread running on that core less work than the other 3 threads. But it adapts, so in any situation the scheduler will do its best to make sure processes wait as little as possible and threads always have a queue of processes waiting for CPU time.
Moving to better hardware with more cores doesn't require changes to the code or architecture of the application. Moving your Erlang application from a 4-core machine to a 64-core machine means it will run roughly 16 times faster without any changes, assuming your application is structured in such a way that it can take advantage of the extra cores (usually this means making sure tasks that could be done in parallel are executed in separate processes).
Processes are very lightweight, so there is very little overhead. In most applications the benefits provided by processes and the scheduler far outweigh the small overhead from running thousands of processes. Commodity hardware can easily handle hundreds of thousands of processes.
So in closing, whether or not processes execute in parallel isn't that important. The other benefits they provide are enough to justify their usage.

Benefits of a multi thread program in a unicore system [duplicate]

This question already has answers here:
How can multithreading speed up an application (when threads can't run concurrently)?
My professor casually mentioned that we should write multi-threaded programs even if we are using a unicore processor; however, because of a lack of time, he did not elaborate on it.
I would like to know: what are the benefits of a multi-threaded program on a unicore processor?
It won't be as significant as on a multi-core system, but it can still provide some benefits.
Mainly, the benefits come from the context switch that can happen when the currently executing thread stalls. The executing thread may be waiting for anything: a hardware resource, a branch-misprediction penalty, or a data transfer after a cache miss.
At that point a waiting thread can be executed to make use of this "waiting time". But of course the context switch itself takes some time, and managing threads in the code, rather than doing a sequential computation, can add extra complexity to your program. And, as has been said, some applications need to be multi-threaded, so in some cases there is no escaping the context switch.
Some applications need to be multi-threaded. Multi-threading isn't just about improving performance by using more cores, it's also about performing multiple tasks at once.
Take Skype, for example: the GUI needs to be able to accept the text you're entering, display it on the screen, listen for new messages coming from the user you're talking to, and display them. This wouldn't be a trivial task in a single-threaded application.
Even if there's only one core available, the OS thread scheduler will give you the illusion of parallelism.
Usually it is about not blocking. Running many threads on a single core still gives the illusion of concurrency. So you can have, say, a thread doing IO while another one does user interactions. The user interaction thread is not blocked while the other does IO, so the user is free to carry on interacting.
The benefits can vary.
One of the widely used examples is an application with a GUI that is supposed to perform some kind of computation. With a single thread, the user has to wait for the result before doing anything else with the application; but if you run the computation in a separate thread, the user interface stays available during the computation. So a multi-threaded program can emulate a multi-tasking environment even on a unicore system. That's one of the points.
As others have already mentioned, not blocking is one application. Another one is separation of logic for unrelated tasks that are to be executed simultaneously. Using threads for that leaves handling of scheduling these tasks to the OS.
However, note that it may also be possible to implement similar behavior using asynchronous operations in a single thread. "Future" and boost::asio provide ways of doing non-blocking stuff without necessarily resorting to multiple threads.
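For instance, a hedged sketch with boost::asio (assuming a reasonably recent Boost where io_context exists; a timer stands in for any slow operation): the wait does not block the thread, and the handler runs from the same single-threaded event loop.

```cpp
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main() {
    boost::asio::io_context io;
    boost::asio::steady_timer timer(io, std::chrono::seconds(1));
    // Register a completion handler instead of blocking on the wait.
    timer.async_wait([](const boost::system::error_code& ec) {
        if (!ec) std::cout << "timer fired\n";
    });
    // run() dispatches handlers on this one thread as events complete.
    io.run();
}
```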
I think it depends a bit on how exactly you design your threads and which logic is actually in the thread. Some benefits you can even get on a single core:
A thread can wrap a blocking or long-running call that you can't circumvent otherwise. For some operations there are polling mechanisms, but not for all.
A thread can wrap an almost standalone part of your application that has virtually no interaction with other code. For example background polling for updates, monitoring some resource (e.g. free storage), checking internet connectivity. If you keep them in a separate thread you can keep the code relatively simple in its own 'runtime' without caring too much about the impact on the main program, the sole communication with the main logic is usually a single 'event'.
In some environments you might get more processing time. This mainly depends on how your OS scheduling works, but if it allocates time per thread, then the more threads you have, the more CPU time your app will be scheduled.
Some benefits long-term:
Where it's not hard to do, you benefit when your hardware evolves. You never know what's going to happen; today your app runs on a single-core embedded device, tomorrow that embedded device gets a quad core. Programming threaded from the beginning improves your future scalability.
One example is an environment where you can deterministically assign work to a thread, e.g. based on some hash, so that all related operations end up in the same thread. The advantage on a single core is small, but it's not hard to do, as you need few synchronization primitives, so the overhead stays small.
That said, I think there are situations where it's very ill-advised:
As soon as the required synchronization with other threads becomes complex (e.g. multiple locks, lots of critical sections, ...). Multi-threading might still give you a benefit when you effectively move to multiple CPUs, but the overhead is huge, both for your single core and for your programming time.
For instance, think about operations that block because of slow peripheral devices (hard-disk access, etc.). While these are waiting, even a single core can do other things asynchronously.
In a lot of applications the bottleneck is not CPU processing power. So when the program flow is waiting for completion of I/O requests (user input, network/disk I/O), for critical resources to become available, or for any sort of asynchronously triggered event, the CPU can be scheduled to do other work instead of just blocking.
In that case you don't necessarily need multiple threads that can actually run in parallel. Cooperative multi-tasking concepts like asynchronous I/O, coroutines, or fibers come to mind.
If however the application's bottleneck is CPU processing power (constantly 100% CPU usage), then it makes sense to increase the number of CPUs available to the application. At that point it is easier to scale the application up to use more CPUs if it was designed to run in parallel upfront.
As far as I can see, one answer was not yet given:
You will have to write multithreaded applications in the future!
If current trends hold, the average number of cores will keep doubling every 18 months or so. People practiced single-threaded programming for 50 years, and now they are confronted with devices that have multiple cores. The programming style in a multi-threaded environment differs significantly from single-threaded programming. This concerns low-level aspects like avoiding race conditions and proper synchronization, as well as high-level aspects like overall algorithm design.
So in addition to the points already mentioned, it's also about writing future-proof software, scalability and the development of the skills that are required to achieve these goals.