Types of thread in C++ [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
What are the different types of threads in C++?
I already know about multiprocessing and multithreading. I know how to create threads in standard C++ and VC++, but I'm not sure what is meant by different types of threads.

From the software point of view, there are no "different types of threads". A thread is a thread. Are there different types of pixels on the screen? Nope. It's similar. However, in some contexts, you MAY differentiate threads by their intended purpose. You can have:
OS threads (or kernel threads) vs. user threads (or application threads)
main thread vs ad-hoc threads vs pooled threads
background threads vs high-priority threads
etc
but a thread is a thread. They are all the same in terms of their basic properties: they run some specified code. The difference is in how they are used and what priority they have (= how often they get some processor time to do their work) or what code they are allowed to run.
...ok, thinking a bit more about the terms used in different contexts, ACTUALLY, there are 2 types of threads, and both are just called 'threads':
software threads
hardware threads
The difference is that the former is what the operating system's scheduler manages (creates/wakes/puts to sleep/kills/etc.). The number of those is virtually limited only by the available memory; you may have 100, 1000, or 10000 software threads, no problem. The latter refers to the actual electronic structures that execute them, and there is always a much lower limit on those. Not long ago each CPU could execute just a single thread: you wanted to run 8 threads, you needed an 8-CPU motherboard. Today, most CPUs have multiple "cores", each able to run one or a few hardware threads, so a single CPU can execute several (typically 4-16) threads at once.
However, in my circles, when someone says "a thread" they mean a "software thread", and when someone wants to refer to the latter they explicitly say "a hardware thread". That's why I didn't think of this at first; I'm probably more of a software guy, while in a hardware team "thread" may mean "hardware thread" by default.
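As a small illustration of the distinction, standard C++ can report how many hardware threads the machine offers while you create as many software threads as you like; a minimal sketch:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Number of hardware threads the implementation can run concurrently.
    // May return 0 if the value is not computable on this platform.
    unsigned int hw = std::thread::hardware_concurrency();
    std::cout << "Hardware threads available: " << hw << '\n';
}
```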

In general, there are two types of multitasking: process-based and thread-based.
Process-based multitasking handles the concurrent execution of separate programs. It is something like two people doing the same task, or one person doing the task while a second person does a sub-task of it.
Thread-based multitasking deals with the concurrent execution of pieces of the same program. It is something like using different parts of your body for the same piece of work (multitasking within yourself, so to speak).
I don't know whether my analogies above match your understanding.
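To make the thread-based case concrete, here is a minimal sketch (the function names are just illustrative) where two pieces of the same program run concurrently:

```cpp
#include <iostream>
#include <thread>

// Two pieces of the same program running concurrently (thread-based multitasking).
// Output from the two threads may interleave arbitrarily.
void count_up()   { for (int i = 0; i < 3; ++i) std::cout << "up "   << i << '\n'; }
void count_down() { for (int i = 3; i > 0; --i) std::cout << "down " << i << '\n'; }

int main() {
    std::thread t1(count_up);
    std::thread t2(count_down);
    t1.join();  // wait for both threads before main() returns
    t2.join();
}
```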
For further information, you can follow this link.

Related

Limit or throttle running threads in C++ Linux [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 months ago.
I am using Pagmo, a C++ API, for my optimization problem. Pagmo launches a new thread when a new optimization is launched, via an invocation of island.evolve(). My point is that I don't have fine-grained control here over the type of thread that's launched "under the hood" of Pagmo. I can query Pagmo threads on their status, i.e. whether they've completed their run. I have a machine with 28 physical cores and I'm thinking that the optimal number of running threads would be on the order of 28. Right now, my code just dumps the whole lot of threads onto the machine, which is substantially more than the number of cores and hence likely very inefficient.
I'm thinking of using a std::counting_semaphore (C++20) and setting the counter to 28. Each time I launch a thread in Pagmo, I would decrement the semaphore counter. When it hits 0, the remaining threads would block and wait in the queue until the counter is incremented again.
I'm thinking I could run a loop which queries the Pagmo threads as to their status and increments the std::counting_semaphore's counter each time a thread goes idle (meaning its task has completed). Of course, the Pagmo threads are ultimately joined. Each time the counter goes above 0, a new thread is allowed to start - so I believe.
My questions are:
Is this the best practice for limiting/throttling the number of running threads using modern C++?
Is there a better way I haven't thought of?
Is there a way to query Linux in real time for the number of running threads?
Thanks in advance!
Phil
I had tried a simple loop to launch and throttle thread creation, but it didn't work well and threads were launched too quickly.
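For reference, here is a rough sketch of the semaphore-based throttling I have in mind (C++20). do_one_optimization() is just a stand-in for the Pagmo call, not real Pagmo code:

```cpp
#include <chrono>
#include <semaphore>
#include <thread>
#include <vector>

// Stand-in for the real Pagmo work (island.evolve() + wait); purely illustrative.
void do_one_optimization(int /*id*/) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

std::counting_semaphore<28> slots{28};  // at most 28 optimizations running at once

void worker(int id) {
    slots.acquire();             // blocks while 28 tasks are already running
    do_one_optimization(id);
    slots.release();             // free the slot for the next waiting task
}

int main() {
    // Note: all the OS threads are still created up front; the excess ones
    // simply sleep on the semaphore until a slot frees up.
    std::vector<std::thread> threads;
    for (int i = 0; i < 200; ++i)        // 200 stands in for "the whole lot"
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();
}
```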
First of all, your post could use some editing and perhaps even a code snippet that would help us understand the problem better. Right now I'm only going through the documentation based on a wild guess of what you are doing there.
I've quickly checked what Pagmo is about, and I would first advise being careful when limiting, from outside the library, any library that is designed for parallel computation.
I will try to answer your questions:
I do not think that this is the best way to throttle threads created by an external library
Yes. First of all, I've checked the Pagmo API documentation and, if I understand you correctly, you are using an island class. Based on what they state in their documentation, the default class that inherits from island and is constructed by the default ctor is thread_island (at least on non-POSIX systems, which may not be your case). However, thread_island can be constructed via the thread_island(bool use_pool) ctor, which indicates that you can tell these island classes to use a common thread pool. And if it is done for non-POSIX systems, it is most likely done for POSIX systems as well. So if you wish to limit the number of threads, I would do it via a thread pool.
You can limit the maximum number of threads running on Linux via /proc/sys/kernel/threads-max, and you can also instruct a Linux system to treat some processes with less importance via niceness.
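Regarding the real-time query part of the third question, one Linux-specific option (a sketch, offered as an assumption about what you need rather than anything Pagmo provides) is to read the Threads: field of /proc/self/status for the current process:

```cpp
#include <fstream>
#include <iostream>
#include <string>

// Count the threads of the current process by reading the "Threads:" field
// from /proc/self/status (Linux-specific).
int current_thread_count() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("Threads:", 0) == 0)          // line starts with "Threads:"
            return std::stoi(line.substr(8));
    }
    return -1;  // field not found
}

int main() {
    std::cout << "Threads in this process: " << current_thread_count() << '\n';
}
```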
Hope it helps!
EDIT: As a footnote, I will also mention that the documentation actually encourages the use of thread_island even on POSIX systems. See this link here.
EDIT2: In case you must use fork_island due to the issues they mention when thread safety cannot be guaranteed, another option would be to limit available resources via setrlimit (see this link right here); you are interested in setting RLIMIT_NPROC.
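If you do go the setrlimit route, a minimal sketch looks like the following (the value 256 is arbitrary). Keep in mind that RLIMIT_NPROC is a per-user limit and that, once it is hit, creating a new thread or process fails instead of waiting, so it is a much blunter tool than a pool or a semaphore:

```cpp
#include <sys/resource.h>
#include <cstdio>

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NPROC, &rl) != 0) { std::perror("getrlimit"); return 1; }
    std::printf("current soft limit: %llu\n", (unsigned long long)rl.rlim_cur);

    rl.rlim_cur = 256;  // example value; must not exceed rl.rlim_max
    if (setrlimit(RLIMIT_NPROC, &rl) != 0) { std::perror("setrlimit"); return 1; }
}
```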

Thread basic..Help Required [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
What is a thread?
What is the difference between using a mutex and not using a mutex?
What is the difference between using join() and not using join()?
Which low-level functions are called when you create a thread with the std::thread class constructor and when using pthreads?
I have read the material on the internet and I am still asking these questions to further strengthen my understanding.
Thanks in advance
1) A thread allows for parallel execution of your program. Using multiple threads in your program allows multiple processor cores to execute your code, thus (usually) speeding up the program.
2) Because threads allow parallel execution of code, it can happen that thread #1 is reading data while thread #2 is modifying it, which can result in some funky cases you don't want to happen. Mutexes stop this behaviour by making threads wait their turn in these particular critical sections.
3) Calling thread.join() makes the current thread wait for the completion of the thread object that join() was called on.
4) This is really OS-specific. For example, Unix-based systems use pthreads as the underlying thread implementation when creating a std::thread. The compiler/standard-library vendor implements this.
If you would like to learn multithreading with the C++ standard library, then please refer to C++ Concurrency in Action (by Anthony Williams). It is a very good book and is also recommended on The Definitive C++ Book Guide and List.
What is a thread?
A thread is an execution unit which has its own program counter, a stack, and a set of registers. Threads are used in applications to improve performance and make effective use of the CPU.
The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
Refer https://en.wikipedia.org/wiki/Thread_%28computing%29
What is the difference between using a mutex and not using a mutex?
Imagine for a moment that you're sharing an apartment with a friend. There's only one kitchen and only one bathroom. Unless you're particularly friendly, you can't both use the bathroom at the same time, and if your roommate occupies the bathroom for a long time, it can be frustrating if you need to use it.
Likewise, though it might be possible to both cook meals at the same time, if you have a combined oven and grill, it's just not going to end well if one of you tries to grill some sausages at the same time as the other is baking a cake. Furthermore, we all know the frustration of sharing a space and getting halfway through a task only to find that someone has borrowed something we need or changed something from the way we left it.
It's the same with threads. If you're sharing data between threads, you need to have rules for which thread can access which bit of data when, and how any updates are communicated to the other threads that care about that data.
When you have a multi-threaded application, the different threads sometimes share a common resource, such as a global variable or a file handle.
A mutex can be used to synchronize access to a single resource. Other synchronization methods (like semaphores) are available to synchronize between multiple threads and processes.
The concept is called "mutual exclusion" (mutex for short), and it is a way to ensure that only one thread at a time is allowed inside a critical section, using that resource, etc.
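To make the with/without-mutex difference concrete, here is a minimal sketch: two threads increment a shared counter, and the mutex keeps the result deterministic:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int counter = 0;           // shared resource
std::mutex counter_mutex;  // protects counter

void increment_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // one thread at a time
        ++counter;
    }
}

int main() {
    std::thread t1(increment_many);
    std::thread t2(increment_many);
    t1.join();
    t2.join();
    // With the mutex the result is always 200000; without it, the two threads'
    // read-modify-write operations interleave and the result varies run to run.
    std::cout << counter << '\n';
}
```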
What is the difference between using join() and not using join()?
The calling thread waits for the specified thread to terminate. If that thread has already terminated, then join() returns immediately. The thread specified must be joinable.
By default threads are joinable unless you have changed their attributes.
Which low-level functions are called when you create a thread with the std::thread class constructor and when using pthreads?
pthread_create is called in the case of Linux. The std::thread library is platform-independent, so it calls the thread API specific to the underlying operating system.
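For illustration, here is the same "create a thread and join it" done with both APIs; a sketch only (compile with -pthread on Linux):

```cpp
#include <cstdio>
#include <pthread.h>
#include <thread>

void* work_pthread(void*) {            // pthreads entry point
    std::puts("hello from pthread");
    return nullptr;
}

void work_std() {                      // std::thread entry point
    std::puts("hello from std::thread");
}

int main() {
    // POSIX API: on Linux this is what std::thread ends up using underneath.
    pthread_t tid;
    pthread_create(&tid, nullptr, work_pthread, nullptr);
    pthread_join(tid, nullptr);

    // Portable C++ API: same idea, platform details hidden.
    std::thread t(work_std);
    t.join();
}
```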

C++ Pthreads - Multithreading slower than single-threading [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 8 years ago.
I am developing a C++ application using the pthreads library. Every thread in the program accesses a common unordered_map. The program runs slower with 4 threads than with 1. I commented out all the code in the thread function and left only the part that tokenizes a string. The single-threaded execution was still faster, so I came to the conclusion that the map wasn't the problem.
After that I printed the threads' IDs to the screen, and they seemed to execute sequentially.
In the function that spawns the threads, I have a while loop which creates threads into an array whose size is the number of threads (let's say 'tn'). Every time tn threads have been created, I execute a for loop to join them (pthread_join). The while loop runs many times (not only 4).
What may be wrong?
If you are running a small, trivial program, this tends to be the case, because the work to start the threads, schedule priorities, run, context switch and then synchronize can actually take more time than running it single-threaded.
The point here is that for trivial problems multithreading can run slower. BUT another factor is how many cores you actually have in your CPU.
When you run a multithreaded program on a single core, the threads are time-sliced: each thread gets a share of CPU time in turn, so they effectively execute sequentially.
You only get true parallelism if you have multiple cores, and in that scenario the parallelism is essentially one thread per core (or per hardware thread).
Now, given the fact that you (most likely) have several of these threads sharing one core, keep in mind the overhead the CPU incurs for:
allocating time slices to each thread
synchronizing thread access to various internal CPU operations
other thread-priority operations
So, in other words, for a simple application multithreading is actually a downgrade in terms of performance.
Multithreading comes in handy when you need an asynchronous operation, meaning you don't want to wait for a result, such as loading an image from a URL or streaming geometry from the HDD (which is slower than RAM).
In such scenarios, applying multithreading leads to a better user experience, because your program won't hang when a slow operation occurs.
Without seeing the code it's difficult to tell for sure, but there could be a number of issues.
Your threads might not be doing enough work to justify their creation. Creating and running threads is expensive, so if your workload is too small, they won't pay for themselves.
Execution time could be spent mostly doing memory accesses on the map, in which case mutually excluding the threads means that you aren't really doing much parallel work in practice (Amdahl's Law).
If most of your code runs under a mutex, then it will run serially and not in parallel.
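As a minimal sketch of that last point (illustrative only, not the asker's code): if each thread holds one lock for its entire body, the threads simply take turns, so there is no parallel speedup, only the extra cost of creating and switching between them:

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
long long total = 0;

void worker(int chunk) {
    // Holding the lock for the whole body serializes the threads:
    // they run one after another, never concurrently.
    std::lock_guard<std::mutex> lock(m);
    for (int i = 0; i < chunk; ++i)
        total += i;
}

int main() {
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, 1'000'000);
    for (auto& th : threads)
        th.join();
}
```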

When to use multithreading in C++? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I am a C++ programmer (intermediate) and am learning multi-threading now. I find it quite confusing: when should I use multi-threading in C++? How will I know in which parts of my program I need to use it?
When to use multithreading in C++?
When you have a resource-intensive task like a huge mathematical calculation, or an I/O-intensive task like reading from or writing to a file, you should use multithreading.
The purpose should be that you can run multiple tasks together, which increases the performance and responsiveness of your application. Also, learn about synchronization before implementing multithreading in your application.
When to use multithreading in C++?
Well - the general rule of thumb is: use it when it can speed up your application. The answer isn't really language-dependent.
If you want to get an in-depth answer, then you have to consider a few things:
Is multithreading possible to implement inside your code? Do you have fragments which can be calculated at the same time and are independent of other calculations?
Is multithreading worth implementing? Does your program still run slowly even after you did all you could to make it as fast as possible?
Will your code be run on machines that support multithreading (so have multiple processing units)? If you're designing code for some kind of machine with only one core, using multithreading is a waste of time.
Is there a different option? A better algorithm, cleaning the code, etc? If so - maybe it's better to use that instead of multithreading?
Do you have to handle things that are hard to predict in time, while the whole application has to constantly run? For example - receiving some information from a server in a game?
This is a slightly subjective subject... But I tend to use multi-threading in one of two situations.
1 - In a performance-critical situation where the utmost power is needed (and the algorithm of course supports parallelism); for me, matrix multiplications.
2 - More rarely, where it may be easier to have a thread managing something fairly independent. The classic example is networking: perhaps have a thread blocking while waiting for connections and spawning threads to manage each connection as it comes in. This is useful as the threads can block and respond in a timely manner. Say you have a server: one request might need disk access, which is slow, so another thread can jump in and field a different request while the first is waiting for its data.
As has been said by others, only when you need to should you think about doing it; it gets complicated fast and can be difficult to debug.
Multithreading is a specialized form of multitasking, and multitasking is the feature that allows your computer to run two or more programs concurrently.
I think this link can help you.
http://www.tutorialspoint.com/cplusplus/cpp_multithreading.htm
Mostly when you want things to be done at the same time. For instance, you may want a window to still respond to user input while a level is loading in a game, or when you're downloading multiple files at once, etc. It's for things that really can't wait until other processing is done. Of course, each task probably runs slower as a result, but it really gives the illusion of multiple things happening at once.
Use multithreading when you can speed up your algorithms by doing things in parallel. Use it in opposition to multiprocessing when the threads need access to the parent process's resources.
My two cents.
Use cases:
Integrating your application into a lib/app that already runs a loop. You would need a thread of your own to run your code concurrently if you cannot integrate into the other app's loop.
Task splitting. It sometimes makes sense to organize disjoint tasks into threads, such as separating sound processing from image processing, for example.
Performance. When you want to improve the throughput of some task.
Recommendations:
In the general case, don't do multithreading if a single threaded solution will suffice. It adds complexity.
When needed, start with higher-order primitives, such as std::future and std::async (see the sketch after this list).
When possible, avoid data sharing, which is the source of contention.
When going to lower level abstractions, such as mutexes and so on, encapsulate it in some pattern. You can take a look at these slides.
Decouple your functions from threading and compose the threading into the functions at a later point. Namely, don't embed thread creation into the logic of your code.
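As an example of starting with those higher-order primitives, here is a minimal sketch that splits a summation with std::async instead of managing std::thread objects and mutexes by hand:

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    auto half = data.size() / 2;

    // Sum the first half on another thread while this thread sums the second half.
    std::future<long long> first_half = std::async(std::launch::async, [&] {
        return std::accumulate(data.begin(), data.begin() + half, 0LL);
    });
    long long second_half = std::accumulate(data.begin() + half, data.end(), 0LL);

    std::cout << (first_half.get() + second_half) << '\n';  // get() waits if needed
}
```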

Benefits of a multi thread program in a unicore system [duplicate]

This question already has answers here:
How can multithreading speed up an application (when threads can't run concurrently)?
(9 answers)
Closed 9 years ago.
My professor casually mentioned that we should write multi-threaded programs even if we are using a unicore processor; however, because of a lack of time, he did not elaborate on it.
I would like to know: what are the benefits of a multi-threaded program on a unicore processor?
It won't be as significant as on a multi-core system, but it can still provide some benefits.
Mainly, the benefits you get come from the context switch that happens when the currently executing thread has to wait. The executing thread may be waiting for anything, such as a hardware resource or even a data transfer after a cache miss.
At that point another, ready thread can be executed to make use of this "waiting time". But of course a context switch takes some time. Also, managing threads inside the code, rather than straightforward sequential computation, adds some extra complexity to your program. And, as has been said, some applications need to be multi-threaded, so in some cases there is no escaping the context switch.
Some applications need to be multi-threaded. Multi-threading isn't just about improving performance by using more cores, it's also about performing multiple tasks at once.
Take Skype for example - The GUI needs to be able to accept the text you're entering, display it on the screen, listen for new messages coming from the user you're talking to, and display them. This wouldn't be a trivial task in a single threaded application.
Even if there's only one core available, the OS thread scheduler will give you the illusion of parallelism.
Usually it is about not blocking. Running many threads on a single core still gives the illusion of concurrency. So you can have, say, a thread doing IO while another one does user interactions. The user interaction thread is not blocked while the other does IO, so the user is free to carry on interacting.
The benefits can vary.
One of the most widely used examples is an application with a GUI which is supposed to perform some kind of computation. If you have a single thread, the user has to wait for the result before doing anything else with the application; but if you run the computation in a separate thread, the user interface stays available to the user during the computation. So a multi-threaded program can emulate a multi-tasking environment even on a unicore system. That's one of the points.
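A console approximation of that GUI example, offered as a sketch: even on a single core the "interface" loop keeps responding while the worker computes:

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> done{false};

void long_computation() {
    // Stand-in for the heavy work; a real GUI app would be rendering, handling input, etc.
    std::this_thread::sleep_for(std::chrono::seconds(2));
    done = true;
}

int main() {
    std::thread worker(long_computation);

    // The "UI" thread stays responsive while the computation runs.
    while (!done) {
        std::cout << "still responsive...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }

    worker.join();
    std::cout << "computation finished\n";
}
```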
As others have already mentioned, not blocking is one application. Another one is separation of logic for unrelated tasks that are to be executed simultaneously. Using threads for that leaves handling of scheduling these tasks to the OS.
However, note that it may also be possible to implement similar behavior using asynchronous operations in a single thread. "Future" and boost::asio provide ways of doing non-blocking stuff without necessarily resorting to multiple threads.
I think it depends a bit on how exactly you design your threads and which logic is actually in the thread. Some benefits you can even get on a single core:
A thread can wrap a blocking or long-running call you can't circumvent otherwise. For some operations there are polling mechanisms, but not for all.
A thread can wrap an almost standalone part of your application that has virtually no interaction with other code. For example background polling for updates, monitoring some resource (e.g. free storage), checking internet connectivity. If you keep them in a separate thread you can keep the code relatively simple in its own 'runtime' without caring too much about the impact on the main program, the sole communication with the main logic is usually a single 'event'.
In some environments you might get more processing time. This mainly depends on how your OS scheduling works, but if it allocates time per thread, then the more threads you have, the more time your app will be scheduled.
Some benefits long-term:
Where it's not hard to do, you benefit if your hardware evolves. You never know what's going to happen: today your app runs on a single-core embedded device, tomorrow that embedded device gets a quad core. Programming with threads from the beginning improves your future scalability.
One example is an environment where you can deterministically assign work to a thread, e.g. based on some hash, so that all related operations end up in the same thread. The advantage on a single core is small, but it's not hard to do, as you need few synchronization primitives and so the overhead stays small.
That said, I think there are situations where it is very ill-advised:
As soon as the synchronization required with other threads becomes complex (e.g. multiple locks, lots of critical sections, ...). It might still be that multi-threading gives you a benefit when you effectively move to multiple CPUs, but the overhead is huge, both for your single core and for your programming time.
For instance, think about operations that block because of slow peripheral devices (hard disk access etc.). While these are waiting, even a single core can do other things asynchronously.
In a lot of applications the bottleneck is not CPU processing power. So when the program flow is waiting for I/O requests to complete (user input, network/disk I/O), for critical resources to become available, or for any sort of asynchronously triggered event, the CPU can be scheduled to do other work instead of just blocking.
In this case you don't necessarily need multiple threads that can actually run in parallel. Cooperative multi-tasking concepts like asynchronous I/O, coroutines, or fibers come to mind.
If however the application's bottleneck is CPU processing power (constantly 100% CPU usage), then it makes sense to increase the number of CPUs available to the application. At that point it is easier to scale the application up to use more CPUs if it was designed to run in parallel upfront.
As far as I can see, one answer was not yet given:
You will have to write multithreaded applications in the future!
The average number of cores will double every 18 months in the future. People have learned single-threaded programming for 50 years now, and now they are confronted with devices that have multiple cores. The programming style in a multi-threaded environment differs significantly from single-threaded programming. This refers to low-level aspects like avoiding race conditions and proper synchronization, as well as the high-level aspects like the general algorithm design.
So in addition to the points already mentioned, it's also about writing future-proof software, scalability and the development of the skills that are required to achieve these goals.