This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
What are the thread limitations when working on Linux compared to processes for network/IO-bound apps?
What is meant by context switches in threads or processes? Also, which is better: threads or processes? I mean, which is more space-efficient and which is more time-efficient?
Take a look at this answer.
I don't think you can say one is better than the other - it depends what you are doing.
There is a detailed overview of context switches here: http://en.wikipedia.org/wiki/Context_switch
In the most basic sense, a context switch is when the processor pauses execution of one task, saves its state, and resumes another task.
Also, you have to remember that threads may be your only choice if your OS is embedded and doesn't support multiple processes ;)
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 months ago.
I am using Pagmo, a C++ API, for my optimization problem. Pagmo launches a new thread whenever a new optimization is started via island.evolve(). My point is that I don't have fine-grained control over the type of thread that's launched "under the hood" of Pagmo. I can query Pagmo threads on their status - as to whether they've completed their run. I have a machine with 28 physical cores, so I'm thinking the optimum number of running threads is on the order of 28. Right now, my code just dumps the whole lot of threads onto the machine - substantially more than the number of cores - which is likely very inefficient.
I'm thinking of using a std::counting_semaphore (C++20) with the counter set to 28. Each time I launch a thread in Pagmo, I would decrement the semaphore counter. When it hits 0, the remaining threads would block and wait until the counter is incremented.
I'm thinking I could run a loop that queries the Pagmo threads for their statuses and increments the std::counting_semaphore's counter each time a thread goes idle (meaning its task has completed). Of course, the Pagmo threads are ultimately joined. Each time the counter goes above 0, a new thread is allowed to start - so I believe.
My questions are:
Is this the best practice for limiting/throttling the number of running threads in modern C++?
Is there a better way I haven't thought of?
Is there a way to query Linux in real time, as to the number of running threads?
Thanks in advance!
Phil
I had tried a simple loop to launch and throttle thread creation, but it didn't work well and threads were launched too quickly.
First of all, your post could use some editing, and a code snippet would perhaps help us understand the problem better. Right now I'm only going through the documentation based on a wild guess of what you are doing.
I've quickly checked what Pagmo is about, and I would advise caution when limiting, from outside the library, any library that is designed for parallel computation.
I will try to answer your questions:
I do not think this is the best way to throttle threads created by an external library.
Yes. First of all, I've checked the Pagmo API documentation, and if I understand you correctly you are using an island class. Based on the documentation, the default class that inherits from island and is constructed by the default ctor is thread_island (at least on non-POSIX systems - which may not be your case). However, thread_island can be constructed via the thread_island(bool use_pool) ctor, which indicates that you can tell these island classes to use a common thread pool. And if it is done for non-POSIX systems, it is most likely done for POSIX systems as well. So if you wish to limit the number of threads, I would do it via a thread pool.
You can limit the maximum number of threads running on Linux via /proc/sys/kernel/threads-max, and you can also instruct a Linux system to treat some processes as less important via niceness.
Hope it helps!
EDIT: As a foot note I will also mention that the documentation actually encourages the use of thread_island even on POSIX systems. See this link here
EDIT2: In case you must use fork_island due to the issues they mention when thread safety cannot be guaranteed, another option would be to limit available resources via setrlimit - see this link right here - you are interested in setting RLIMIT_NPROC.
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
What are the different types of thread in C++?
I already know about multiprocessing and multithreading, and I know how to create threads in standard C++ and VC++, but I'm not sure what is meant by different types of threads.
From the software point of view, there are no "different types of threads". A thread is a thread. Are there different types of pixels on the screen? Nope. It's similar. However, in some contexts, you MAY differentiate threads by their intended purpose. You can have:
OS threads (or kernel threads) vs user threads (or application threads)
main thread vs ad-hoc threads vs pooled threads
background threads vs high-priority threads
etc
but a thread is a thread. They are all the same in terms of their basic properties: they run some specified code. The difference is in how they are used and what priority they have (= how often they get some processor time to do their work) or what code they are allowed to run.
...ok, thinking a bit more about the terms used in different contexts, ACTUALLY, there are 2 types of threads, and both are just called 'threads':
software threads
hardware threads
The difference is that the former is what the operating system's scheduler manages (creates/wakes/puts to sleep/kills/etc). The number of those is limited virtually only by the available memory; you may have 100, 1000, or 10000 software threads, no problem. The latter refers to the actual electronic structures that execute them. There's always a much lower limit there. Not long ago, each CPU could execute just a single thread. You wanted to run 8 threads? Have an 8-CPU motherboard. Today, most CPUs have multiple "cores", and a single chip can execute several (usually 4-16) threads at once.
However, in my circles, when someone says "a thread", they mean a software thread, and when someone wants to refer to the latter, they say explicitly "a hardware thread". That's why I didn't think of this at first - I'm probably more of a software guy - whereas in a hardware team, "thread" may mean "hardware thread" by default.
In general, there are two types of multitasking: process-based and thread-based.
Process-based multitasking handles the concurrent execution of separate programs, which is something like two people doing separate tasks, or one person doing a task while a second person does a sub-task of the same task.
Thread-based multitasking deals with the concurrent execution of pieces of the same program, which is something like using different parts of your body for the same piece of work (or, say, multitasking).
I don't know if my above analogies are correct or not (in reference to your understanding).
For further information, you can follow this link.
Sorry, this is my first question here. I'm not sure I'm the first to ask this, but I could not find answers anywhere.
Modern CPUs are heavily multi-threaded/multi-cored, but Linux does not guarantee that processes/threads physically run at the same time (time sharing).
I'd like my (C++) programs to take advantage of this hardware: spawn small tasks (updating a hash, copying data) while carrying on with the main thread. The goal is to run the program faster. As it does not make sense to spawn a 500 ns task and then wait 1 ms for its execution, I'd like to be (almost) sure that the task will really be executed at the same time as the main thread.
I could not find any paper or discussion on this subject, but I'm not sure I'm searching properly; I just don't know what this thing would be called.
Could someone tell me:
- what is the name of such parallel (same-time) execution?
- is this possible on Linux (or what kind of OS offers such a service)?
Thanks
I realized that my question was more OS- than programming-oriented, and that I should ask it on the more appropriate Software Engineering site, here:
https://softwareengineering.stackexchange.com/questions/325257/possiblity-to-request-several-linux-threads-scheduled-together-in-the-same-time
Thanks for answers, they made me advance and better define what I'm looking for.
What you are looking for is CPU thread affinity and cgroups under Linux. There is a lot of complexity to it, and you will need to experiment with your particular requirements.
A common strategy in low-latency applications is to assign a CPU resource solely to a particular process or thread. The thread runs 'hot', thereby never releasing the CPU to any other process, including the kernel.
The Reactor pattern is useful for this model. A queue is set up on a hot thread, with other threads feeding the queue. The queue itself can be a bottleneck, but that's a whole other discussion. So in your example, the main (hot?) thread would write events into the queues of other hot worker threads. Some sort of rendezvous event would indicate to the main thread that the worker threads are finished.
This strategy is only useful for CPU bound applications. If your application is I/O bound then the kernel will likely do a better job than a custom algorithm.
To directly answer your question, yes this is possible in Linux with C/C++ but it is not trivial.
I have some old articles on my blog that may be of interest.
http://matthewericfisher.tumblr.com/post/6462629082/low-latency-highly-scalable-robust-systems
--Matt
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I am a C++ programmer (intermediate) and am learning multi-threading now. I find it quite confusing: when should I use multi-threading in C++? How will I know in which part of my code I need to use it?
When to use multithreading in C++?
When you have a resource-intensive task like a huge mathematical calculation, or an I/O-intensive task like reading from or writing to a file, you should use multithreading.
The purpose is to be able to run multiple tasks together, which will increase the performance and responsiveness of your application. Also, learn about synchronization before implementing multithreading in your application.
When to use multithreading in C++?
Well - the general rule of thumb is: use it when it can speed up your application. The answer isn't really language-dependent.
If you want to get an in-depth answer, then you have to consider a few things:
Is multithreading possible to implement inside your code? Do you have fragments which can be calculated at the same time and are independent of other calculations?
Is multithreading worth implementing? Does your program still run slowly even after you have done all you could to make it as fast as possible?
Will your code be run on machines that support multithreading (so have multiple processing units)? If you're designing code for some kind of machine with only one core, using multithreading is a waste of time.
Is there a different option? A better algorithm, cleaning the code, etc? If so - maybe it's better to use that instead of multithreading?
Do you have to handle things that are hard to predict in time, while the whole application has to constantly run? For example - receiving some information from a server in a game?
This is a slightly subjective subject... But I tend to use multi-threading in one of two situations.
1 - In a performance critical situation where the utmost power is needed (and the algorithm of course supports parallelism), for me, matrix multiplications.
2 - Rarely, where it may be easier to have a thread managing something fairly independent. The classic is networking: perhaps have a thread blocking while waiting for connections and spawning threads to manage each connection as it comes in. This is useful as the threads can block and respond in a timely manner. Say you have a server: one request might need disk access, which is slow; another thread can jump in and field a different request while the first is waiting for its data.
As has been said by others, only do it when you need to; it gets complicated fast and can be difficult to debug.
Multithreading is a specialized form of multitasking, and multitasking is the feature that allows your computer to run two or more programs concurrently.
I think this link can help you.
http://www.tutorialspoint.com/cplusplus/cpp_multithreading.htm
Mostly when you want things to be done at the same time. For instance, you may want a window to still respond to user input while a level is loading in a game, or when you're downloading multiple files at once, etc. It's for things that really can't wait until other processing is done. Of course, everything probably goes slower as a result, but it really gives the illusion of multiple things happening at once.
Use multithreading when you can speed up your algorithms by doing things in parallel. Use it in opposition to multiprocessing when the threads need access to the parent process's resources.
My two cents.
Use cases:
Integrate your application in a lib/app that already runs a loop. You would need a thread of your own to run your code concurrently if you cannot integrate into the other app.
Task splitting. It makes sense to organize disjoint tasks in threads sometimes, such as in separating sound from image processing, for example.
Performance. When you want to improve the throughput of some task.
Recommendations:
In the general case, don't do multithreading if a single threaded solution will suffice. It adds complexity.
When needed, start with higher-order primitives, such as std::future and std::async.
When possible, avoid data sharing, which is the source of contention.
When going down to lower-level abstractions, such as mutexes and so on, encapsulate them in some pattern. You can take a look at these slides.
Decouple your functions from threading and compose the threading into the functions at a later point. Namely, don't embed thread creation into the logic of your code.
This question already has answers here:
How to set the name of a thread in Linux pthreads?
(3 answers)
Closed 6 years ago.
I have a multithreaded Linux application written in C/C++. I have chosen names for my threads. To aid debugging, I would like these names to be visible in GDB, "top", etc. Is this possible, and if so how?
(There are plenty of reasons to want the thread name. Right now I want to know which thread is taking up 50% CPU (as reported by 'top'). And when debugging I often need to switch to a different thread - currently I have to do "thread apply all bt" and then look through pages of backtrace output to find the right thread.)
The Windows solution is here; what's the Linux one?
POSIX threads?
Something like this should give you an idea of where to go hunting. I'm not even sure it's the right PR_ command, but I think it is. It's been a while... PR_SET_NAME sets the name of the calling thread, truncated to 15 characters plus the null terminator:
#include <sys/prctl.h>
prctl(PR_SET_NAME, "my-thread-name", 0, 0, 0);
If you are using a library like ACE the Thread has a way to specify the thread name when creating a new thread.
BSD Unix also has a pthread_set_name_np call.
Otherwise you can use prctl as mentioned by Fusspawn.