What is a thread?
What is the difference between using a mutex and not using one?
What is the difference between calling join() and not calling it?
Which low-level functions are called when you create a thread with the std::thread constructor, and when you use pthreads?
I have read material on the internet, but I am still asking these questions to strengthen my understanding.
Thanks in advance.
1) A thread allows for parallel execution of your program. Using multiple threads allows multiple processor cores to execute your code, thus (usually) speeding up the program.
2) Because threads allow parallel execution of code, it can happen that thread #1 is reading data while thread #2 is modifying it, which can result in some funky cases you don't want. A mutex stops this by making threads wait their turn in these critical sections.
3) thread.join() makes the current thread wait for the completion of the thread object that join() was called on.
4) This is really OS specific. For example, Unix-based systems use pthreads as the underlying thread implementation when creating a std::thread. The compiler vendor implements this.
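A minimal sketch of points 1) and 3), assuming a C++11 compiler; the worker function and all names here are made up for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical worker: each thread writes only its own slot, so no mutex is needed.
void worker(std::vector<int>& results, std::size_t index)
{
    results[index] = 42;
}

// Spawn one thread per slot, wait for all of them with join(), return the results.
std::vector<int> run_workers(std::size_t count)
{
    std::vector<int> results(count, 0);
    std::vector<std::thread> threads;
    for (std::size_t i = 0; i < count; ++i)
        threads.emplace_back(worker, std::ref(results), i);
    for (std::thread& t : threads)
        t.join(); // block until each thread has finished
    return results;
}
```

join() also synchronizes the workers' writes with the calling thread, so reading the results afterwards is safe.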
If you would like to learn multithreading with the C++ standard library, then please refer to C++ Concurrency in Action by Anthony Williams. It is a very good book, also recommended on The Definitive C++ Book Guide and List.
What is a thread?
A thread is an execution unit with its own program counter, stack, and set of registers. Threads are used in applications to improve performance and make effective use of the CPU.
The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
Refer to https://en.wikipedia.org/wiki/Thread_%28computing%29
What is the difference between using a mutex and not using one?
Imagine for a moment that you're sharing an apartment with a friend. There's only one kitchen and only one bathroom. Unless you're particularly friendly, you can't both use the bathroom at the same time, and if your roommate occupies the bathroom for a long time, it can be frustrating if you need to use it.
Likewise, though it might be possible to both cook meals at the same time, if you have a combined oven and grill, it's just not going to end well if one of you tries to grill some sausages at the same time as the other is baking a cake. Furthermore, we all know the frustration of sharing a space and getting halfway through a task only to find that someone has borrowed something we need or changed something from the way we left it.
It's the same with threads. If you're sharing data between threads, you need rules for which thread can access which bit of data when, and how any updates are communicated to the other threads that care about that data.
When you have a multithreaded application, the different threads sometimes share a common resource, such as a global variable or a file handle.
A mutex can be used to synchronize access to a single resource. Other synchronization methods (like semaphores) are available to synchronize multiple threads and processes.
The concept is called "mutual exclusion" (mutex for short), and it is a way to ensure that only one thread at a time is allowed inside a critical section, using the shared resource, and so on.
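The idea can be sketched with std::lock_guard, which locks a std::mutex for the duration of a scope; the names below are illustrative:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

int counter = 0;          // shared resource
std::mutex counter_mutex; // protects counter

// Each thread increments the shared counter. The lock_guard ensures the
// read-modify-write of counter is performed by one thread at a time;
// without it, concurrent increments could be lost.
void increment_many(int times)
{
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);
        ++counter;
    } // lock released here, at the end of each iteration's scope
}
```

Without the mutex, two threads running increment_many concurrently would produce an unpredictable total; with it, every increment is counted.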
What is the difference between calling join() and not calling it?
The calling thread waits for the specified thread to terminate. If that thread has already terminated, join() returns immediately. The thread being joined must be joinable.
By default, threads are joinable unless you have changed that attribute.
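A small sketch of join() with C++11 threads (the slow_task name is made up): note that a joinable std::thread that is destroyed without join() or detach() being called terminates the program.

```cpp
#include <cassert>
#include <chrono>
#include <thread>

bool finished = false;

void slow_task()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    finished = true;
}

// Launch the task and wait for it. join() blocks until slow_task returns
// and synchronizes its writes with the calling thread, so reading
// `finished` afterwards is safe.
bool run_and_join()
{
    std::thread t(slow_task);
    // Without this join() (or a detach()), destroying the joinable
    // std::thread at the end of this scope would call std::terminate().
    t.join();
    return finished;
}
```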
Which low-level functions are called when you create a thread with the std::thread constructor, and when you use pthreads?
On Linux, pthread_create is called. The std::thread library is platform independent, so it calls whichever thread API is specific to the underlying operating system.
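For comparison, here is roughly the kind of call a Linux implementation makes under the hood. This is a hand-written sketch using pthread_create and pthread_join directly, not the actual library internals:

```cpp
#include <cassert>
#include <pthread.h>

// pthread entry points take and return void*; here the argument is used
// to pass a pointer to an int that the thread fills in.
void* entry(void* arg)
{
    int* value = static_cast<int*>(arg);
    *value = 7;
    return nullptr;
}

// Start a thread with pthread_create and wait for it with pthread_join,
// the pthread counterparts of the std::thread constructor and join().
int run_pthread()
{
    int value = 0;
    pthread_t handle;
    int rc = pthread_create(&handle, nullptr, entry, &value);
    assert(rc == 0);
    pthread_join(handle, nullptr);
    return value;
}
```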
What are the different types of thread in C++?
I already know about multiprocessing and multithreading. I know how to create threads in standard C++ and VC++, but I am not sure what is meant by different types of thread.
From the software point of view, there are no "different types of threads". A thread is a thread. Are there different types of pixels on the screen? Nope. It's similar. However, in some contexts you MAY differentiate threads by their intended purpose. You can have:
os threads (or kernel threads) vs user threads (or application threads)
main thread vs ad-hoc threads vs pooled threads
background threads vs high-priority threads
etc
but a thread is a thread. They are all the same in terms of their basic properties: they run some specified code. The difference is in how they are used, what priority they have (= how often they get processor time to do their work), or what code they are allowed to run.
...OK, thinking a bit more about the terms used in different contexts, there ACTUALLY are two types of threads, and both are just called "threads":
software threads
hardware threads
The difference is that the former is what the operating system's scheduler manages (creates/wakes/puts to sleep/kills/etc.). The number of those is limited virtually only by the available memory; you may have 100, 1000, or 10000 software threads, no problem. The latter refers to the actual electronic structures that execute them, and there is always a much lower limit there. Not long ago, each CPU could execute just a single thread: if you wanted to run 8 threads, you needed an 8-CPU motherboard. Today, most CPUs have multiple "cores" and can each execute several (usually 4-16) threads.
However, in my circles, when someone says "a thread" they mean a software thread, and when someone wants to refer to the latter, they say explicitly "a hardware thread". That's why I didn't think of this at first; I'm probably more of a software guy, while on a hardware team "thread" may mean "hardware thread" by default.
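The standard library exposes the hardware-thread count as a hint; a tiny sketch (note that hardware_concurrency() is allowed to return 0 when the value cannot be determined):

```cpp
#include <thread>

// Number of hardware threads the implementation can run concurrently,
// i.e. the "hardware threads" in the sense above; software threads are
// the std::thread objects you create, and you may have far more of them.
unsigned hardware_threads()
{
    return std::thread::hardware_concurrency();
}
```

A common use is sizing a thread pool to roughly this number, so software threads do not heavily outnumber the hardware threads executing them.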
In general, there are two types of multitasking: process-based and thread-based.
Process-based multitasking handles the concurrent execution of programs, which is something like two people doing the same task, or the first person doing the task and the second person doing a sub-task of the same task.
Thread-based multitasking deals with the concurrent execution of pieces of the same program, which is something like using different parts of your body for some work (or, say, multitasking).
I don't know whether my analogies above match your understanding.
For further information, you can follow this link.
Good day everyone,
I have some questions about mutexes (the subject is already specific), and I need to be sure I have no misconceptions ( https://en.cppreference.com/w/cpp/thread/mutex ):
1) I would like to be sure that a std::mutex cannot be held by two threads at the same time. Is that true?
2) What happens if, by chance, two independent threads ask for the mutex at the same time?
3) According to my understanding, when a thread takes the mutex, it prevents any other thread from modifying the global variables at the same time. Is that a correct understanding?
Could you correct me where I am wrong on any of these questions?
I would be thankful to you.
You should probably read a bit further about mutexes (mutexes? mutices?), because the mutex is not just a C++ concept but a concept of computer science in general. To answer your questions:
Yes, it's true; that's the whole point of a mutex. At any point in time, only one thread can own the mutex lock.
One of the threads will get the mutex and the other won't. The implementation takes care of this, even if the access happens on multiple physical cores at the same physical time.
Not quite: you can always choose to ignore the mutex and still change those variables. It is up to you how you solve concurrency problems.
Edit
I think some languages provide containers that wrap variables in such a way that they are only readable by one thread at a time. I think they are called Monitors in Java.
General concept:
#include <mutex>

std::mutex m;
int globalVar;
int bar();

void foo()
{
    // Acquire the lock, or wait if another thread has already acquired it.
    m.lock();
    // At any given time, this code is executed by at most one thread.
    globalVar = bar();
    m.unlock();
}

// However, you can choose to ignore the mutex...
void evilFoo()
{
    // This can be executed by multiple threads at the same time (even in parallel with foo()).
    globalVar = bar();
}
1) It has to be shared; how else would you use it?
Edit: OK, it seems the question is somewhat misleading. What do you mean by "shared" in this case?
Edit 2: If by "shared" you mean that one mutex can be held by more than one thread, then the answer is: that cannot happen.
2) Even if it happens at exactly the same physical time on two different cores, there will be some arbitration mechanism that gives the mutex to one thread or the other.
3) No. When another thread holds the mutex, you know that you cannot modify the variables protected by it without consequences, and you should write your code so that no such modification happens. But the mutex itself in no way prevents such modifications. And of course, you also shouldn't read such variables without holding the mutex.
Let's assume Alice gives 5 things to Bob. You will have code like:
alice -= 5;
bob += 5;
We do not want a situation to arise where we have removed 5 from Alice but have not yet given the 5 to Bob.
std::thread uses the operating system's support for pre-emptive multithreading. This means the operating system may interrupt a thread at any point in time and schedule another. On multi-core machines, threads may even run concurrently. Either way, a thread might see inconsistent data.
A mutex is an operating system object whose job is to ensure that only a single thread can access a critical section of code at a time.
So when thread 1 enters the mutex, it increments the mutex's access count. When the next thread tries to enter the mutex, it sees that the access count is not zero, and the operating system suspends that thread.
When the first thread releases the mutex, the other threads then become runnable again. This means the operating system may schedule them to run now or a bit later depending on its priority.
The newly runnable thread may then attempt to enter the mutex with the same rules applied as above.
This means that provided a common mutex is used, no two threads may enter code protected by the mutex at the same time.
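The Alice/Bob transfer above could be protected like this; a sketch with illustrative names, using std::lock_guard so both balances change under one lock:

```cpp
#include <cassert>
#include <mutex>
#include <thread>

int alice = 100;
int bob = 0;
std::mutex balance_mutex; // guards both balances together

// Move `amount` from alice to bob. While the lock is held, no other
// thread that also locks balance_mutex can observe the halfway state
// where alice has been debited but bob not yet credited.
void transfer(int amount)
{
    std::lock_guard<std::mutex> lock(balance_mutex);
    alice -= amount;
    bob += amount;
}

// Reads must take the same lock. The total is invariant under transfer().
int total()
{
    std::lock_guard<std::mutex> lock(balance_mutex);
    return alice + bob;
}
```

Because every access goes through the same mutex, total() always returns 100 no matter how many transfers are in flight.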
1) A mutex can be shared, but it is used to prevent simultaneous access from multiple threads to resources you want accessed and modified by only one thread at a time.
2) The semantics of a mutex are that two threads cannot lock the same mutex at the same time.
3) The mutex prevents any other thread from modifying, at the same time, the resources that are accessed between the lock and the unlock of the mutex. Global variables are one example of such resources.
Suppose I have a multi-threaded program in C++11 in which each thread controls the behavior of something displayed to the user.
I want to ensure that for every time period T during which any of the threads of the given program runs, each thread gets a chance to execute for at least time t, so that the display looks as if all threads are executing simultaneously. The idea is to have a mechanism for round-robin scheduling with time sharing based on some information stored in the thread, forcing a thread to wait after its time slice is over, instead of relying on the operating system scheduler.
Preferably, I would also like to ensure that each thread is scheduled in real time.
In case there is no way other than relying on the operating system, is there any solution for Linux?
Is it possible to do this? How?
No, that's not possible in a cross-platform way with C++11 threads. How often and for how long a thread runs isn't up to the application; it's up to the operating system you're using.
However, there are still functions with which you can signal to the OS that a particular thread/process is especially important, which lets you influence the scheduling for your purposes.
You can acquire the platform-dependent thread handle to use OS functions:
native_handle_type std::thread::native_handle(); // (since C++11)
Returns the implementation-defined underlying thread handle.
I just want to stress again that this requires an implementation that is different for each platform!
Microsoft Windows
According to the Microsoft documentation:
SetThreadPriority function
Sets the priority value for the specified thread. This value, together
with the priority class of the thread's process determines the
thread's base priority level.
Linux/Unix
For Linux, things are more difficult because there are different systems by which threads can be scheduled. Microsoft Windows uses a priority system, but on Linux that doesn't seem to be the default scheduling policy.
For more information, please take a look at this stackoverflow question (it should be the same for std::thread because of this).
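As a Linux-specific sketch (assuming glibc, where native_handle() yields a pthread_t), the handle can be passed to the pthread scheduling functions. This example only reads the current policy, since raising priority (e.g. SCHED_FIFO via pthread_setschedparam) typically requires privileges:

```cpp
#include <cassert>
#include <pthread.h>
#include <thread>

// Query the scheduling policy of a running std::thread through its
// native handle. On Linux the default policy is SCHED_OTHER; real-time
// policies such as SCHED_FIFO/SCHED_RR would be set with
// pthread_setschedparam (usually needing elevated privileges).
int current_policy(std::thread& t)
{
    sched_param param{};
    int policy = -1;
    int rc = pthread_getschedparam(t.native_handle(), &policy, &param);
    assert(rc == 0); // the handle stays valid until join()/detach()
    return policy;
}
```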
I want to ensure that for every time period T during which any of the threads of the given program runs, each thread gets a chance to execute for at least time t, so that the display looks as if all threads are executing simultaneously.
You are using threads to make it seem as though different tasks are executing simultaneously. That is not recommended for the reasons stated in Arthur's answer, to which I really can't add anything.
If, instead of having long-living threads each doing its own task, you have tasks that can be executed without mutual exclusion, you can use a queue of tasks and a thread pool dequeuing and executing them.
If you cannot, you might want to look into wait-free data structures and algorithms. In a wait-free algorithm/data structure, every thread is guaranteed to complete its work in a finite (and even specified) number of steps. I can recommend the book The Art of Multiprocessor Programming, where this topic is discussed at length. The gist of it is: every lock-free algorithm/data structure can be made wait-free by adding communication between threads, through which a thread that is about to do work makes sure that no other thread is starved/stalled. Basically, it prefers fairness over total throughput of all threads. In my experience this is usually not a good compromise.
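The task-queue idea can be sketched as a minimal pool; the TaskPool name and interface are made up for illustration and omit error handling:

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Worker threads pull tasks from one shared queue instead of each owning
// a long-lived job. The destructor drains remaining tasks, then joins.
class TaskPool {
public:
    explicit TaskPool(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~TaskPool()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (std::thread& w : workers_)
            w.join();
    }

    void submit(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run()
    {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty())
                    return; // queue drained and pool shutting down
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task(); // run the task outside the lock
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    std::vector<std::thread> workers_;
    bool done_ = false;
};
```

Note that this gives throughput and simplicity, not the per-thread time-slice guarantee the question asks for; scheduling of the workers is still up to the OS.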
What's the correct/best way to communicate from worker thread to the main thread in win32 when working in OOP?
My worker thread runs in a loop, and for certain events, including when the thread ends, it needs to notify the main thread, which then does certain things in response.
Currently I am using WM_APP messages from the worker thread to communicate with the main thread.
That doesn't look neat though.
If you are comfortable with communicating via Windows Messages, this is perfectly reasonable and fine. It has the benefit of not requiring synchronization. Additional communication can be done via thread-safe objects (that mostly require locking), shared memory, sockets, ... Check well known C++ libraries in their threading sections for possibilities.
Communicating via Windows messages is one of the simplest ways. That in itself is a value that should not be underestimated, and if you do not require platform independence or a form of communication that gives you more possibilities than Windows messages, stick to it.
I assume the main thread would be the GUI thread. You can take a look at this SO thread on a similar topic.
Basically, there is no standard way of communicating from a worker thread to the main thread; just concentrate on whether your program works correctly. About threads: a background thread, or worker thread, is basically used for multitasking, i.e. when you want to do something heavy, such as reading a huge file from disk, you can use a thread.
Now, one very important thing when using threads is synchronization: understand how you are synchronizing your threads and how you are allocating resources to them, because a lot of issues are related to resource allocation.
For more information, you may read Using Worker Threads.
I am wondering whether it is possible to set the processor affinity of a thread obtained from a thread pool. More specifically, the thread is obtained through the TimerQueue API, which I use to implement periodic tasks.
As a side note: I found timer queues the easiest way to implement periodic tasks, but since these are usually long-living tasks, might it be more appropriate to use dedicated threads for this purpose? Furthermore, it is anticipated that synchronization primitives such as semaphores and mutexes will need to be used to synchronize the various periodic tasks. Are the pooled threads suitable for this?
Thanks!
EDIT1: As Leo has pointed out, the above question is actually two only loosely related ones. The first is about the processor affinity of pooled threads. The second is about whether pooled threads obtained from the TimerQueue API behave just like manually created threads when it comes to synchronization objects. I will move this second question to a separate topic.
If you do this, make sure you return things to how they were every time you release a thread back to the pool, since you don't own those threads and other code that uses them may have other requirements/assumptions.
Are you sure you actually need to do this, though? It's very, very rare to need to set processor affinity. (I don't think I've ever needed to do it in anything I've written.)
Thread affinity can mean two quite different things. (Thanks to bk1e's comment to my original answer for pointing this out. I hadn't realised myself.)
What I would call processor affinity: where a thread needs to run consistently on the same processor. This is what SetThreadAffinityMask deals with, and it's very rare for code to care about it. (Usually it's due to very low-level issues like CPU caching in high-performance code. Usually the OS will do its best to keep threads on the same CPU, and it's usually counterproductive to force it to do otherwise.)
What I would call thread affinity: Where objects use thread-local storage (or some other state tied to the thread they're accessed from) and will go wrong if a sequence of actions is not done on the same thread.
From your question it sounds like you may be confusing #1 with #2. The thread itself will not change while your callback is running. While a thread is running it may jump between CPUs but that is normal and not something you have to worry about (except in very special cases).
Mutexes, semaphores, etc. do not care if a thread jumps between CPUs.
If your callback is executed by the thread pool multiple times, there is (depending on how the pool is used) usually no guarantee that the same thread will be used each time. i.e. Your callback may jump between threads, but not while it is in the middle of running; it may only change threads each time it runs again.
Some synchronization objects will care if your callback code runs on one thread and then, still thinking it holds locks on those objects, runs again on a different thread. (The first thread will still hold the locks, not the second one, although it depends on which kind of synchronization object you use; some don't care.) That isn't #1, though; that's #2, and not something you'd use SetThreadAffinityMask to deal with.
As an example, Mutexes (CreateMutex) are owned by a thread. If you acquire a mutex on Thread A then any other thread which tries to acquire the mutex will block until you release the mutex on Thread A. (It is also an error for a thread to release a mutex it does not own.) So if your callback acquired a mutex, then exited, then ran again on another thread and released the mutex from there, it would be wrong.
On the other hand, an Event (CreateEvent) does not care which threads create, signal or destroy it. You can signal an event on one thread and then reset it on another and that's fine (normal, in fact).
It'd also be rare to hold a synchronization object between two separate runs of your callback (that would invite deadlocks, although there are certainly situations where you could legitimately want/do such a thing). However, if you created (for example) an apartment-threaded COM object then that would be something you would want to only access from one specific thread.
You shouldn't. You're only supposed to use that thread for the job at hand, on the processor it's running on at that point. Apart from the obvious inefficiency, the thread pool might destroy every thread as soon as you're done and create a new one for your next job. The affinity masks wouldn't disappear that soon in practice, but it's even harder to debug if they disappear at random.