Utilization of std::atomic_flag in C++ [closed]

I am a beginner in concurrent programming in C++. I read about std::atomic_flag, and I don't understand where it can be useful. Could someone explain for which tasks std::atomic_flag is useful?

cppreference.com has a usage example. It also contains the following explanatory note:
Unlike all specializations of std::atomic, it is guaranteed to be lock-free.
In other words: you’d use it the same way you would use std::atomic<bool> but with the added guarantee of lock-freeness (though on most systems std::atomic<bool> will also be lock-free, i.e. std::atomic<bool>::is_always_lock_free will be true).
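For a concrete picture, here is a minimal spinlock sketch, the textbook use of std::atomic_flag. It assumes C++20, where a default-constructed flag starts clear; on earlier standards the member would be initialized with ATOMIC_FLAG_INIT.

    #include <atomic>

    class spinlock {
        std::atomic_flag flag_;  // default-constructed clear (C++20)
    public:
        void lock() {
            // test_and_set atomically sets the flag and returns its previous
            // value, so we spin until we observe it was previously clear.
            while (flag_.test_and_set(std::memory_order_acquire)) {
                // busy-wait; real code might yield or pause here
            }
        }
        void unlock() {
            flag_.clear(std::memory_order_release);
        }
    };

    // Usage: spinlock s; s.lock(); /* critical section */ s.unlock();

The same thing could be written with std::atomic<bool> and exchange(), but only std::atomic_flag is guaranteed to be lock-free on every implementation.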

Related

Is it possible to implement coroutines with threads? [closed]

It's all in the title. Since a coroutine just needs a sort of saved instruction pointer (EIP) and stack, and a thread provides that, is it possible to do it? That would be a way to have a highly portable coroutine library.
You can implement coroutines / generators based on threads or even fibers, but they would be less performant than the implementations MSVC or GCC provide.
I implemented coroutines that way (not with threads, but with fibers) in Delphi. You need to estimate how much stack memory you need, and you need to create those threads (which is heavy).
So you should avoid that.
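To make the thread-based approach concrete, here is a minimal sketch of a thread-backed generator, assuming C++17 (for std::optional). Generator, yield, and next are made-up names, and the sketch assumes the consumer drains all values before the generator is destroyed; it also shows why this is heavy, since every generator owns a full thread.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <optional>
    #include <thread>

    // One worker thread per generator: yield() hands a value to the consumer
    // and blocks until next() has consumed it.
    class Generator {
    public:
        explicit Generator(std::function<void(Generator&)> body)
            : worker_([this, body] { body(*this); finish(); }) {}

        ~Generator() { worker_.join(); }

        // Called from the worker thread to publish a value.
        void yield(int value) {
            std::unique_lock<std::mutex> lock(m_);
            value_ = value;
            has_value_ = true;
            cv_.notify_all();
            cv_.wait(lock, [this] { return !has_value_; });
        }

        // Called from the consumer; an empty optional means the body finished.
        std::optional<int> next() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return has_value_ || done_; });
            if (!has_value_) return std::nullopt;
            int v = value_;
            has_value_ = false;
            cv_.notify_all();
            return v;
        }

    private:
        void finish() {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
            cv_.notify_all();
        }

        std::mutex m_;
        std::condition_variable cv_;
        int value_ = 0;
        bool has_value_ = false;
        bool done_ = false;
        std::thread worker_;  // declared last so it starts after the other members exist
    };

    // Usage:
    //   Generator g([](Generator& self) { for (int i = 0; i < 3; ++i) self.yield(i); });
    //   while (auto v = g.next()) { /* consume *v */ }

Every yield/next pair costs a lock, a notification, and a context switch, which is exactly the overhead compiler-supported coroutines avoid.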

What is the zero-overhead principle in C++? Examples? [closed]

When I was reading the design goals of C++11, the zero-overhead principle was mentioned without any examples or features that use it. I can understand that it could be there to avoid degrading the performance of existing code. But:
Can someone explain this concept with some examples?
What approach did they take to implement it in the standard?
How do they get compiler writers to implement it?

Boost UpgradeLockable Concept [closed]

Why does the Boost UpgradeLockable concept (http://www.boost.org/doc/libs/1_57_0/doc/html/thread/synchronization.html#thread.synchronization.mutex_concepts.upgrade_lockable) have unlock_and_lock_* and unlock_upgrade_and_lock_* but no unlock_shared_and_lock_? It has try_unlock_shared_and_lock_, but only when BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS is available, and I don't want to "try". Is there some restriction on doing such operations?
The entire purpose of an upgradeable lock is that you can atomically upgrade it to an exclusive lock. If you could do that with a shared lock, what purpose would upgradeable locks serve?
If you had an unlock_shared_and_lock, what would happen if two threads called it at the same time? Under what circumstances would it be safe to call?
If you might need to atomically upgrade a lock, you need to acquire an upgradeable lock. That's their entire purpose.
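A small sketch of that purpose, using Boost.Thread's upgrade_lock and upgrade_to_unique_lock; the cache example and the names get_or_compute, cache, and cache_mutex are made up for the illustration. Only one thread at a time can hold upgrade ownership, which is what makes the later conversion to exclusive ownership atomic.

    #include <boost/thread/locks.hpp>
    #include <boost/thread/shared_mutex.hpp>
    #include <map>
    #include <string>

    std::map<std::string, int> cache;
    boost::shared_mutex cache_mutex;

    int get_or_compute(const std::string& key) {
        // Upgrade ownership: other readers may still take shared locks.
        boost::upgrade_lock<boost::shared_mutex> read_lock(cache_mutex);
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;

        // Atomically convert to exclusive ownership; no other writer can
        // sneak in between the lookup above and the insertion below.
        boost::upgrade_to_unique_lock<boost::shared_mutex> write_lock(read_lock);
        int value = static_cast<int>(key.size());  // stand-in for a real computation
        cache[key] = value;
        return value;
    }

Two threads merely holding shared locks could not both be granted this atomic conversion, which is why an unconditional unlock_shared_and_lock cannot exist.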

Should a class be thread-safe? [closed]

Should a thread-safety mechanism be added when a class is developed and it is known that the class will be used in a multi-threaded environment (though not always), or should that be left to the user?
As a general rule, it's more flexible to leave it to the user. For example, consider a map-type container. Suppose the application needs to atomically move something from one map to another map. In this case, the user needs to lock both maps before the insert-erase sequence.
Having such a scenario be automatically taken care of somehow by your class would probably be inelegant, because it's naturally something that happens across objects, and because there may be many such scenarios, each slightly different.
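A minimal sketch of that user-side locking, assuming two plain std::map containers each guarded by its own mutex and C++17 for std::scoped_lock; the names map_a, map_b, and move_entry are made up for the illustration.

    #include <map>
    #include <mutex>
    #include <string>

    std::map<int, std::string> map_a, map_b;
    std::mutex mutex_a, mutex_b;

    // Move one entry from map_a to map_b so no other thread can observe the
    // key missing from both maps or present in both at once.
    bool move_entry(int key) {
        std::scoped_lock lock(mutex_a, mutex_b);  // locks both, deadlock-free
        auto it = map_a.find(key);
        if (it == map_a.end()) return false;
        map_b.insert(*it);  // copy the entry into the destination
        map_a.erase(it);    // then remove it from the source
        return true;
    }

Had each map locked internally on every call, the find, insert, and erase would each be safe in isolation, but the move as a whole would not be atomic, which is the point of the answer above.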

Is relying on short-circuit evaluation good design? [closed]

Are there alternatives that would be preferable?
Short-circuit evaluation is a crucial feature of most modern programming languages and there's no reason to avoid relying on it. Without it pointer-related tests would be (unnecessarily) much more complicated and less readable.
Of course it's good design; everyone knows to expect it, and it beats using nested conditionals.
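As a small illustration of the pointer-related tests mentioned above (ptr, is_ready, and use are hypothetical names):

    // Safe because && evaluates left to right and stops at the first false:
    // ptr->is_ready() is never called on a null pointer.
    if (ptr && ptr->is_ready()) {
        use(*ptr);
    }

    // Without short-circuit evaluation the same guarantee needs nesting:
    if (ptr) {
        if (ptr->is_ready()) {
            use(*ptr);
        }
    }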