C++ constexpr thread_local id

Is there any way to get a different value in a constexpr thread_local variable for every thread?
constexpr thread_local someType someVar = ......;
It seems like constexpr thread_local is supported, but the thread_local specifier doesn't seem to do anything in this case.

If you think about what you are asking, you can see why this is not possible.
What is constexpr?
According to the informal standard site cppreference:
The constexpr specifier declares that it is possible to evaluate the value of the function or variable at compile time.
The compiler has to resolve the value at compile time and this value should not change throughout the execution of the program.
Thread-local storage
A thread, on the contrary, is a run-time concept. C++11 introduced the thread concept into the language, and thus you could say that a compiler can be "aware" of the thread concept.
But the compiler can't always predict whether a thread is going to be executed (maybe you run the thread only under a specific configuration), or how many instances are going to be spawned, etc.
Possible implementation
Instead of trying to enforce access to a specific module/method to a single thread using hacks and tricks, why not use a very primitive feature of the language?
You could just as well implement this using simple encapsulation. Just make sure that the only object that "sees" this method you are trying to protect is the thread object itself, for example:
#include <iostream>
#include <thread>
#include <chrono>

using namespace std;

class SpecialWorker
{
public:
    void start()
    {
        m_thread = std::thread(&SpecialWorker::run, this); // already an rvalue; no std::move needed
    }

    void join()
    {
        m_thread.join();
    }

protected:
    virtual void run() { protectedTask(); }

private:
    void protectedTask()
    {
        cout << "PROTECT ME!" << endl;
    }

    std::thread m_thread;
};

int main(int argc, char** argv)
{
    SpecialWorker a;
    a.start();
    a.join();
    return 0;
}
Please note that this example is lacking in error handling and is not production grade code! Make sure to refine it if you intend to use it.

Related

Synchronous destruction through std::shared_ptr<T>::reset()

Consider the following simplified program modelling a real scenario where different users can make concurrent requests to the same resource:
#include <thread>
#include <memory>
#include <mutex>
#include <iostream>

using namespace std;

struct T {
    void op() { /* some stuff */ }
    ~T() noexcept { /* some stuff */ }
};

std::shared_ptr<T> t = std::make_shared<T>();
std::mutex mtx;
std::weak_ptr<T> w{t};

enum action { destroy, op };

void request(action a) {
    if (a == action::destroy) {
        lock_guard<mutex> lk{mtx};
        t.reset();
        std::cout << "*t certainly destroyed\n";
    } else if (a == action::op) {
        lock_guard<mutex> lk{mtx};
        if (auto l = w.lock()) {
            l->op();
        }
    }
}

int main() {
    // At some point in time and different points in the program,
    // two different users make two different concurrent requests
    std::thread th1{request, destroy};
    std::thread th2{request, op};
    // ....
    th2.join();
    th1.join();
}
I am not asking if the program is formally correct - I think it is, but I have never seen this approach for guaranteeing a synchronous destruction of a resource shared via smart pointers. I personally think it is fine and has a valid use.
However, I am wondering whether others think the same and, if so, whether there are more elegant alternatives, apart from classic synchronization with unique_locks and condition variables, or from introducing modifications (e.g. atomic flags) to T.
It would be ideal if I could even get rid of the mtx somehow.
Yes, it's fine. The reference counting in the shared_ptr is atomic and the locked copy stays in scope for the duration of the op, so the object can't be destroyed during the op.
In this case the mutex is not actually protecting the lifetime of T, but sequencing calls to op() and destruction. If you don't mind multiple concurrent calls to op(), or the destruction time being indeterminate (i.e. after the last running op() has completed) then you can do away with it, since std::shared_ptr<>::reset() and std::weak_ptr<>::lock() are both thread-safe.
However, I would advise caution as the author clearly meant for calls to op() to be serialised.

Pointing to thread object from another object: dangerous?

Suppose there are two objects that both inherit from a thread parent class (a utility thread class that uses pthreads).
class Othread1 : public thread
{
public:
    void start();  // launch the thread at 10 Hz
    void end();
    void setvar(float vr) { var1 = vr; }
protected:
    float var1;
};
and
class Othread2 : public thread
{
public:
    void start();  // launch the thread at 1000 Hz
    void end();
    float getvar() { return var2; }
protected:
    float var2;
};
Is it possible to do something like this?
void threadManager(Othread1* th1, Othread2* th2)
{
    float vtemp = th2->getvar();
    th1->setvar(vtemp);
}

int main()
{
    Othread1 th1;
    Othread2 th2;
    threadManager(&th1, &th2);
    return 0;
}
Is such inter-thread data use safe? Or do I have to use queues with a producer/consumer pattern to exchange the data?
I'm still not entirely sure what you are trying to do but here is an example that will hopefully help you.
If you want to read data in one thread that is concurrently written in another thread you need synchronization or you'll invoke undefined behavior. Whether there is much or little time between the “writing event” and the “reading event” is not important. As far as the language rules are concerned, everything that happens between two synchronization points is “simultaneous”.
The definitive rules for this can be found in § 1.10 [intro.multithreaded] of N4140, the final draft for the C++14 standard. But the language used there can be hard to decipher.
A more informal explanation can be found in § 41.2.4 of The C++ Programming Language (4th edition) by Bjarne Stroustrup.
Two threads have a data race if both can access a memory location simultaneously and at least one of their accesses is a write. Note that defining “simultaneously” precisely is not trivial. If two threads have a data race, no language guarantees hold: the behavior is undefined.
As far as I am concerned, I think that the “can” in the first sentence is bogus and should not be there but I'm quoting the book as-is.
The classic way of protecting mutual access is using mutexes and locks. Since C++11 (and only since C++11 does C++ have a definition of concurrency at all), the standard library provides std::mutex and std::lock_guard (both defined in the <mutex> header) for this purpose.
If you have simple types like integers, using locks is overkill, however. Modern hardware supports atomic operations on such simple types. The standard library provides the std::atomic class template for this (defined in the <atomic> header). You can use it on any trivially copyable types.
Here is a rather useless example where we have two threads that execute a function writer and reader respectively. The writer has a pseudo-random number generator and periodically asks it to produce a new random integer that it stores atomically in the global variable value. The reader periodically loads the value of value atomically and advances its own pseudo-random number generator until it catches up. A second global atomic variable done is used by the main thread to signal to the two threads when they should stop. Note that I have replaced your hertz with kilohertz so it is less boring to wait for the program to execute.
#include <atomic>
#include <chrono>
#include <random>
#include <thread>

namespace /* anonymous */
{
    std::atomic<bool> done {};
    std::atomic<int> value {};

    void writer(const std::chrono::microseconds period)
    {
        auto rndeng = std::default_random_engine {};
        auto rnddst = std::uniform_int_distribution<int> {};
        while (!done.load())
        {
            const auto next = rnddst(rndeng);
            value.store(next);
            std::this_thread::sleep_for(period);
        }
    }

    void reader(const std::chrono::microseconds period)
    {
        auto rndeng = std::default_random_engine {};
        auto rnddst = std::uniform_int_distribution<int> {};
        auto last = 0;
        while (!done.load())
        {
            const auto next = value.load();
            while (last != next)
                last = rnddst(rndeng);
            std::this_thread::sleep_for(period);
        }
    }
}

int main()
{
    using namespace std::chrono_literals;
    std::thread writer_thread {writer, 100us}; // 10 kHz
    std::thread reader_thread {reader, 10us};  // 100 kHz
    std::this_thread::sleep_for(3s);
    done.store(true);
    writer_thread.join();
    reader_thread.join();
}
If you have a modern GCC or Clang, you can (and probably should) compile your debug builds with the -fsanitize=thread switch. If you run a thusly compiled binary and it executes a data race, the special instrumentations added by the compiler will output a helpful error message. Try replacing the std::atomic<int> value in the above program with an ordinary int value and see what the tool will report.
If you don't have C++14 yet, you cannot use the literal suffixes but have to spell out std::chrono::microseconds {10} and so forth.

std::function With Member Function For Timer C++

I have a timer class that I have set up to bind to a free function using the std::function template. I would like to modify the class to support both free functions and class member functions. I know that std::function can bind to a member function using std::bind, but I am not sure how to set this up with the code I have:
#include <iostream>
#include <chrono>
#include <thread>
#include <functional>
#include <atomic>

namespace Engine {
    template<class return_type, class... arguments>
    class Timer {
        typedef std::function<return_type(arguments...)> _function_t;
    public:
        Timer(size_t interval, bool autoRun, _function_t function, arguments... args) {
            _function = function;
            _interval = interval;
            if (autoRun) {
                Enable(args...);
            }
        }

        ~Timer() {
            if (Running()) {
                Disable();
            }
        }

        void Enable(arguments... args) {
            if (!Running()) {
                _running = true;
                enable(_interval, _function, args...);
            }
        }

        void Disable() {
            if (Running()) {
                _running = false;
            }
        }

        std::atomic_bool const& Running() const {
            return _running;
        }

    protected:
        void enable(size_t interval, _function_t func, arguments... args) {
            _thread = std::thread([&, func, interval, args...]() {
                std::chrono::duration<long long, std::nano> inter(interval);
                auto __interval = std::chrono::microseconds(interval);
                auto deadline = std::chrono::steady_clock::now();
                while (_running) {
                    func(args...);
                    std::this_thread::sleep_until(deadline += __interval);
                }
            });
            _thread.detach();
        }

    protected:
        _function_t _function;
        std::atomic_bool _running;
        size_t _interval;
        std::thread _thread;
    };
}
Any suggestions would be great. Let me know if I need to clarify anything.
Thanks
To pass a member function to this, pass a pointer to the unbound member function (&Engine::SceneManager::Update), and then the first parameter is a pointer to the object who should have the member called (a pointer to a SceneManager object, this is the "hidden" this pointer). This is how bind works, so no changes are needed to your code. As a simple alternative, pass a lambda.
http://coliru.stacked-crooked.com/a/7c6335d4f94b9f93 (though it isn't running as expected and I don't know why)
Also, I'm confused by the fact your code takes interval as a size_t, then converts it to nanoseconds, then converts that to microseconds, and then uses it. Why not just use microseconds the whole way through?
Your destructor has a race condition. Disable should stall until the thread has finished executing. I haven't used std::thread much, but I'd guess one place to start is if (_thread.joinable()) _thread.join(); As part of this, it might be useful to have the thread only sleep for 100ms at a time or so, and periodically check if it's supposed to be shutting down.
Enable should stop the existing thread, before starting a new one. Better yet, reuse the same thread. Unfortunately, there's no easy way to have an existing thread switch tasks, so it's easiest to simply Disable and then keep your existing code.

Is it possible to get a thread-locking mechanism in C++ with a std::atomic_flag?

Using MS Visual C++2012
A class has a member of type std::atomic_flag
class A {
public:
    ...
    std::atomic_flag lockFlag;

    A() { std::atomic_flag_clear(&lockFlag); }
};
There is an object of type A
A object;
which can be accessed by two (Boost) threads
void thr1(A* objPtr) { ... }
void thr2(A* objPtr) { ... }
The idea is to make one thread wait while the object is being accessed by the other thread.
The question is: is it possible to construct such a mechanism with an atomic_flag object? For the moment, I want something more lightweight than a boost::mutex.
By the way, the process involved in one of the threads is a very long database query that fetches many rows, and I only need to suspend it in a certain zone of code where the collision occurs (when processing each row); I can't wait for the entire thread to finish with join().
I've tried something like the following in each thread:
void thr1(A* objPtr) {
    ...
    while (std::atomic_flag_test_and_set_explicit(&objPtr->lockFlag, std::memory_order_acquire)) {
        boost::this_thread::sleep(boost::posix_time::millisec(100));
    }
    ... /* zone to protect */
    std::atomic_flag_clear_explicit(&objPtr->lockFlag, std::memory_order_release);
    ... /* the process continues */
}
But with no success, because the second thread hangs. In fact, I don't completely understand the mechanism involved in the atomic_flag_test_and_set_explicit function, nor whether the function returns immediately or can block until the flag can be locked.
Also, it is a mystery to me how to build a locking mechanism out of a function that always sets the value and returns the previous one, with no option to only read the current setting.
Any suggestions are welcome.
By the way, the process involved in one of the threads is a very long database query that fetches many rows, and I only need to suspend it in a certain zone of code where the collision occurs (when processing each row); I can't wait for the entire thread to finish with join().
Such a zone is known as the critical section. The simplest way to work with a critical section is to lock by mutual exclusion.
The mutex solution suggested is indeed the way to go, unless you can prove that this is a hotspot and the lock contention is a performance problem. Lock-free programming using just atomic and intrinsics is enormously complex and cannot be recommended at this level.
Here's a simple example showing how you could do this (live on http://liveworkspace.org/code/6af945eda5132a5221db823fa6bde49a):
#include <iostream>
#include <thread>
#include <mutex>

struct A
{
    std::mutex mux;
    int x;

    A() : x(0) {}
};

void threadf(A* data)
{
    for (int i = 0; i < 10; ++i)
    {
        std::lock_guard<std::mutex> lock(data->mux);
        data->x++;
    }
}

int main(int argc, const char* argv[])
{
    A instance;
    auto t1 = std::thread(threadf, &instance);
    auto t2 = std::thread(threadf, &instance);
    t1.join();
    t2.join();

    std::cout << instance.x << std::endl;
    return 0;
}
It looks like you're trying to write a spinlock. Yes, you can do that with std::atomic_flag, but you are better off using std::mutex instead. Don't use atomics unless you really know what you're doing.
To actually answer the question asked: Yes, you can use std::atomic_flag to create a thread locking object called a spinlock.
#include <atomic>

class atomic_lock
{
public:
    void lock()
    {
        // Spin until test_and_set() returns false, i.e. the flag was clear
        // and we are the ones who set it.
        while ( lock_.test_and_set( std::memory_order_acquire ) ) { }
    }

    void unlock()
    {
        lock_.clear( std::memory_order_release );
    }

private:
    // ATOMIC_FLAG_INIT must be used as a brace-or-equal initializer,
    // not in a constructor's member-initializer list.
    std::atomic_flag lock_ = ATOMIC_FLAG_INIT;
};

class member mutex assertion failed

I'm trying to implement what I think is a fairly simple design. I have a bunch of objects, each containing a std::map, and there will be multiple threads accessing them. I want to make sure that there is only one insert/erase on each of these maps at a time.
So I've been reading about boost::thread, class member mutexes, and using bind to call a class member function - all new things to me. I started with a simple example from a Dr. Dobbs article and tried modifying it. I was getting all kinds of compiler errors because my Threaded object has to be noncopyable. After reading up on that, I decided I could avoid the hassle by keeping a pointer to a mutex instead. So now I have code that compiles but results in the following error:
/usr/include/boost/shared_ptr.hpp:419:
T* boost::shared_ptr< <template-parameter-1-1> >::operator->() const
[with T = boost::mutex]: Assertion `px != 0' failed. Abort
Now I'm really stuck and would really appreciate help with the code as well as comments on where I'm going wrong conceptually. I realize there are some answered questions around these issues here already but I guess I'm still missing something.
#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <boost/shared_ptr.hpp>
#include <iostream>
#include <map>

using namespace std;

class Threaded {
public:
    std::map<int,int> _tsMap;

    void count(int id) {
        for (int i = 0; i < 100; ++i) {
            _mx->lock();
            //std::cout << id << ": " << i << std::endl;
            _tsMap[i]++;
            _mx->unlock();
        }
    }

private:
    boost::shared_ptr<boost::mutex> _mx;
};

int main(int argc, char* argv[]) {
    Threaded th;
    int i = 1;
    boost::thread thrd1(boost::bind(&Threaded::count, &th, 1));
    //boost::thread thrd2(boost::bind(&th.count, 2));
    thrd1.join();
    //thrd2.join();
    return 0;
}
It looks like you're missing a constructor in your Threaded class that creates the mutex that _mx is intended to point at. In its current state (assuming you ran this code just as it is), the default constructor for Threaded calls the default constructor for shared_ptr, resulting in a null pointer (which is then dereferenced in your count() function).
You should add a constructor along the following lines:
Threaded::Threaded(int id)
    : _mx(new boost::mutex())
    , _mID(id)
{
}
Then you could remove the argument from your count function as well.
A mutex is non-copyable for good reasons. Trying to outsmart the compiler by using a pointer to a mutex is a really bad idea. If you succeed, the compiler will fail to notice the problems, but they will still be there and will turn round and bite you at runtime.
There are two solutions
store the mutex in your class as a static
store the mutex outside your class.
There are advantages for both - I prefer the second.
For some more discussion of this, see my answer here mutexes with objects
Conceptually, I think you do have a problem. Copying a shared_ptr just increases its reference count, so the different objects would all use the same underlying mutex - meaning that whenever one of your objects is in use, none of the rest of them can be used.
You, on the other hand, need each object to get its own mutex guard, unrelated to the other objects' mutex guards.
What you need is to keep the mutex defined in the class private section as it is - but ensure that your copy constructor and copy assignment operator are overloaded to create a new one from scratch - one bearing no relation to the mutex in the object being copied/assigned from.