I have a long computation in a loop, which I need to end prematurely if the allowed compute time expires (and return a partially computed result). I plan to do it via a SIGALRM handler and a timer:
// Alarm handler will set it to true.
bool expired = false;
int compute ()
{
int result = 0; // partial result accumulated so far
// Computation loop:
for (...) {
// Computation here.
if (expired)
break;
}
return result;
}
My question is: how to correctly define the expired variable (volatile bool, std::atomic<bool>, std::sig_atomic_t, etc.), how to set it to true in the signal handler (a plain assignment or an atomic operation), and how to check its value in the compute function?
This is single-threaded C++17 code...
If you aren't using multiple threads, you don't need an atomic operation. Just set the global variable expired = true in the signal handler.
EDIT: as @Frank demonstrated below, the compiler might optimize the check away. You can avoid this by declaring the flag as volatile bool expired = false;
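For reference, a minimal sketch of how this could be wired up on a POSIX system (the 5-second budget, the loop body, and the handler name are only illustrative). It uses volatile std::sig_atomic_t, which the question mentions and which the standard explicitly blesses for signal handlers; in single-threaded code a volatile bool would be used the same way:
#include <csignal>
#include <iostream>
#include <unistd.h>  // alarm(), POSIX

// volatile: the compiler must re-read the flag on every check.
// sig_atomic_t: writes to it from a signal handler are well-defined.
volatile std::sig_atomic_t expired = 0;

extern "C" void on_alarm(int) { expired = 1; }

int compute()
{
    int result = 0;
    // Placeholder work; in real code this is the expensive computation.
    for (long i = 0; i < 2000000000L; ++i) {
        result ^= static_cast<int>(i);
        if (expired)
            break;               // stop early, keep the partial result
    }
    return result;
}

int main()
{
    std::signal(SIGALRM, on_alarm);
    alarm(5);                           // allowed compute time: 5 seconds
    std::cout << compute() << '\n';     // possibly a partial result
}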
Unless one iteration takes a considerable amount of time, I would suggest that you don't bother with signals and simply check the elapsed time (e.g. via gettimeofday) at each iteration.
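A sketch of that variant, using std::chrono::steady_clock rather than gettimeofday since the question mentions C++17 (the 5-second budget and the loop body are again only illustrative):
#include <chrono>

int compute()
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + std::chrono::seconds(5);

    int result = 0;
    for (long i = 0; i < 2000000000L; ++i) {
        result ^= static_cast<int>(i);    // placeholder for the real work
        if (clock::now() >= deadline)     // if iterations are very cheap,
            break;                        // check only every N iterations
    }
    return result;                        // possibly a partial result
}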
I want the while loop in the thread to run, wait a second, then run again, and so on, but this doesn't seem to work. How would I fix it?
main(){
bool flag = true;
pthread = CreateThread(NULL, 0, ThreadFun, this, 0, &ThreadIP);
}
ThreadFun(){
while(flag == true)
WaitForSingleObject(pthread,1000);
}
This is one way to do it. I prefer condition variables over sleeps since they are more responsive, and std::async over std::thread (mainly because std::async returns a future which can send information back to the starting thread, even if that feature is not used in this example).
#include <iostream>
#include <chrono>
#include <future>
#include <condition_variable>
#include <mutex>
// A very useful primitive to communicate between threads is the condition_variable
// despite its name it isn't a variable per se. It is more of an inter-thread signal
// saying, hey wake up thread something may have changed that's interesting to you.
// They come with some conditions of their own
// - always use with a lock
// - never wait without a predicate
// (https://www.modernescpp.com/index.php/c-core-guidelines-be-aware-of-the-traps-of-condition-variables)
// - have some state to observe (in this case just a bool)
//
// Since these three things go together I usually pack them in a class
// in this case signal_t, which will be used to let threads signal each other
class signal_t
{
public:
// wait for boolean to become true, or until a certain time period has passed
// then return the value of the boolean.
bool wait_for(const std::chrono::steady_clock::duration& duration)
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait_for(lock, duration, [&] { return m_signal; });
return m_signal;
}
// wait until the boolean becomes true, waiting indefinitely if needed
void wait()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_cv.wait(lock, [&] {return m_signal; });
}
// set the signal
void set()
{
std::unique_lock<std::mutex> lock{ m_mtx };
m_signal = true;
m_cv.notify_all();
}
private:
bool m_signal { false };
std::mutex m_mtx;
std::condition_variable m_cv;
};
int main()
{
// create two signals to let mainthread and loopthread communicate
signal_t started; // indicates that loop has really started
signal_t stop; // lets mainthread communicate a stop signal to the loop thread.
// in this example I use a lambda to implement the loop
auto future = std::async(std::launch::async, [&]
{
// signal this thread has been scheduled and has started.
started.set();
do
{
std::cout << ".";
// stop.wait_for will either wait 500 ms and return false,
// or return true immediately when the stop signal is set.
// The wait with condition variables is much more responsive
// than implementing a loop with sleep (which would only
// check the stop condition every 500 ms).
} while (!stop.wait_for(std::chrono::milliseconds(500)));
});
// wait for loop to have started
started.wait();
// give the thread some time to run
std::this_thread::sleep_for(std::chrono::seconds(3));
// then signal the loop to stop
stop.set();
// synchronize with thread stop
future.get();
return 0;
}
While the other answer is a possible way to do it, my answer approaches the problem from a different angle, trying to see what could be wrong with your code...
Well, if you don't mind the loop taking up to one second to notice that flag has been set to false, and you want a delay of at least 1000 ms between iterations, then a loop with Sleep could work, but you need:
- an atomic variable (for ex. std::atomic),
- or a function (for ex. InterlockedCompareExchange),
- or a MemoryBarrier,
- or some other means of synchronisation to check the flag.
Without proper synchronisation, there is no guarantee that the compiler will re-read the value from memory rather than reuse a value already held in a register.
Using Sleep or a similar function from a UI thread would also be suspicious.
For a console application, you could wait some time in the main thread if the purpose of your application really is to work for a given duration. But usually you want to wait until processing is completed; in most cases, you should wait until the threads you have started have completed.
Another problem with the Sleep function is that the thread always has to wake up every few seconds even if there is nothing to do. This can be bad if you want to optimize battery usage. On the other hand, having a relatively long timeout on a function that waits on some signal (handle) might make your code a bit more robust against missed wakeups if your code has some bugs in it.
You also need a delay in some cases where you don't really have anything to wait on but need to poll some data at a regular interval.
A large timeout could also be useful as a kind of watchdog timer. For example, if you expect to have something to do and receive nothing for an extended period, you could report a warning so that the user can check whether something is not working properly.
I highly recommend that you read a book on multithreading like Concurrency in Action before writing multithreaded code.
Without a proper understanding of multithreading, it is almost 100% certain that anyone's code will be buggy. You need to properly understand the C++ memory model (https://en.cppreference.com/w/cpp/language/memory_model) to write correct code.
A thread waiting on itself makes no sense. When you wait on a thread, you are waiting for it to terminate, and obviously if it has terminated, then it cannot be executing your code. Your main thread should wait for the background thread to terminate.
I also usually recommend using the C++ threading functions over the Win32 API (a small portable sketch follows after this list), as they:
- Make your code portable to other systems.
- Are usually higher-level constructs (std::async, std::future, std::condition_variable...) than the corresponding Win32 API code.
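To make that concrete, here is a minimal portable sketch of the original loop using only the standard library (the std::atomic<bool> flag, the 1-second period and the 5-second run time are my assumptions; the condition-variable answer above remains more responsive to the stop request):
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    std::atomic<bool> flag{ true };

    std::thread worker([&flag] {
        while (flag.load()) {
            std::cout << "working\n";   // the loop body
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    });

    std::this_thread::sleep_for(std::chrono::seconds(5));  // let it run a while
    flag.store(false);   // ask the loop to stop (seen within at most ~1 s)
    worker.join();       // wait for the thread to finish
}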
Is there a canonical pattern for a thread to check if it should stop working?
The scenario is that a thread is spinning a tight working loop but it should stop if another thread tells it to. I was thinking of checking an atomic bool in the loop condition but I'm not sure if that is an unnecessary performance hit or not. E.g.
std::atomic<bool> stop{false};
while(!stop){
//...
}
You have to make it atomic (or volatile in old C/C++) to ensure that the compiler doesn't optimize the check away and read stop only once.
If you call a function in the loop that cannot be inlined (like reading sockets) you might be safe with a non-atomic bool, but why risk it - especially as the atomic read is unlikely to be a performance issue in that case?
To keep the overhead as low as possible, you could do something like:
#include <atomic>

std::atomic<bool> stop{ false };

void rx_thread() {
    // ...
    while(!stop.load(std::memory_order_relaxed)){
        // ...
    }
}
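The stopping side is then just a store from whichever thread decides to stop the worker (a sketch; with relaxed ordering the loop will observe the flag eventually, which is all that is needed when no other data is being handed over):
void stop_rx_thread() {
    // Matches the relaxed load in the loop above.
    stop.store(true, std::memory_order_relaxed);
}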
I don't see any reason why the bool should be atomic. There isn't really a potential for a race condition. When you want to stop the thread, you first set the variable to true and then issue a call that wakes up the blocking function inside the loop (how you do that depends on the blocking call).
bool stop = false;
void rx_thread() {
// ...
while(!stop){
// ...
blocking_call();
// ...
}
}
void stop_thread() {
stop = true;
wakeup_rx_thread();
}
If the blocking call happens to wake up between setting stop to true and calling wakeup_rx_thread(), then the loop will finish anyway. The call to wakeup_rx_thread() will be needless, but that doesn't matter.
I'm trying to implement a protected variable that does not use locks in C++11. I have read a little about optimistic concurrency, but I can't understand how it can be implemented, either in C++ or in any other language.
The way I'm trying to implement optimistic concurrency is by using a 'last modification id'. The process I'm following is:
1. Take a copy of the last modification id.
2. Modify the protected value.
3. Compare the local copy of the modification id with the current one.
4. If the above comparison succeeds, commit the changes.
The problem I see is that, after comparing the 'last modification ids' (local copy and current one) and before committing the changes, there is no way to ensure that no other thread has modified the value of the protected variable.
Below is an example of the code. Let's suppose there are many threads executing that code and sharing the variable var.
/**
* This struct is intended to implement a protected variable,
* but using optimistic concurrency instead of locks.
*/
struct ProtectedVariable final {
ProtectedVariable() : var(0), lastModificationId(0){ }
int getValue() const {
return var.load();
}
void setValue(int val) {
// This method is not atomic; another thread could change the value
// of val before being able to increment the 'last modification id'.
var.store(val);
lastModificationId.store(lastModificationId.load() + 1);
}
size_t getLastModificationId() const {
return lastModificationId.load();
}
private:
std::atomic<int> var;
std::atomic<size_t> lastModificationId;
};
ProtectedVariable var;
/**
* Suppose this method writes a value in some sort of database.
*/
int commitChanges(size_t currModifId, int val){
// Now, if nobody has changed the value of 'var', commit its value,
// retry the transaction otherwise.
if(var.getLastModificationId() == currModifId) {
// Here is one of the problems. After comparing the value of both Ids, other
// thread could modify the value of 'var', hence I would be
// performing the commit with a corrupted value.
var.setValue(val);
// Again, the same problem as above.
writeToDatabase(val);
// Return 'ok' in case of everything has gone ok.
return 0;
} else {
// If someone has changed the value of var while trying to
// calculate and commit it, return an error.
return -1;
}
}
/**
* This method is intended to be atomic, but without using locks.
*/
void modifyVar(){
// Get the modification id, to check later whether some other
// thread has modified the value of 'var' before we commit.
size_t currModifId = var.getLastModificationId();
// Get a local copy of 'var'.
int currVal = var.getValue();
// Perform some operations basing on the current value of
// 'var'.
int newVal = currVal + 1 * 2 / 3;
if(commitChanges(currModifId, newVal) != 0){
// If someone has changed the value of var while trying to
// calculate and commit it, retry the transaction.
modifyVar();
}
}
I know that the above code is buggy, but I don't understand how to implement something like the above in a correct way, without bugs.
Optimistic concurrency doesn't mean that you don't use locks at all; it merely means that you don't hold the locks during most of the operation.
The idea is that you split your modification into three parts:
1. Initialization, like getting the lastModificationId. This part may need locks, but not necessarily.
2. Actual computation. All expensive or blocking code goes here (including any disk writes or network code). The results are written in such a way that they do not obscure the previous version. The likely way it works is by storing the new values next to the old ones, indexed by a not-yet-committed version.
3. Atomic commit. This part is locked, and must be short, simple, and non-blocking. The likely way it works is that it just bumps the version number, after confirming that no other version was committed in the meantime. No database writes happen at this stage.
The main assumption here is that the computation part is much more expensive than the commit part. If your modification is trivial and the computation cheap, then you can just use a lock, which is much simpler.
Some example code structured into these 3 parts could look like this:
struct Data {
...
};
...
std::mutex lock;
volatile const Data* value; // The protected data
volatile int current_value_version = 0;
...
bool modifyProtectedValue() {
// Initialize.
int version_on_entry = current_value_version;
// Compute the new value, using the current value.
// We don't have any lock here, so it's fine to make heavy
// computations or block on I/O.
Data* new_value = new Data;
compute_new_value(value, new_value);
// Commit or fail.
bool success;
lock.lock();
if (current_value_version == version_on_entry) {
value = new_value;
current_value_version++;
success = true;
} else {
success = false;
}
lock.unlock();
// Roll back in case of failure.
if (!success) {
delete new_value;
}
// Inform caller about success or failure.
return success;
}
// It's cleaner to keep retry logic separately.
bool retryModification(int retries = 5) {
for (int i = 0; i < retries; ++i) {
if (modifyProtectedValue()) {
return true;
}
}
return false;
}
This is a very basic approach, and especially the rollback is trivial. In a real-world example, re-creating the whole Data object (or its counterpart) would likely be infeasible, so the versioning would have to be done somewhere inside, and the rollback could be much more complex. But I hope it shows the general idea.
The key here is acquire-release semantics and test-and-increment. Acquire-release semantics are how you enforce an order of operations. Test-and-increment is how you choose which thread wins in case of a race.
Your problem therefore is the .store(lastModificationId+1). You'll need .fetch_add(1). It returns the old value. If that's not the expected value (from before your read), then you lost the race and retry.
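As an illustration of that test-and-increment idea with the atomics from the question, here is a sketch using compare_exchange_strong rather than fetch_add, so that a losing thread does not bump the counter (the function name tryCommit is mine):
#include <atomic>

std::atomic<int>    var{ 0 };
std::atomic<size_t> lastModificationId{ 0 };

// Returns true if newVal was committed, false if another thread won the
// race; the caller should then re-read, recompute and retry.
bool tryCommit(size_t versionOnEntry, int newVal)
{
    size_t expected = versionOnEntry;
    // Atomically claim the next version; only one thread can succeed here.
    if (!lastModificationId.compare_exchange_strong(expected, versionOnEntry + 1))
        return false;
    var.store(newVal);
    return true;
}
Note that there is still a short window in which the version has been bumped but var has not yet been updated; the next answer, which packs both fields into a single atomic struct, closes that window.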
If I understand your question, you mean to make sure var and lastModificationId are either both changed, or neither is.
Why not use std::atomic<T> where T is a structure that holds both the int and the size_t?
struct VarWithModificationId {
int var;
size_t lastModificationId;
};
class ProtectedVariable {
private:
    std::atomic<VarWithModificationId> protectedVar;
// Add your public setter/getter methods here
// You should be guaranteed that if two threads access protectedVar, they'll each get a 'consistent' view of that variable, but the setter will need to use a lock
};
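A read-modify-write setter for that class could then be a compare-exchange loop over the combined struct (a sketch only, assuming setValue is declared in the public section of the class above; as the comment there hints, std::atomic over a struct of this size may or may not be lock-free depending on the platform):
void ProtectedVariable::setValue(int val) {
    VarWithModificationId expected = protectedVar.load();
    VarWithModificationId desired;
    do {
        // Rebuild the desired value from the freshest snapshot; on failure,
        // compare_exchange_weak reloads 'expected' for us.
        desired.var = val;
        desired.lastModificationId = expected.lastModificationId + 1;
    } while (!protectedVar.compare_exchange_weak(expected, desired));
}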
Optimistic concurrency is used in database engines when it's expected that different users will rarely access the same data. It could go like this:
The first user reads the data and a timestamp. The user handles the data for some time, then checks whether the timestamp in the DB has changed since the data was read; if it hasn't, the user updates the data and the timestamp.
But internally the DB engine uses locks for the update anyway; during this lock it checks whether the timestamp has changed, and if it hasn't, the engine updates the data. The time for which the data is locked is just smaller than with pessimistic concurrency. And you also need to use some kind of locking.
I have a shared vector which gets accessed by two threads.
A function from thread A pushes into the vector and a function from thread B swaps the vector completely for processing.
MovetoVec(PInfo* pInfo)
{
while(1)
{
if(GetSwitch())
{
swapBucket->push_back(pInfo);
toggles = true;
break;
}
else if(pInfo->tryMove == 5)
{
delete pInfo;
break;
}
pInfo->tryMove++;
Sleep(25);
}
}
Thread A waits until it can take the atomic boolean toggles (i.e. until GetSwitch succeeds) and then pushes into the vector (the above MovetoVec function will be called by many threads). The function GetSwitch is defined as
GetSwitch()
{
if(toggles)
{
toggles = false;
return TRUE;
}
else
return FALSE;
}
toggles here is an atomic_bool. And the other function, from thread B, that swaps the vector is
GetClassObj(vector<ProfiledInfo*>* toSwaps)
{
if(GetSwitch())
{
toSwaps->swap(*swapBucket);
toggles = true;
}
}
If GetSwitch returns false then thread B does nothing. I didn't use any locking here. It works in most cases, but sometimes one of the pInfo objects in swapBucket is NULL. I figured out it is because of poor synchronization.
I followed this kind of GetSwitch() logic just to avoid the overhead caused by locking. Should I drop it and go back to mutexes or critical sections?
Your GetSwitch implementation is wrong. It is possible for multiple threads to acquire the switch simultaneously.
An example of such a scenario with just two threads:
Thread 1 | Thread 2
--------------------------|--------------------------
if (toggles) |
| if (toggles)
toggles = false; |
| toggles = false;
The if-test and assignment are not an atomic operation and therefore cannot be used to synchronize threads on their own.
If you want to use an atomic boolean as a means of synchronization, you need to compare and exchange the value in one atomic operation. Luckily, C++ provides such an operation, compare_exchange, which is available in a weak and a strong flavor (the weak one may spuriously fail but is cheaper when called in a loop).
Using this operation, your GetSwitch method would become:
bool GetSwitch()
{
bool expected = true; // The value we expect 'toggles' to have
bool desired = false; // The value we want 'toggles' to get
// Check if 'toggles' is as expected, and if it is, update it to the desired value
bool result = toggles.compare_exchange_strong(expected, desired);
// The result of the compare_exchange is true if the value was updated and false if it was not
return result;
}
This will ensure that comparing and updating the value happens atomically.
Note that the C++ standard does not guarantee an atomic boolean to be lock-free. In your case, you could also use std::atomic_flag, which is guaranteed to be lock-free by the standard! Carefully read the documentation though; it works a bit differently from atomic variables.
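For illustration, a sketch of the same take-the-switch idea with std::atomic_flag (the polarity is inverted relative to toggles: a set flag means "already taken"; this inversion and the ReturnSwitch name are my additions):
#include <atomic>

// Clear means "switch available", set means "switch already taken".
std::atomic_flag taken = ATOMIC_FLAG_INIT;

bool GetSwitch()
{
    // test_and_set atomically sets the flag and returns its previous value,
    // so exactly one caller can observe "was clear" and win the switch.
    return !taken.test_and_set();
}

void ReturnSwitch()
{
    taken.clear();   // corresponds to 'toggles = true' in the original code
}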
Writing lock-free code, as you are attempting to do, is quite complex and error-prone.
My advice would be to write the code with locks first and ensure it is 100% correct. Mutexes are actually surprisingly fast, so performance should be okay in most cases. A good read on lock performance: http://preshing.com/20111118/locks-arent-slow-lock-contention-is
Only once you have profiled your code and convinced yourself that the locks are impacting performance should you attempt to write the code lock-free. Then profile again, because lock-free code is not necessarily faster.
Suppose I have something like:
static int write_log = 0;
void *logger__run(void *arg){
// logger thread execution.
while(1){
// get log message from shared queue.
if(write_log){
// just checking write_log value.
// write logs till write_log is true.
}
// destroy log message.
}
}
void logger__set_logging(int p_write_log){
// other threads can start / stop logging by logger thread.
// just assigning value.
write_log = p_write_log;
}
int logger__is_logging(void){
// other threads can check whether logger thread is logging or not.
// just returning value.
return write_log;
}
The function logger__run() will be executed by the logger thread. Other threads can start/stop logging by the logger thread by setting the write_log shared variable. Other threads can also check whether the logger is logging or not.
As you can see, there are only single statements: an assignment, returning the value, or checking it in the while loop. So, does access to write_log need to be protected with locks?
I would say yes. logger__run may read an incorrect value for write_log because the assignment in logger__set_logging is not atomic (thus, some bytes of the int value may have been written and others may have yet to come within this single assignment).
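If the code can be built as C++ (an assumption on my part; in C11 an _Atomic int would play the same role), the smallest change that removes the data race is to make write_log an atomic and keep the functions otherwise as they are:
#include <atomic>

static std::atomic<int> write_log{ 0 };

void logger__set_logging(int p_write_log) {
    write_log.store(p_write_log);   // atomic write: no torn values, no data race
}

int logger__is_logging(void) {
    return write_log.load();        // atomic read
}

// Inside logger__run(), 'if (write_log)' keeps working: the implicit
// conversion performs an atomic load.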