I am trying to run a function at the start of every minute mm:00.000
So, I want to run a function (performance is very important) every time this condition is true:
(std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count()) % 60000 == 0
Any idea how to do this?
Start a separate thread.
The thread checks std::chrono::system_clock and computes the absolute time of the next minute boundary. There are several ways to make the thread sleep until that time arrives. One way is for the thread to create a private mutex and condition variable, lock the mutex, and call wait_until() with the absolute time of the next minute boundary. Since nothing else will notify the condition variable, the thread simply sleeps until the prescribed time arrives, and then it can invoke the given function.
A separate thread is not strictly necessary. This could all be done in your main execution thread, if your main execution thread has nothing else to do.
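A minimal sketch of that approach (run_every_minute and my_function are illustrative names; std::chrono::floor requires C++17):

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Nothing ever notifies cv, so wait_until() simply sleeps until the deadline.
void run_every_minute(void (*do_work)())
{
    std::mutex m;
    std::condition_variable cv;
    std::unique_lock<std::mutex> lock(m);

    for (;;)
    {
        using namespace std::chrono;
        // Absolute time of the next minute boundary.
        auto next = floor<minutes>(system_clock::now()) + minutes(1);

        // Re-wait on spurious wakeups until the deadline has actually passed.
        while (cv.wait_until(lock, next) == std::cv_status::no_timeout)
            ;
        do_work();
    }
}

// Usage, e.g. from main: std::thread(run_every_minute, &my_function).detach();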
Related
I have a loop in C++ that I would like to run for a few seconds. Although the amount of work on every iteration is different, from a few microseconds to seconds, it is ok to stop between iterations. It is high-performance code so I would like to avoid calculating time difference on each iteration:
while (!status.has_value())
{
    // do something

    // this adds extra delays that I would like to avoid
    if (duration_cast<seconds>(system_clock::now() - started).count() >= limit)
        status = CompletedBy::duration;
}
What I'm thinking is that maybe there is a way to schedule a signal and then stop the loop when it fires, instead of checking the time difference on every iteration.
BTW, the loop may exit before the signal.
I have done something similar, but in Java. The general idea is to use a separate thread to manage a sentinel value, making your loop look like...
okayToLoop = true;

// code to launch a thread that will wait N milliseconds, and then clear okayToLoop

while (!status.has_value() && okayToLoop) {
    // loop code
}
The "cautionary note" is that many sleep() functions for threads employ "sleep at least" semantics, so if it is really important to only sleep N milliseconds, you'll need to address that in your thread implementation. But, this avoids constantly checking the duration for each iteration of the loop.
Note that this will also allow the current iteration of the loop to finish, before the sentinel value is checked. I have also implemented this approach where the "control thread" actually interrupts the thread on which the loop is executing, interrupting the iteration. When I've done this, I've actually put the loop into a worker thread.
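For example, a rough C++ sketch of the sentinel-thread idea (the watchdog thread, the atomic flag, and the run_with_time_limit wrapper are illustrative; status, limit and CompletedBy are borrowed from the question) could look like this:

#include <atomic>
#include <chrono>
#include <optional>
#include <thread>

enum class CompletedBy { duration };    // as in the question

void run_with_time_limit(std::chrono::seconds limit)
{
    std::optional<CompletedBy> status;
    std::atomic<bool> okayToLoop{true};

    // Watchdog: clears the flag after 'limit' (subject to "sleep at least").
    std::thread watchdog([&] {
        std::this_thread::sleep_for(limit);
        okayToLoop.store(false, std::memory_order_relaxed);
    });

    while (!status.has_value() && okayToLoop.load(std::memory_order_relaxed))
    {
        // loop code
    }
    if (!status.has_value())
        status = CompletedBy::duration;

    // Note: if the loop finishes early, join() still waits out the full sleep;
    // a real implementation would want a way to cancel the watchdog.
    watchdog.join();
}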
Any form of inter-thread communication is going to be way slower than a simple query of a high performance clock.
Now, steady_clock::now() might be too slow in the loop.
Using OS-specific APIs, bind your thread to a specific CPU and give it a ridiculously high priority. Or use rdtsc, after taking into account everything that can go wrong, and calculate what TSC value you'd expect to see once (a) something has gone wrong, or (b) you have passed the time threshold.
When that happens, check steady_clock::now(), see if you are close enough to being done, and if so, finish. If not, calculate a new high-performance-clock target and loop again.
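A hedged sketch of that two-level check (everything here is illustrative: ticks_per_second is an assumed, pre-calibrated value; core migration, frequency scaling and non-invariant TSCs are ignored; __rdtsc() is the x86 intrinsic):

#include <chrono>
#include <cstdint>
#include <optional>
#include <x86intrin.h>   // __rdtsc() on GCC/Clang; MSVC provides it via <intrin.h>

enum class CompletedBy { duration };    // as in the question

// Poll the cheap TSC every iteration, and only consult steady_clock
// when the TSC suggests the deadline may have passed.
void run_for(std::chrono::seconds limit, std::uint64_t ticks_per_second)
{
    using namespace std::chrono;
    std::optional<CompletedBy> status;

    const auto deadline = steady_clock::now() + limit;
    std::uint64_t tsc_end = __rdtsc() + ticks_per_second * limit.count();

    while (!status.has_value())
    {
        // ... do one iteration of work ...

        if (__rdtsc() >= tsc_end)                 // cheap check, every iteration
        {
            const auto now = steady_clock::now(); // accurate check, rarely taken
            if (now >= deadline)
                status = CompletedBy::duration;
            else
                tsc_end = __rdtsc() + static_cast<std::uint64_t>(
                    ticks_per_second * duration<double>(deadline - now).count());
        }
    }
}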
I'm trying to implement a gather function that waits for N processes to continue.
struct sembuf operations[2];

operations[0].sem_num = 0;
operations[0].sem_op = -1;  // wait() or P()
operations[0].sem_flg = 0;
operations[1].sem_num = 0;
operations[1].sem_op = 0;   // wait until it becomes 0
operations[1].sem_flg = 0;

semop(this->id, operations, 2);
Initially, the value of the semaphore is N.
The problem is that it freezes even when all processes have executed the semop call. I think it is related to the fact that the operations are executed atomically (but I don't know exactly what that means), and I don't understand why it doesn't work.
Does the code subtract 1 from the semaphore and then block the process if it's not the last or is the code supposed to act in a different way?
It's hard to see what the code does without the whole function and algorithm.
By the looks of it, you apply two operations in a single atomic action: subtract 1 from the semaphore and wait for it to become 0.
There could be several reasons why all processes freeze: the semaphore is not shared between all the processes, you got the number of processes wrong when initializing the semaphore, or one process leaves the barrier, later increments the semaphore, and returns to the barrier.
I suggest debugging to verify that all processes actually reach the barrier, and maybe even printing a line every time you perform any action on the semaphore (preferably to the same console).
As for what an atomic action is: it is a single operation, or a sequence of operations, that is guaranteed not to be interrupted while it is being executed. This means no other process/thread will interfere with the action.
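If that combined atomic call is indeed the problem: semop() applies the whole array only when every operation in it can proceed, so with the initial value N the -1 of the first arrivals is never actually applied and the semaphore never reaches 0. One common workaround, sketched here under the assumption that id identifies a semaphore set whose semaphore 0 was initialized to N, is to split the barrier into two separate semop() calls:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

void barrier_wait(int id)
{
    struct sembuf arrive;
    arrive.sem_num = 0;  arrive.sem_op = -1;  arrive.sem_flg = 0;     // register arrival

    struct sembuf wait_zero;
    wait_zero.sem_num = 0;  wait_zero.sem_op = 0;  wait_zero.sem_flg = 0;  // wait for 0

    semop(id, &arrive, 1);      // first: subtract 1, visible to the other processes
    semop(id, &wait_zero, 1);   // then: block until all N processes have arrived
}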
Prove or Disprove the correctness of the following semaphore.
Here are my thoughts on this.
Well, if someone implements it so that wait runs before signal, there will be a deadlock. The program will call wait, decrement count, enter the count < 0 branch, and wait at gate. Because it is waiting at gate, it cannot proceed to the signal that comes right after the wait. So in that case, this might imply that the semaphore is incorrect.
However, if we assume that two processes are running, one running wait first and the other running signal first, then if the first process runs wait and blocks at wait(gate), the other process can run signal and release the blocked process. Thus, continuing with this scheme keeps the algorithm valid and does not result in a deadlock.
The given implementation follows these principles:
The binary semaphore S protects the count variable from concurrent access.
If non-negative, count reflects the number of free resources of the general semaphore. Otherwise, the absolute value of count reflects the number of threads which wait (p5) or are ready-to-wait (between p4 and p5) on the binary semaphore gate.
Every signal() call increments count and, if its previous value was negative, signals the binary semaphore gate.
But because of the possibility of the ready-to-wait state, the given implementation is incorrect:
Assume thread #1 calls wait() and is currently in the ready-to-wait state. Assume another thread #2 also calls wait() and is currently in the ready-to-wait state too.
Assume thread #3 calls signal() at this moment. Because count is negative (-2), the thread performs all operations including p10 (signal(gate)). Because nothing is waiting on gate at that moment, gate becomes free.
Assume another thread #4 calls signal() at this moment. Because count is still negative (-1), this thread also performs all operations including p10. But now gate is already free, so signal(gate) is a no-op, and we have missed a signal event: only one of thread #1 and thread #2 will continue after executing p5 (wait(gate)). The other thread will wait forever.
Without the possibility of the ready-to-wait state (that is, if signal(S) and wait(gate) were executed atomically), the implementation would be correct.
I found a bug in my program: the same thread is woken twice, taking away the opportunity for another thread to run, and thus causing unintended behaviour. My program requires that all waiting threads run exactly once per turn. The bug happens because I use semaphores to make the threads wait. With a semaphore initialized with count 0, every thread calls down on the semaphore at the start of its infinite loop, and the main thread calls up in a for loop NThreads (the number of threads) times. Occasionally the same thread consumes the up call twice, and the problem arises.
What is the proper way to deal with this problem? Is using condition variables and broadcasting a way to do it? Will it guarantee that every thread is woken once and only once? What other good ways are possible?
On Windows, you could use WaitForMultipleObjects to select a ready thread from among the threads that have not yet run in the current NThreads iterations.
Each thread should have a "ready" event to signal when it is ready, and a "wake" event to wait on after it has signaled its "ready" event.
At the start of your main thread loop (the 1st of the NThreads iterations), call WaitForMultipleObjects with an array of your NThreads "ready" events.
Then set the "wake" event of the thread corresponding to the "ready" event returned by WaitForMultipleObjects, and remove it from the array of "ready" handles. That guarantees that a thread that has already run won't be returned by WaitForMultipleObjects on the next iteration.
Repeat until the last iteration, where you will call WaitForMultipleObjects with an array of only one event handle (I think this works as if you had called WaitForSingleObject).
Then repopulate the array of NThreads "ready" events for the next NThreads iterations.
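A rough Win32 sketch of one round of this scheme (names are illustrative, error handling is omitted, and WaitForMultipleObjects is limited to MAXIMUM_WAIT_OBJECTS = 64 handles per call):

#include <windows.h>
#include <vector>

// readyEvents[i] is signalled by worker i when it is ready to run;
// wakeEvents[i] is what worker i waits on after signalling "ready".
void run_one_round(std::vector<HANDLE> readyEvents, std::vector<HANDLE> wakeEvents)
{
    while (!readyEvents.empty())
    {
        DWORD r = WaitForMultipleObjects(static_cast<DWORD>(readyEvents.size()),
                                         readyEvents.data(),
                                         FALSE,        // wake on any single event
                                         INFINITE);
        DWORD i = r - WAIT_OBJECT_0;                   // index of the ready thread

        SetEvent(wakeEvents[i]);                       // wake exactly that thread

        // Remove the pair so this thread cannot be picked again this round.
        readyEvents.erase(readyEvents.begin() + i);
        wakeEvents.erase(wakeEvents.begin() + i);
    }
}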
Well, use an array of semaphores, one for each thread. If you want the threads to run once only, send one unit to each semaphore. If you want the threads to all run exactly N times, send N units to each semaphore.
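For illustration, here is a sketch of that per-thread-semaphore idea using C++20 std::counting_semaphore (NThreads, turns and the worker body are placeholders). Each worker waits on its own semaphore, so a release meant for thread i can never be consumed by thread j:

#include <memory>
#include <semaphore>
#include <thread>
#include <vector>

int main()
{
    constexpr int NThreads = 4;
    constexpr int turns    = 3;   // how many times each thread should run

    // counting_semaphore is neither copyable nor movable, so hold it by pointer.
    std::vector<std::unique_ptr<std::counting_semaphore<>>> go;
    for (int i = 0; i < NThreads; ++i)
        go.push_back(std::make_unique<std::counting_semaphore<>>(0));

    std::vector<std::thread> workers;
    for (int i = 0; i < NThreads; ++i)
        workers.emplace_back([&, i] {
            for (int t = 0; t < turns; ++t) {
                go[i]->acquire();     // wait for this thread's own turn
                // ... do one turn of work ...
            }
        });

    // Main thread: release each semaphore once per turn.
    for (int t = 0; t < turns; ++t)
        for (int i = 0; i < NThreads; ++i)
            go[i]->release();

    for (auto& w : workers) w.join();
}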
What is the difference between C++11 std::this_thread::yield() and std::this_thread::sleep_for()? How to decide when to use which one?
std::this_thread::yield tells the implementation to reschedule the execution of threads. It should be used when you are in a busy-waiting state, for example in a thread pool:
...
while(true) {
    if(pool.try_get_work()) {
        // do work
    }
    else {
        std::this_thread::yield(); // other threads can push work to the queue now
    }
}
std::this_thread::sleep_for can be used if you really want to wait for a specific amount of time. This can be used for tasks where timing really matters, e.g. if you really only want to wait for 2 seconds. (Note that the implementation might wait longer than the given duration.)
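A tiny illustration of that note, measuring how long a 2-second request actually takes:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;

    auto start = steady_clock::now();
    std::this_thread::sleep_for(seconds(2));             // asks for at least 2 s
    auto slept = duration_cast<milliseconds>(steady_clock::now() - start);

    std::cout << "requested 2000 ms, actually slept " << slept.count() << " ms\n";
}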
std::this_thread::sleep_for()
will make your thread sleep for a given duration (the thread is blocked for that time).
(http://en.cppreference.com/w/cpp/thread/sleep_for)
std::this_thread::yield()
will give up the rest of the current time slice so that other processes/threads can run (if any are waiting in the ready queue).
The thread is not blocked or put to sleep; it just releases the CPU and remains ready to run.
(http://en.cppreference.com/w/cpp/thread/yield)