I've tried std::this_thread::sleep_for and CreateWaitableTimer.
Both have the same effect: the thread consistently resumes after 16 ms, courtesy of the Windows scheduler or Billy G. himself or whatever... losing my mind anyway. When it's part of a Windows app, the time can vary if that app is being actively worked on.
Here's the sample code; I've also thrown in some SetThreadPriority calls for good measure:
#include <windows.h>
#include <iostream>
#include <thread>
#include <chrono>
using namespace std::chrono;
using namespace std::chrono_literals;
double GetTiming();
void Wait1ms();

int main()
{
    SetPriorityClass( GetCurrentProcess(), REALTIME_PRIORITY_CLASS );
    SetThreadPriority( GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL );

    while ( true )
    {
        //DoSomething();
        Wait1ms();                          // => 16ms
        std::this_thread::sleep_for( 1ms ); // => 16ms
        std::cout << "dt = " << GetTiming() << "\n";
    }
}

void Wait1ms()
{
    LARGE_INTEGER liDueTime;
    liDueTime.QuadPart = -10000LL; // 1 ms in 100 ns units; negative means relative time

    HANDLE hTimer = CreateWaitableTimer( NULL, TRUE, NULL );
    SetWaitableTimer( hTimer, &liDueTime, 0, NULL, NULL, 0 );
    WaitForSingleObject( hTimer, INFINITE );
    CloseHandle( hTimer ); // don't leak the timer handle
}

double GetTiming()
{
    static high_resolution_clock::time_point last = high_resolution_clock::now();
    const auto now = high_resolution_clock::now();
    const auto difference = now - last;
    last = now;
    return difference.count() / 1000000.0; // nanoseconds -> milliseconds
}
The default clock interrupt rate is 64 ticks per second. You can raise it to 1000 per second by calling timeBeginPeriod(1), declared in timeapi.h (link against winmm.lib).
As this has a significant effect on battery life and system load, you should restore the default rate as soon as possible using timeEndPeriod(1).
Older Windows versions and some hardware may not support such a high rate; you can use timeGetDevCaps() to read wPeriodMin and use that value instead of 1.
guTimeResolution = 0;
gbUseHiResTimer = false;

if (timeBeginPeriod(1) == TIMERR_NOERROR) {
    gbUseHiResTimer = true;
    guTimeResolution = 1;
}
else {
    // Query capabilities of the timer to find the minimum supported period
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));
    if (timeBeginPeriod(tc.wPeriodMin) == TIMERR_NOERROR) {
        guTimeResolution = tc.wPeriodMin;
        gbUseHiResTimer = true;
    }
}
Then, later:
if (gbUseHiResTimer) {
    timeEndPeriod(guTimeResolution);
    gbUseHiResTimer = false;
}
NOTE: It seems the behaviour changed somewhat in Windows 11, and this may not work: quoting the documentation, if "a window-owning process becomes fully occluded, minimized, or otherwise invisible or inaudible to the end user, Windows does not guarantee a higher resolution than the default system resolution". Which sort of makes sense, as these functions are intended for multimedia programs.
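Tying this back to the question, here is a minimal sketch (assuming timeBeginPeriod(1) succeeds on your system) that shows the effect on sleep_for:
#include <windows.h>
#include <timeapi.h> // timeBeginPeriod / timeEndPeriod
#include <chrono>
#include <iostream>
#include <thread>
#pragma comment(lib, "winmm.lib")

int main()
{
    using namespace std::chrono;

    timeBeginPeriod(1); // request 1 ms scheduler granularity
    for (int i = 0; i < 10; ++i)
    {
        const auto t0 = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(1));
        const auto t1 = steady_clock::now();
        // expect roughly 1-2 ms here instead of ~16 ms
        std::cout << duration_cast<microseconds>(t1 - t0).count() << " us\n";
    }
    timeEndPeriod(1); // always pair with the matching timeBeginPeriod
}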
Related
Currently I'm setting a separate hardware timer to the system time periodically to trigger timed interrupts. It's working fine, but for elegance's sake I wondered whether it is possible to attach an interrupt directly to the system time.
The events are pretty fast: one every 260 microseconds
ESP32 has a few clocks used for system time. The default full-power clock is an 80 MHz clock called APB_CLK. But even the slow RTC clock has 6.6667 μs resolution. (Documentation here: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/system/system_time.html)
I have a GPS module that I use to update the system time periodically using adjtime(3). The advantage of that is that it adjusts the system time gradually and monotonically. Also, the system time calls are thread-safe.
I'm using the Arduino IDE, so my knowledge of accessing registers and interrupts directly is poor. Here's a semi-boiled-down version of what I'm doing: bit-banging a synchronized digital signal, rotating through 160-bit pages that are prepped from the other core. It's not all of my code, so something unimportant might be missing:
#define CPU_SPEED 40

hw_timer_t* timer = NULL;
PageData pages[2];
PageData* timerCurrentPage = &pages[0];
PageData* loopCurrentPage = &pages[1];
TaskHandle_t prepTaskHandle;
volatile int bitCount = 0;

void IRAM_ATTR onTimer() {
    int level = timerCurrentPage->data[bitCount];
    dac_output_voltage(DAC_CHANNEL_1, level ? high : low); // high/low are DAC levels defined elsewhere
    bitCount++;
    if (bitCount < 160) {
        timerAlarmWrite(timer, timerCurrentPage->startTick + timerCurrentPage->ticksPerPage * bitCount, false);
    } else {
        // swap to the page the other core has prepped
        if (timerCurrentPage == &pages[0]) timerCurrentPage = &pages[1];
        else                               timerCurrentPage = &pages[0];
        bitCount = 0;
        timerAlarmWrite(timer, timerCurrentPage->startTick, false);
        vTaskResume(prepTaskHandle);
    }
}

uint64_t nowTick() {
    timeval timeStruct;
    gettimeofday(&timeStruct, NULL);
    return (uint64_t)timeStruct.tv_sec * 1000000UL + (uint64_t)timeStruct.tv_usec;
}

void gpsUpdate(uint64_t micros) {
    int64_t now = nowTick();
    int64_t offset = micros - now;
    timeval adjustStruct = { 0, offset };
    adjtime(&adjustStruct, NULL);
}

void setup() {
    setCpuFrequencyMhz(CPU_SPEED);
    timer = timerBegin(0, CPU_SPEED, true);      // prescale to 1 tick per microsecond
    timerWrite(timer, nowTick());                // seed the hardware timer from system time
    timerAttachInterrupt(timer, &onTimer, true);
    setPage(&pages[0]);
    xTaskCreatePinnedToCore(
        prepLoop,        /* Task function. */
        "Prep Task",     /* name of task. */
        10000,           /* Stack size of task */
        NULL,            /* parameter of the task */
        1,               /* priority of the task */
        &prepTaskHandle, /* Task handle to keep track of created task */
        1);              /* pin task to core 1 */
    timerAlarmWrite(timer, timerCurrentPage->startTick, false);
}

// On core 1
void prepLoop() {
    while (1) {
        vTaskSuspend(NULL); // suspend self; resumed from onTimer via prepTaskHandle
        timerWrite(timer, nowTick()); // re-sync the hardware timer to system time
        if (loopCurrentPage == &pages[0]) loopCurrentPage = &pages[1];
        else                              loopCurrentPage = &pages[0];
        setPage(loopCurrentPage);
    }
}
I found this class on the web; it implements a callback function that asynchronously does some work while I stay on the main thread. This is the class:
#include "callbacktimer.h"
CallBackTimer::CallBackTimer()
:_execute(false)
{}
CallBackTimer::~CallBackTimer() {
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
}
void CallBackTimer::stop()
{
_execute.store(false, std::memory_order_release);
if( _thd.joinable() )
_thd.join();
}
void CallBackTimer::start(int interval, std::function<void(void)> func)
{
if( _execute.load(std::memory_order_acquire) ) {
stop();
};
_execute.store(true, std::memory_order_release);
_thd = std::thread([this, interval, func]()
{
while (_execute.load(std::memory_order_acquire)) {
func();
std::this_thread::sleep_for(
std::chrono::milliseconds(interval)
);
}
});
}
bool CallBackTimer::is_running() const noexcept {
return ( _execute.load(std::memory_order_acquire) &&
_thd.joinable() );
}
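The post doesn't include callbacktimer.h; a minimal header consistent with the implementation above (reconstructed, so treat it as an assumption) might be:
#pragma once
#include <atomic>
#include <functional>
#include <thread>

class CallBackTimer {
public:
    CallBackTimer();
    ~CallBackTimer();
    void start(int interval, std::function<void(void)> func);
    void stop();
    bool is_running() const noexcept;
private:
    std::atomic<bool> _execute; // loop flag, shared with the worker thread
    std::thread _thd;           // worker thread running the callback loop
};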
The problem here is that if I schedule a job to run every millisecond, for some reason it is repeated every 64 milliseconds instead of every 1 millisecond. This snippet gives an idea:
#include "callbacktimer.h"
int main()
{
CallBackTimer cBT;
int i = 0;
cBT.start(1, [&]()-> void {
i++;
});
while(true)
{
std::cout << i << std::endl;
Sleep(1000);
}
return 0;
}
Here I should see on the Standard Output: 1000, 2000, 3000, and so on. But it doesn't...
It's quite hard to do something on a PC in a 1ms interval. Thread scheduling happens at 1/64s, which is ~16ms.
When you try to sleep for 1 ms, it will likely sleep for 1/64s instead, given that no other thread is scheduled to run. As your main thread sleeps for one second, your callback timer may run up to 64 times during that interval.
See also How often per second does Windows do a thread switch?
You can try multimedia timers which may go down to 1 millisecond.
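For instance, a minimal sketch with the winmm multimedia timer (assuming the hardware grants 1 ms resolution; link against winmm.lib):
#include <windows.h>
#include <mmsystem.h> // timeSetEvent / timeKillEvent
#include <iostream>
#pragma comment(lib, "winmm.lib")

volatile LONG counter = 0;

// Called by the multimedia timer roughly every millisecond
void CALLBACK onTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    InterlockedIncrement(&counter);
}

int main()
{
    timeBeginPeriod(1); // request 1 ms timer resolution
    MMRESULT id = timeSetEvent(1, 1, onTick, 0, TIME_PERIODIC);
    for (int s = 0; s < 5; ++s) {
        Sleep(1000);
        std::cout << counter << std::endl; // should advance by roughly 1000 per second
    }
    timeKillEvent(id);
    timeEndPeriod(1);
}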
I'm trying to implement a chronometer in Qt which should also show the microseconds.
Well, you can show microseconds, I guess. But your function won't run every microsecond.
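One way to reconcile the two (a sketch in plain C++, not Qt-specific): refresh the display at a modest rate, but compute the displayed value from a high-resolution clock, so the microseconds shown are accurate even though the refresh is coarse:
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    const auto start = steady_clock::now();
    for (int i = 0; i < 100; ++i) {
        // refresh at a leisurely ~60 Hz; the *displayed* value is still exact
        std::this_thread::sleep_for(milliseconds(16));
        const auto elapsed = duration_cast<microseconds>(steady_clock::now() - start);
        std::cout << "\r" << elapsed.count() << " us" << std::flush;
    }
}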
I'm using SuspendThread / ResumeThread to modify the RIP register between the calls through GetThreadContext / SetThreadContext. It allows me to execute arbitrary code in a thread in another process.
So this works, but sometimes ResumeThread takes about 60 seconds to resume the target thread.
I understand that I'm somewhat abusing the API through this usage, but is there any way to speed this up? Or something I should look at that might indicate a bad usage?
The target thread belongs to a sample program that just loops:
uint64_t blarg = 1;
while (true) {
    Sleep(100);
    std::cout << blarg << std::endl;
    blarg++;
    if (blarg == std::numeric_limits<uint64_t>::max()) {
        blarg = 0;
    }
}
The Suspend / Resume sequence is very simple as well:
void hijackRip(uint64_t targetAddress, DWORD threadId) {
    HANDLE targetThread = OpenThread(THREAD_ALL_ACCESS, FALSE, threadId);

    DWORD suspendResult = SuspendThread(targetThread); // returns previous suspend count

    CONTEXT threadContext;
    memset(&threadContext, 0, sizeof(threadContext));
    threadContext.ContextFlags = CONTEXT_ALL;
    BOOL getThreadContextResult = GetThreadContext(targetThread, &threadContext);

    threadContext.Rip = targetAddress;
    BOOL setThreadContextResult = SetThreadContext(targetThread, &threadContext);

    DWORD resumeThreadResult = ResumeThread(targetThread);
    CloseHandle(targetThread); // don't leak the thread handle
}
Again, this works, and I can redirect execution correctly, but only 30-60 seconds after executing this function.
I'm building a simulator to test student code for a very simple robot. I need to run two functions (to update the robot's sensors and position) on separate threads at regular time intervals. My current implementation is highly processor-inefficient, because it dedicates a thread to simply incrementing numbers to keep track of the position in the code. My recent theory is that I may be able to use sleep to create the time delay between updating the sensor values and the robot position. My first question: is this efficient? Second: is there any way to do the same thing but measure clock cycles instead of seconds?
Putting a thread to sleep by waiting on a mutex-like object is generally efficient. A common pattern involves waiting on a mutex with a timeout: when the timeout is reached, the interval is up; when the mutex is released, it is the signal for the thread to terminate.
Pseudocode:
void threadMethod() {
    for (;;) {
        bool signalled = this->mutex.wait(1000);
        if (signalled) {
            break; // signalled: owner wants us to terminate
        }
        // timeout, meaning our wait time is up
        doPeriodicAction();
    }
}

void start() {
    this->mutex.enter();
    this->thread.start(threadMethod);
}

void stop() {
    this->mutex.leave();
    this->thread.join();
}
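In portable C++11, the same pattern can be sketched with std::condition_variable (an illustration of the idea above, not a drop-in for any particular API):
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

class PeriodicWorker {
public:
    void start() {
        worker_ = std::thread([this] {
            std::unique_lock<std::mutex> lock(m_);
            // wait_for returns false on timeout (do the periodic work),
            // true when notified (time to terminate)
            while (!cv_.wait_for(lock, std::chrono::milliseconds(1000),
                                 [this] { return quit_; })) {
                doPeriodicAction();
            }
        });
    }
    void stop() { // call exactly once, after start()
        { std::lock_guard<std::mutex> lock(m_); quit_ = true; }
        cv_.notify_one();
        worker_.join();
    }
private:
    void doPeriodicAction() { /* update sensors / position here */ }
    std::mutex m_;
    std::condition_variable cv_;
    bool quit_ = false;
    std::thread worker_;
};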
On Windows systems, timeouts are generally specified in milliseconds and are accurate to roughly within 16 milliseconds (timeBeginPeriod() may be able to improve this). I do not know of a CPU cycle-triggered synchronization primitive. There are lightweight mutexes called "critical sections" that spin the CPU for a few thousand cycles before delegating to the OS thread scheduler. Within this time they are fairly accurate.
On Linux systems the accuracy may be a bit higher (high frequency timer or tickless kernel) and in addition to mutexes, there are "futexes" (fast mutex) which are similar to Windows' critical sections.
I'm not sure I've grasped what you're trying to achieve, but if you want to test student code, you might want to use a virtual clock and control the passing of time yourself: for example, by calling a processInputs() and a decideMovements() method that the students have to provide. After each call, one time slot is up.
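A sketch of that virtual-clock approach (the processInputs()/decideMovements() names come from the suggestion above; the rest of the interface is hypothetical):
#include <cstdint>
#include <iostream>

// Hypothetical interface the students implement
struct StudentRobot {
    void processInputs(double simTimeSec)   { /* read simulated sensors */ }
    void decideMovements(double simTimeSec) { /* set simulated actuators */ }
};

int main() {
    StudentRobot robot;
    const double dt = 0.001; // each slot advances simulated time by 1 ms
    double simTime = 0.0;
    for (std::uint64_t slot = 0; slot < 10000; ++slot) {
        robot.processInputs(simTime);
        robot.decideMovements(simTime);
        simTime += dt; // the simulator, not the OS, owns the clock
    }
    std::cout << "simulated " << simTime << " s\n";
}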
This C++11 code uses std::chrono::high_resolution_clock to measure subsecond timing, and std::thread to run three threads. The std::this_thread::sleep_for() function is used to sleep for a specified time.
#include <iostream>
#include <thread>
#include <vector>
#include <chrono>

void seconds()
{
    using namespace std::chrono;
    high_resolution_clock::time_point t1, t2;
    for (unsigned i = 0; i < 10; ++i) {
        std::cout << i << "\n";
        t1 = high_resolution_clock::now();
        std::this_thread::sleep_for(std::chrono::seconds(1));
        t2 = high_resolution_clock::now();
        duration<double> elapsed = duration_cast<duration<double>>(t2 - t1);
        std::cout << "\t( " << elapsed.count() << " seconds )\n";
    }
}

int main()
{
    std::vector<std::thread> t;
    t.push_back(std::thread{[]() {
        std::this_thread::sleep_for(std::chrono::seconds(3));
        std::cout << "awoke after 3\n"; }});
    t.push_back(std::thread{[]() {
        std::this_thread::sleep_for(std::chrono::seconds(7));
        std::cout << "awoke after 7\n"; }});
    t.push_back(std::thread{seconds});
    for (auto& thr : t)
        thr.join();
}
It's hard to know whether this meets your needs because there are a lot of details missing from the question. Under Linux, compile with:
g++ -Wall -Wextra -pedantic -std=c++11 timers.cpp -o timers -lpthread
Output on my machine:
0
( 1.00014 seconds)
1
( 1.00014 seconds)
2
awoke after 3
( 1.00009 seconds)
3
( 1.00015 seconds)
4
( 1.00011 seconds)
5
( 1.00013 seconds)
6
awoke after 7
( 1.0001 seconds)
7
( 1.00015 seconds)
8
( 1.00014 seconds)
9
( 1.00013 seconds)
Other C++11 standard features that may be of interest include timed_mutex and promise/future.
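For instance, std::timed_mutex::try_lock_for can serve as an interruptible periodic wait, much like the mutex pattern earlier in this thread (a sketch, not a drop-in solution):
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

int main()
{
    std::timed_mutex stop_mutex;
    stop_mutex.lock(); // held by the controller; the worker cannot acquire it yet

    std::thread worker([&] {
        // try_lock_for times out every second until the controller releases the mutex
        while (!stop_mutex.try_lock_for(std::chrono::seconds(1))) {
            std::cout << "tick\n"; // periodic work goes here
        }
        stop_mutex.unlock(); // acquired: we were told to stop
    });

    std::this_thread::sleep_for(std::chrono::seconds(5));
    stop_mutex.unlock(); // signal the worker to finish
    worker.join();
}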
Yes, your theory is correct. You can use sleep to put some delay between executions of a function by a thread. Efficiency depends on how wide you can make that delay and still get the desired result. You'd have to explain the details of your implementation; for example, we don't know whether the two threads are dependent (in that case you have to take care of synchronization, which would burn some cycles).
Here's one way to do it, using C++11 threads, atomics, and the high-resolution clock. The scheduler calls back a function that receives dt, the time in seconds elapsed since the last call. The loop can be stopped by calling the stop() method, or by the callback returning false.
Scheduler code
#include <thread>
#include <chrono>
#include <functional>
#include <atomic>
#include <system_error>

class ScheduledExecutor {
public:
    ScheduledExecutor()
    {}
    ScheduledExecutor(const std::function<bool(double)>& callback, double period)
    {
        initialize(callback, period);
    }
    void initialize(const std::function<bool(double)>& callback, double period)
    {
        callback_ = callback;
        period_ = period;
        keep_running_ = false;
    }
    void start()
    {
        keep_running_ = true;
        sleep_time_sum_ = 0;
        period_count_ = 0;
        th_ = std::thread(&ScheduledExecutor::executorLoop, this);
    }
    void stop()
    {
        keep_running_ = false;
        try {
            th_.join();
        }
        catch (const std::system_error& /* e */)
        { }
    }
    double getSleepTimeAvg()
    {
        //TODO: make this function thread-safe by using atomic types.
        //Right now it is not, for performance; its return value is
        //purely informational / for debugging purposes.
        return sleep_time_sum_ / period_count_;
    }
    unsigned long getPeriodCount()
    {
        return period_count_;
    }

private:
    typedef std::chrono::high_resolution_clock clock;
    template <typename T>
    using duration = std::chrono::duration<T>;

    void executorLoop()
    {
        clock::time_point call_end = clock::now();
        while (keep_running_) {
            clock::time_point call_start = clock::now();
            duration<double> since_last_call = call_start - call_end;
            if (period_count_ > 0 && !callback_(since_last_call.count()))
                break;

            call_end = clock::now();
            duration<double> call_duration = call_end - call_start;
            double sleep_for = period_ - call_duration.count();
            sleep_time_sum_ += sleep_for;
            ++period_count_;
            if (sleep_for > MinSleepTime)
                std::this_thread::sleep_for(std::chrono::duration<double>(sleep_for));
        }
    }

private:
    double period_;
    std::thread th_;
    std::function<bool(double)> callback_;
    std::atomic_bool keep_running_;
    static constexpr double MinSleepTime = 1E-9;
    double sleep_time_sum_;
    unsigned long period_count_;
};
Example usage
bool worldUpdator(World& w, double dt)
{
w.update(dt);
return true;
}
void main() {
//create world for your simulator
World w(...);
//start scheduler loop for every 2ms calls
ScheduledExecutor exec;
exec.initialize(
std::bind(worldUpdator, std::ref(w), std::placeholders::_1),
2E-3);
exec.start();
//main thread just checks on the results every now and then
while (true) {
if (exec.getPeriodCount() % 10000 == 0) {
std::cout << exec.getSleepTimeAvg() << std::endl;
}
}
}
There are also other, related questions on SO.
I am doing a real-time simulation in a .cpp source file. I have to take a sample every 0.2 seconds (200 ms). There is a while loop that takes a sample every time step, and I want to synchronize its execution so it gets a sample every 200 ms. How should I modify the while loop?
while (1) {
    // get a sample every 200 ms
}
Simple and accurate solution with std::this_thread::sleep_until:
#include "date.h"
#include <chrono>
#include <iostream>
#include <thread>
int
main()
{
using namespace std::chrono;
using namespace date;
auto next = steady_clock::now();
auto prev = next - 200ms;
while (true)
{
// do stuff
auto now = steady_clock::now();
std::cout << round<milliseconds>(now - prev) << '\n';
prev = now;
// delay until time to iterate again
next += 200ms;
std::this_thread::sleep_until(next);
}
}
"date.h" isn't needed for the delay part. It is there to provide the round<duration> function (which is now in C++17), and to make it easier to print out durations. This is all under "do stuff", and doesn't matter for the loop delay.
Just get a chrono::time_point, add your delay to it, and sleep until that time_point. Your loop will on average stay true to your delay, as long as your "stuff" takes less time than your delay. No other thread needed. No timer needed. Just <chrono> and sleep_until.
This example just output for me:
200ms
205ms
200ms
195ms
205ms
198ms
202ms
199ms
196ms
203ms
...
What you are asking for is tricky unless you are using a real-time operating system.
However, Boost has a library that supports what you want (there is, however, no guarantee that you are going to be called exactly every 200 ms).
The Boost.Asio library is probably what you are looking for; here is code from their tutorial:
//
// timer.cpp
// ~~~~~~~~~
//
// Copyright (c) 2003-2012 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
    t.wait();
    std::cout << "Hello, world!\n";
    return 0;
}
The tutorial lives in the Boost.Asio section of the Boost documentation.
You could take this code, and re-arrange it like this
#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    while (1)
    {
        // 200 ms, to match the sampling interval in the question
        boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(200));
        // process your IO here - not sure how long your IO takes,
        // so you may need to adjust your timer
        t.wait();
    }
    return 0;
}
There is also a tutorial for handling the IO asynchronously on the next page(s).
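For reference, a sketch of a repeating 200 ms timer using async_wait (adapted from the Asio tutorial pattern; re-arming from the previous expiry keeps the period from drifting):
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void sample(const boost::system::error_code& /*ec*/,
            boost::asio::deadline_timer* t)
{
    // get a sample here ...
    // re-arm relative to the previous expiry so the period doesn't drift
    t->expires_at(t->expires_at() + boost::posix_time::milliseconds(200));
    t->async_wait(boost::bind(sample, boost::asio::placeholders::error, t));
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(200));
    t.async_wait(boost::bind(sample, boost::asio::placeholders::error, &t));
    io.run(); // blocks, dispatching the timer handler every 200 ms
    return 0;
}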
The offered answers show you that there are tools available in Boost to help you accomplish this. My late offering illustrates how to use setitimer(), which is a POSIX facility for iterative timers.
You basically need a change like this:
while (1) {
    // wait until a 200 ms boundary
    // get a sample
}
With an interval timer, the fired signal will interrupt any blocking system call. So you can just block on something forever; select() will do fine for that:
while (1) {
    int select_result = select(0, 0, 0, 0, 0);
    assert(select_result < 0 && errno == EINTR);
    // get a sample
}
To establish an interval timer that fires every 200 ms, call setitimer(), passing in an appropriate interval. In the code below, we set a 200 ms interval, with the first expiry 150 ms from now.
struct itimerval it = { { 0, 200000 },   /* it_interval: repeat every 200 ms */
                        { 0, 150000 } }; /* it_value: first fire in 150 ms */
if (setitimer(ITIMER_REAL, &it, 0) != 0) {
    perror("setitimer");
    exit(EXIT_FAILURE);
}
Now, you just need to install a signal handler for SIGALRM that does nothing, and the code is complete.
You can follow the link to see the completed example.
If multiple signals could fire during program execution, then instead of relying on the interrupted system call, it is better to block on something that the SIGALRM handler can wake in a deterministic way. One possibility is to have the while loop block on a read of the read end of a pipe; the signal handler then writes a byte to the write end of that pipe.
/* alarm_pipe is a global int[2] initialized with pipe() */
void sigalarm_handler (int)
{
    if (write(alarm_pipe[1], "", 1) != 1) {
        char msg[] = "write: failed from sigalarm_handler\n";
        write(2, msg, sizeof(msg)-1);
        abort();
    }
}
Follow the link to see the completed example.
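For completeness, a sketch of the loop on the read end (the pipe setup shown is my assumption; the linked example has the full version):
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>

int alarm_pipe[2]; // the global the handler above writes to

int main()
{
    if (pipe(alarm_pipe) != 0) {
        perror("pipe");
        exit(EXIT_FAILURE);
    }
    // ... install sigalarm_handler for SIGALRM and call setitimer() as above ...
    while (1) {
        char token;
        ssize_t n = read(alarm_pipe[0], &token, 1); // blocks until the timer fires
        if (n == 1) {
            // get a sample
        } else if (n < 0 && errno != EINTR) {
            perror("read");
            break;
        }
    }
}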
#include <thread>
#include <chrono>
#include <iostream>

int main() {
    std::thread timer_thread;
    while (true) {
        timer_thread = std::thread([]() {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        });

        // do stuff
        std::cout << "Hello World!" << std::endl;

        // waits until the thread has "slept"
        timer_thread.join();
        // will loop every second, unless the stuff takes longer than that
    }
    return 0;
}
Getting absolute precision will be nearly impossible - maybe on an embedded system. However, if you require only an approximate frequency, you can get pretty decent performance with a chrono library such as std::chrono (C++11) or boost::chrono. Like so:
// assumes <algorithm>, <chrono>, <thread> are included and
// using std::chrono::system_clock; is in effect
while (1) {
    system_clock::time_point now = system_clock::now();
    auto duration = now.time_since_epoch();
    auto start_millis = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();

    //run sample

    now = system_clock::now();
    duration = now.time_since_epoch();
    auto end_millis = std::chrono::duration_cast<std::chrono::milliseconds>(duration).count();

    auto sleep_for = std::max<long long>(0, 200 - (end_millis - start_millis));
    std::this_thread::sleep_for(std::chrono::milliseconds(sleep_for)); // sleep_for needs a duration, not a raw count
}