Let's say I have some function func() in my program, and I need it to be called after some specific delay. So far I have googled it and ended up with the following code:
#include <stdio.h>
#include <stdlib.h>   /* for exit */
#include <stdbool.h>  /* for bool */
#include <sys/time.h> /* for setitimer */
#include <unistd.h>   /* for pause */
#include <signal.h>   /* for signal */

void func(int signum)
{
    printf("func() called\n");
}

bool startTimer(double seconds)
{
    struct itimerval it_val;
    double integer, fractional;

    integer = (int)seconds;
    fractional = seconds - integer;

    it_val.it_value.tv_sec = (long)integer;
    it_val.it_value.tv_usec = (long)(fractional * 1000000);
    it_val.it_interval = it_val.it_value;

    if (setitimer(ITIMER_REAL, &it_val, NULL) == -1)
        return false;
    return true;
}

int main()
{
    if (signal(SIGALRM, func) == SIG_ERR)
    {
        perror("Unable to catch SIGALRM");
        exit(1);
    }
    startTimer(1.5);
    while (1)
        pause();
    return 0;
}
And it works, but the problem is that setitimer() causes func() to be called repeatedly, at 1.5-second intervals. What I need is to call func() just once.
Can someone tell me how to do this? Maybe I need to pass different parameters to setitimer()?
Note: the time interval should be precise, because this program will play MIDI music later.
Unless you need the program to be doing other things, you can simply sleep for the time allotted.
If you need to use the alarm, you can set up the timer so that it fires only once.
From the man page:
struct timeval it_interval
This is the period between successive timer interrupts. If zero, the alarm will only be sent once.
Instead of your code:
it_val.it_interval = it_val.it_value;
I'd set:
it_val.it_interval.tv_sec = 0;
it_val.it_interval.tv_usec = 0;
In addition to it_val.it_value, which you already set. What you've done is use the same values for both fields, and that is why you see a repeated interval.
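Putting it together, a one-shot version of startTimer() could look like this (a sketch based on the question's code; only the it_interval lines change):

bool startTimer(double seconds)
{
    struct itimerval it_val;
    double integer = (int)seconds;
    double fractional = seconds - integer;

    /* fire once, after `seconds`... */
    it_val.it_value.tv_sec = (long)integer;
    it_val.it_value.tv_usec = (long)(fractional * 1000000);

    /* ...and never again: a zero interval makes the timer one-shot */
    it_val.it_interval.tv_sec = 0;
    it_val.it_interval.tv_usec = 0;

    return setitimer(ITIMER_REAL, &it_val, NULL) != -1;
}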
Related
I have some code running on an ESP32 microcontroller with the Arduino core.
In the setup() function I want some code, threadPressureCalib, to run independently in its own thread, so I do the following:
std::unique_ptr<std::thread> sensorCalib;

void setup()
{
    sensorCalib.reset(new std::thread(threadPressureCalib));
    std::thread* pc = sensorCalib.get();
    pc->detach();
}
void loop()
{
...
}
Then, I define threadPressureCalib() as follows:
void threadPressureCalib()
{
    float pressure = 0;
    int count;
    unsigned long timestarted;

    for (timestarted = millis(); (millis() - timestarted) < 10000;)
    { // THIS ONE BLOCKS SETUP() AND LOOP() CODE EXECUTION
        Serial.println("Doing things");
    }
    Serial.println("Doing other things");

    for (count = 1; count <= 5; count++)
    { // THIS ONE DOES NOT BLOCK SETUP() and LOOP()
        float temp;
        while (!timer2.Delay(2000)); // Not sure if this is blocking anything
        do {
            temp = adc_pressure();
        } while (temp > 104.0 || temp < 70.0); // Catch out-of-range readings
        pressure += temp;
    }
    changeSetting(pressure / 5.0);
    return;
}
Problem: during the first for loop, execution of setup() is stopped (and so is loop()).
During the second for loop, nothing is stopped and the rest of the code runs in parallel (as expected).
Why does the first half of this code block, while the second half does not?
Sorry if the question is vague or improperly asked; it's my first question here.
Explanation of timer2, per request in the comments:
timer2 is an instance of a custom timer class. timer2.Delay(TIMEOUT) stores a timestamp the first time it is called and returns false on every subsequent call until the current time reaches the timeout; then it returns true and resets itself.
NonBlockDelay timer2;

/**
 * Non-blocking delay: called with milliseconds to delay.
 * The first call sets iTimeout to the current millis() plus the
 * milliseconds to wait; returns true once the timer has expired.
 */
//Borrowed from someone on StackOverflow...
bool NonBlockDelay::Delay(unsigned long t)
{
    if (TimingActive)
    {
        // Note: millis() > iTimeout misbehaves when millis() wraps around
        // (after ~49 days); (millis() - start >= t) would be overflow-safe.
        if (millis() > iTimeout) {
            TimingActive = 0;
            return (1);
        }
        return (0);
    }
    iTimeout = millis() + t;
    TimingActive = 1;
    return (0);
}
// Returns true if the timer expired
bool NonBlockDelay::Timeout(void)
{
    if (TimingActive) {
        if (millis() > iTimeout) {
            TimingActive = 0;
            iTimeout = 0;
            return (1);
        }
    }
    return (false);
}
// Returns the current timeout value in milliseconds
unsigned long NonBlockDelay::Time(void)
{
    return iTimeout;
}
There is not enough information here to tell you the answer, but it seems that you have no idea what you are doing.
std::unique_ptr<std::thread> sensorCalib;

void setup() {
    sensorCalib.reset(new std::thread(threadPressureCalib));
    std::thread* pc = sensorCalib.get();
    pc->detach();
}
So here you store a new thread that executes threadPressureCalib, then immediately detach it. Once a thread is detached, the std::thread instance no longer manages it. So what's the point of even having std::unique_ptr<std::thread> sensorCalib; in the first place, if it literally does nothing? Do you realize that normally you need to join a thread if you wish to wait until its completion? Could it be that you just start a bunch of instances of threadPressureCalib - since you never verify that they finished execution - and they interfere with each other?
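If you do need to know when the calibration has finished, one option is to keep the thread joinable and reap it from loop() once a completion flag is set, instead of detaching. A minimal sketch (the wrapper and flag names are mine; threadPressureCalib is the routine from the question):

#include <atomic>
#include <thread>

void threadPressureCalib(); // defined elsewhere, as in the question

std::thread sensorCalib;
std::atomic<bool> calibFinished{false};

void threadPressureCalibWrapper()
{
    threadPressureCalib();  // run the existing calibration routine
    calibFinished = true;   // signal completion to loop()
}

void setup()
{
    // start calibration in the background; keep the thread joinable
    sensorCalib = std::thread(threadPressureCalibWrapper);
}

void loop()
{
    // reap the finished thread exactly once, without ever blocking
    if (calibFinished && sensorCalib.joinable()) {
        sensorCalib.join(); // returns almost immediately: the work is done
    }
    // ... rest of loop() ...
}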
I am on an ESP8266 module/microcontroller. I have never written C++ before. Now I am trying to insert my own small "non-blocking" function into one file. My function should wait 5 seconds in the background and then print something, but I don't want to delay the whole initialization in meInit() for 5 seconds; it should be, let's say, a parallel "non-blocking" function. How is this possible, please?
void meInit()
{
    if (total > 20) total = 20;
    value = EEPROM.read(1);
    Serial.begin(115200);
    Serial.setTimeout(10);
    loadSettings(true);
    buildMe();
    initFirst();
    // here I need to call a "non-blocking" function and continue immediately
    call5sFunct();
    // ...do other functions here immediately, without a 5 s delay...
}

void call5sFunct()
{
    delay(5000); // blocking wait -- this is the problem
    DEBUG_PRINTLN("I am back again");
}
P.S. short sample is highly appreciated :) thx
Use std::thread to launch call5sFunct() in another thread, like this:

//...
initFirst();
// here I need to call a "non-blocking" function and continue immediately
std::thread t1(call5sFunct);
t1.detach();
// ...do other functions here immediately, without a 5 s delay...
//...

You need to add #include <thread>.
You must not sleep at all; instead, call your function once 5 seconds have passed, from the loop() function. Something like this (untested):
unsigned long start_time = 0;
bool call5sFunct_executed = false;

void meInit()
{
    if (total > 20) total = 20;
    value = EEPROM.read(1);
    Serial.begin(115200);
    Serial.setTimeout(10);
    loadSettings(true);
    buildMe();
    initFirst();
    start_time = millis(); // start the 5 s countdown here
    // You cannot call it here, but in loop()
    // call5sFunct();
    // ...do other functions here immediately, without a 5 s delay...
}

void call5sFunct()
{
    DEBUG_PRINTLN("I am back again");
}

void loop()
{
    unsigned long loop_time = millis();
    if (!call5sFunct_executed && (loop_time - start_time >= 5000))
    {
        call5sFunct();
        call5sFunct_executed = true;
    }
    // .... the rest of your loop function ...
}
However, this pattern is used extensively when programming microcontrollers. It would be really cumbersome and error-prone to write production code like this everywhere - but it's important that you get the point.
There are many libraries that make it easy to implement asynchronous operations on Arduino, hiding this mechanism. For example, take a look at TaskScheduler.
Google for "arduino asynchronous functions" and you will find a lot of alternatives.
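For instance, the ESP8266 Arduino core ships with a Ticker class that hides exactly this mechanism. A minimal sketch (reusing the names from the question; Ticker callbacks run outside loop(), so keep them short):

#include <Ticker.h>

Ticker delayedCall;

void call5sFunct()
{
    DEBUG_PRINTLN("I am back again");
}

void meInit()
{
    // ... existing initialization ...
    delayedCall.once(5, call5sFunct); // schedules the call for 5 s later and returns immediately
    // ...do other functions here immediately, without a 5 s delay...
}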
I'm building a simulator to test student code for a very simple robot. I need to run two functions (to update robot sensors and robot position) on separate threads at regular time intervals. My current implementation is highly processor-inefficient because it has a thread dedicated to simply incrementing numbers to keep track of the position in the code. My recent theory is that I may be able to use sleep to give the time delay between updating the value of the sensors and the robot position. My first question is: is this efficient? Second: is there any way to do a similar thing but measure clock cycles instead of seconds?
Putting a thread to sleep by waiting on a mutex-like object is generally efficient. A common pattern involves waiting on a mutex with a timeout. When the timeout is reached, the interval is up. When the mutex is released, it is the signal for the thread to terminate.
Pseudocode:
void threadMethod() {
    for (;;) {
        bool signalled = this->mutex.wait(1000);
        if (signalled) {
            break; // Signalled, owner wants us to terminate
        }
        // Timeout, meaning our wait time is up
        doPeriodicAction();
    }
}

void start() {
    this->mutex.enter();
    this->thread.start(threadMethod);
}

void stop() {
    this->mutex.leave();
    this->thread.join();
}
On Windows systems, timeouts are generally specified in milliseconds and are accurate to roughly within 16 milliseconds (timeBeginPeriod() may be able to improve this). I do not know of a CPU cycle-triggered synchronization primitive. There are lightweight mutexes called "critical sections" that spin the CPU for a few thousand cycles before delegating to the OS thread scheduler. Within this time they are fairly accurate.
On Linux systems the accuracy may be a bit higher (high frequency timer or tickless kernel) and in addition to mutexes, there are "futexes" (fast mutex) which are similar to Windows' critical sections.
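In portable C++11, the same pattern can be written with std::condition_variable in place of the mutex wait. A minimal sketch (class and member names are mine, untested):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

class PeriodicWorker {
public:
    void start()
    {
        stop_requested_ = false;
        thread_ = std::thread(&PeriodicWorker::run, this);
    }
    void stop()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_requested_ = true;
        }
        cv_.notify_one(); // signal the worker to terminate
        thread_.join();
    }
private:
    void run()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            // Wait up to one second; the predicate makes spurious wakeups safe.
            if (cv_.wait_for(lock, std::chrono::seconds(1),
                             [this] { return stop_requested_; }))
                break;          // signalled: owner wants us to terminate
            doPeriodicAction(); // timed out: our wait time is up
        }
    }
    void doPeriodicAction() { std::cout << "tick\n"; }

    std::thread thread_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stop_requested_ = false;
};

int main()
{
    PeriodicWorker w;
    w.start();
    std::this_thread::sleep_for(std::chrono::seconds(5));
    w.stop(); // terminates promptly instead of waiting out the interval
}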
I'm not sure I grasped what you're trying to achieve, but if you want to test student code, you might want to use a virtual clock and control the passing of time yourself. For example by calling a processInputs() and a decideMovements() method that the students have to provide. After each call, 1 time slot is up.
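A minimal sketch of that virtual-clock idea (processInputs() and decideMovements() are stubs standing in for the student-provided code):

#include <iostream>

// Stubs standing in for the student-provided methods.
void processInputs()   { /* read the simulated sensors */ }
void decideMovements() { /* update the simulated position */ }

int main()
{
    const double slot_seconds = 0.2; // each slot represents 200 ms of robot time
    double sim_time = 0.0;           // virtual clock, advanced only by the harness
    for (int slot = 0; slot < 100; ++slot) {
        processInputs();
        decideMovements();
        sim_time += slot_seconds;    // one time slot is up
    }
    // 20 s of simulated time have passed, regardless of wall-clock time
    std::cout << "simulated " << sim_time << " seconds\n";
}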
This C++11 code uses std::chrono::high_resolution_clock to measure subsecond timing, and std::thread to run three threads. The std::this_thread::sleep_for() function is used to sleep for a specified time.
#include <iostream>
#include <thread>
#include <vector>
#include <chrono>

void seconds()
{
    using namespace std::chrono;
    high_resolution_clock::time_point t1, t2;
    for (unsigned i = 0; i < 10; ++i) {
        std::cout << i << "\n";
        t1 = high_resolution_clock::now();
        std::this_thread::sleep_for(std::chrono::seconds(1));
        t2 = high_resolution_clock::now();
        duration<double> elapsed = duration_cast<duration<double>>(t2 - t1);
        std::cout << "\t( " << elapsed.count() << " seconds )\n";
    }
}

int main()
{
    std::vector<std::thread> t;
    t.push_back(std::thread{[]() {
        std::this_thread::sleep_for(std::chrono::seconds(3));
        std::cout << "awoke after 3\n"; }});
    t.push_back(std::thread{[]() {
        std::this_thread::sleep_for(std::chrono::seconds(7));
        std::cout << "awoke after 7\n"; }});
    t.push_back(std::thread{seconds});
    for (auto& thr : t)
        thr.join();
}
It's hard to know whether this meets your needs because there are a lot of details missing from the question. Under Linux, compile with:
g++ -Wall -Wextra -pedantic -std=c++11 timers.cpp -o timers -lpthread
Output on my machine:
0
( 1.00014 seconds)
1
( 1.00014 seconds)
2
awoke after 3
( 1.00009 seconds)
3
( 1.00015 seconds)
4
( 1.00011 seconds)
5
( 1.00013 seconds)
6
awoke after 7
( 1.0001 seconds)
7
( 1.00015 seconds)
8
( 1.00014 seconds)
9
( 1.00013 seconds)
Other C++11 standard features that may be of interest include timed_mutex and promise/future.
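For example, a std::future can serve as a cancellable sleep for a periodic worker. A minimal sketch of that idea (my example, not from the code above):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<void> stop;
    std::future<void> stop_signal = stop.get_future();
    std::thread worker([&stop_signal]() {
        // Sleep in 1 s periods, but wake immediately once stop is signalled.
        while (stop_signal.wait_for(std::chrono::seconds(1)) ==
               std::future_status::timeout) {
            std::cout << "periodic work\n";
        }
    });
    std::this_thread::sleep_for(std::chrono::seconds(5));
    stop.set_value(); // signal termination
    worker.join();
}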
Yes, your theory is correct. You can use sleep to put a delay between executions of a function by a thread. Efficiency depends on how you choose that delay to get the desired result. You would have to explain the details of your implementation: for example, we don't know whether the two threads are dependent (in that case you have to take care of synchronization, which would cost some cycles).
Here's one way to do it. I'm using C++11: threads, atomics, and the high-resolution clock. The scheduler calls back a function that takes dt seconds, the time elapsed since the last call. The loop can be stopped by calling the stop() method, or if the callback function returns false.
Scheduler code
#include <thread>
#include <chrono>
#include <functional>
#include <atomic>
#include <system_error>

class ScheduledExecutor {
public:
    ScheduledExecutor()
    {}
    ScheduledExecutor(const std::function<bool(double)>& callback, double period)
    {
        initialize(callback, period);
    }
    void initialize(const std::function<bool(double)>& callback, double period)
    {
        callback_ = callback;
        period_ = period;
        keep_running_ = false;
    }
    void start()
    {
        keep_running_ = true;
        sleep_time_sum_ = 0;
        period_count_ = 0;
        th_ = std::thread(&ScheduledExecutor::executorLoop, this);
    }
    void stop()
    {
        keep_running_ = false;
        try {
            th_.join();
        }
        catch (const std::system_error& /* e */)
        { }
    }
    double getSleepTimeAvg()
    {
        //TODO: make this function thread-safe by using atomic types;
        //right now this is not implemented, for performance, and the
        //return of this function is purely informational/for debugging purposes
        return sleep_time_sum_ / period_count_;
    }
    unsigned long getPeriodCount()
    {
        return period_count_;
    }

private:
    typedef std::chrono::high_resolution_clock clock;
    template <typename T>
    using duration = std::chrono::duration<T>;

    void executorLoop()
    {
        clock::time_point call_end = clock::now();
        while (keep_running_) {
            clock::time_point call_start = clock::now();
            duration<double> since_last_call = call_start - call_end;
            if (period_count_ > 0 && !callback_(since_last_call.count()))
                break;
            call_end = clock::now();
            duration<double> call_duration = call_end - call_start;
            double sleep_for = period_ - call_duration.count();
            sleep_time_sum_ += sleep_for;
            ++period_count_;
            if (sleep_for > MinSleepTime)
                std::this_thread::sleep_for(std::chrono::duration<double>(sleep_for));
        }
    }

private:
    double period_;
    std::thread th_;
    std::function<bool(double)> callback_;
    std::atomic_bool keep_running_;
    static constexpr double MinSleepTime = 1E-9;
    double sleep_time_sum_;
    unsigned long period_count_;
};
Example usage
bool worldUpdator(World& w, double dt)
{
    w.update(dt);
    return true;
}

int main() {
    //create world for your simulator
    World w(...);
    //start scheduler loop with calls every 2 ms
    ScheduledExecutor exec;
    exec.initialize(
        std::bind(worldUpdator, std::ref(w), std::placeholders::_1),
        2E-3);
    exec.start();
    //main thread just checks on the results every now and then
    while (true) {
        if (exec.getPeriodCount() % 10000 == 0) {
            std::cout << exec.getSleepTimeAvg() << std::endl;
        }
    }
}
There are also other, related questions on SO.
I am doing a real-time simulation in a .cpp source file. I have to take a sample every 0.2 seconds (200 ms). There is a while loop that takes a sample every time step, and I want to synchronize its execution so that I get a sample every 200 ms. How should I modify the while loop?
while (1) {
    // get a sample every 200 ms
}
Simple and accurate solution with std::this_thread::sleep_until:
#include "date.h"
#include <chrono>
#include <iostream>
#include <thread>
int
main()
{
using namespace std::chrono;
using namespace date;
auto next = steady_clock::now();
auto prev = next - 200ms;
while (true)
{
// do stuff
auto now = steady_clock::now();
std::cout << round<milliseconds>(now - prev) << '\n';
prev = now;
// delay until time to iterate again
next += 200ms;
std::this_thread::sleep_until(next);
}
}
"date.h" isn't needed for the delay part. It is there to provide the round<duration> function (which is now in C++17), and to make it easier to print out durations. This is all under "do stuff", and doesn't matter for the loop delay.
Just get a chrono::time_point, add your delay to it, and sleep until that time_point. Your loop will on average stay true to your delay, as long as your "stuff" takes less time than your delay. No other thread needed. No timer needed. Just <chrono> and sleep_until.
This example just output for me:
200ms
205ms
200ms
195ms
205ms
198ms
202ms
199ms
196ms
203ms
...
What you are asking is tricky, unless you are using a real-time operating system.
However, Boost has a library that supports what you want (there is, however, no guarantee that you are going to be called exactly every 200 ms).
The Boost.Asio library is probably what you are looking for; here is code from their tutorial:
//
// timer.cpp
// ~~~~~~~~~
//
// Copyright (c) 2003-2012 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
    t.wait();
    std::cout << "Hello, world!\n";
    return 0;
}
The link is here: the Boost.Asio tutorial.
You could take this code and rearrange it like this:

#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    while (1)
    {
        boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
        // process your IO here - not sure how long your IO takes,
        // so you may need to adjust your timer
        t.wait();
    }
    return 0;
}
There is also a tutorial for handling the IO asynchronously on the next page(s).
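For reference, an asynchronous variant in the spirit of that tutorial could look like the following sketch (my adaptation, untested; it re-arms the timer relative to the previous expiry so the 200 ms period does not drift):

#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

void take_sample(const boost::system::error_code& /*e*/,
                 boost::asio::deadline_timer* t)
{
    // get a sample here
    std::cout << "sample\n";

    // re-arm relative to the previous expiry time to avoid drift
    t->expires_at(t->expires_at() + boost::posix_time::milliseconds(200));
    t->async_wait(boost::bind(take_sample, boost::asio::placeholders::error, t));
}

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer t(io, boost::posix_time::milliseconds(200));
    t.async_wait(boost::bind(take_sample, boost::asio::placeholders::error, &t));
    io.run(); // runs the timer loop until the program is stopped
    return 0;
}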
The offered answers show you that there are tools available in Boost to help you accomplish this. My late offering illustrates how to use setitimer(), which is a POSIX facility for interval timers.
You basically need a change like this:
while (1) {
    // wait until 200 ms boundary
    // get a sample
}
With an interval timer, the fired signal will interrupt any blocked system call. So, you can just block on something forever; select() will do fine for that:
while (1) {
    int select_result = select(0, 0, 0, 0, 0);
    assert(select_result < 0 && errno == EINTR);
    // get a sample
}
To establish an interval timer that fires every 200 ms, use setitimer(), passing in an appropriate interval. In the code below, we set a 200 ms interval, with the first expiration 150 ms from now.
struct itimerval it = { { 0, 200000 },    /* it_interval: 200 ms period */
                        { 0, 150000 } };  /* it_value: first fire in 150 ms */
if (setitimer(ITIMER_REAL, &it, 0) != 0) {
    perror("setitimer");
    exit(EXIT_FAILURE);
}
Now, you just need to install a signal handler for SIGALRM that does nothing, and the code is complete.
You can follow the link to see the completed example.
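For illustration, the do-nothing handler could be installed like this (a sketch; sigaction() is used without SA_RESTART so that the blocked select() really is interrupted):

#include <signal.h>
#include <string.h>

static void sigalarm_handler(int signum)
{
    (void)signum; /* intentionally empty: its only purpose is that the
                     blocked select() returns with errno == EINTR */
}

static void install_handler(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigalarm_handler;
    /* no SA_RESTART, so select() is interrupted every 200 ms */
    sigaction(SIGALRM, &sa, NULL);
}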
If it is possible for multiple signals to be fired during the program's execution, then instead of relying on the interrupted system call, it is better to block on something that the SIGALRM handler can wake up in a deterministic way. One possibility is to have the while loop block on a read from the read end of a pipe; the signal handler can then write to the write end of that pipe.
void sigalarm_handler (int)
{
    if (write(alarm_pipe[1], "", 1) != 1) {
        char msg[] = "write: failed from sigalarm_handler\n";
        write(2, msg, sizeof(msg)-1);
        abort();
    }
}
Follow the link to see the completed example.
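For completeness, the read side of that pipe-based loop might look like this (a sketch; alarm_pipe is assumed to have been created with pipe() during setup):

while (1) {
    char c;
    if (read(alarm_pipe[0], &c, 1) == 1) { /* blocks until the handler writes */
        /* get a sample */
    }
}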
#include <thread>
#include <chrono>
#include <iostream>

int main()
{
    std::thread timer_thread;
    while (true) {
        timer_thread = std::thread([]() {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        });

        // do stuff
        std::cout << "Hello World!" << std::endl;

        // waits until the timer thread has "slept"
        timer_thread.join();
        // will loop every second unless the stuff takes longer than that
    }
    return 0;
}
To get absolute precision will be nearly impossible - maybe on embedded systems. However, if you require only an approximate frequency, you can get pretty decent performance with a chrono library such as std::chrono (C++11) or boost::chrono. Like so:
// assumes <algorithm>, <chrono> and <thread> are included
while (1) {
    auto start = std::chrono::system_clock::now();

    // run sample

    auto end = std::chrono::system_clock::now();
    auto elapsed =
        std::chrono::duration_cast<std::chrono::milliseconds>(end - start);

    // sleep for whatever is left of the 200 ms budget (never negative)
    auto sleep_for = std::max(std::chrono::milliseconds(0),
                              std::chrono::milliseconds(200) - elapsed);
    std::this_thread::sleep_for(sleep_for);
}
We have a qthreads-based workflow engine where worker threads pick up bundles of input as they are placed on a queue, then place their output on another queue for other worker threads to run the next stage; and so on until all the input has been consumed and all the output has been generated.
Typically, several threads will be running the same task and others will be running other tasks at the same time. We want to benchmark performance of these threaded tasks in order to target optimization efforts.
It's easy to get the real (elapsed) time that a given thread, running a given task, has taken. We just look at the difference between the return values of the POSIX times() function at the start and end of the thread's run() procedure. However, I cannot figure out how to get the corresponding user and system time. Getting these from the struct tms that you pass to times() doesn't work, because this structure gives the total user and system times of all threads that were running while the thread in question was active.
Assuming this is on Linux, how about getrusage() with RUSAGE_THREAD? Solaris also offers RUSAGE_LWP, which is similar, and I guess there are probably equivalents on other POSIX-like systems.
Crude example:
#define _GNU_SOURCE
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>
#include <pthread.h>
#include <assert.h>
#include <unistd.h>
#include <time.h> /* for time() */

struct tinfo {
    pthread_t thread;
    int id;
    struct rusage start;
    struct rusage end;
};

static void *
thread_start(void *arg)
{
    struct tinfo *inf = arg;
    getrusage(RUSAGE_THREAD, &inf->start);
    if (inf->id) {
        sleep(10);
    }
    else {
        const time_t start = time(NULL);
        while (time(NULL) - start < 10); // Waste CPU time!
    }
    getrusage(RUSAGE_THREAD, &inf->end);
    return 0;
}

int main() {
    static const int nrthr = 2;
    struct tinfo status[nrthr];
    for (int i = 0; i < nrthr; ++i) {
        status[i].id = i;
        const int s = pthread_create(&status[i].thread,
                                     NULL, &thread_start,
                                     &status[i]);
        assert(!s);
    }
    for (int i = 0; i < nrthr; ++i) {
        const int s = pthread_join(status[i].thread, NULL);
        assert(!s);
        // Sub-second timing is available too
        printf("Thread %d done: %ld (s) user, %ld (s) system\n", status[i].id,
               status[i].end.ru_utime.tv_sec - status[i].start.ru_utime.tv_sec,
               status[i].end.ru_stime.tv_sec - status[i].start.ru_stime.tv_sec);
    }
}
I think something similar is possible on Windows using GetThreadTimes().
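A rough sketch of what that could look like on Windows (untested; FILETIME values count 100-nanosecond units):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME creation, exit_time, kernel, user;
    if (GetThreadTimes(GetCurrentThread(),
                       &creation, &exit_time, &kernel, &user)) {
        ULARGE_INTEGER k, u;
        k.LowPart = kernel.dwLowDateTime;  k.HighPart = kernel.dwHighDateTime;
        u.LowPart = user.dwLowDateTime;    u.HighPart = user.dwHighDateTime;
        printf("kernel: %.6f s, user: %.6f s\n",
               k.QuadPart / 1e7, u.QuadPart / 1e7);
    }
    return 0;
}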