I am writing a timer app. In unit testing, how do I wait a few seconds to test whether my timer is working properly?
// I want something like this.
test("Testing timer", () {
int startTime = timer.seconds;
timer.start();
// do something to wait for 2 seconds
expect(timer.seconds, startTime - 2);
});
You can use `await Future.delayed(...)`:
test("Testing timer", () async {
int startTime = timer.seconds;
timer.start();
// do something to wait for 2 seconds
await Future.delayed(const Duration(seconds: 2));
expect(timer.seconds, startTime - 2);
});
An alternative would be the fake_async package together with https://pub.dartlang.org/packages/clock, which lets you freely manipulate the time used in the test.
The accepted answer is not optimal, as not all delays are just a few seconds long.
What if you had to test a 10-minute delay?
A better approach would be to use the fake_async package.
Here's an example from their docs that you can adjust for your use case:
import 'dart:async';
import 'package:fake_async/fake_async.dart';
import 'package:test/test.dart';
void main() {
test("Future.timeout() throws an error once the timeout is up", () {
// Any code run within [fakeAsync] is run within the context of the
// [FakeAsync] object passed to the callback.
fakeAsync((async) {
// All asynchronous features that rely on timing are automatically
// controlled by [fakeAsync].
expect(Completer().future.timeout(Duration(seconds: 5)),
throwsA(isA<TimeoutException>()));
// This will cause the timeout above to fire immediately, without waiting
// 5 seconds of real time.
async.elapse(Duration(seconds: 5));
});
});
}
The above code also works with timers.
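Applied to the original question's countdown timer, the two-second wait collapses into a single elapse() call. This is only a sketch: it assumes timer.start() creates its dart:async Timer inside the fakeAsync callback, and timer, seconds and start() are the asker's own members:
test("Testing timer", () {
  fakeAsync((async) {
    int startTime = timer.seconds;
    timer.start();

    // Advance fake time by 2 seconds; any Timer created in start()
    // fires immediately, without any real waiting.
    async.elapse(const Duration(seconds: 2));

    expect(timer.seconds, startTime - 2);
  });
});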
I need to set a timer that will fire in a couple of months. The problem I ran into is that QTimer::start(int msec) takes its interval as an int, in milliseconds. It turns out the most I can specify is 2147483647 milliseconds, which is a little less than a month. I used to use crontab for this, but I had to abandon it.
Code example:
uint sec_prediction, sec_now, answer;
QDateTime now = QDateTime::currentDateTime();
sec_now = now.toTime_t();
QLocale mylocale(QLocale::English);
qDebug() << mylocale.toString(now, "MMM d hh:mm:ss") << sec_now;
QDateTime payment = QDateTime::currentDateTime();
payment = payment.addMonths(3);
sec_prediction = payment.toTime_t();
qDebug() << mylocale.toString(payment, "MMM d hh:mm:ss") << sec_prediction;
answer = sec_prediction - sec_now;
qDebug() << answer << " - After this time, the timer should start. \n";
QTimer timer;
timer.setSingleShot(true);
connect(&timer, SIGNAL(timeout()), this, SLOT(processQueue()));
timer.start(answer * 1000);
Console output:
"Jan 7 00:42:38" 1673041358
"Apr 7 00:42:38" 1680817358
7776000 - After this time, the timer should start.
QObject::startTimer: Timers cannot have negative intervals
I am using Qt 4.8
Grateful for help
Since you are using C++98, you can use libuv (a C library), which supports timers:
uv_timer_t timer_req;            // Timers invoke the registered callback after a certain
                                 // time has elapsed since the timer was started.
uv_timer_init(loop, &timer_req); // The libuv docs show how to create an event loop.
// This call is non-blocking, but YOU have to make sure your
// program is still running after a month.
// Arguments: handle, callback, timeout (ms), repeat interval (ms, 0 = no repeat).
// Put a month's worth of milliseconds as the timeout, and optionally set the
// repeat interval if the callback should fire again. Once the timeout elapses,
// your callback runs.
uv_timer_start(&timer_req, callback, 5000, 2000);
I see that Qt 4.8 doesn't have the convenient QTimer::callOnTimeout() method, but it does provide the QTimer class, which is wired up a bit differently:
// Create a QTimer, connect its timeout() signal to the
// appropriate slots, and call start(). From then on it
// will emit the timeout() signal at constant intervals.
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(update()));
timer->start(1000); // 1000 milliseconds timer.
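Not part of the original answer, but the same repeating-timeout mechanism can cover a span that does not fit in QTimer's int-millisecond interval if you count the ticks yourself. A minimal sketch (the one-hour tick is an arbitrary assumption; connect expired() to your processQueue() slot):
#include <QObject>
#include <QTimer>

// Counts hour-long ticks until the requested total has elapsed, then emits expired().
class LongTimer : public QObject
{
    Q_OBJECT
public:
    explicit LongTimer(qint64 totalMsec, QObject *parent = 0)
        : QObject(parent), remaining(totalMsec)
    {
        connect(&timer, SIGNAL(timeout()), this, SLOT(onTick()));
        timer.start(TICK_MSEC);
    }

signals:
    void expired();                           // fired once the long interval has elapsed

private slots:
    void onTick()
    {
        remaining -= TICK_MSEC;
        if (remaining <= 0) {
            timer.stop();
            emit expired();
        }
    }

private:
    static const int TICK_MSEC = 3600 * 1000; // one hour per tick
    QTimer timer;
    qint64 remaining;
};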
Shameless plug of my C++ chrono-scheduler library. Using it would look like this:
#include "scheduler.h"
{
bool compensate = true; // a README explains
unsigned nWorkers = 1; // these values in the repo.
// 1. Create a scheduler.
ttt::CallScheduler plan(compensate, nWorkers);
// 2. Add your task.
auto myTask = []{
// You may want to repeat in another month
return ttt::Result::Finish;
};
auto token = plan.add(
myTask, // User tasks are std::function<ttt::Result()>
24h * 30, // Interval for execution or repetition
false); // Whether to immediately queue the task for execution
// 3. Wait a month ....
}
Essentially the scheduler has a pool of threads that:
Register tasks
Execute tasks at the specified intervals
Potentially re-register the tasks if they need re-running
This is the most general solution to the problem: you can add as many ad-hoc tasks as you want, with millisecond (or even finer) granularity, without blocking the caller or the execution context. But the logic found within can be extracted and simplified for your case.
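If all you need is the single delayed call, the extracted logic can shrink to a worker thread that sleeps for the interval and then invokes a callback. A minimal sketch (assuming a C++11-or-later toolchain, unlike the Qt 4.8 / C++98 constraint mentioned earlier; callAfter is a made-up helper):
#include <chrono>
#include <functional>
#include <thread>

// Run 'callback' once after 'delay'. std::chrono durations are not limited to
// 32-bit milliseconds, so a multi-month delay is representable. The detached
// thread only survives as long as the process does.
void callAfter(std::chrono::hours delay, std::function<void()> callback)
{
    std::thread([delay, callback]() {
        std::this_thread::sleep_for(delay);
        callback();
    }).detach();
}

// Usage, e.g. 90 days and then the queue processing from the question:
//     callAfter(std::chrono::hours(24 * 90), []{ /* processQueue(); */ });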
I have a job that should be run with a minimum interval of 5 seconds between executions. The trigger that starts this job can fire at any moment and at any frequency.
What is the best way to solve such a case in RTOS environment?
I want to make a function that creates a task if it does not already exist. An existing task should wait for the minimum interval to pass before doing anything. While it is waiting, the function that would create it should skip creating a new task.
What is the right way to check if task was created but didn't finish yet?
Should I use tasks at all in this case?
Code example below:
#define CONFIG_MIN_INTERVAL 5000
uint32_t last_execution_timestamp = 0;
TaskHandle_t task_handle = NULL;
bool task_done = true;
static void report_task(void *context)
{
if (esp_timer_get_time() / 1000 < last_execution_timestamp + CONFIG_MIN_INTERVAL)
{
ESP_LOGI(stateTAG, "need to wait for the right time");
int time_to_wait = last_execution_timestamp + CONFIG_MIN_INTERVAL - esp_timer_get_time() / 1000;
vTaskDelay(time_to_wait / portTICK_PERIOD_MS);
}
// do something...
task_done = true;
vTaskDelete(task_handle);
}
void init_report_task(uint32_t context)
{
if (!task_done)
{
ESP_LOGI(stateTAG, "TASK already exists");
}
else
{
ESP_LOGI(stateTAG, "Creating task");
xTaskCreate(&report_task, "report_task", 8192, (void *)context, 4, &task_handle);
task_done = false;
}
}
eTaskGetState can be used to check if a task is already running, but such a solution can be susceptible to races. For example your task is technically still "running" when it's in fact "finishing", i.e. setting task_done = true; and preparing for exit.
A better solution could be to use a queue (or a semaphore) and have the task run continuously, waiting for the messages to arrive and processing them in a loop.
Using a semaphore, you can do xSemaphoreTake(sem, 5000 / portTICK_PERIOD_MS); to wait for either a wake-up condition or a timeout of 5 seconds, whichever comes first.
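A minimal sketch of that pattern (untested; report_sem, report_worker and init_report are made-up names, CONFIG_MIN_INTERVAL is the 5000 ms constant from the question):
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/semphr.h"

static SemaphoreHandle_t report_sem;

static void report_worker(void *context)
{
    for (;;)
    {
        // Block here until the trigger gives the semaphore.
        xSemaphoreTake(report_sem, portMAX_DELAY);

        // ... do the job ...

        // Enforce the minimum interval before accepting the next trigger.
        // A give that arrives in the meantime is latched by the binary
        // semaphore, so at most one pending run is remembered.
        vTaskDelay(CONFIG_MIN_INTERVAL / portTICK_PERIOD_MS);
    }
}

void init_report(void)
{
    report_sem = xSemaphoreCreateBinary();
    xTaskCreate(report_worker, "report_worker", 8192, NULL, 4, NULL);
}

The trigger side then reduces to a single xSemaphoreGive(report_sem); call, so there is no need to create or track tasks at all.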
== EDIT ==
If there are no events, the task should wait. Only when an event happens should it run the job. It should run it immediately if there was no execution in the past 5 seconds. If there was an execution, it should wait until 5 seconds have passed since the last execution and only then run it.
You can achieve that by carefully managing the semaphore's ticks to wait. Something like this (untested):
TickType_t nextDelay = portMAX_DELAY;
TickType_t lastWakeup = 0;
const TickType_t minDelay = 5000 / portTICK_PERIOD_MS;
for (;;) {
bool signalled = xSemaphoreTake(sem, nextDelay);
TickType_t now = (TickType_t)(esp_timer_get_time() / (portTICK_PERIOD_MS * 1000));
if (signalled) {
TickType_t ticksSinceLastWakeup = now - lastWakeup;
if (ticksSinceLastWakeup < minDelay) {
// wakeup too soon - schedule next wakeup and go back to sleep
nextDelay = minDelay - ticksSinceLastWakeup;
continue;
}
}
lastWakeup = now;
nextDelay = portMAX_DELAY;
// do work ...
}
How can I generate async function calls like in JS (you fire one off and it goes to work independently)?
while(true)
{
if(s == "123")
//fire an async function to do some work and continue the cycle immediately
}
I do not need to wait for the result, just start an ordinary function with parameters and let it run when the system has time, but the cycle must not be paused and waiting. It will run on Linux.
EDIT:
If I use it like this:
while(true)
{
if(s == "123")
std::future<std::string> resultFromDB = std::async(std::launch::async, fetchDataFromDB, "Data");
}
, will it be a problem that resultFromDB gets a new future assigned on each cycle run?
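One caveat worth adding here (it is not in the original post): a std::future obtained from std::async blocks in its destructor until the task finishes, so overwriting resultFromDB on every iteration makes the loop wait after all. For true fire-and-forget, a detached thread is one option; a sketch reusing the asker's hypothetical fetchDataFromDB and s:
#include <string>
#include <thread>

while (true)
{
    if (s == "123")
    {
        // Fire and forget: the worker runs independently and the loop
        // continues immediately; nothing joins or waits on the result.
        std::thread([] {
            std::string result = fetchDataFromDB("Data");
            // ... hand 'result' to whoever needs it (queue, callback, ...) ...
        }).detach();
    }
}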
I am on an esp8266 module/microcontroller. I have never written C++ before. Now I am trying to insert my own small "non-blocking" function into one file. My function should wait 5 seconds in the background and then print something. But I don't want to delay the whole initialization in meInit() by 5 seconds; it should be, let's say, a parallel "non-blocking" function. How is this possible, please?
void meInit()
{
if (total > 20) total = 20;
value = EEPROM.read(1);
Serial.begin(115200);
Serial.setTimeout(10);
loadSettings(true);
buildMe();
initFirst();
//here I need to call a "non-blocking" function with no delay and immediately continue processing
call5sFunct();
...do other functions here immediately, without a 5 s delay...
}
void call5sFunct()
{
Sleep(5000);
DEBUG_PRINTLN("I am back again");
}
P.S. short sample is highly appreciated :) thx
Use std::thread to launch call5sFunct() in another thread, like this:
//...
initFirst();
//here I need to call a "non-blocking" function with no delay and immediately continue processing
std::thread t1(call5sFunct);
t1.detach();
...do other functions here immediately, without a 5 s delay...
//...
You need to add #include <thread>.
You should not sleep at all; instead, just call your function once 5 seconds have passed, from the loop() function. Something like this (untested):
unsigned long start_time = 0;
bool call5sFunct_executed = false;
void meInit()
{
if (total > 20) total = 20;
value = EEPROM.read(1);
Serial.begin(115200);
Serial.setTimeout(10);
loadSettings(true);
buildMe();
initFirst();
// You cannot call it here, but in loop()
// call5sFunct();
// Start the 5-second countdown now that initialization is done.
start_time = millis();
// ...do other functions here immediately, without a 5 s delay...
}
void call5sFunct()
{
DEBUG_PRINTLN("I am back again");
}
void loop()
{
unsigned long loop_time = millis();
if (!call5sFunct_executed && (loop_time - start_time >= 5000))
{
call5sFunct();
call5sFunct_executed = true;
}
// .... the rest of your loop function ...
}
However, this pattern has to be used extensively when programming microcontrollers. It would be really cumbersome and error-prone to write production code like this - but it's important that you get the point.
There are many libraries that make it easy to implement asynchronous operations on Arduino by hiding this mechanism. For example, take a look at TaskScheduler.
Google for "arduino asynchronous functions" and you will find a lot of alternatives.
I have a problem while using deadline_timer and io_service::post as below:
#include "boost/asio.hpp"
#include "boost/thread.hpp"
int main()
{
boost::asio::io_service io_service;
boost::asio::deadline_timer timer1(io_service);
boost::asio::deadline_timer timer2(io_service);
timer1.expires_from_now(boost::posix_time::seconds(1));
timer1.async_wait([](const boost::system::error_code& error) {
boost::this_thread::sleep(boost::posix_time::seconds(5));
printf("1 ");
});
timer2.expires_from_now(boost::posix_time::seconds(2));
timer2.async_wait([](const boost::system::error_code& error) {
printf("2 ");
});
boost::thread t([&io_service]() {
boost::this_thread::sleep(boost::posix_time::seconds(5));
io_service.post([]() {
printf("3 ");
});
io_service.post([]() {
printf("4 ");
});
});
io_service.run();
t.join();
getchar();
return 0;
}
I thought the result would be "1 2 3 4", but it is "1 3 4 2". Can anyone show me how to make the callback of timer2 (which prints "2") run earlier, so that the result is "1 2 3 4", using the boost library (and without changing the expiry times of timer1 and timer2)?
Thanks very much!
This is actually a pretty complicated example.
The io_service will run on the main thread. Here is the order of operations:
Main thread:
Request a timer for T0 + 1
Request a timer for T0 + 2
Spawn the secondary thread
Execute all pending I/O (io_service.run())
Secondary thread:
Sleep 5 seconds
Post the "3" handler
Post the "4" handler
First of all, nothing will execute in the io_service until io_service.run() is called.
Once io_service.run() is called, a timer for 1 second in the future is scheduled. When that timer fires, it first sleeps for 5 seconds before printing 1.
While that handler is blocking, the secondary thread also comes up and sleeps for 5 seconds. This thread is set up and scheduled before the handler for timer1 has completed. Since both sleep for 5 seconds, '3' and '4' get posted to the io_service while timer1's handler is still occupying the io thread.
Now things get a bit tricky. It seems likely that the timeout for timer2 should have expired by now (its deadline passed several seconds ago), but there were two handlers directly posted to the io_service while it was busy handling timer1.
It seems that in the implementation details, boost gives priority to directly posted actions over deadline timer actions.
The first timer's expiration blocks the io (main) thread from running. In the meantime, the other thread posts a couple of items to the asio work queue. Once timer1's callback completes, the second timer's expiration is processed, which causes its callback to be queued but not yet executed. Since "3" and "4" were already queued (while "1" was blocking the main thread), they go ahead of "2".
The point of asio is to not block. By putting long-running work in the first timer's callback (the sleep) you have prevented the io thread from running in a timely manner. You should offload that work onto a dedicated thread, and post its completion back to asio.
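A sketch of that idea (different from the accepted fix shown further down): keep timer1's handler cheap by handing the slow work to its own thread and posting only the completion back, so the io thread stays free to run timer2's handler on time:
timer1.async_wait([&io_service](const boost::system::error_code& error) {
    boost::thread worker([&io_service]() {
        boost::this_thread::sleep(boost::posix_time::seconds(5)); // the slow work
        io_service.post([]() { printf("1 "); });                  // completion only
    });
    worker.detach();
    // Something (e.g. an io_service::work guard, as in the answer below) must
    // keep io_service.run() from returning before that post arrives.
});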
The io_service makes no guarantees about the invocation order of handlers. In theory, the handlers could be invoked in any order, with some permutations being significantly unlikely.
If handlers need to be invoked in a very specific order, then consider restructuring the asynchronous call chains in a manner that enforces the desired handler chain. Additionally, one may find it necessary to use the guaranteed order of handler invocation that a strand provides. Consider not trying to control complex handler invocations with brittle sleeps and timers.
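To illustrate the strand suggestion: handlers posted through the same strand are invoked in the order they were posted and never concurrently, so their relative order no longer depends on timing:
boost::asio::io_service::strand strand(io_service);
strand.post([]() { printf("2 "); });
strand.post([]() { printf("4 "); }); // guaranteed to run after the handler above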
Your first problem is that you're trying to block inside a handler:
timer1.expires_from_now(boost::posix_time::seconds(1));
timer1.async_wait([](const boost::system::error_code& error) {
boost::this_thread::sleep(boost::posix_time::seconds(5)); // <--- HERE
printf("1 ");
});
What happens in the above code is that after timer1 waits for one second, it posts its callback to the io_service. Inside the io_service::run function this callback is executed, but the execution happens on the main thread, so it blocks for five seconds, preventing timer2 from getting its handler posted for execution in the io_service. It stays blocked until the sixth second of program execution (6 = 5 + 1).
Meanwhile the thread t gets executed and, at the fifth second of program execution, it posts the two printf("3") and printf("4") handlers to the io_service.
boost::thread t([&io_service]() {
boost::this_thread::sleep(boost::posix_time::seconds(5));
io_service.post([]() {
printf("3 ");
});
io_service.post([]() {
printf("4 ");
});
});
Once the handler from timer1 unblocks, it allows timer2 to post its handler to the io_service. That again happens at the sixth second of program execution, that is, once printf("3") and printf("4") have already been posted!
All in all, I believe what you're looking for is this:
#include "boost/asio.hpp"
#include "boost/thread.hpp"
int main()
{
boost::asio::io_service io_service;
boost::optional<boost::asio::io_service::work> work(io_service);
boost::asio::deadline_timer timer1(io_service);
boost::asio::deadline_timer timer2(io_service);
timer1.expires_from_now(boost::posix_time::seconds(1));
timer1.async_wait([](const boost::system::error_code& error) {
printf("1 ");
});
timer2.expires_from_now(boost::posix_time::seconds(2));
timer2.async_wait([](const boost::system::error_code& error) {
printf("2 ");
});
boost::thread t([&io_service, &work]() {
boost::this_thread::sleep(boost::posix_time::seconds(5));
io_service.post([]() {
printf("3 ");
});
io_service.post([&work]() {
printf("4 ");
work = boost::none;
});
});
io_service.run();
t.join();
return 0;
}