Title says it all. What is the difference between a relative and an absolute deadline? I mean, the deadline is relative to what?
Given a periodic task set with deadlines different from the periods,
and with all offsets equal to 0 (∀i, r_{i,0} = 0):
• The best assignment is the Deadline Monotonic assignment
• Shorter relative deadline → higher priority
The question is more related to the meaning of the words relative/absolute than to RTOSes per se.
The relative deadline refers to the maximum time allowed to complete the job without jeopardizing the execution of the code, that is, measured from the triggering event until the end of the task.
On the other hand, the absolute deadline is the moment in time at which the job must be completed.
So, the absolute deadline is the relative deadline plus the time at which the job is released (the arrival of the triggering event).
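For example, if the triggering event for a job arrives at t = 100 ms and the relative deadline is 20 ms, the absolute deadline of that job is 100 ms + 20 ms = 120 ms.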
For completeness' sake, relative deadlines are quite useful for organizing the tasks, and there is a method based on them (most probably your quote comes from there) called the Deadline Monotonic algorithm: the shorter the relative deadline, the higher the priority. Obviously, it is easier and clearer to work with relative deadlines than with absolute deadlines, the latter being useful for methods such as Earliest Deadline First.
I have been taking an online course on real-time systems where FreeRTOS is used to demonstrate the various functionalities of an RTOS. The problem I am facing right now is as follows:
There are two tasks (A and B) created in the main function and the real time scheduler is started.
Task B has a lower priority than task A. Task B needs to be scheduled every n ms, but since the priority of task A is higher than that of B, B does not get scheduled every n ms.
So, we need to write a new function that takes the task handles of A and B and measures the execution time of task B.
If the execution time of B is more than n ms, it increases the priority.
I understood the functionality of the function to increase the priority but could not understand how to measure the task execution time. The problem specifically asks us to use the vApplicationTickHook( void ) function to do so. Any hint would be appreciated. I posted in the course's discussion forum as well but didn't get any reply, hence posted here.
First let me preface my answer with a comment that the requirements of this question are bizarre and not something that I've ever come across in practice. This might be useful as an exercise to get familiar with FreeRTOS. But it's not something I would ever do on a real project. Instead I would design the tasks more sensibly.
vApplicationTickHook() runs periodically, once every tick. It runs from the context of the timer ISR, so it preempts even the high-priority task A. Since it runs periodically we can use it to poll for the information we need. FreeRTOS includes many Task Utilities; one of these probably provides some relevant information.
The first thing I found that looks useful is vTaskGetInfo(), which fills in a TaskStatus_t structure containing ulRunTimeCounter, the total run time allotted to the task so far. So, from vApplicationTickHook() you can call vTaskGetInfo() to poll for Task B's total run time. Remember Task B's previous run time in a local static variable, and when its run time hasn't increased for n ms you will know that it's time to raise the priority of Task B.
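A rough sketch of how that could look, just to illustrate the mechanics. It assumes configUSE_TRACE_FACILITY and configGENERATE_RUN_TIME_STATS are enabled in FreeRTOSConfig.h, and xTaskAHandle/xTaskBHandle and the n-millisecond threshold are placeholders for values from your exercise. Note that the tick hook runs in interrupt context, so anything done in it should be kept short.

/* Sketch only. Assumes configUSE_TRACE_FACILITY and configGENERATE_RUN_TIME_STATS
   are enabled, and that xTaskAHandle/xTaskBHandle were saved when the tasks were
   created. N_MS stands for the "n ms" from the exercise (illustrative value). */
#define N_MS  10

extern TaskHandle_t xTaskAHandle, xTaskBHandle;

void vApplicationTickHook( void )
{
    static uint32_t ulPreviousRunTime = 0;
    static TickType_t xTicksWithoutProgress = 0;
    TaskStatus_t xStatus;

    /* Poll Task B's accumulated run time; skip the stack and state queries. */
    vTaskGetInfo( xTaskBHandle, &xStatus, pdFALSE, eInvalid );

    if( xStatus.ulRunTimeCounter != ulPreviousRunTime )
    {
        /* Task B ran since the last tick, so reset the starvation counter. */
        ulPreviousRunTime = xStatus.ulRunTimeCounter;
        xTicksWithoutProgress = 0;
    }
    else if( ++xTicksWithoutProgress >= pdMS_TO_TICKS( N_MS ) )
    {
        /* Task B has made no progress for n ms: raise it above Task A.
           Note: vTaskPrioritySet() has no FromISR variant, so calling it from
           the tick hook follows the exercise but is not production practice. */
        vTaskPrioritySet( xTaskBHandle, uxTaskPriorityGetFromISR( xTaskAHandle ) + 1 );
        xTicksWithoutProgress = 0;
    }
}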
I have written a service using C++, Qt and Boost.
I need some function to run at a given time (for maintenance purposes).
The only method that has worked as expected is to poll the current time in a thread.
I've tried to use this Boost approach:
boost::asio::io_service io_service;
boost::asio::deadline_timer timer (io_service);
boost::gregorian::date day = boost::gregorian::day_clock::local_day();
boost::posix_time::time_duration time = boost::posix_time::duration_from_string(START_TIME);
boost::posix_time::ptime expirationtime ( day, time );
timer.expires_at (expirationtime);
timer.async_wait (boost::bind(func, param1, param2));
io_service.run();
This works for today if I don't change the system time. But if I set it for tomorrow (or any other day in the future), for example, and change the system time to test it, it doesn't fire (does it count milliseconds from the async_wait call?).
Are there any other methods besides polling the time to start a task on a given day and time (NOT after a time interval)?
Instead of periodic polling, you can use std::this_thread::sleep_until.
According to the description on cppreference.com this will take subsequent clock adjustments into account:
Blocks the execution of the current thread until specified sleep_time has been reached.
The clock tied to sleep_time is used, which means that adjustments of the clock are taken into account. Thus, the duration of the block might, but might not, be less or more than sleep_time - Clock::now() at the time of the call, depending on the direction of the adjustment. The function also may block for longer than until after sleep_time has been reached due to scheduling or resource contention delays.
However, this is new as of C++11. If your compiler doesn't support that, you may need to poll. Unless there is other library support depending on your OS. Also, sleep_until might be using polling internally (or it is cleverly waking up and reconfiguring if the clock changes).
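For illustration, a minimal sketch of that approach, assuming C++11; run_at() and run_maintenance() are hypothetical names standing in for your own scheduling entry point and maintenance code:

#include <chrono>
#include <ctime>
#include <thread>

void run_maintenance();  // hypothetical maintenance task

void run_at(int hour, int minute)
{
    using std::chrono::system_clock;

    // Build a time_point for today at hour:minute local time.
    std::time_t now = system_clock::to_time_t(system_clock::now());
    std::tm local = *std::localtime(&now);
    local.tm_hour = hour;
    local.tm_min  = minute;
    local.tm_sec  = 0;
    system_clock::time_point target = system_clock::from_time_t(std::mktime(&local));

    // If that moment has already passed today, schedule it for tomorrow
    // (ignoring DST transitions for simplicity).
    if (target <= system_clock::now())
        target += std::chrono::hours(24);

    // Blocks until the target; adjustments of the system clock are taken into account.
    std::this_thread::sleep_until(target);
    run_maintenance();
}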
As mentioned in comments, your problem is more commonly solved externally (invoking your process on schedule) via cron on Unix-style systems (including Mac OS X), or Task Scheduler on Windows. Such external invocations are more robust when it comes to potential failures of your task.
I used Sleep(500) in my code and I used getTickCount() to test the timing. I found that it has a cost of about 515ms, more than 500. Does somebody know why that is?
Because the Win32 API's Sleep isn't a high-precision sleep; it has limited granularity.
The best way to get a precise sleep is to sleep a bit less than the target (roughly 50 ms less) and then busy-wait for the remainder. To find out exactly how long you need to busy-wait, get the resolution of the system clock using timeGetDevCaps and multiply by 1.5 or 2 to be safe.
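A rough sketch of that idea, assuming Windows with winmm.lib linked; PreciseSleep and the margin calculation are illustrative, not tuned values:

#include <windows.h>
#include <mmsystem.h>   // timeGetDevCaps, timeGetTime; link with winmm.lib

void PreciseSleep(DWORD milliseconds)
{
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(TIMECAPS));
    DWORD margin = tc.wPeriodMin * 2;          // safety margin from the timer resolution

    DWORD start = timeGetTime();               // timeGetTime itself benefits from timeBeginPeriod
    if (milliseconds > margin)
        Sleep(milliseconds - margin);          // coarse sleep for most of the interval

    while (timeGetTime() - start < milliseconds)
        ;                                      // busy-wait for the remainder
}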
sleep(500) guarantees a sleep of at least 500ms.
But it might sleep for longer than that: the upper limit is not defined.
In your case, there will also be the extra overhead in calling getTickCount().
Your non-standard Sleep function may well behave in a different manner; but I doubt that exactness is guaranteed. To do that, you need special hardware.
As you can read in the documentation, the WinAPI function GetTickCount()
is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.
To get a more accurate time measurement, use the function GetSystemTimePreciseAsFileTime.
Also, you can not rely on Sleep(500) to sleep exactly 500 milliseconds. It will suspend the thread for at least 500 milliseconds. The operating system will then continue the thread as soon as it has a timeslot available. When there are many other tasks running on the operating system, there might be a delay.
In general, sleeping means that your thread goes into a waiting state, and after 500 ms it becomes "runnable" again. Then the OS scheduler chooses what to run according to the priorities and the number of runnable processes at that time. So even if you have a high-precision sleep and a high-precision clock, it is still a sleep for at least 500 ms, not exactly 500 ms.
Like the other answers have noted, Sleep() has limited accuracy. Actually, no implementation of a Sleep()-like function can be perfectly accurate, for several reasons:
It takes some time to actually call Sleep(). While an implementation aiming for maximal accuracy could attempt to measure and compensate for this overhead, few bother. (And, in any case, the overhead can vary due to many causes, including CPU and memory use.)
Even if the underlying timer used by Sleep() fires at exactly the desired time, there's no guarantee that your process will actually be rescheduled immediately after waking up. Your process might have been swapped out while it was sleeping, or other processes might be hogging the CPU.
It's possible that the OS cannot wake your process up at the requested time, e.g. because the computer is in suspend mode. In such a case, it's quite possible that your 500ms Sleep() call will actually end up taking several hours or days.
Also, even if Sleep() was perfectly accurate, the code you want to run after sleeping will inevitably consume some extra time.
Thus, to perform some action (e.g. redrawing the screen, or updating game logic) at regular intervals, the standard solution is to use a compensated Sleep() loop. That is, you maintain a regularly incrementing time counter indicating when the next action should occur, and compare this target time with the current system time to dynamically adjust your sleep time.
Some extra care needs to be taken to deal with unexpected large time jumps, e.g. if the computer was temporarily suspended or if the tick counter wrapped around, as well as with the situation where processing the action ends up taking more time than is available before the next action, causing the loop to lag behind.
Here's a quick example implementation (in pseudocode) that should handle both of these issues:
int interval = 500, giveUpThreshold = 10*interval;
int nextTarget = GetTickCount();
bool active = doAction();
while (active) {
    nextTarget += interval;
    int delta = nextTarget - GetTickCount();
    if (delta > giveUpThreshold || delta < -giveUpThreshold) {
        // either we're hopelessly behind schedule, or something
        // weird happened; either way, give up and reset the target
        nextTarget = GetTickCount();
    } else if (delta > 0) {
        Sleep(delta);
    }
    active = doAction();
}
This will ensure that doAction() is called on average once every interval milliseconds, at least as long as it doesn't consistently consume more time than that, and as long as no large time jumps occur. The exact time between successive calls may vary, but any such variation will be compensated for on the next iteration.
The default timer resolution is low; you can increase the timer resolution if necessary (see MSDN):
#define TARGET_RESOLUTION 1         // 1-millisecond target resolution

TIMECAPS tc;
UINT     wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);
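Note that each call to timeBeginPeriod must be matched with a later call to timeEndPeriod specifying the same resolution, e.g. timeEndPeriod(wTimerRes), once the higher resolution is no longer needed.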
There are two general reasons why code might want a function like "sleep":
It has some task which can be performed at any time that is at least some distance in the future.
It has some task which should be performed as near as possible to some moment in time some distance in the future.
In a good system, there should be separate ways of issuing those kinds of requests; Windows makes the first easier than the second.
Suppose there is one CPU and three threads in the system, all doing useful work until, one second before midnight, one of the threads says it won't have anything useful to do for at least a second. At that point, the system will devote execution to the remaining two threads. If, 1 ms before midnight, one of those threads decides it won't have anything useful to do for at least a second, the system will switch control to the last remaining thread.

When midnight rolls around, the original first thread will become available to run, but since the presently-executing thread will have only had the CPU for a millisecond at that point, there's no particular reason the original first thread should be considered more "worthy" of CPU time than the other thread which just got control. Since switching threads isn't free, the OS may very well decide that the thread that presently has the CPU should keep it until it blocks on something or has used up a whole time slice.
It might be nice if there were a version of "sleep" which were easier to use than multimedia timers but would request that the system give the thread a temporary priority boost when it becomes eligible to run again, or better yet a variation of "sleep" which would specify a minimum time and a "priority-boost" time, for tasks which need to be performed within a certain time window. I don't know of any systems that can be easily made to work that way, though.
There is an easy way to calculate the duration of any function, which is described here: How to Calculate Execution Time of a Code Snippet in C++
start_timestamp = get_current_uptime();
// measured algorithm
duration_of_code = get_current_uptime() - start_timestamp;
But it does not give a clean duration, because time spent executing other threads will be included in the measured time.
So the question is: how can I exclude the time spent executing other threads?
OS X code preferred, although it would also be great to look at Windows or Linux code...
Update: the ideal(?) concept of the code:
start_timestamp = get_this_thread_current_uptime();
// measured algorithm
duration_of_code = get_this_thread_current_uptime() - start_timestamp;
I'm sorry to say that in the general case there is no way to do what you want. You are looking for worst-case execution time, and there are several methods to get a good approximation for this, but there is no perfect way as WCET is equivalent to the Halting problem.
If you want to exclude the time spent in other threads then you could disable task context switches upon entering the function that you want to measure. This is RTOS dependent but one possibility is to raise the priority of the current thread to the maximum. If this thread is max priority then other threads won't be able to run. Remember to reset the thread priority again at the end of the function. This measurement may still include the time spent in interrupts, however.
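As a rough illustration of the priority-raising idea, here is a sketch using FreeRTOS names as an example RTOS; get_current_uptime() and measured_algorithm() are the placeholders from the question, not real APIs:

/* Sketch only: FreeRTOS API used as an example RTOS; get_current_uptime() and
   measured_algorithm() are placeholders from the question. */
uint32_t measure_without_preemption( void )
{
    uint32_t ulStart, ulDuration;
    UBaseType_t uxOldPriority = uxTaskPriorityGet( NULL );   /* remember current priority */

    vTaskPrioritySet( NULL, configMAX_PRIORITIES - 1 );      /* highest priority: no other
                                                                task can preempt this one  */
    ulStart = get_current_uptime();
    measured_algorithm();
    ulDuration = get_current_uptime() - ulStart;             /* still includes interrupts  */

    vTaskPrioritySet( NULL, uxOldPriority );                 /* restore original priority  */
    return ulDuration;
}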
Another idea is to disable interrupts altogether. This could remove other threads and interrupts from your measurement. But with interrupts disabled the timer interrupt may not function properly. So you'll need to setup a hardware timer appropriately and rely on the timer's counter value register (rather than any time value derived from a timer interrupt) to measure the time. Also make sure your function doesn't call any RTOS routines that allow for a context switch. And remember to restore interrupts at the end of your function.
Another idea is to run the function many times and record the shortest duration measured over those many times. Longer durations probably include time spent in other threads but the shortest duration may be just the function with no other threads.
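A small sketch of that idea in C++ (measured_algorithm() is a placeholder for the code under test, and steady_clock stands in for whatever uptime source you use):

#include <chrono>
#include <cstdint>
#include <limits>

void measured_algorithm();   // hypothetical function being measured

std::uint64_t shortest_duration_ns(int runs)
{
    using namespace std::chrono;
    std::uint64_t best = std::numeric_limits<std::uint64_t>::max();
    for (int i = 0; i < runs; ++i) {
        auto start = steady_clock::now();
        measured_algorithm();
        std::uint64_t ns = duration_cast<nanoseconds>(steady_clock::now() - start).count();
        if (ns < best)
            best = ns;       // keep the run least disturbed by other threads/interrupts
    }
    return best;
}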
Another idea is to set a GPIO pin upon entry to and clear it upon exit from the function. Then monitor the GPIO pin with an oscilloscope (or logic analyzer). Use the oscilloscope to measure the period for when the GPIO pin is high. In order to remove the time spent in other threads you would need to modify the RTOS scheduler routine that selects the thread to run. Clear the GPIO pin in the scheduler when another thread runs and set it when the scheduler returns to your function's thread. You might also consider clearing the GPIO pin in interrupt handlers.
Your question is entirely OS dependent. The only way you can accomplish this is to somehow get a guarantee from the OS that it won't preempt your process to perform some other task, and to my knowledge this is simply not possible in most consumer OS's.
RTOSes often do provide ways to accomplish this, though. With Windows CE, anything running at priority 0 will (in theory) not be preempted by another thread unless it makes a function/OS API/library call that requires servicing from another thread.
I'm not super familiar with OS X, but after glancing at the documentation, OS X is a "soft" real-time operating system. This means that technically what you want can't be guaranteed: the OS may decide that there is "something" more important than your process that NEEDS to be done.
OS X does, however, allow you to specify a real-time process, which means the OS will make every effort to honor your request not to be interrupted and will only interrupt it if it deems that absolutely necessary.
The Mac OS X scheduling documentation provides examples of how to set up real-time threads.
OSX is not an RTOS, so the question is mistitled and mistagged.
In a true RTOS you can lock the scheduler, disable interrupts or raise the task to the highest priority (with round-robin scheduling disabled if other tasks share that priority) to prevent preemption - although only interrupt disable will truly prevent preemption by interrupt handlers. In a GPOS, even if it has a priority scheme, that normally only controls the number of timeslices allowed to a process in what is otherwise round-robin scheduling, and does not prevent preemption.
One approach is to make many repeated tests and take the smallest value obtained, since that is likely to be the one where the fewest preemptions occurred. It will also help to set the process to the highest priority in order to minimise the number of preemptions. But bear in mind that on a GPOS many interrupts from devices such as the mouse, keyboard, and system clock will occur and consume a small (and possibly negligible) amount of time.
As could be read at:
https://svn.boost.org/trac/boost/ticket/3504
a deadline_timer that times out periodically and is implemented using deadline_timer::expires_at() (like the 3rd example in the Boost Timer tutorial) will probably fail if the system time is modified (for example, using the date command, if your operating system is Linux).
Is there a simple and appropriate way of performing this operation now, using Boost? I do not want to use deadline_timer::expires_from_now(), because I could verify that it is less accurate than "manually" updating the expiry time.
As a temporary solution, I decided to calculate, before setting a new expires_at value, the time period between now() and expires_at(). If it is more than double the periodic delay, then I exceptionally use expires_from_now() to resync with the new absolute time.
In Boost 1.49+, Boost.Asio provides steady_timer. This timer uses chrono::steady_clock, a monotonic clock that is not affected by changes to the system clock.
If you cannot use Boost 1.49+, then checking the timers or clocks for changes is a reasonable alternative solution. While it is an implementation detail, Boost.Asio may limit the amount of time spent waiting on an event in its reactor, so that it can periodically detect changes to system time. For example, the reactor implementation using epoll will wait a maximum of 5 minutes. Thus, without forcing an interrupt on the reactor (such as setting a new expiration time on a timer), it can take Boost.Asio up to 5 minutes before detecting changes to system time.
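For illustration, a minimal sketch of a periodic timer based on steady_timer, assuming Boost 1.49+. Depending on how Boost.Asio was configured, the duration type may be boost::chrono or std::chrono, and newer Boost versions rename the expires_at()/expires_from_now() accessors to expiry()/expires_after(); the handler and the 5-second period are placeholders:

#include <boost/asio.hpp>
#include <boost/asio/steady_timer.hpp>
#include <boost/chrono.hpp>
#include <iostream>

boost::asio::io_service io_service;
boost::asio::steady_timer timer(io_service);

void on_timeout(const boost::system::error_code& ec)
{
    if (ec) return;                 // timer was cancelled or an error occurred
    std::cout << "periodic work\n"; // placeholder for the real work

    // Re-arm relative to the previous expiry so the period does not drift;
    // since the clock is steady, changing the system time cannot break this.
    timer.expires_at(timer.expires_at() + boost::chrono::seconds(5));
    timer.async_wait(&on_timeout);
}

int main()
{
    timer.expires_from_now(boost::chrono::seconds(5));
    timer.async_wait(&on_timeout);
    io_service.run();
}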