Scheduler using Timer Queues - C++

I am working on an application where I need to schedule tasks based on times set by the user. The user may add, modify, or delete schedules. To implement this I am considering timer queues. Initially I thought of using waitable timers, which suit my purpose very well, but I can't put my thread into an alertable wait state to complete the APC.
Now, with a timer queue, I am not sure how to set the timer to signal based on a SYSTEMTIME. I tried the following code, but the callback function is never called:
SYSTEMTIME st;
GetSystemTime(&st);

FILETIME ft;
SystemTimeToFileTime(&st, &ft);

ULONGLONG qwResult;
// Copy the time into a quadword.
qwResult = (((ULONGLONG) ft.dwHighDateTime) << 32) + ft.dwLowDateTime;
// Add 20 seconds.
qwResult += 20 * _SECOND;

HANDLE hTimerQueue = CreateTimerQueue();
HANDLE hTimer;
// Set a timer to call the timer routine in 20 seconds.
if (!CreateTimerQueueTimer(&hTimer, hTimerQueue, (WAITORTIMERCALLBACK)TimerAPCProc,
                           NULL, qwResult, 0, 0))
{
    printf("CreateTimerQueueTimer failed (%d)\n", GetLastError());
    return 3;
}

The callback routine will be called in qwResult milliseconds, but a FILETIME gives you the time in 100-nanosecond units - you do the math. GetSystemTimeAsFileTime will give you a FILETIME right away if that is the path you want to go down.
Personally, I would keep a list of structures with the times when the routines should be called and pointers to those routines, iterate through the list once in a while, and whenever a task's time of execution is due, just call the function (or create a thread). That way your users can always review the scheduled tasks and change them.
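A minimal sketch of that idea, for illustration only (the structure, function names, and one-second polling interval below are placeholders, not part of the answer):

#include <ctime>
#include <vector>

struct ScheduledTask {
    time_t runAt;            // absolute time the task should run
    void (*routine)(void*);  // routine to call when it is due
    void* context;           // user data passed to the routine
};

// Run and remove every task whose time has come.
void RunDueTasks(std::vector<ScheduledTask>& tasks)
{
    time_t now = time(nullptr);
    for (auto it = tasks.begin(); it != tasks.end(); )
    {
        if (it->runAt <= now)
        {
            it->routine(it->context);  // or hand it off to a worker thread
            it = tasks.erase(it);      // one-shot: remove after running
        }
        else
        {
            ++it;
        }
    }
}

// In the scheduler thread, something like:
//   while (running) { RunDueTasks(tasks); Sleep(1000); }

Because the schedule is a plain container, add/modify/delete from the UI is just an insert, update, or erase on it (with appropriate locking if another thread polls it).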

It needs to be backed by WaitForSingleObject, or by putting the thread into an alertable wait state (using SleepEx, for example).
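For reference, a minimal sketch of that waitable-timer route (the function and variable names here are illustrative, not from the question); the APC only runs while the thread is in an alertable wait, e.g. SleepEx with bAlertable = TRUE:

#include <windows.h>
#include <stdio.h>

static VOID CALLBACK TimerAPCProc(LPVOID arg, DWORD dwTimerLowValue, DWORD dwTimerHighValue)
{
    printf("Timer APC fired\n");
}

int RunWaitableTimerExample()
{
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -20LL * 10000000LL;   // negative = relative time; 20 s in 100 ns units

    if (!SetWaitableTimer(hTimer, &due, 0, TimerAPCProc, NULL, FALSE))
        return 1;

    SleepEx(INFINITE, TRUE);             // alertable wait: the APC runs here
    CloseHandle(hTimer);
    return 0;
}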

You're passing in an absolute time, but the docs say the DueTime parameter is the number of milliseconds relative to the current time.
If you want the timer to go off in 20 seconds, pass 20000 instead of qwResult.
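For illustration, a minimal sketch with the relative due time (everything except the API calls is made up):

#include <windows.h>
#include <stdio.h>

static VOID CALLBACK TimerCallback(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    printf("Timer fired\n");
}

int RunTimerQueueExample()
{
    HANDLE hTimerQueue = CreateTimerQueue();
    HANDLE hTimer = NULL;

    // Fire once, 20 seconds (20000 ms) from now; a period of 0 means one-shot.
    if (!CreateTimerQueueTimer(&hTimer, hTimerQueue, TimerCallback, NULL, 20000, 0, 0))
    {
        printf("CreateTimerQueueTimer failed (%lu)\n", GetLastError());
        return 3;
    }

    Sleep(25000);  // keep the process alive long enough for the callback to run
    DeleteTimerQueueEx(hTimerQueue, INVALID_HANDLE_VALUE);  // wait for callbacks, then clean up
    return 0;
}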

High precision timed operations with multiprocess application on windows/c++

I have multiple processes (in different .exe files generated by subprojects) created by my main program.
What I want to do is run each process for about 1-2 milliseconds within every 40-50 millisecond major frame. When I use SuspendThread/ResumeThread to suspend one process (by suspending all the threads it has, though each has only one) and resume the next, a single context switch (suspend the old, resume the new) takes about 60 milliseconds, which is longer than my entire major frame. By the way, I know that using Sleep is not advised in this manner, since a single sleep/wake operation takes 15-30 ms, so I don't use it.
If I lower the priority of the running process and raise the priority of the next one, is Windows guaranteed to perform the context switch within microseconds?
Or what should I consider in order to achieve a process switch with microsecond precision?
And I wonder how long a simple SuspendThread/ResumeThread operation normally takes.
Currently I can't use threads instead of processes, since I need the memory isolation of a process and my processes may spawn and terminate their own threads. Do wait handles and similar synchronization methods give me such high-precision timing?
Edit: The proposed sync objects have at best millisecond resolution (waitable timers, multimedia timers, etc. all take parameters in ms and report in ms). I need to use QueryPerformanceCounter and other means to achieve high resolution, as I mentioned.
As Remy says, you should be doing this with synchronisation objects - that's what they're for. Let's suppose that process A executes first and wants to 'hand over' to process B at some point. It can then do this:
SECURITY_ATTRIBUTES sa = { sizeof (SECURITY_ATTRIBUTES), NULL, TRUE };
// Auto-reset events, so each hand-off releases exactly one wait.
HANDLE hHandOffToA = CreateEventW (&sa, FALSE, FALSE, L"HandOffToA");
HANDLE hHandOffToB = CreateEventW (&sa, FALSE, FALSE, L"HandOffToB");

// Start process B
CreateProcess (...);

while (!quit)
{
    // Do work, and then:
    SetEvent (hHandOffToB);
    WaitForSingleObject (hHandOffToA, INFINITE);
}

CloseHandle (hHandOffToA);
CloseHandle (hHandOffToB);
And process B can then do:
HANDLE hHandOffToA = OpenEventW (EVENT_MODIFY_STATE, FALSE, L"HandOffToA");
HANDLE hHandOffToB = OpenEventW (SYNCHRONIZE, FALSE, L"HandOffToB");

while (!quit)
{
    WaitForSingleObject (hHandOffToB, INFINITE);
    // Do work, and then:
    SetEvent (hHandOffToA);
}

CloseHandle (hHandOffToA);
CloseHandle (hHandOffToB);
You should, of course, include proper error checking and I've left it up to you to decide how process A should tell process B to shut down (I guess it could just kill it). Remember also that event names are system-wide so choose them more carefully than I have done.
For very high precision you can use the function below:
void get_clock(LONGLONG* SYSTEM_TIME)
{
    static double multiplier = 1.0;
    static BOOL alreadyCalculated = FALSE;

    if (alreadyCalculated == FALSE)
    {
        LARGE_INTEGER frequency;
        BOOL result = QueryPerformanceFrequency(&frequency);
        if (result == TRUE)
        {
            // Convert counter ticks to nanoseconds.
            multiplier = 1000000000.0 / frequency.QuadPart;
        }
        else
        {
            DWORD error = GetLastError();
        }
        alreadyCalculated = TRUE;
    }

    LARGE_INTEGER time;
    QueryPerformanceCounter(&time);
    *SYSTEM_TIME = static_cast<LONGLONG>(time.QuadPart * multiplier);
}
In my case sync objects didn't fit very well (although I have used them where time is not critical); instead I redesigned my logic to put placeholders where my thread needs to take action, and calculated the time using the function above.
But I am still not sure, when a higher-priority task arrives, how long it takes Windows to schedule it onto the CPU and preempt the running one.
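As a hypothetical usage example of the get_clock helper above (DoSliceOfWork and the 2 ms budget are placeholders, not part of the answer):

LONGLONG sliceStart = 0, sliceEnd = 0;
get_clock(&sliceStart);

DoSliceOfWork();                             // stand-in for the real per-slice work

get_clock(&sliceEnd);
LONGLONG elapsedNs = sliceEnd - sliceStart;  // get_clock reports nanoseconds
if (elapsedNs > 2000000LL)                   // 2 ms budget
{
    // the slice overran its budget; hand over to the next process
}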

How to call a method/function 50 times in a second

How do I call a method/function 50 times in a second, then calculate the time spent, and if the time spent is less than one second, sleep for (1 - timeSpent) seconds?
Below is the pseudocode:
int msg_count = 0;
start_time = // find current time
while(1)
{
    send_msg();
    msg_count++;
    // Check time after sending 50 messages
    if (msg_count % 50 == 0)
    {
        curr_time = // find current time
        int timeSpent = curr_time - start_time;
        int waitingTime = (timeSpent < 1 sec) ? (1 sec - timeSpent) : 0;
        wait for waitingTime;
        start_time = // find current time
    }
}
I am new to timer APIs. Can anyone tell me which timer APIs I should use to achieve this? I want portable code.
First, read the time(7) man page.
Then you may want to call timer_create(2) to set up a timer. To query the time, use clock_gettime(2).
You probably also want to wait for and multiplex some input and output; poll(2) is useful for that. To sleep for a small amount of time without using the CPU, consider nanosleep(2).
If the timer delivers signals, read signal(7) and be careful, because signal handlers are restricted to async-signal-safe functions (consider a signal handler which just sets some global volatile sig_atomic_t flag). You may also be interested in the Linux-specific timerfd_create(2) (which you can poll or pass to your event loop).
You might want to use an existing event-loop library, like libevent or libev (or those from GTK/Glib, Qt, etc.), which often use poll (or fancier things). The Linux-specific eventfd(2) and signalfd(2) might be very helpful.
Advanced Linux Programming is also useful to read.
If send_msg is doing network I/O, you probably need to redesign your program around an event loop (perhaps your own, based on poll) - you'll need to multiplex (i.e. poll) both network sends and network receives. Continuation-passing style is then a useful paradigm.
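As a rough Linux-only sketch of the timerfd_create approach mentioned above (send_msg is stubbed out here, and error handling is kept to a minimum):

#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

static void send_msg(void) { /* stand-in for the real message send */ }

int main(void)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    if (tfd < 0) { perror("timerfd_create"); return 1; }

    struct itimerspec its;
    its.it_value.tv_sec = 0;
    its.it_value.tv_nsec = 20 * 1000 * 1000;   /* first expiry in 20 ms            */
    its.it_interval = its.it_value;            /* then every 20 ms = 50 per second */
    if (timerfd_settime(tfd, 0, &its, NULL) < 0) { perror("timerfd_settime"); return 1; }

    for (;;) {
        uint64_t expirations;                  /* how many ticks have elapsed      */
        if (read(tfd, &expirations, sizeof expirations) != sizeof expirations)
            break;                             /* read blocks until the next tick  */
        send_msg();
    }
    close(tfd);
    return 0;
}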

Restart the timer in select() of socket programming

I want to use select() to receive updates from another server and also send out periodic messages. Consider the following setup:
while(1)
{
    select(... timeout = 5 seconds);
    // some other code
}
If I receive an update at t = 2 seconds, then select() will return and the corresponding statement will be executed. When the next loop iteration begins, the timeout will be set to 5 seconds again. However, it should be 5 - 2 = 3 seconds. Is there a way to update the timer with the right remaining time?
I thought about manually starting a timer right before select(), but that timer might not be synchronized with the one used inside select(), and that could cause other problems.
According to the select man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behaviour.)
So you can simply reuse the timeout variable, and only reset its value when you really time out.
As the warning suggests, relying on this behaviour is a porting problem, so if you do rely on it, make sure you document it so that the right thing is done when the code is ported.
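A small sketch of that reuse pattern (sockfd and do_periodic_send are hypothetical, and this is Linux-only because it relies on select() updating the timeout in place):

#include <stdio.h>
#include <sys/select.h>

static void do_periodic_send(void) { /* stand-in for the periodic message */ }

static void loop_with_reused_timeout(int sockfd)
{
    struct timeval tv = { 5, 0 };              /* 5-second period */

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        int rc = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if (rc > 0) {
            /* handle the update; tv now holds the time still left,
               so the next iteration only waits for the remainder */
        } else if (rc == 0) {
            do_periodic_send();                /* a real timeout happened */
            tv.tv_sec = 5;                     /* start a new 5-second period */
            tv.tv_usec = 0;
        }
    }
}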
Just store time() in a variable before you call select(), get another time() when select() returns, and in the next while(1) iteration use not 5 but 5 - difference_between_times as the timeout value.
Perhaps you'd want to use new_timeout = 5 - difference_between_times % 5, so that if the work done after select() returns takes longer than 5 seconds, you still keep the timeouts on a 5-second interval.
You should probably use a more granular time unit than seconds. And think about whether the above (with the modulo) is really the behaviour you want; maybe when difference_between_times > 5 you should just wait the full 5 seconds. Do as you wish, but you get the idea.
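A sketch of the portable variant of that idea, recomputing the remaining time before each call (sockfd and send_periodic_message are hypothetical):

#include <sys/select.h>
#include <time.h>

static void send_periodic_message(void) { /* stand-in for the periodic send */ }

static void loop_with_recomputed_timeout(int sockfd)
{
    const time_t period = 5;                   /* seconds between periodic sends */
    time_t next_send = time(NULL) + period;

    for (;;) {
        time_t now = time(NULL);
        time_t remaining = (next_send > now) ? (next_send - now) : 0;

        struct timeval tv;
        tv.tv_sec  = remaining;
        tv.tv_usec = 0;

        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(sockfd, &rfds);

        int rc = select(sockfd + 1, &rfds, NULL, NULL, &tv);
        if (rc > 0) {
            /* an update arrived; the next iteration waits only for the remainder */
        } else if (rc == 0) {
            send_periodic_message();
            next_send = time(NULL) + period;   /* schedule the next periodic send */
        }
    }
}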
When your app gets a little more complicated, you may have multiple timers with different timeout intervals. We do. Here is how we handle it.
Each timer has a timer object with a time_t of when it expires. We store all the timers in a heap data structure, so the timer that expires soonest is at the root of the heap. Before calling select() we fetch the root of the heap, subtract the current time from that timer's expiration time, and use the delta as the timeout for the select() call.
Timer *t = heap->Root();
time_t now = time(0);
timeval tv;
tv.tv_sec = t->when - now;   // assumes the root timer has not already expired
tv.tv_usec = 0;
select( ... &tv );

Does a while loop always take full CPU usage?

I need to create a server-side game loop; the problem is how to limit the loop's CPU usage.
In my experience, a busy loop always takes as much CPU as it can get. But I am reading the code of SDL (Simple DirectMedia Layer), which has a function SDL_Delay(UINT32 ms) containing a while loop. Does it take maximal CPU usage, and if not, why?
https://github.com/eddieringle/SDL/blob/master/src/timer/unix/SDL_systimer.c#L137-158
do {
    errno = 0;
#if HAVE_NANOSLEEP
    tv.tv_sec = elapsed.tv_sec;
    tv.tv_nsec = elapsed.tv_nsec;
    was_error = nanosleep(&tv, &elapsed);
#else
    /* Calculate the time interval left (in case of interrupt) */
    now = SDL_GetTicks();
    elapsed = (now - then);
    then = now;
    if (elapsed >= ms) {
        break;
    }
    ms -= elapsed;
    tv.tv_sec = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    was_error = select(0, NULL, NULL, NULL, &tv);
#endif /* HAVE_NANOSLEEP */
} while (was_error && (errno == EINTR));
This code uses select for a timeout. select normally takes sets of file descriptors and makes the caller wait until an I/O event occurs on one of them, with a timeout argument for the maximum time to wait. Here no file descriptors are passed (the sets are NULL and nfds is 0), so no events can occur, and the call always returns when the timeout is reached.
The select(3) you get from the C library is a wrapper around the select(2) system call, which means calling select(3) eventually gets you into the kernel. The kernel then doesn't schedule the process until an I/O event occurs or the timeout is reached, so the process does not use the CPU while waiting.
Obviously, the jump into the kernel and process scheduling introduce delays, so if you must have very low latency (nanoseconds) you should use busy waiting.
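A tiny illustration of the select()-as-sleep trick described above:

#include <sys/select.h>

/* Sleep for roughly 'ms' milliseconds without touching any file descriptor. */
static void sleep_ms(long ms)
{
    struct timeval tv;
    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);   /* no fds: returns only when the timeout expires */
}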
That loop won't take up all of the CPU. It uses one of two functions to tell the operating system to pause the thread for a given amount of time and let another thread use the CPU:
// First function call - if HAVE_NANOSLEEP is defined.
was_error = nanosleep(&tv, &elapsed);
// Second function call - fallback without nanosleep.
was_error = select(0, NULL, NULL, NULL, &tv);
While the thread is blocked in SDL_Delay, it yields the CPU to other tasks. If the delay is long enough, the operating system will even put the CPU in an idle or halt mode if there is no other work to do. Note that this won't work well if the delay time isn't at least 20 milliseconds or so.
However, this is usually not the right way to do whatever it is you are trying to do. What is your outer problem? Why doesn't your game loop finish the work that needs to be done for the current frame and then wait for something to happen before it has more work to do? How can it always have an infinite amount of work to do immediately?

What is the cleanest way to create a timeout for a while loop?

Windows API/C/C++
1. ....
2. ....
3. ....
4. while (flag1 != flag2)
5. {
6.     SleepEx(100, FALSE);
       // waiting for the flags to become equal (the flags are set from another thread)
7. }
8. .....
9. .....
If the flags don't equal each other after 7 seconds, I would like to continue to line 8.
Any help is appreciated. Thanks.
If you are waiting for a particular flag to be set or a time to be reached, a much cleaner solution may be to use an auto- or manual-reset event. These are designed for signalling conditions between threads and have a very rich API built on top of them. For instance, you could use the WaitForMultipleObjects API, which takes an explicit timeout value.
Do not poll for the flags to change. Even with a sleep or yield during the loop, this just wastes CPU cycles.
Instead, get the thread which sets the flags to signal you that they've changed, probably using an event. Your wait on the event takes a timeout, which you can adjust so that the total wait is 7 seconds.
For example:
Thread1:
flag1 = foo;
SetEvent(hEvent);
Thread2:
DWORD timeOutTotal = 7000; // 7 second timeout to start.

while (flag1 != flag2 && timeOutTotal > 0)
{
    // Wait for the flags to change.
    DWORD start = GetTickCount();
    WaitForSingleObject(hEvent, timeOutTotal);
    DWORD end = GetTickCount();

    // Don't let timeOutTotal accidentally wrap below 0.
    if ((end - start) > timeOutTotal)
    {
        timeOutTotal = 0;
    }
    else
    {
        timeOutTotal -= (end - start);
    }
}
You can use QueryPerformanceCounter from the WinAPI. Take a reading before the while loop starts, then check inside the loop whether the allotted time has passed. However, this is a high-resolution timer; for lower resolution use GetTickCount (milliseconds).
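A short sketch of that approach, reusing the flags and the 100 ms SleepEx from the question:

LARGE_INTEGER freq, start, now;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);

while (flag1 != flag2)
{
    QueryPerformanceCounter(&now);
    if ((now.QuadPart - start.QuadPart) >= 7 * freq.QuadPart)
        break;                   // 7 seconds elapsed, give up waiting
    SleepEx(100, FALSE);         // keep the original polling interval
}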
It all depends on whether you are actively waiting (doing something) or passively waiting for an external process. If the latter, the following code using Sleep will be a lot easier:
int count = 0;
while ( flag1 != flag2 && count < 700 )
{
    Sleep( 10 ); // wait 10 ms; 700 iterations make 7 seconds
    ++count;
}
If you don't use Sleep (or Yield) and your app constantly checks the condition, you'll hog the CPU core the app is running on.
If you use WinAPI extensively, you should try out a more native solution, read about WinAPI's Synchronization Functions.
You failed to mention what should happen if the flags become equal.
Also, if you just test them with no memory barriers then you cannot guarantee to see writes made by the other thread.
Your best bet is to use an event, and call the WaitForSingleObject function with a 7000 millisecond timeout.
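A minimal sketch of that event-based wait (hFlagsChangedEvent is a hypothetical event signalled by the thread that updates the flags):

DWORD rc = WaitForSingleObject(hFlagsChangedEvent, 7000);   // wait at most 7 seconds
if (rc == WAIT_OBJECT_0)
{
    // the other thread signalled that the flags were updated
}
else if (rc == WAIT_TIMEOUT)
{
    // 7 seconds passed with no signal; continue to line 8 anyway
}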
Make sure you do a Sleep() or yield in there, or you will eat up the entire CPU (or core) waiting.
If your application does some networking stuff, have a look at the POSIX select() call, especially the timeout functionality!
I would say "check the time and if nothing has happened in seven seconds later, then break the loop.