Debounce Events in Cloud - amazon-web-services

I am looking for a good cloud solution to handle the scenario below, where I need to wait for future events within a specific time interval to know whether to process the current event. It's kind of like debounce (“group” multiple sequential calls within a time period into a single one), but a little more complex, as the timer needs to be reset when the next event is received.
E.g.:
I get a request for Event A at time X for a particular user (U1).
a. If I get a similar Event A from the same user within 5 minutes of time X, I need to reset the timer and keep watching again.
b. If 5 minutes have passed, I need to process Event A.
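Whatever cloud service ends up driving it, the core logic is just a per-user deadline that every new event pushes out by another 5 minutes; only when a deadline is reached without being pushed is the event processed. A minimal single-threaded C++ sketch of that logic (names are illustrative, and calling flush_expired periodically is left to the caller):

#include <chrono>
#include <iostream>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;
constexpr auto kQuietPeriod = std::chrono::minutes(5);

// One deadline per user; each new Event A pushes it 5 minutes out.
std::unordered_map<std::string, Clock::time_point> deadlines;

void on_event_a(const std::string& user)
{
    deadlines[user] = Clock::now() + kQuietPeriod;  // "reset the timer"
}

// Called periodically, or from a timer armed for the earliest deadline.
void flush_expired()
{
    const auto now = Clock::now();
    for (auto it = deadlines.begin(); it != deadlines.end();) {
        if (it->second <= now) {
            std::cout << "process Event A for " << it->first << "\n";
            it = deadlines.erase(it);
        } else {
            ++it;
        }
    }
}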

Related

Scheduler Design? Multiple Events on the same timeline starting at different times

I have multiple objects (Object1, Object2 and Object3) which MAY want to utilize a callback. If an object decides to register for a periodic callback, they all will use a 30 second reset rate. The object will choose when it registers for a callback (and it will then want the callback at that fixed interval of 30 seconds going forward).
If I wanted to give each object its own internal timer (such as a timer on a separate thread), this would be a simple problem. However, each timer would need to be on a separate thread, which would grow too much as my object count grows.
So for example:
at T=10 seconds into runtime, Object 1 registers for a callback. Since the callback occurs every 30 seconds, its next fire event will
be at T=40, then T=70, T=100 etc.
say 5 seconds later (T=15), Object 2 registers for a callback. Meaning its next call is at T=45, T=75, T=105 etc.
Lastly 1 second after Object 2, Object 3 registers for a callback. Its callback should be invoked at T=46 etc.
A dirty solution I have for this is for every object to calculate its delta from the first registered object. So Object 1's delta is 0, Object 2's is 5 and Object 3's is 6. Then, in a constantly running loop, once the 30 seconds have elapsed, I know that Object 1's callback can be processed, and 5 seconds from that point I can then call Object 2's callback, etc.
I don't like that this effectively busy-waits, as a while loop must constantly be running. I guess system sleep calls may not be much different from using semaphores.
Another thought I had was finding the lowest common multiple of the fire intervals. For example, if I knew it was possible that every 3 seconds I may have to fire an event, I would keep track of that.
I think essentially what I am trying to make is some sort of simple scheduler? I'm sure I am hardly the first person to do this.
I am trying to come up with a performant solution. A while loop or a ton of timers on their own threads would make this easy, but that is not a good solution.
Any ideas? Is there a name for this design?
Normally you would use a priority queue, a heap or similar to manage your timed callbacks using a single timer. You check what callback needs to be called next and that is the time you set for the timer to wake you up.
But if all callbacks use a constant 30s repeat then you can just use a queue. New callbacks are added to the end as a pair of callback and (absolute) timestamp, and the next callback to call will always be at the front. Every time you call a callback you add it back to the queue with its timestamp increased by 30s.
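A minimal sketch of that fixed-interval queue in C++ (thread-safe registration while the loop is running is left out, and the names are illustrative):

#include <chrono>
#include <deque>
#include <functional>
#include <thread>

using Clock = std::chrono::steady_clock;

struct Entry {
    std::function<void()> callback;
    Clock::time_point due;   // absolute timestamp of the next invocation
};

// All callbacks repeat every 30 seconds, so a plain FIFO queue stays
// sorted by due time on its own; one sleeping thread is the only "timer".
void run_scheduler(std::deque<Entry>& queue)
{
    constexpr auto period = std::chrono::seconds(30);
    while (!queue.empty()) {
        Entry next = queue.front();
        queue.pop_front();
        std::this_thread::sleep_until(next.due);  // wait for the front entry
        next.callback();
        next.due += period;                       // re-arm 30 s later
        queue.push_back(next);                    // back of the queue keeps the order
    }
}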

How to efficiently handle incoming delayed events on a single timeline?

I want to implement an algorithm that awaits some events and handles them after some delay. Each event has its own predefined delay. The handler may be executed in a separate thread. Issues with CPU throttling, host overload, etc. may be ignored; it's not intended to be a precise real-time system.
Example.
At moment N, an event arrives with a delay of 1 second. We want to handle it at moment N + 1 sec.
At moment N + 0.5 sec, another event arrives with a delay of 0.3 seconds. We want to handle it at moment N + 0.8 sec.
Approaches.
The only straightforward approach that comes to my mind is to use a loop with the minimal possible delay between iterations, say every 10 ms, and check whether any event on our timeline should be handled now. But it's not a good idea, since the delays may vary on a scale from 10 ms to 10 minutes.
Another approach is to have a single thread that sleeps between events. But I can't figure out how to forcefully "wake" it when there is a new event that should be handled between now and the next scheduled wake up.
Also, it's possible to use a thread per event and just sleep, but there may be thousands of simultaneous events, which may effectively lead to running out of threads.
The solution can be language-agnostic, but I would prefer a C++ standard library solution.
I suppose the solution to these problems is, at least on *nix systems, poll or epoll with the help of a timer. It allows you to make the thread sleep until some given event occurs. The given event may be data appearing on stdin or a timer timeout. Since the question was about a general algorithm/idea and the code would take a lot of space, I am giving just pseudocode:
epoll = create_epoll();
timers = vector<timer>{};
while(true) {
    event = epoll.wait_for_event(timers);
    if (event.is_timer_timeout()) {
        t = timers.find_timed_out();
        t.handle_event();
        timers.erase(t);
    } else if (event.is_incoming_stdin_data()) {
        data = stdin.read();
        timers.push_back(create_timer(data));
    }
}
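For reference, here is a rough sketch of the same loop using the actual Linux primitives (timerfd_create plus epoll). It assumes each incoming "event" is simply a delay in milliseconds written to stdin, one per line, and omits all error handling:

#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <ctime>

int main()
{
    int ep = epoll_create1(0);

    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = STDIN_FILENO;
    epoll_ctl(ep, EPOLL_CTL_ADD, STDIN_FILENO, &ev);     // watch stdin for new events

    epoll_event ready[16];
    for (;;) {
        int n = epoll_wait(ep, ready, 16, -1);            // sleep until stdin or a timer fires
        for (int i = 0; i < n; ++i) {
            int fd = ready[i].data.fd;
            if (fd == STDIN_FILENO) {
                char buf[64];
                ssize_t len = read(STDIN_FILENO, buf, sizeof buf - 1);
                if (len <= 0) return 0;
                buf[len] = '\0';
                long delay_ms = atol(buf);                // the event's own delay

                int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
                itimerspec its{};
                its.it_value.tv_sec = delay_ms / 1000;
                its.it_value.tv_nsec = (delay_ms % 1000) * 1000000;
                timerfd_settime(tfd, 0, &its, nullptr);   // one-shot timer for this event

                epoll_event tev{};
                tev.events = EPOLLIN;
                tev.data.fd = tfd;
                epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &tev);
            } else {
                uint64_t expirations;
                read(fd, &expirations, sizeof expirations);
                printf("handling delayed event (timer fd %d)\n", fd);
                close(fd);                                // the one-shot timer is done
            }
        }
    }
}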
Two threads that share a priority queue.
Arrivals thread: wait for an arrival. When an event arrives, calculate the time at which its handler should run and add the handler to the queue with a priority equal to that handler time (the top of the queue will be the next event to be handled).
Handler thread: if now is at or past the time of the handler at the top of the queue, run that handler; otherwise sleep for the clock resolution.
Note: check whether your queue is thread safe. If not, you will have to use a mutex.
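A sketch of that two-thread design, assuming C++11, using a mutex plus a condition_variable so the handler thread can both sleep until the earliest deadline and be woken immediately when a new, earlier event arrives (which also addresses the "how do I forcefully wake it" part of the question):

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Scheduled {
    Clock::time_point when;
    std::function<void()> handler;
    bool operator>(const Scheduled& o) const { return when > o.when; }
};

std::priority_queue<Scheduled, std::vector<Scheduled>, std::greater<Scheduled>> pending;
std::mutex m;
std::condition_variable cv;

// Arrivals thread: compute the absolute handler time and push it.
void schedule(std::chrono::milliseconds delay, std::function<void()> handler)
{
    {
        std::lock_guard<std::mutex> lock(m);
        pending.push({Clock::now() + delay, std::move(handler)});
    }
    cv.notify_one();   // wake the handler thread in case this event is the new earliest
}

// Handler thread: run due handlers, otherwise sleep until the earliest deadline.
void handler_loop()
{
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        if (pending.empty()) {
            cv.wait(lock);
        } else if (Clock::now() >= pending.top().when) {
            auto job = pending.top();
            pending.pop();
            lock.unlock();
            job.handler();             // run outside the lock
            lock.lock();
        } else {
            cv.wait_until(lock, pending.top().when);  // re-checked after any wake-up
        }
    }
}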
This looks simple, but there are a lot of gotchas waiting for the inexperienced. So, I would not recommend coding this from scratch. It is better to use a library. The classic is boost::asio. However, it is beginning to show its age and has way more bells and whistles than are needed. So, personally, I use something more lightweight and coded in C++17 - a non-blocking event waiter class I wrote that you can get from https://github.com/JamesBremner/await. Notice the sample application using this class, which does most of what you require: https://github.com/JamesBremner/await/wiki/Event-Server

Multiple Timers in C++ / MySQL

I've got a service system that gets requests from another system. A request contains information that is stored in the service system's MySQL database. Once a request is received, the server should start a timer that will send a FAIL message to the sender once the allowed time has elapsed.
The problem is, it is a dynamic system that can get multiple requests from the same, or various sources. If a request is received from a source with a timeout limit of 5 minutes, and another request comes from the same source after only 2 minutes, it should be able to handle both. Thus, a timer needs to be enabled for every incoming message. The service is a web-service that is programmed in C++ with the information being stored in a MySQL database.
Any ideas how I could do this?
A way I've seen this often done: Use a SINGLE timer, and keep a priority queue (sorted by target time) of every timeout. In this way, you always know the amount of time you need to wait until the next timeout, and you don't have the overhead associated with managing hundreds of timers simultaneously.
Say at time 0 you get a request with a timeout of 100.
Queue: [100]
You set your timer to fire in 100 seconds.
Then at time 10 you get a new request with a timeout of 50.
Queue: [60, 100]
You cancel your timer and set it to fire in 50 seconds.
When it fires, it handles the timeout, removes 60 from the queue, sees that the next time is 100, and sets the timer to fire in 40 seconds. Say you get another request with a timeout of 100, at time 80.
Queue: [100, 180]
In this case, since the head of the queue (100) doesn't change, you don't need to reset the timer. Hopefully this explanation makes the algorithm pretty clear.
Of course, each entry in the queue will need some link to the request associated with the timeout, but I imagine that should be simple.
Note however that this all may be unnecessary, depending on the mechanism you use for your timers. For example, if you're on Windows, you can use CreateTimerQueue, which I imagine uses this same (or very similar) logic internally.
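If you implement the single timer yourself (rather than via CreateTimerQueue), the insertion side reduces to "re-arm only when the head of the queue changes". A small sketch of just that rule, with a condition_variable standing in for the timer; the worker loop is omitted (it would simply wait_until the earliest deadline, as in the earlier sketches), and the names are illustrative:

#include <chrono>
#include <condition_variable>
#include <map>
#include <mutex>

using Clock = std::chrono::steady_clock;

std::multimap<Clock::time_point, int /* request id */> deadlines;  // sorted by target time
std::mutex m;
std::condition_variable timer_cv;   // the single "timer" the worker thread waits on

void add_timeout(int request_id, std::chrono::seconds timeout)
{
    std::lock_guard<std::mutex> lock(m);
    const auto due = Clock::now() + timeout;
    const bool new_head = deadlines.empty() || due < deadlines.begin()->first;
    deadlines.emplace(due, request_id);
    if (new_head)
        timer_cv.notify_one();      // equivalent to cancelling and re-arming the timer
}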

Scheduling Task Based on Date time

I am working on a private video network where I have to schedule tasks based on the following parameters. There is a client portal, a server and a gateway.
Through the portal a user can request streaming of a video.
A user can also schedule streaming for some future time. Each task has a task ID.
A task is scheduled based on the following date/time parameters:
start time
end time
Repeat (every day, just once, a particular day)
start date
end date
Now at the gateway I need to add logic to implement the scheduled tasks.
I am exploring Waitable Timer Objects and CreateWaitableTimerEx.
I am a bit confused about whether it is possible to implement this feature using them.
I am using C++ and MFC, and can't use a third-party library.
I need suggestions on how to implement this.
There are dozens of ways to design this. It all depends on what you want to do and what the specific requirements are.
In a basic design I'd create an additional field called "next run time", which would be calculated from the start time, the frequency and the previous end time (if any). Then I'd dump all the tasks in a queue sorted by this field.
The main scheduling loop will pick up the first queue item and create a suspended thread for that specific task. Now just calculate the time difference to the first item's 'next run time' and sleep for that period. When you wake up, resume the thread, pick the next queue item and repeat.
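A rough sketch of that "next run time" queue using std::chrono (the suspended-thread handling, the end-date check and the "particular day" repeat are left out, and all names are illustrative):

#include <chrono>
#include <queue>
#include <thread>
#include <vector>

using Clock = std::chrono::system_clock;

enum class Repeat { Once, EveryDay, ParticularDay };

struct Task {
    int id;
    Clock::time_point next_run;   // the "next run time" field
    Repeat repeat;
};

struct Later {
    bool operator()(const Task& a, const Task& b) const { return a.next_run > b.next_run; }
};

void scheduler_loop(std::priority_queue<Task, std::vector<Task>, Later>& tasks)
{
    while (!tasks.empty()) {
        Task t = tasks.top();
        tasks.pop();
        std::this_thread::sleep_until(t.next_run);   // sleep until the earliest task is due
        // start (or resume) the streaming task for t.id here
        if (t.repeat == Repeat::EveryDay) {
            t.next_run += std::chrono::hours(24);    // recompute "next run time"
            tasks.push(t);
        }
    }
}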
I would just create a timer thread callback loop that checks the time every minute and executes your task on the specified schedule.

SetTimer() pitfalls

I have a windowless timer (no WM_TIMER) which fires a callback function only once, when a given time period has elapsed. It is implemented as a SetTimer()/KillTimer() pair. The time periods are small enough: 100-300 milliseconds.
Is it cheap enough (I mean performance-wise) to call the SetTimer()/KillTimer() pair for every such short time interval?
What if I have 100 such timers which periodically call SetTimer()/KillTimer()? How many Windows timer objects may exist simultaneously in the system?
That is the question:
Use a bunch of such timer objects and rely on Windows having a good implementation of timers, or create one Windows timer object that ticks every, say, 30 milliseconds, and subscribe all of the custom 100-300 millisecond one-shot timers to it?
Thanks
The problem with timer messages, as you are trying to use them, is that they are low-priority messages. Actually, they are fake messages. Timers are associated with an underlying kernel timer object - when the message loop detects the kernel timer is signalled, it simply marks the current thread's message queue with a flag indicating that the next call to GetMessage - WHEN THERE ARE NO OTHER MESSAGES TO PROCESS - should synthesise a WM_TIMER message just in time and return it.
With potentially lots of timer objects, it's not at all obvious that the system will signal timer messages for all the timers fairly and equally, and any system load can entirely prevent the generation of WM_TIMER messages for long periods of time.
If you are in control of the message loop, you could maintain your own list of timer events (along with GetTickCount timestamps for when they should occur) and use MsgWaitForMultipleObjects - instead of GetMessage - to wait for messages. Use its timeout parameter to provide the smallest interval - from now - until the next timer should be signalled, so it will return from waiting for messages each time you have a timer to process.
And/or you could use waitable timers - either on a GUI thread with MsgWaitForMultipleObjects, or just on a worker thread - to access the lower-level timing functionality directly.
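A rough sketch of the MsgWaitForMultipleObjects variant described above; the timer list, the NextTimerDelay() helper and everything else here are illustrative, and error handling is omitted:

#include <windows.h>
#include <functional>
#include <vector>

struct UserTimer {
    DWORD due;                       // GetTickCount() value at which it should fire
    std::function<void()> callback;
};

std::vector<UserTimer> timers;       // maintained by the application

// Milliseconds until the earliest timer, or INFINITE if there are none.
DWORD NextTimerDelay()
{
    if (timers.empty()) return INFINITE;
    DWORD now = GetTickCount(), soonest = INFINITE;
    for (const auto& t : timers) {
        DWORD remaining = (LONG)(t.due - now) <= 0 ? 0 : t.due - now;
        if (remaining < soonest) soonest = remaining;
    }
    return soonest;
}

void RunMessageLoop()
{
    for (;;) {
        // Wait for either a message or the next timer deadline.
        MsgWaitForMultipleObjects(0, nullptr, FALSE, NextTimerDelay(), QS_ALLINPUT);

        // Fire every timer that is now due, regardless of why we woke up.
        DWORD now = GetTickCount();
        for (auto it = timers.begin(); it != timers.end();) {
            if ((LONG)(it->due - now) <= 0) { it->callback(); it = timers.erase(it); }
            else ++it;
        }

        // Drain any pending window messages.
        MSG msg;
        while (PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT) return;
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}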
The biggest SetTimer() pitfall is that it is actually a USER object (despite the fact that it's not listed in the MSDN USER objects list), hence it falls under the Windows USER objects limitation: by default a maximum of 10,000 objects per process and 65,535 objects per session (all running processes).
This can easily be proven by a simple test - just call SetTimer() (the parameters don't matter; windowed and windowless timers act the same way) and watch the USER objects count increase in Task Manager.
Also see the ReactOS ntuser.h source and this article. Both of them state that TYPE_TIMER is one of the USER handle types.
So beware - creating a bunch of timers could exhaust your system resources and make your process crash, or even make the entire system unresponsive.
Here are the details that I feel you're actually after while asking this question:
SetTimer() will first scan the non-kernel timer list (doubly linked list) to see if the timer ID already exists. If the timer exists, it will simply be reset. If not, an HMAllocObject call occurs and creates space for the structure. The timer struct will then be populated and linked to the head of the list.
This will be the total overhead for creating each of your 100 timers. That's exactly what the routine does, save for checking the elapse parameter against its minimum and maximum values.
As far as timer expiration goes, the timer list is rescanned (approximately) every time the smallest timer duration seen during the last scan elapses. (What really happens is that a kernel timer is set to the duration of the smallest user timer found, and this kernel timer wakes up the thread that checks for user timer expirations, which in turn wakes the respective threads by setting a flag in their message queue status.)
For each timer in the list, the delta between the last time (in ms) the timer list was scanned and the current time (in ms) is subtracted from the timer's remaining time. When one is due (<= 0 remaining), it's flagged as "ready" in its own struct, and a pointer to the thread info is read from the timer struct and used to wake the respective thread by setting the thread's QS_TIMER flag. It then increments your message queue's CurrentTimersReady counter. That's all timer expiration does. No actual messages are posted.
When your main message pump calls GetMessage(), when no other messages are available, GetMessage() checks for QS_TIMER in your thread's wake bits, and if set -- generates a WM_TIMER message by scanning the full user timer list for the smallest timer in the list flagged READY and that is associated with your thread id. It then decrements your thread CurrentTimersReady count, and if 0, clears the timer wake bit. Your next call to GetMessage() will cause the same thing to occur until all timers are exhausted.
One shot timers stay instantiated. When they expire, they're flagged as WAITING. The next call to SetTimer() with the same timer ID will simply update and re-activate the original. Both one shot and periodic timers reset themselves and only die with KillTimer or when your thread or window are destroyed.
The Windows implementation is very basic, and I think it'd be trivial for you to write a more performant implementation.