I am using alarm[0] in the Draw event.
When I set alarm[0] = 1, the code in the alarm event executes 70 times; but when I set alarm[0] = 2, the code in the alarm event executes only once, which is what I expect.
I cannot figure out why. The game_speed is 60.
I am trying to trigger an alert when an event does not occur within a time window after a previous event arrived. I know there are a lot of similar questions out there, but nothing seems to match what I am looking for.
Let's say I have a time window of 20 seconds and I want to be alerted when any of these conditions is satisfied:
When the rule is enabled (not just created), start a 20-second time window and send an alert if the event did not occur in those first 20 seconds.
An event occurred at the 15th second: wait for a 20-second time window and send an alert if no further event occurred.
An event occurred at the 15th second and again at the 16th second: start the time window from the 16th second and wait for the new event to occur. (Basically, start the time window from the latest event that happened.)
I have tried a couple of things like:
from not employees[eid == 'E1234'] for 20 sec
select eid, 'MissingEvent' as alert
insert into employees_alerts;

from f1=employees[eid == 'E1234'] -> not employees[f1.eid == eid] for 20 sec
select eid, 'MissingEvent' as alert
insert into employees_alerts;
There are a couple of problems with these queries:
The "for" syntax requires the SiddhiAppRuntime to restart in order to work. Hence, when the rule is disabled and then enabled, the query does not work.
It works only the first time. E.g., when my events are generated at these times:
a. emp_event1 at 21st sec
b. emp_event2 at 25th sec after emp_event1
c. emp_event3 at 5th sec after emp_event2
I get an alert at the first 20th second (before emp_event1, which is right) and an alert 20 seconds after emp_event1 (before emp_event2, which is also right).
However, I don't get an alert 20 seconds after emp_event3, even though no new matching event was sent in the 20-second time window after emp_event3 happened.
How can I change my query to alert this way?
I also tried the approach below by following
https://docs.wso2.com/display/CEP400/Sample+0111+-+Detecting+non-occurrences+with+Patterns and "Siddhi check if an event does not arrive within a specified time window?":
from employees[eid == 'E1234']#window.time(20 sec)
select *
insert expired events into expiredEventsStream;

from every f1 = employees[eid == 'E1234'] -> f2 = employees[f1.eid == eid]
    or f3 = expiredEventsStream[f1.eid == eid]
select f1.eid as eid, f2.eid as newEid
insert into filter_stream;

from filter_stream[(newEid is null)]
select eid, 'Missing event' as alert
insert into employees_alerts;
This gives me the same output as above, with two differences:
It does not use the "for" syntax, and as a result it keeps working when the query is disabled and enabled.
The alerting time has increased: the first alert now comes at the 40th second, since it waits for the expiredEventsStream. This worries me for larger time windows such as 24 hours.
I have also tried to come up with queries involving count(), and a few other things.
Is there a way to avoid the "for" syntax and start a time window after the latest event satisfying an eid condition?
I was able to fix the multiple alert issue by adding the "every" keyword in the query.
So the new queries look like:
from not employees[eid == 'E1234'] for 20 sec
select eid, 'MissingEvent' as alert
insert into employees_alerts;

from every f1=employees[eid == 'E1234'] -> not employees[f1.eid == eid] for 20 sec
select eid, 'MissingEvent' as alert
insert into employees_alerts;
Disabling and enabling this query now works, with a minor issue. With the example given in the question:
At "create", it alerts 3 times (at the 20th second before event1 arrives, at the 20th second after the first event/before event2 arrives, and at the 20th second after event3 arrives), which is right.
However, when the rule is disabled and re-enabled, I only get 2 alerts (20 seconds after event1 arrives and 20 seconds after event3). It misses the alert at the first 20th second, before event1 arrives.
This can, however, be overcome by not disabling and enabling the query, but by just creating, deleting, and creating it again (which is a hack).
Would be nice to have a better solution for this, though.
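For reference, here is a minimal sketch of the create/delete/create hack, written in Scala against what I believe is the Siddhi 4.x Java API (SiddhiManager.createSiddhiAppRuntime, StreamCallback); the stream definition and harness names are illustrative, and only the two queries are taken from above:

import org.wso2.siddhi.core.SiddhiManager
import org.wso2.siddhi.core.event.Event
import org.wso2.siddhi.core.stream.output.StreamCallback

object MissingEventDemo extends App {
  val manager = new SiddhiManager()

  // Hypothetical stream definition plus the two queries from above.
  val siddhiApp =
    """
      |define stream employees (eid string);
      |
      |from not employees[eid == 'E1234'] for 20 sec
      |select eid, 'MissingEvent' as alert
      |insert into employees_alerts;
      |
      |from every f1=employees[eid == 'E1234'] -> not employees[f1.eid == eid] for 20 sec
      |select eid, 'MissingEvent' as alert
      |insert into employees_alerts;
      |""".stripMargin

  // Build a fresh runtime and wire up the alert stream.
  def start() = {
    val runtime = manager.createSiddhiAppRuntime(siddhiApp)
    runtime.addCallback("employees_alerts", new StreamCallback {
      override def receive(events: Array[Event]): Unit = events.foreach(println)
    })
    runtime.start()
    runtime
  }

  var runtime = start()
  runtime.getInputHandler("employees").send(Array[AnyRef]("E1234"))

  // "Disabling" and "enabling" by deleting and recreating the runtime
  // also restarts the 'for' windows, which is the point of the hack.
  runtime.shutdown()
  runtime = start()
}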
When I set up an alarm for appsCompleted < 12 for 1 out of 1 datapoints, it says that the alarm will go off in the following scenario:
This alarm will trigger when the blue line goes below the red line for 1 datapoints within 1 day
But it does not seem to raise the threshold by adding up values per day. That is, the next day's threshold is still 12 and not 24. I suspect that the threshold (red line) is crossed on the first day itself, and that is why no alarms are triggered even in case of failure.
State changed to OK at [DATE]. Reason: Threshold Crossed: 1 out of the last 1 datapoints [Timestamp] was not less than the threshold (12.0) (minimum 1 datapoint for ALARM -> OK transition).
If I increase the m value, the console forces the n value to increase as well, and vice versa. How do I set up an alarm that triggers if 12 jobs didn't complete in a day?
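For what it's worth, the "12 per day" semantics can be expressed by summing the metric over a single one-day period, so the threshold of 12 applies to the whole day's total rather than to each datapoint. A sketch, assuming the AWS SDK for Java v1 (the namespace and metric name are placeholders):

import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder
import com.amazonaws.services.cloudwatch.model.{ComparisonOperator, PutMetricAlarmRequest, Statistic}

object DailyJobAlarm extends App {
  val cloudWatch = AmazonCloudWatchClientBuilder.defaultClient()

  cloudWatch.putMetricAlarm(new PutMetricAlarmRequest()
    .withAlarmName("appsCompleted-below-12-per-day")
    .withNamespace("MyApp")                  // placeholder namespace
    .withMetricName("appsCompleted")
    .withStatistic(Statistic.Sum)            // add up completions over the period
    .withPeriod(86400)                       // one period = one day
    .withEvaluationPeriods(1)                // 1 out of 1 datapoints
    .withThreshold(12.0)
    .withComparisonOperator(ComparisonOperator.LessThanThreshold)
    .withTreatMissingData("breaching"))      // a day with no data should also alarm
}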
I have an Akka Stream and I want the stream to send messages downstream approximately every second.
I tried two ways to solve this problem. The first way was to make the producer at the start of the stream send messages only once every second, when a Continue message comes into the actor:
// When a Continue message is received in the ActorPublisher,
// do the work, then schedule the next Continue one second later.
if (totalDemand > 0) {
  import context.dispatcher // ExecutionContext needed by scheduleOnce
  import scala.concurrent.duration._
  context.system.scheduler.scheduleOnce(1.second, self, Continue)
}
This works for a short while, but then a flood of Continue messages appears in the ActorPublisher actor. I assume (a guess, not certain) they come from downstream via backpressure requesting messages, as the downstream can consume quickly while the upstream is not producing at a fast rate. So this method failed.
The other way I tried was via backpressure control: I used a MaxInFlightRequestStrategy on the ActorSubscriber at the end of the stream to limit the number of messages to 1 per second. This works, but messages come in approximately three or so at a time, not one at a time. It seems the backpressure control doesn't immediately change the rate of incoming messages, or messages were already queued in the stream and waiting to be processed.
So the problem is: how can I have an Akka Stream that processes only one message per second?
I discovered that MaxInFlightRequestStrategy is a valid way to do it, but the batch size should be set to 1; its batch size defaults to 5, which was causing the problem I saw. It is also an over-complicated way to solve the problem, now that I am looking at the submitted answer here.
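For reference, a minimal sketch of that fix against the (since-deprecated) akka.stream.actor.ActorSubscriber API; the subscriber class and its in-flight bookkeeping are illustrative:

import akka.stream.actor.{ActorSubscriber, ActorSubscriberMessage, MaxInFlightRequestStrategy}

class OnePerBatchSubscriber extends ActorSubscriber {
  import ActorSubscriberMessage._

  private var inFlight = 0

  override val requestStrategy = new MaxInFlightRequestStrategy(max = 1) {
    override def inFlightInternally: Int = inFlight
    // The default batchSize of 5 is what let several elements through
    // at once; request new elements one at a time instead.
    override def batchSize: Int = 1
  }

  def receive: Receive = {
    case OnNext(msg) =>
      inFlight += 1
      // ... process msg, then decrement inFlight when the work is done ...
      inFlight -= 1
    case OnComplete =>
      context.stop(self)
  }
}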
You can either put your elements through a throttling flow, which will backpressure a fast source, or you can use a combination of tick and zip.
The first solution would be like this:
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ThrottleMode}
import akka.stream.scaladsl.{Flow, Sink, Source}
import scala.concurrent.duration._
import scala.util.Random

implicit val system = ActorSystem("throttling-example")
implicit val materializer = ActorMaterializer()

val veryFastSource =
  Source.fromIterator(() => Iterator.continually(Random.nextLong() % 10000))

val throttlingFlow = Flow[Long].throttle(
  // how many elements you allow...
  elements = 1,
  // ...per this unit of time
  per = 1.second,
  maximumBurst = 0,
  // you can also set this to Enforcing, but then your
  // stream will fail if the rate is exceeded
  mode = ThrottleMode.Shaping
)

veryFastSource.via(throttlingFlow).runWith(Sink.foreach(println))
The second solution would be like this:
val veryFastSource =
Source.fromIterator(() => Iterator.continually(Random.nextLong() % 10000))
val tickingSource = Source.tick(1.second, 1.second, 0)
veryFastSource.zip(tickingSource).map(_._1).runWith(Sink.foreach(println))
I have an app that uses SQS to queue jobs. Ideally I want every job to be completed, but some are going to fail. Sometimes re-running them will work, and sometimes they will just keep failing until the retention period is reached. I want to keep failing jobs in the queue as long as possible, to give them the maximum possible chance of success, so I don't want to set a maxReceiveCount. But I do want to detect when a job reaches the MessageRetentionPeriod limit, as I need to send an alert when a job fails completely. Currently I have the max retention at 14 days, but some jobs will still not be completed by then.
Is there a way to detect when a job is about to expire, and from there send it to a deadletter queue for additional processing?
Before you follow my advice below (and assuming I've done the math for the periods correctly): you will be better off enabling a redrive policy on the queue if you check for messages less often than every 20 minutes and 9 seconds.
SQS's "redrive policy" allows you to migrate messages to a dead letter queue after a threshold number of receives. The maximum receive count that AWS allows for this is 1000, and over 14 days that works out to about 20 minutes per receive (14 days ≈ 20,160 minutes; 20,160 / 1000 ≈ 20.16 minutes). For simplicity, that assumes your job never misses an attempt to read queue messages; you can tweak the numbers to build in a tolerance for failure.
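The redrive policy itself is just a queue attribute. A sketch of enabling it, assuming the AWS SDK for Java v1 (the queue URL and dead letter queue ARN are placeholders):

import java.util.Collections
import com.amazonaws.services.sqs.AmazonSQSClientBuilder
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest

object EnableRedrive extends App {
  val sqs = AmazonSQSClientBuilder.defaultClient()

  // After 1000 receives (the AWS maximum), SQS moves the message
  // to the dead letter queue automatically.
  val redrivePolicy =
    """{"maxReceiveCount":"1000","deadLetterTargetArn":"arn:aws:sqs:us-east-1:000000000000:deadletter"}"""

  sqs.setQueueAttributes(new SetQueueAttributesRequest()
    .withQueueUrl("https://sqs.amazonaws.com/000000000000/my-queue")
    .withAttributes(Collections.singletonMap("RedrivePolicy", redrivePolicy)))
}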
If you check more often than that, you'll want to implement the solution below.
You can check for this "cutoff date" (when the job is about to expire) as you process the messages, and send messages to the deadletter queue if they've passed the time when you've given up on them.
Pseudocode to add to your current routine:
1. Call GetQueueAttributes to get the count, in seconds, of your queue's MessageRetentionPeriod.
2. Call ReceiveMessage to pull messages off of the queue. Make sure to explicitly request that the SentTimestamp attribute is visible.
3. For each message:
   - Find the message's expiration time by adding the message retention period to the sent timestamp.
   - Create your cutoff date by subtracting your desired amount of time from the message's expiration time.
   - Compare the cutoff date with the current time. If the cutoff date has passed:
     - Call SendMessage to send the message to the dead letter queue.
     - Call DeleteMessage to remove the message from the queue you are processing.
   - If the cutoff date has not passed, process the job as normal.
Here's an example implementation in PowerShell:
$queueUrl = "https://sqs.amazonaws.com/0000/my-queue"
$deadLetterQueueUrl = "https://sqs.amazonaws.com/0000/deadletter"
# Get the message retention period in seconds
$messageRetentionPeriod = (Get-SQSQueueAttribute -AttributeNames "MessageRetentionPeriod" -QueueUrl $queueUrl).Attributes.MessageRetentionPeriod
# Receive messages from our queue.
$queueMessages = @(Receive-SQSMessage -QueueUrl $queueUrl -WaitTimeSeconds 5 -AttributeNames SentTimestamp)
foreach($message in $queueMessages)
{
# The sent timestamp is in epoch time.
$sentTimestampUnix = $message.Attributes.SentTimestamp
# For powershell, we need to do some quick conversion to get a DateTime.
$sentTimestamp = ([datetime]'1970-01-01 00:00:00').AddMilliseconds($sentTimestampUnix)
# Get the expiration time by adding the retention period (in seconds) to the sent time.
$expirationTime = $sentTimestamp.AddSeconds($messageRetentionPeriod)
# I want my cutoff date to be one hour before the expiration time.
$cutoffDate = $expirationTime.AddHours(-1)
# Check if the cutoff date has passed.
if((Get-Date) -ge $cutoffDate)
{
# Cutoff Date has passed, move to deadletter queue
Send-SQSMessage -QueueUrl $deadLetterQueueUrl -MessageBody $message.Body
Remove-SQSMessage -QueueUrl $queueUrl -ReceiptHandle $message.ReceiptHandle -Force
}
else
{
# Cutoff Date has not passed. Retry job?
}
}
This will add some overhead to every message you process. It also assumes that your message handler will receive the message in between the cutoff time and the expiration time, so make sure your application polls often enough to receive the message.
I have a unit test that tests BufferWithTime. I seem to be getting inconsistent results when values are emitted at exactly the point the buffer is due to close.
var scheduler = new TestScheduler();
var source = scheduler.CreateColdObservable(
new Recorded<Notification<int>>(50, new Notification<int>.OnNext(1)),
new Recorded<Notification<int>>(100, new Notification<int>.OnNext(2)),
new Recorded<Notification<int>>(150, new Notification<int>.OnNext(3)),
new Recorded<Notification<int>>(200, new Notification<int>.OnNext(4)),
new Recorded<Notification<int>>(250, new Notification<int>.OnNext(5)),
new Recorded<Notification<int>>(300, new Notification<int>.OnNext(6)),
new Recorded<Notification<int>>(350, new Notification<int>.OnNext(7)),
new Recorded<Notification<int>>(400, new Notification<int>.OnNext(8)),
new Recorded<Notification<int>>(450, new Notification<int>.OnNext(9)),
new Recorded<Notification<int>>(450, new Notification<int>.OnCompleted()));
var results = scheduler.Run(() => source
.BufferWithTime(TimeSpan.FromTicks(150), scheduler));
The results I get back from this are essentially:
results[0] = [1,2]
results[1] = [3,4,5,6]
results[2] = [7,8,9]
My question is: why are there only two items in the first buffer and four in the second? I would expect that when a source emits at the same time a buffer boundary occurs, the values either always go into the closing buffer or are always queued for the next buffer. Have I just stumbled upon a bug?
Based on responses on the MSDN forums this isn't a bug. You can read their answers here.
Basically, when something is scheduled to execute at exactly the same time as something else, the order of scheduling takes precedence, i.e. they are queued. Looking at the order of scheduling in the example above, you can see why I'm getting this behaviour.
1. BufferWithTime schedules a window to open at 0 and close at 150.
2. The cold source is then subscribed to, which schedules all the other notifications. At this point, the value to be emitted at 150 is queued behind the closing of the window.
3. At time 150 the window closes first (emitting the first buffer of two values). The next window is opened and scheduled to close at 300. The value scheduled to be emitted at 150 is added to the second buffer.
4. At time 300, the value 6 is emitted first (as it was scheduled when the source was subscribed to), so it is added to the second buffer. BufferWithTime then closes the window (emits the buffer) and opens a new one scheduled to close at 450.
The cycle then continues consistently.