What are the GameMaker alarm limitations? - alarm

I'm a little confused about this one. Is it a limit of eleven alarms in my game? Or in an object? Or in an instance of an object?
Thanks a million.

Every instance of an object in GameMaker has 12 alarms that can be used (indexed 0 through 11).
PGmath's suggestion:
If you really need more alarms you can make your own by having a variable which you manually decrement by 1 each step, then test whether it equals 0 to perform an action.
This information is from the GML Wiki on Alarms. You may also find the associated FAQ for Alarms helpful.
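A minimal sketch of that suggestion in GML (the variable name and the two-second example are mine, not from the wiki; room_speed assumes the classic GML timing model):
// Create event: -1 means "no custom alarm pending"
my_custom_alarm = -1;

// Somewhere in your code: arm the alarm for roughly two seconds
my_custom_alarm = room_speed * 2;

// Step event: count down and fire when the counter reaches 0
if (my_custom_alarm > 0)
{
    my_custom_alarm -= 1;
    if (my_custom_alarm == 0)
    {
        // do whatever the alarm event would have done
        my_custom_alarm = -1;
    }
}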

Related

How does Stabilization Window work with other scale down policies in Kubernetes Horizontal Pod Autoscaling?

I was trying to understand Horizontal Pod Autoscaling from the documentation. The initial examples explaining how the different attributes configure the scaling behavior were clear.
However, the section called default behavior gives a very counter-intuitive example for the scale-down configuration.
Here's just the scale down part of the scaling behavior:
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 100
      periodSeconds: 15
and here's their explanation:
For scaling down the stabilization window is 300 seconds (or the value of the --horizontal-pod-autoscaler-downscale-stabilization flag if provided). There is only a single policy for scaling down which allows a 100% of the currently running replicas to be removed which means the scaling target can be scaled down to the minimum allowed replicas.
If I understand correctly, stabilization essentially looks at the desired states computed over the immediately preceding window, whose length is configured via the stabilization flag. Meanwhile, the policies attribute right below stabilization, in the code block quoted above, describes radically different behavior.
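For reference, here is a sketch of what I would naively expect a more conservative scale-down block to look like, with several policies plus a selectPolicy choosing between them (the values are made up by me, not taken from the docs; my understanding is that selectPolicy: Min picks whichever policy allows the smaller change, but that is part of what I'm unsure about):
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300
    policies:
    - type: Percent
      value: 10
      periodSeconds: 60
    - type: Pods
      value: 2
      periodSeconds: 60
    selectPolicy: Min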
So I have some questions:
What exactly is the rolling maximum referred to under the stabilization window section of the documentation? This question stems from the following:
a) From the explanation given in the docs, why does the desired state have to be inferred? Isn't the desired state supposed to be a fixed threshold that the current state either falls below or exceeds?
b) Why would there be a need to average over anything other than the previous state (before the current state) or the previous incoming traffic?
How exactly does stabilization implement a rolling maximum over a period of 300 seconds if, at the same time, there is another policy that allows scaling down all the way to the minimum allowed replicas within a drastically shorter duration of 15 seconds?
I know my questions might reflect an incomplete understanding, but please do help me get the necessary intuition to work with HPA. Thanks!

How do I query Prometheus for the timeseries that was updated last?

I have 100 instances of a service that use one database. I want them to export a Prometheus metric with the number of rows in a specific table of this database.
To avoid hitting the database with 100 queries at the same time, I periodically elect one of the instances to do the measurement and set a Prometheus gauge to the number obtained. Different instances may be elected at different times. Thus, each of the 100 instances may have its own value of the gauge, but only one of them is “current” at any given time.
What is the best way to pick only this “current” value from the 100 gauges?
My first idea was to export two gauges from each instance: the actual measurement and its timestamp. Then perhaps I could take the max of the timestamp and join it back to the actual metric with the and operator. But I can't figure out how to do this in PromQL, because max erases the instance label I would need to join on.
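For concreteness, this is roughly how I would export the two gauges (a sketch assuming a Python service using prometheus_client; the metric names and port are mine):
from prometheus_client import Gauge, start_http_server
import time

# Both gauges are exported by every instance, but only the elected one updates them.
my_rows_count = Gauge("my_rows_count", "Rows in the table, as measured by the elected instance")
my_rows_count_timestamp = Gauge("my_rows_count_timestamp", "Unix time of this instance's last measurement")

def record_measurement(n_rows):
    # Called only on the instance that won the election for this round.
    my_rows_count.set(n_rows)
    my_rows_count_timestamp.set(time.time())

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for Prometheus to scrape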
My second idea was to reset the gauge to −1 (some sentinel value) at some time after the measurement. But this looks brittle, because if I don’t synchronize everything tightly, the “current” gauge could be reset before or after the “new” one is set, causing gaps or overlaps. Similar considerations go for explicitly deleting the metric and for exporting it with an explicit timestamp (to induce staleness).
I figured out the first idea (not tested yet):
avg(my_rows_count and on(instance) topk(1, my_rows_count_timestamp))
avg could just as well be max or min; it only serves to erase the instance label from the final result.
last_over_time should do the trick
last_over_time(my_rows_count[1m])
given only one of them is “current” at any given time, like you said.

AWS CloudWatch - Creating a metric from different datapoints in time

Is it possible to define a CloudWatch metric as the difference between the same metric at two consecutive data points in time?
I need to measure how many objects have been put into an S3 bucket in a given time period, so I would take the difference of NumberOfObjects over this time window. PS: I couldn't find any "New objects" metric (which is not the same as PutRequests).
You can use the 'DIFF' function.
It returns the difference between each value in the time series and the preceding value from that time series.
Expression:
DIFF(m1)
I do not have much data to test with, but in this example I had added 2 new objects, and using that expression shows the new objects added that day.
Reference:
Functions supported for metric math
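As a rough sketch of how the same expression could be queried outside the console (the bucket name, dates, statistic, and period here are placeholders; I'm assuming the NumberOfObjects storage metric under AWS/S3 with the AllStorageTypes dimension):
aws cloudwatch get-metric-data \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-08T00:00:00Z \
  --metric-data-queries '[
    {"Id": "m1",
     "MetricStat": {"Metric": {"Namespace": "AWS/S3",
                               "MetricName": "NumberOfObjects",
                               "Dimensions": [{"Name": "BucketName", "Value": "my-bucket"},
                                              {"Name": "StorageType", "Value": "AllStorageTypes"}]},
                    "Period": 86400,
                    "Stat": "Average"},
     "ReturnData": false},
    {"Id": "e1", "Expression": "DIFF(m1)", "Label": "NewObjectsPerDay"}
  ]'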

Monitor that lambda executes in NewRelic

I'm trying to monitor in New Relic whether my Lambda has been executed within the last 25 hours, and alert if it hasn't.
I have the following NRQL which gives me the graph I want to see:
SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample WHERE provider.resource = 'my_lambda_name'
I then just want to say that if it dips below 1 for 1500 minutes (25 hours) then alert, but NR only allows me to set an alarm for 120 minutes. Any tips on how to get around this?
Interesting question. As I have seen on the New Relic discussion page (the Explorers Hub), there might be a solution for your task.
Can you please review this link:
https://discuss.newrelic.com/t/relic-solution-extending-the-functionality-of-nrql-alert-conditions-beyond-a-single-minute/75441
If you think about this for a moment, you might see how NRQL queries using percentile or stddev are a lot less useful than they seem, when used in an alert condition. After all, if you calculate the standard deviation over an hour (or 24 hours), that can be meaningful. But stddev(duration), or percentile(duration,95) calculated over only 60 seconds is less meaningful.
I think that limit is 24 hours, but I haven't tested it yet.
Hope this helps; I will try to give it a go as well to see whether this works.
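Before wiring up the alert condition, one quick sanity check is to run your query over the full window in the query builder (a sketch reusing the names from your question; this only checks the data, not the alerting limit itself):
SELECT sum(`provider.invocations.Sum`) FROM ServerlessSample WHERE provider.resource = 'my_lambda_name' SINCE 25 hours ago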

Why does AWS Cloudwatch use an Evaluation Range when determining alarm state with missing data points?

From the docs:
No matter what value you set for how to treat missing data, when an alarm evaluates whether to change state, CloudWatch attempts to retrieve a higher number of data points than specified by Evaluation Periods. The exact number of data points it attempts to retrieve depends on the length of the alarm period and whether it is based on a metric with standard resolution or high resolution. The timeframe of the data points that it attempts to retrieve is the evaluation range.
The docs go on to give an example of an alarm with 'EvaluationPeriods' and 'DatapointsToAlarm' set to 3. They state that CloudWatch chooses the 5 most recent data points. Part of my question is: where are they getting 5? It's not clear from the docs.
The second part of my question is, why have this behavior at all (or at least, why have it by default)? If I set my evaluation period to 3, my Datapoints to Alarm to 3, and tell Cloudwatch to 'TreatMissingData' as 'breaching,' I'm going to expect 3 periods of missing data to trigger an alarm state. This doesn't necessarily happen, as illustrated by an example in the docs.
I had the same questions. As best I can tell, the 5 can be explained if I am thinking about standard collection intervals vs standard resolution correctly. In other words, if we assume a standard collection interval of 5 minutes and a standard 1-minute resolution, then within the 5 minutes of the collection interval, 5 separate data points are collected. The example states you need 3 data points over 3 evaluation periods, which is less than the 5 data points CloudWatch has collected. CloudWatch would then have all the data points it needs within the 5-data-point evaluation range defined by a single collection.
As an example, if 4 of the 5 expected data points are missing from the collection, you have one defined data point and thus need 2 more within the evaluation range to reach the three needed for alarm evaluation. These 2 (not the 4 that are actually missing from the collection) are considered the "missing" data points in the documentation - I find this confusing. The tables in the AWS documentation provide examples for how the different treatments of the "missing" 2 data points impact the alarm evaluations.
Regardless of whether this is the correct interpretation, this could be better explained in the documentation.
I also agree that this behavior is unexpected, and the fact that you can't configure it is quite frustrating. However, there does seem to be an easy workaround depending on your use case.
I also wanted the same behavior as you specified; i.e. a missing data point is a breaching data point plain and simple:
If I set my evaluation period to 3, my Datapoints to Alarm to 3, and tell Cloudwatch to 'TreatMissingData' as 'breaching,' I'm going to expect 3 periods of missing data to trigger an alarm state.
I had a use case which is basically like a push-style health monitor. We needed a particular on-premises service to report a "healthy" metric daily to CloudWatch, and an alarm in case this report didn't come through due to network issues or anything disruptive. Semantically, missing data is the same as reporting a metric of value 0 (the "healthy" metric is value 1).
So I was able to use metric math's FILL function to replace every missing data point with 0. Setting a 1-out-of-1 alarm that triggers on < 1 on this new expression provides exactly the needed behavior without involving any kind of "missing data".
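A sketch of that alarm using the CLI (the namespace, metric name, dimensions, and period here are my assumptions, not from my original setup):
aws cloudwatch put-metric-alarm \
  --alarm-name "daily-health-report-missing" \
  --comparison-operator LessThanThreshold \
  --threshold 1 \
  --evaluation-periods 1 \
  --datapoints-to-alarm 1 \
  --metrics '[
    {"Id": "m1",
     "MetricStat": {"Metric": {"Namespace": "Custom/Health",
                               "MetricName": "Healthy",
                               "Dimensions": [{"Name": "Service", "Value": "my-on-prem-service"}]},
                    "Period": 86400,
                    "Stat": "Maximum"},
     "ReturnData": false},
    {"Id": "e1", "Expression": "FILL(m1, 0)", "Label": "HealthyOrZero", "ReturnData": true}
  ]'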