Expo Multiple Local Notifications Not Triggering On Time - expo

Expo multiple local notifications aren't triggering on time. I receive only the first notification on time, while the others sometimes fire a bit late or all at once. Occasionally they do all fire on time, so why am I seeing such inconsistent results?
CPH1911: Notification scheduled for: Mon Jan 25 2021 20:07:47 GMT+0500 (PKT)
CPH1911: Notification scheduled for: Mon Jan 25 2021 20:08:47 GMT+0500 (PKT)
CPH1911: Notification scheduled for: Mon Jan 25 2021 20:09:47 GMT+0500 (PKT)
CPH1911: Notification received at: Mon Jan 25 2021 20:07:47 GMT+0500 (PKT)
CPH1911: ExponentPushToken[NmECbXMqBlCq4nyVeqYFxK]
CPH1911: Notification received at: Mon Jan 25 2021 20:10:01 GMT+0500 (PKT)
Here is the snack (Expo SDK 40, expo-notifications 0.8.2):
https://snack.expo.io/#rabiarashid/notifications5799
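For context, here is a minimal sketch of the kind of scheduling involved (TypeScript; the snack's actual code may differ, and the titles and bodies below are placeholders), using expo-notifications' scheduleNotificationAsync with date triggers:

```typescript
import * as Notifications from 'expo-notifications';

// Show alerts for notifications that arrive while the app is in the foreground.
Notifications.setNotificationHandler({
  handleNotification: async () => ({
    shouldShowAlert: true,
    shouldPlaySound: false,
    shouldSetBadge: false,
  }),
});

// Schedule three one-off notifications, one minute apart.
export async function scheduleThree(): Promise<void> {
  const now = Date.now();
  for (let i = 1; i <= 3; i += 1) {
    const date = new Date(now + i * 60 * 1000);
    await Notifications.scheduleNotificationAsync({
      content: {
        title: `Reminder ${i}`,               // placeholder content
        body: `Scheduled for ${date.toString()}`,
      },
      trigger: date,                          // a Date object is a valid one-off trigger
    });
    console.log('Notification scheduled for:', date.toString());
  }
}
```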

Related

AWS cloudwatch cron expression scheduling issue

I am trying to schedule a CloudWatch Events rule to trigger my Lambda daily between 13:45 UTC and 15:45 UTC, at 30-minute intervals.
I want my Lambda to be triggered at the following times daily:
13:45 UTC, 14:15 UTC, 14:45 UTC, 15:15 UTC and 15:45 UTC
I have set the following cron expression to achieve this:
45/30 13-15 * * ? *
But I am not getting the expected schedule; I am seeing the following scheduled times instead:
Please let me know how I should achieve the expected schedule.
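If I read the expression right, the /30 step restarts within the minute field for each hour, so 45/30 matches only minute 45 (the next step, 75, is out of range); that would fire at 13:45, 14:45 and 15:45 but never at :15. One workaround is to split the schedule across two rules, sketched below in TypeScript with the AWS SDK v3 (the rule names are hypothetical):

```typescript
import { EventBridgeClient, PutRuleCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

async function createSchedules(): Promise<void> {
  // Fires at 13:45 UTC daily.
  await client.send(new PutRuleCommand({
    Name: 'daily-1345-utc',                         // hypothetical rule name
    ScheduleExpression: 'cron(45 13 * * ? *)',
  }));

  // Fires at 14:15, 14:45, 15:15 and 15:45 UTC daily.
  await client.send(new PutRuleCommand({
    Name: 'daily-1415-to-1545-utc',                 // hypothetical rule name
    ScheduleExpression: 'cron(15,45 14-15 * * ? *)',
  }));
}
```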

Confusion About Cloudwatch Alarms

I have a CloudWatch alarm which receives data from a canary. My canary attempts to visit a website; if the website is up and responding, the datapoint is 0, and if the server returns some sort of error, the datapoint is 1. Pretty standard canary stuff, I hope. This canary runs every 30 minutes.
My CloudWatch alarm is configured as follows:
The expected behaviour is that if my canary cannot reach the website 3 times in a row, the alarm should go off.
Unfortunately, this is not what's happening. My alarm was triggered with the following canary data:
Feb 8 @ 7:51 PM (MST)
Feb 8 @ 8:22 PM (MST)
Feb 8 @ 9:52 PM (MST)
How is it possible that these three datapoints would trigger my alarm?
My actual email was received as follows:
You are receiving this email because your Amazon CloudWatch Alarm "...." in the US West (Oregon) region has entered the ALARM state, because "Threshold Crossed: 3 out of the last 3 datapoints [1.0 (09/02/21 04:23:00), 1.0 (09/02/21 02:53:00), 1.0 (09/02/21 02:23:00)] were greater than or equal to the threshold (1.0) (minimum 3 datapoints for OK -> ALARM transition)." at "Tuesday 09 February, 2021 04:53:30 UTC".
I am even more confused because the times on these datapoints do not align. If I convert these times to MST, we have:
Feb 8 @ 7:23 PM
Feb 8 @ 7:53 PM
Feb 8 @ 9:23 PM
The time range on the reported datapoints is a two-hour window, even though I have clearly specified my evaluation period as 1.5 hours.
If I view the "metrics" chart in CloudWatch for my alarm, it makes even less sense:
The points in this chart are shown as:
Feb 9 @ 2:30 UTC
Feb 9 @ 3:00 UTC
Feb 9 @ 4:30 UTC
Which, again, appears to be a two-hour evaluation period.
Help? I don't understand this.
How can I configure my alarm to fire if my canary cannot reach the website 3 times in a row (waiting 30 minutes in between checks)?
I have two points that may answer this:
Every time the canary runs, one datapoint is sent to CloudWatch. So if you are checking for 3 failures within 30 minutes to trigger the alarm, your canary should run at a 10-minute interval; that gives 3 datapoints in 30 minutes, and all 3 must be failed datapoints for the alarm to trigger.
For some reason the statistic was not working for me, so I used the count option; maybe that helps.
My suggestion is to run the canary every 5 minutes, so you get 6 datapoints in 30 minutes, and create the alarm on count = 4.
The way I read your config, your alarm expects to find 3 datapoints within a 30-minute window, but your metric is only updated every 30 minutes, so this condition will never be true.
You need to increase the period so that 3 or more datapoints are available in order to trigger the alarm.
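To make the second answer concrete, here is a rough sketch (TypeScript, AWS SDK v3) of an alarm that evaluates three consecutive 30-minute periods and requires all three datapoints to breach; the alarm name, namespace, metric name and dimension are assumptions and should match whatever the canary actually publishes:

```typescript
import { CloudWatchClient, PutMetricAlarmCommand } from '@aws-sdk/client-cloudwatch';

const client = new CloudWatchClient({ region: 'us-west-2' });

async function createCanaryAlarm(): Promise<void> {
  await client.send(new PutMetricAlarmCommand({
    AlarmName: 'website-canary-down-3-in-a-row',              // hypothetical name
    Namespace: 'CloudWatchSynthetics',                        // assumption: where the canary publishes
    MetricName: 'Failed',                                     // assumption: 1 on failure, 0 on success
    Dimensions: [{ Name: 'CanaryName', Value: 'my-canary' }], // hypothetical canary name
    Statistic: 'Maximum',
    Period: 1800,             // one 30-minute canary run per evaluation period
    EvaluationPeriods: 3,     // look back over 3 periods = 1.5 hours
    DatapointsToAlarm: 3,     // all 3 datapoints must breach
    Threshold: 1,
    ComparisonOperator: 'GreaterThanOrEqualToThreshold',
  }));
}
```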

Amazon API: Date parameter not working as per documentation

I am using Postman to make a simple API call to Amazon SES. In the documentation,
https://docs.aws.amazon.com/ses/latest/APIReference/CommonParameters.html
under the X-Amz-Date section, it states that
For example, the following date time is a valid X-Amz-Date value: 20120325T120000Z
However, when I use my date in that format I get an error:
<Message>Invalid date 20120325T120000Z. It must be in one of the formats specified by HTTP RFC 2616 section 3.3.1</Message>
So if I look at HTTP RFC 2616 section 3.3.1 (https://www.rfc-editor.org/rfc/rfc2616),
there are 3 possible formats:
Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
It seems to work with options 1 and 3; however, I keep getting an error:
<Message>Request timestamp: Wed, 20 Feb 2019 10:22:00 GMT expired. It must be within 300 secs/ of server time.</Message>
I have moved the time several minutes back and forward in case my PC clock is fast or slow, but I keep getting the 300-second error.
Is Amazon's documentation wrong?
If the second is the right format, how can I get the server time? My instance is in N. Virginia; I used https://www.timeanddate.com/worldclock/usa/virginia to get the time and tried different options, but all of them end with the 300-second error.
I assume that should be translated in Postman as:
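For what it's worth, an RFC 1123 / option 1 value is just the current UTC time, and the 300-second check is made against AWS's own clock, so the string has to come from a reasonably accurate, freshly read system clock rather than being shifted by hand. A minimal sketch in TypeScript of producing one:

```typescript
// Produces a string like "Wed, 20 Feb 2019 10:22:00 GMT" (RFC 1123, i.e. RFC 2616 option 1).
const httpDate: string = new Date().toUTCString();
console.log(httpDate);
```

In Postman, the same expression can be evaluated in a pre-request script and referenced from the Date header via a variable.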

AWS Lambda Cron Schedule Error

I have several Lambda functions that run on a schedule, and those are working without any issues. However, I have a one-time job that I am trying to set up for an existing function, and I am getting an error when creating the new rule:
Details: Parameter ScheduleExpression is not valid..
I need this to run on Monday September 26th 2016 at 14:30 hours UTC.
Here are all of the variations I have tried:
cron(30 14 26 SEP ? 2016)
cron(30 14 26 9 ? 2016)
cron(30 14 26 SEP ?*)
cron(30 14 26 9 ? *)
cron(30 14 26 SEP MON 2016)
cron(30 14 26 9 MON 2016)
I must have been staring at this for too long, because I can't figure out what the deal is with this one. I am using the reference provided here:
http://docs.aws.amazon.com/lambda/latest/dg/tutorial-scheduled-events-schedule-expressions.html
Thanks all!
Are you using the web console or the CLI? In the web console you should not include the cron() wrapper, only the expression inside it. Your first expression is correct.
See the screenshot:
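For contrast, when the rule is created through the API or CLI rather than the console, the full wrapper is part of ScheduleExpression; a sketch in TypeScript with the AWS SDK v3 (the rule name is hypothetical):

```typescript
import { EventBridgeClient, PutRuleCommand } from '@aws-sdk/client-eventbridge';

const client = new EventBridgeClient({});

async function createOneTimeRule(): Promise<void> {
  await client.send(new PutRuleCommand({
    Name: 'one-time-2016-09-26-1430-utc',            // hypothetical rule name
    ScheduleExpression: 'cron(30 14 26 SEP ? 2016)', // the first expression, wrapper included
  }));
}
```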

AWS API Gateway Method to Serve static content from S3 Bucket

I want to serve my Lambda microservices through API Gateway, which does not seem to be a big problem.
Each of my microservices has a JSON Schema specification of the resource it provides. Since it is a static file, I would like to serve it from an S3 bucket
rather than also running a Lambda function to serve it.
So while
GET,POST,PUT,DELETE http://api.domain.com/ressources
should be forwarded to a Lambda function, I want
GET http://api.domain.com/ressources/schema
to serve my schema.json from S3.
My naive first approach was to set up the resource and method for "/v1/contracts/schema - GET - Integration Request" and configure it to behave as an HTTP proxy with the endpoint URL pointing straight to the contract's JSON Schema. I get a 500 Internal Server Error.
Execution log for request test-request
Fri Nov 27 09:24:02 UTC 2015 : Starting execution for request: test-invoke-request
Fri Nov 27 09:24:02 UTC 2015 : API Key: test-invoke-api-key
Fri Nov 27 09:24:02 UTC 2015 : Method request path: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request query string: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request headers: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request body before transformations: null
Fri Nov 27 09:24:02 UTC 2015 : Execution failed due to configuration error: Invalid endpoint address
Am I on a completely wrong path, or am I just missing some configuration?
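For reference, here is a rough sketch of the same routing expressed as an AWS service integration rather than an HTTP proxy, written in TypeScript with the CDK; the bucket name, object key and construct IDs are made up for illustration:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as iam from 'aws-cdk-lib/aws-iam';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'SchemaProxyStack');

// Role that API Gateway assumes to read the schema object from S3.
const role = new iam.Role(stack, 'ApiGatewayS3ReadRole', {
  assumedBy: new iam.ServicePrincipal('apigateway.amazonaws.com'),
});
role.addToPolicy(new iam.PolicyStatement({
  actions: ['s3:GetObject'],
  resources: ['arn:aws:s3:::my-schema-bucket/schema.json'], // hypothetical bucket/key
}));

const api = new apigateway.RestApi(stack, 'ContractsApi');
const schema = api.root.addResource('ressources').addResource('schema');

// GET /ressources/schema -> s3:GetObject on the schema file.
schema.addMethod('GET', new apigateway.AwsIntegration({
  service: 's3',
  integrationHttpMethod: 'GET',
  path: 'my-schema-bucket/schema.json',
  options: {
    credentialsRole: role,
    integrationResponses: [{ statusCode: '200' }],
  },
}), {
  methodResponses: [{ statusCode: '200' }],
});
```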
Unfortunately there is a limitation when using TestInvoke with API Gateway proxying to Amazon S3 (and some other AWS services) within the same region. This will not be the case once deployed, but if you want to test from the console you will need to use a bucket in a different region.
We are aware of the issue, but I can't commit to when it will be resolved.
In one of my setups I put a CloudFront distribution in front of both an API Gateway and an S3 bucket, which are both configured as origins.
I did it mostly in order to be able to make use of an SSL certificate issued by AWS Certificate Manager, which can only be attached to stand-alone CloudFront distributions and not to API Gateways.
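A rough sketch of that layout in TypeScript with the CDK (the construct IDs and the path pattern are made up, and api and schemaBucket stand for resources assumed to be defined elsewhere in the app):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as s3 from 'aws-cdk-lib/aws-s3';

declare const stack: cdk.Stack;           // existing stack (assumed)
declare const api: apigateway.RestApi;    // the API serving the Lambda microservices (assumed)
declare const schemaBucket: s3.Bucket;    // bucket holding the JSON Schema files (assumed)

// One CloudFront distribution with two origins: the API by default,
// and the S3 bucket for the static schema path. An ACM certificate
// can then be attached at the distribution level.
new cloudfront.Distribution(stack, 'EdgeDistribution', {
  defaultBehavior: { origin: new origins.RestApiOrigin(api) },
  additionalBehaviors: {
    '/ressources/schema*': { origin: new origins.S3Origin(schemaBucket) },
  },
});
```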
I just had a similar error, but for a totally different reason: if the S3 bucket name contains a period (as in data.example.com or similar), the proxy request will bail out with an SSL certificate issue!