Amazon API: Date parameter not working as per documentation - amazon-web-services

I am using Postman to make a simple API call to Amazon SES. In the documentation
https://docs.aws.amazon.com/ses/latest/APIReference/CommonParameters.html
(section X-Amz-Date), they state that:
For example, the following date time is a valid X-Amz-Date value: 20120325T120000Z
However, when I use my date in that format, I get an error:
<Message>Invalid date 20120325T120000Z. It must be in one of the formats specified by HTTP RFC 2616 section 3.3.1</Message>
So if I look into HTTP RFC 2616 section 3.3.1 (https://www.rfc-editor.org/rfc/rfc2616),
there are three possible formats:
Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
It seems to work with options 1 and 3; however, I keep getting an error:
<Message>Request timestamp: Wed, 20 Feb 2019 10:22:00 GMT expired. It must be within 300 secs/ of server time.</Message>
I have moved the time several minutes backward and forward in case my PC clock is fast or slow, but I keep getting the 300-second error.
Is Amazon's documentation wrong?
If the second format is the right one, how can I get the server time? My instance is set to N. Virginia; I used https://www.timeanddate.com/worldclock/usa/virginia to get the time and tried different options, but all of them end with the 300-sec error.
I assume that should be translated in Postman as:
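For what it's worth, a Postman pre-request script can generate the RFC 1123 date that the endpoint accepted above. A minimal sketch (the `amzDate` variable name is my own choice, not anything SES mandates):

```javascript
// Build an RFC 1123 date string such as "Sun, 06 Nov 1994 08:49:37 GMT".
// Date.prototype.toUTCString() emits exactly this format in modern engines.
function rfc1123Now() {
  return new Date().toUTCString();
}

// In a Postman pre-request script you could then expose it as a variable
// (the name "amzDate" is arbitrary):
//   pm.variables.set("amzDate", rfc1123Now());
// and reference it in the request's Date header as {{amzDate}}.
console.log(rfc1123Now());
```

Note this only fixes the format; the 300-second skew error is about the clock itself, not the string.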

Related

Download Limit of AWS API Gateway

We have a service which is used to download time-series data from InfluxDB. We are not manipulating the Influx response; after updating some meta information, we push the records as-is.
So there is no Content-Length attached to the response.
We want to expose this service via Amazon API Gateway. Is it possible to integrate such a service with API Gateway? Mainly, is there any limit on response size? The service does not wait for the whole query result to arrive before streaming, but will API Gateway do the same, or will it wait for all the data to be written to the output stream?
When I tried it, I observed a Content-Length header being added by API Gateway.
HTTP/1.1 200 OK
Date: Tue, 26 Apr 2022 06:03:31 GMT
Content-Type: application/json
Content-Length: 3024
Connection: close
x-amzn-RequestId: 41dfebb4-f63e-43bc-bed9-1bdac5759210
X-B3-SpanId: 8322f100475a424a
x-amzn-Remapped-Connection: keep-alive
x-amz-apigw-id: RLKwCFztliAFR2Q=
x-amzn-Remapped-Server: akka-http/10.1.8
X-B3-Sampled: 0
X-B3-ParentSpanId: 43e304282e2f64d1
X-B3-TraceId: d28a4653e7fca23d
x-amzn-Remapped-Date: Tue, 26 Apr 2022 06:03:31 GMT
Does this mean that API Gateway waits for the whole response/EOF from the integration?
If the above is true, what is the maximum number of bytes the API Gateway buffer can hold?
Will API Gateway time out if the response from the integration is too large or does not end within the stipulated time?

polly.us-east-2.amazonaws.com/v1/speech returns 200 on ubuntu 18 but forbidden 403 on ubuntu 16 server

AWS Polly (polly.us-east-2.amazonaws.com/v1/speech) does not work on an Ubuntu 16 server and returns 403 Forbidden, but it works on Ubuntu 18 and returns 200 OK.
I am facing this issue and find it weird to understand the reason behind it.
What can be the reason behind it, and how can I solve this issue?
Finally I found the actual reason behind it. I checked today and found that the date on this Ubuntu 16 server was set ahead, to "15 MAY, 15:10:00". When I changed it to today's date and time and tested again, the API returned 200 OK.
The current date and time are used in calculating the AWS Signature Version 4, so it could not be matched on the AWS server side in that particular region.
Below are the logs:
root@abc:/usr/local/vvv/Demo_Project# date
Sat May 15 15:07:09 IST 2021
root@abc:/usr/local/vvv/Demo_Project# java -jar AWSTTS.jar
AWS jsonString Format :: 2021-05-15 15:07:17.482
2021-05-15 15:07:18.981
AWS ResponseCode & ResponseMessage :: 403 Forbidden
3. 2021-05-15 15:07:19.229
root@abc:/usr/local/vvv/Demo_Project# date -s "14 MAY 2021 15:07:00"
Fri May 14 15:07:00 IST 2021
root@abc:/usr/local/vvv/Demo_Project# java -jar AWSTTS.jar
AWS jsonString Format :: 2021-05-14 15:07:04.650
2021-05-14 15:07:04.653
AWS ResponseCode & ResponseMessage :: 200 OK
3. 2021-05-14 15:07:06.295
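Since SigV4 embeds a timestamp, the 403 above can be reproduced whenever the local clock drifts too far from AWS server time. A minimal sketch for measuring that drift against any HTTP response's Date header (the function name is my own; it is not part of any AWS SDK):

```javascript
// Compute clock skew in seconds between the local clock and a server's
// "Date" response header. SigV4 requests are rejected when the skew is
// larger than AWS's tolerance (about 300 seconds).
function skewSeconds(serverDateHeader, localMillis = Date.now()) {
  const serverMillis = Date.parse(serverDateHeader);
  return Math.abs(serverMillis - localMillis) / 1000;
}

// Example: a local clock a full day ahead of the server, like the
// Ubuntu 16 box above, is far outside the 300-second window.
const skew = skewSeconds("Fri, 14 May 2021 15:07:00 GMT",
                         Date.parse("Sat, 15 May 2021 15:07:00 GMT"));
console.log(skew > 300); // prints true (skew is 86400 seconds)
```

Running NTP (e.g. ntpd or systemd-timesyncd) on the server avoids the problem permanently.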

Send email with Microsoft Flow when Power BI alert is triggered

I am trying to build a flow that sends an email to me when a Power BI alert is triggered. I have built the flow and am now trying the test option.
This gives me a status code 429 error.
Additional details:
Headers
Retry-After: 600
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
RequestId: ad5eb81f-a02d-4edd-b0c2-964cef662d01
Timing-Allow-Origin: *
x-ms-apihub-cached-response: false
Cache-Control: no-store, must-revalidate, no-cache
Date: Thu, 28 Mar 2019 12:35:42 GMT
Content-Length: 254
Content-Type: application/json
Body
{
  "error": {
    "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
    "pbi.error": {
      "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
      "parameters": {},
      "details": [],
      "exceptionCulprit": 1
    }
  }
}
I noticed this 429 is caused by too many requests, but I do not understand this, since I only have one alert, and this is a very simple Flow that is connected to this one alert and should then send an email.
In general, error 429 means you have exceeded the limit of triggers per period (probably 60 seconds, according to https://learn.microsoft.com/en-gb/connectors/powerbi/ ). You should find these parameters with the Peek code tool.
My suggestion is to check how many alerts for the tracked data you had in the Power BI service. Too low a limit might be the answer.
I got the same error.
It appeared when testing manually.
When I changed the testing to "Automatic", the error changed and it became clear that the "Send an e-mail" step was causing the issue.
It turned out that the second step needed to be changed to the Outlook - Send an email (V2) step.
It was really confusing, as the MicrosoftFlowCheckAlertStatusEndpointThrottled code was irrelevant and was not the real issue!
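Since the 429 response above carries Retry-After: 600, a generic client-side sketch for honoring that header could look like this (the function and parameter names are my own; nothing here is Power-BI-specific):

```javascript
// Given a 429 response's headers, decide how long to wait before retrying.
// Per RFC 7231, Retry-After may be a number of seconds or an HTTP date.
function retryDelayMillis(headers, fallbackMillis = 60000) {
  const value = headers["retry-after"];
  if (value === undefined) return fallbackMillis;
  const seconds = Number(value);
  if (Number.isFinite(seconds)) return seconds * 1000;
  const when = Date.parse(value);
  return Number.isNaN(when) ? fallbackMillis : Math.max(0, when - Date.now());
}

console.log(retryDelayMillis({ "retry-after": "600" })); // prints 600000
```

In this case, honoring the header would mean waiting 10 minutes before re-testing the flow.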

FB API "100 continue" & "500 Internal Server Error" (error_subcode 99)

I have a peculiar issue with the Facebook API. I think it probably has to do with high volume, but that has not brought me any closer to a solution. When posting messages to the Facebook API, I occasionally receive an error such as:
HTTP/1.1 100 Continue
Date: Sat, 17 Dec 2016 19:22:38 GMT
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Origin: *
Pragma: no-cache
Cache-Control: private, no-cache, no-store, must-revalidate
facebook-api-version: v2.3
Expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-trace-id: El4BfeJo4vI
x-fb-rev: 2746767
Content-Type: text/html
X-FB-Debug: F3xHF4IY15E3VK9M5acge9B6jBKOEqwP2Ob4F8WsoYRkGeAiY2PkzOjiiawhQ/Uq0TT/Xen+JLZtFXA9ZUsbRg==
Date: Sat, 17 Dec 2016 19:23:08 GMT
Connection: keep-alive
Content-Length: 77
{"error":{"code":1,"message":"An unknown error occurred","error_subcode":99}}
Usually a later retry of the same request will work, so the request itself would not appear to be the culprit. The issue, however, is that the message still sometimes appears to go through. How should such responses be handled?
I read up on the Continue header, but I'm none the wiser now, especially since it comes with a non-descriptive 500 Internal Server Error.
You can probably safely ignore the 100 header; it has correctly been followed up, as you have another response (the 500).
You should never really get a 500 from any site: it means their code is broken. You should report it here: https://developers.facebook.com/bugs/
fb-reply
"An unknown error occurred","code":1,"error_subcode":99"
The reason for the above error, according to Facebook:
This error code is an indication that your request timed out. It may be the case that the request is valid, however the maximum processing time for the API was exceeded. Recommendation: Wait a few minutes, and then try again. If the problem persists, please continue filing a bug report.
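Following that recommendation, a hedged sketch of a retry policy keyed on error_subcode 99 (the helper names, backoff values, and attempt limits are my own choices, not a Facebook SDK API; and since the post may have gone through, retries should be paired with a duplicate check where possible):

```javascript
// Decide whether a Facebook API error response is worth retrying.
// Code 1 / error_subcode 99 indicates a timeout where the request
// may still have succeeded server-side.
function shouldRetry(body, attempt, maxAttempts = 3) {
  if (attempt >= maxAttempts) return false;
  const err = body && body.error;
  return !!err && err.code === 1 && err.error_subcode === 99;
}

// Exponential backoff: wait 1 min, 2 min, 4 min between attempts,
// matching the "wait a few minutes" advice above.
function backoffMillis(attempt) {
  return 60000 * 2 ** attempt;
}

const reply = { error: { code: 1, message: "An unknown error occurred", error_subcode: 99 } };
console.log(shouldRetry(reply, 0)); // prints true
console.log(backoffMillis(2));      // prints 240000
```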
It has been reported several times that an overly long request will result in this type of error subcode (99). Try to narrow your request parameters, since Facebook doesn't support long requests. (Although the 500 error looks new to me.)
You should use pagination, as described in this document:
https://m.facebook.com/groups/pmdcommunity/?view=permalink&id=1174638509255282

AWS API Gateway Method to Serve static content from S3 Bucket

I want to serve my Lambda microservices through API Gateway, which seems not to be a big problem.
Each of my microservices has a JSON-Schema specification of the resource it provides. Since it is a static file, I would like to serve it from an S3 bucket
rather than also running a Lambda function to serve it.
So while
GET,POST,PUT,DELETE http://api.domain.com/ressources
should be forwarded to a Lambda function, I want
GET http://api.domain.com/ressources/schema
to serve my schema.json from S3.
My naive first approach was to set up the resource and methods for "/v1/contracts/schema - GET - Integration Request" and configure it to behave as an HTTP proxy with the endpoint URL pointing straight to the contract's JSON Schema. I get a 500 Internal Server Error.
Execution log for request test-request
Fri Nov 27 09:24:02 UTC 2015 : Starting execution for request: test-invoke-request
Fri Nov 27 09:24:02 UTC 2015 : API Key: test-invoke-api-key
Fri Nov 27 09:24:02 UTC 2015 : Method request path: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request query string: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request headers: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request body before transformations: null
Fri Nov 27 09:24:02 UTC 2015 : Execution failed due to configuration error: Invalid endpoint address
Am I on a completely wrong path, or am I just missing some configuration?
Unfortunately, there is a limitation when using TestInvoke with API Gateway proxying to Amazon S3 (and some other AWS services) within the same region. This will not be the case once deployed, but if you want to test from the console you will need to use a bucket in a different region.
We are aware of the issue, but I can't commit to when it will be resolved.
In one of my setups I put a CloudFront distribution in front of both an API Gateway and an S3 bucket, which are both configured as origins.
I did it mostly in order to make use of an SSL certificate issued by AWS Certificate Manager, which can only be attached to standalone CloudFront distributions, not to API Gateways.
I just had a similar error, but for a totally different reason: if the S3 bucket name contains a period (as in data.example.com or similar), the proxy request will fail with an SSL certificate issue!
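The dotted-bucket failure is a TLS wildcard issue: a certificate for *.s3.amazonaws.com matches only one DNS label, so a host like data.example.com.s3.amazonaws.com falls outside it. A small illustration of that single-label matching rule (my own helper, not an AWS API):

```javascript
// Check whether a hostname is covered by a single wildcard pattern such as
// "*.s3.amazonaws.com". Per TLS hostname rules (RFC 6125), "*" matches
// exactly one DNS label, so any extra dot in the bucket name breaks it.
function matchesWildcard(host, pattern) {
  if (!pattern.startsWith("*.")) return host === pattern;
  const suffix = pattern.slice(1);              // e.g. ".s3.amazonaws.com"
  if (!host.endsWith(suffix)) return false;
  const label = host.slice(0, host.length - suffix.length);
  return label.length > 0 && !label.includes(".");
}

console.log(matchesWildcard("mybucket.s3.amazonaws.com",
                            "*.s3.amazonaws.com"));  // prints true
console.log(matchesWildcard("data.example.com.s3.amazonaws.com",
                            "*.s3.amazonaws.com"));  // prints false
```

Using path-style addressing, or a bucket name without dots, sidesteps the mismatch.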