Download Limit of AWS API Gateway

We have a service that is used to download time series data from InfluxDB. We do not manipulate the InfluxDB response; after updating some meta information, we push the records through as they are.
So there is no Content-Length attached to the response.
We want to expose this service via Amazon API Gateway. Is it possible to integrate such a service with API Gateway, and mainly, is there any limit on the response size? Our service does not wait for the whole query result before it starts responding, but will API Gateway do the same, or will it wait until all the data has been written to the output stream?
When I tried it, I observed a Content-Length header being added by API Gateway:
HTTP/1.1 200 OK
Date: Tue, 26 Apr 2022 06:03:31 GMT
Content-Type: application/json
Content-Length: 3024
Connection: close
x-amzn-RequestId: 41dfebb4-f63e-43bc-bed9-1bdac5759210
X-B3-SpanId: 8322f100475a424a
x-amzn-Remapped-Connection: keep-alive
x-amz-apigw-id: RLKwCFztliAFR2Q=
x-amzn-Remapped-Server: akka-http/10.1.8
X-B3-Sampled: 0
X-B3-ParentSpanId: 43e304282e2f64d1
X-B3-TraceId: d28a4653e7fca23d
x-amzn-Remapped-Date: Tue, 26 Apr 2022 06:03:31 GMT
Does this mean that API Gateway waits for the whole response/EOF from the integration?
If the above is true, what is the maximum number of bytes the API Gateway buffer can hold?
Will API Gateway time out if the response from the integration is too large or does not end within the stipulated time?
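One way to probe the buffering behavior empirically is to compare the time to first byte with the total download time: if API Gateway buffers the whole integration response, the two will be nearly identical even for a large result. A minimal sketch using the Python requests library (the endpoint URL is a placeholder, not from the original post):

import time
import requests

# Placeholder; replace with your deployed API Gateway stage URL.
URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/timeseries"

start = time.monotonic()
# stream=True hands us bytes as they arrive instead of reading the whole body.
with requests.get(URL, stream=True, timeout=120) as resp:
    next(resp.iter_content(chunk_size=8192))  # wait for the first chunk
    ttfb = time.monotonic() - start
    for _ in resp.iter_content(chunk_size=8192):
        pass  # drain the rest of the body
    total = time.monotonic() - start
    print(f"time to first byte: {ttfb:.2f}s, total: {total:.2f}s")
    print("Content-Length:", resp.headers.get("Content-Length"))
    print("Transfer-Encoding:", resp.headers.get("Transfer-Encoding"))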

Related

Live Stream from AWS MediaLive service not viewable from VLC

I am trying to build a custom live streaming service as documented here:
https://aws.amazon.com/solutions/implementations/live-streaming-on-aws/
I used the pre-provided CloudFormation template for "Live Streaming on AWS with MediaStore", which provisioned all the relevant resources for me. Next, I wanted to test my custom streamer.
I used OBS Studio to stream my webcam output to the MediaLivePushEndpoint that was created during AWS CloudFormation provisioning. OBS suggests that it is already streaming the webcam to the AWS MediaLive RTMP endpoint.
Now, to confirm that I can watch the stream, I try to set the Input Network Stream in VLC player to the CloudFront endpoint that was created for me (which looks like this: https://aksj2arbacadabra.cloudfront.net/stream/index.m3u8), but VLC is unable to fetch the stream and fails with the following error message in the logs. What am I missing? Thanks!
...
...
...
http debug: outgoing request: GET /stream/index.m3u8 HTTP/1.1 Host: d2lasasasauyhk.cloudfront.net Accept: */* Accept-Language: en_US User-Agent: VLC/3.0.11 LibVLC/3.0.11 Range: bytes=0-
http debug: incoming response: HTTP/1.1 404 Not Found Content-Type: application/x-amz-json-1.1 Content-Length: 31 Connection: keep-alive x-amzn-RequestId: HRNVKYNLTdsadasdasasasasaPXAKWD7AQ55HLYBBXHPH6GIBH5WWY x-amzn-ErrorType: ObjectNotFoundException Date: Wed, 18 Nov 2020 04:08:53 GMT X-Cache: Error from cloudfront Via: 1.1 5085d90866d21sadasdasdad53213.cloudfront.net (CloudFront) X-Amz-Cf-Pop: EWR52-C4 X-Amz-Cf-Id: btASELasdasdtzaLkdbIu0hJ_asdasdasdbgiZ5hNn1-utWQ==
access error: HTTP 404 error
main debug: no access modules matched
main debug: dead input
qt debug: IM: Deleting the input
main debug: changing item without a request (current 2/3)
main debug: nothing to play
Updates based on Zach's response:
Here are the parameters I used while deploying the CloudFormation template for live streaming using MediaLive (notice that I am using RTMP_PUSH):
I am using MediaLive and not MediaPackage, so when I go to my channel in MediaLive, I see this:
Notice that it says it cannot find "stream [stream]", but I confirmed that the RTMP endpoint I added to OBS is exactly the one that was created as an output for me by my CloudFormation stack:
Finally, when I go to MediaStore to see if there are any objects, it is completely empty:
Vader,
Thank you for the clarification here; I can see the issue is with your settings in OBS. When you set up your input for MediaLive, you created a unique Application Name and Instance, which are part of the URI. The Application Name is LiveStreamingwithMediaStore and the Instance is stream. In OBS, you will want to remove stream from the end of the Server URI and place it in the Stream Key field, where you currently have a 1.
OBS Settings:
Server: rtmp://server_ip:1935/Application_Name/
Stream Key: Instance_Name
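For example, assuming a (hypothetical) input endpoint of rtmp://198.51.100.10:1935/LiveStreamingwithMediaStore/stream, the fields would be:
Server: rtmp://198.51.100.10:1935/LiveStreamingwithMediaStore/
Stream Key: stream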
Since you posted the screenshot here on an open forum, which really helped determine the issue but also exposes settings that would allow someone else to send to your RTMP input, I would suggest that you change the Application Name and Instance.
Zach

CloudFront still serves an old image although the new image was uploaded to S3 5 days ago

I have a CloudFront distribution with origin S3. The bucket (versioning disabled) contains images.
Behaviour of the connection between CloudFront and S3:
Redirect HTTP to HTTPS
Cached HTTP methods: GET, HEAD (cached by default) & OPTIONS
Cache based on selected request headers: none
Use origin cache headers
Min TTL: 0
Default TTL: 86400
Max TTL: 31536000
Forward cookies: all
Query string forwarding: forward all, cache based on all
Restrict viewer access, streaming, compress: no
My images in S3 have the following metadata (no cache control headers):
Content-Type image/jpeg
x-amz-meta-md5 lYw9zHZxxxxxxx8468A==
Now we have uploaded a new image to S3 around 5 days ago. When we open the image in S3 or download it, we see the new image.
In CloudFront, however, we are still seeing the old image, while we were expecting a cache refresh after 24 hours:
By default, CloudFront caches a response from Amazon S3 for 24 hours
(Default TTL of 86,400 seconds).
When I curl the image twice:
HTTP/1.1 200 OK
Content-Type: image/jpeg
Content-Length: 12769
Connection: keep-alive
Date: Tue, 22 Oct 2019 08:57:57 GMT
Last-Modified: Thu, 18 Oct 2018 10:00:56 GMT
ETag: "0d581eef776ab0b6d44dd27c8759714a"
x-amz-meta-md5: DVge73dqxxxdJ8h1lxSg==
Accept-Ranges: bytes
Server: AmazonS3
X-Cache: Miss from cloudfront
HTTP/1.1 200 OK
Content-Type: image/jpeg
Content-Length: 12769
Connection: keep-alive
Date: Tue, 22 Oct 2019 08:57:57 GMT
Last-Modified: Thu, 18 Oct 2018 10:00:56 GMT
ETag: "0d581eef776ab0b6d44dd27c8759714a"
x-amz-meta-md5: DVge73dqxxxdJ8h1lxSg==
Accept-Ranges: bytes
Server: AmazonS3
X-Cache: Hit from cloudfront
First a miss, then a hit, but the Last-Modified date is still far in the past and the new image is not retrieved from S3. I know I can create an invalidation, but I don't want to create new invalidations every time new images become available.
What could be the issue here? If you need more info, please ask!
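One mitigation is to attach an explicit Cache-Control header when uploading, since the distribution is set to use origin cache headers and the objects currently carry none. A minimal boto3 sketch (the file, bucket, and key names are placeholders, not from the original post):

import boto3

s3 = boto3.client("s3")

# Placeholder names; CacheControl is stored as object metadata and returned
# to CloudFront, which is configured to use origin cache headers.
s3.upload_file(
    "new-image.jpg",
    "my-image-bucket",
    "images/new-image.jpg",
    ExtraArgs={
        "ContentType": "image/jpeg",
        "CacheControl": "max-age=3600",  # let edge caches expire after an hour
    },
)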

Send email with Microsoft Flow when Power BI alert is triggered

I am trying to build a flow that sends an email to me when a Power BI alert is triggered. I have built the flow and am now trying the test option.
This gives me a status code 429 error.
Additional details:
Headers
Retry-After: 600
Strict-Transport-Security: max-age=31536000;includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
RequestId: ad5eb81f-a02d-4edd-b0c2-964cef662d01
Timing-Allow-Origin: *
x-ms-apihub-cached-response: false
Cache-Control: no-store, must-revalidate, no-cache
Date: Thu, 28 Mar 2019 12:35:42 GMT
Content-Length: 254
Content-Type: application/json
Body
{
  "error": {
    "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
    "pbi.error": {
      "code": "MicrosoftFlowCheckAlertStatusEndpointThrottled",
      "parameters": {},
      "details": [],
      "exceptionCulprit": 1
    }
  }
}
I noticed this 429 is caused by too many requests, but I do not understand this, since I only have one alert, and this is a very simple flow that is connected to this one alert and should then send an email.
In general, error 429 means you have exceeded the limit of triggers per period (probably 60 seconds, according to https://learn.microsoft.com/en-gb/connectors/powerbi/). You can find these parameters with the Peek code tool.
My suggestion is to check how many alerts you had for the tracked data in the Power BI service. A limit that is too low might be the answer.
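As a general pattern, a client hitting a 429 is expected to honor the Retry-After header (600 seconds in the response above) before retrying. A minimal sketch in Python (the endpoint URL is a placeholder):

import time
import requests

URL = "https://example.com/api/check-alert"  # placeholder endpoint

for attempt in range(3):
    resp = requests.get(URL, timeout=30)
    if resp.status_code != 429:
        break  # not throttled; handle the response normally
    # The service says how long to back off; fall back to 60s if absent.
    wait = int(resp.headers.get("Retry-After", "60"))
    time.sleep(wait)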
I got the same error. It was appearing when testing manually.
When I changed the testing to "Automatic", the error changed and it was clear that the "Send an e-mail" step caused the issue.
It turned out that the second step needed to be changed to the Outlook "Send an email (V2)" step.
It was really confusing, as the MicrosoftFlowCheckAlertStatusEndpointThrottled error was irrelevant and was not the real issue!

FB API "100 continue" & "500 Internal Server Error" (error_subcode 99)

I have a peculiar issue with the Facebook API. I think it probably has to do with high volume, but that assumption has not brought me any closer to a solution. When posting messages to the Facebook API, I occasionally receive an error such as:
HTTP/1.1 100 Continue
Date: Sat, 17 Dec 2016 19:22:38 GMT
HTTP/1.1 500 Internal Server Error
Access-Control-Allow-Origin: *
Pragma: no-cache
Cache-Control: private, no-cache, no-store, must-revalidate
facebook-api-version: v2.3
Expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-trace-id: El4BfeJo4vI
x-fb-rev: 2746767
Content-Type: text/html
X-FB-Debug: F3xHF4IY15E3VK9M5acge9B6jBKOEqwP2Ob4F8WsoYRkGeAiY2PkzOjiiawhQ/Uq0TT/Xen+JLZtFXA9ZUsbRg==
Date: Sat, 17 Dec 2016 19:23:08 GMT
Connection: keep-alive
Content-Length: 77
{"error":{"code":1,"message":"An unknown error occurred","error_subcode":99}}
Usually a later retry will work for the same request, so the request itself would not appear to be the culprit. The issue, however, is that the original message still sometimes appears to go through. How should such responses be handled?
I read up on the Continue header, but I'm none the wiser now, especially since it comes with a non-descriptive 500 Internal Server Error.
You can probably safely ignore the 100 header; it has correctly been followed up, as you have another response (the 500).
You should never really get a 500 from any site: it means their code is broken. You should report it here: https://developers.facebook.com/bugs/
The FB reply:
"An unknown error occurred", "code": 1, "error_subcode": 99
The reason for the above error, according to Facebook:
This error code is an indication that your request timed out. It may be the case that the request is valid, however the maximum processing time for the API was exceeded. Recommendation: Wait a few minutes, and then try again. If the problem persists, please continue filing a bug report.
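Following that recommendation, a retry loop with a pause between attempts could look like the sketch below (Python requests against the Graph API; the object ID, token, and wait time are placeholder assumptions). Note that, as observed in the question, the original request may have gone through anyway, so retries can produce duplicates unless you deduplicate on your side.

import time
import requests

# Placeholders: a real object ID and access token are required.
URL = "https://graph.facebook.com/v2.3/PAGE_ID/feed"
PAYLOAD = {"message": "hello", "access_token": "TOKEN"}

for attempt in range(3):
    resp = requests.post(URL, data=PAYLOAD, timeout=120)
    if resp.status_code < 500:
        break  # success or a client error that a retry will not fix
    try:
        err = resp.json().get("error", {})
    except ValueError:
        err = {}  # body was not JSON
    if err.get("error_subcode") != 99:
        break  # a different server error; do not blindly retry
    time.sleep(180)  # "wait a few minutes, and then try again"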
It has been reported several times that an overly long request will result in this type of error subcode (99). Try to narrow your request parameters, since Facebook doesn't handle long requests well. (Although the 500 error looks new to me.)
You should use pagination, as described in this document:
https://m.facebook.com/groups/pmdcommunity/?view=permalink&id=1174638509255282

AWS API Gateway Method to Serve static content from S3 Bucket

I want to serve my Lambda microservices through API Gateway, which seems not to be a big problem.
Each of my microservices has a JSON-Schema specification of the resource it provides. Since it is a static file, I would like to serve it from an S3 bucket rather than also running a Lambda function to serve it.
So while
GET, POST, PUT, DELETE http://api.domain.com/ressources
should be forwarded to a Lambda function, I want
GET http://api.domain.com/ressources/schema
to serve my schema.json from S3.
My naive first approach was to set up the resource and methods for "/v1/contracts/schema - GET - Integration Request" and configure it to behave as an HTTP proxy with the endpoint URL pointing straight at the contract's JSON-Schema. I get a 500 Internal Server Error:
Execution log for request test-request
Fri Nov 27 09:24:02 UTC 2015 : Starting execution for request: test-invoke-request
Fri Nov 27 09:24:02 UTC 2015 : API Key: test-invoke-api-key
Fri Nov 27 09:24:02 UTC 2015 : Method request path: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request query string: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request headers: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request body before transformations: null
Fri Nov 27 09:24:02 UTC 2015 : Execution failed due to configuration error: Invalid endpoint address
Am I on a completely wrong path, or am I just missing some configuration?
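For reference, a sketch of how such an HTTP proxy integration can be wired up with boto3 (all IDs and the bucket URL are placeholders, not values from the original post):

import boto3

apigw = boto3.client("apigateway")

# Placeholders: the REST API id and the resource id of /ressources/schema.
apigw.put_integration(
    restApiId="abc123",
    resourceId="def456",
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    # Endpoint URL pointing straight at the schema object in S3.
    uri="https://my-bucket.s3.amazonaws.com/schema.json",
)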
Unfortunately, there is a limitation when using TestInvoke with API Gateway proxying to Amazon S3 (and some other AWS services) within the same region. This will not be the case once deployed, but if you want to test from the console you will need to use a bucket in a different region.
We are aware of the issue, but I can't commit to a date by which it will be resolved.
In one of my setups I put a CloudFront distribution in front of both an API Gateway and an S3 bucket, which are both configured as origins.
I did it mostly in order to be able to make use of an SSL certificate issued by AWS Certificate Manager, which can only be set on stand-alone CloudFront distributions, not on API Gateways.
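Sketched out, the routing looks roughly like this (path patterns are illustrative, not from the original post):
Default behavior -> API Gateway origin (e.g. xxxx.execute-api.us-east-1.amazonaws.com)
Path pattern /ressources/schema -> S3 origin serving schema.json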
I just had a similar error, but for a totally different reason: if the S3 bucket name contains a period (as in data.example.com or similar), the proxy request will bail out with an SSL certificate issue!