AWS API Gateway Method to Serve static content from S3 Bucket - amazon-web-services

I want to serve my Lambda microservices through API Gateway, which doesn't seem to be a big problem.
Each of my microservices has a JSON-Schema specification of the resource it provides. Since it is a static file, I would like to serve it from an S3 bucket
rather than also running a Lambda function to serve it.
So while
GET,POST,PUT,DELETE http://api.domain.com/ressources
should be forwarded to a Lambda function, I want
GET http://api.domain.com/ressources/schema
to serve my schema.json from S3.
My naive first approach was to set up the resource and methods for "/v1/contracts/schema - GET - Integration Request" and configure it as an HTTP proxy with the endpoint URL pointing straight to the contracts JSON-Schema. I get a 500 - Internal Server Error.
Execution log for request test-request
Fri Nov 27 09:24:02 UTC 2015 : Starting execution for request: test-invoke-request
Fri Nov 27 09:24:02 UTC 2015 : API Key: test-invoke-api-key
Fri Nov 27 09:24:02 UTC 2015 : Method request path: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request query string: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request headers: {}
Fri Nov 27 09:24:02 UTC 2015 : Method request body before transformations: null
Fri Nov 27 09:24:02 UTC 2015 : Execution failed due to configuration error: Invalid endpoint address
Am I on a completely wrong path, or am I just missing some configuration?

Unfortunately there is a limitation when using TestInvoke with API Gateway proxying to Amazon S3 (and some other AWS services) within the same region. This will not be the case once deployed, but if you want to test from the console you will need to use a bucket in a different region.
We are aware of the issue, but I can't commit to when this issue would be resolved.

In one of my setups I put a CloudFront distribution in front of both an API Gateway and an S3 bucket, which are both configured as origins.
I did it mostly in order to make use of an SSL certificate issued by AWS Certificate Manager, which can only be set on stand-alone CloudFront distributions, not on API Gateways.

I just had a similar error, but for a totally different reason: if the S3 bucket name contains a period (as in data.example.com or similar), the proxy request will bail out with an SSL certificate issue!
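A rough illustration of why: a TLS wildcard such as *.s3.amazonaws.com matches exactly one DNS label, so a bucket name containing dots falls outside the certificate. The matcher below is a simplification for illustration, not a full RFC 6125 implementation.

```python
# Why a dot in a bucket name breaks the S3 wildcard certificate:
# the wildcard in *.s3.amazonaws.com covers a single DNS label only.

def wildcard_covers(hostname: str, pattern: str = "*.s3.amazonaws.com") -> bool:
    suffix = pattern.lstrip("*")          # ".s3.amazonaws.com"
    if not hostname.endswith(suffix):
        return False
    label = hostname[: -len(suffix)]      # the part matched by the "*"
    return bool(label) and "." not in label   # must be exactly one label

print(wildcard_covers("mybucket.s3.amazonaws.com"))          # True
print(wildcard_covers("data.example.com.s3.amazonaws.com"))  # False
```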

Related

Download Limit of AWS API Gateway

We have a service which is used to download time series data from InfluxDB. We are not manipulating the Influx response; after updating some meta information, we push the records as-is.
So there is no Content-Length attached to the response.
We want to expose this service via Amazon API Gateway. Is it possible to integrate such a service with API Gateway, and mainly, is there any limit on response size? Our service does not wait for the whole query result before writing, but will API Gateway do the same, or will it wait for the whole body to be written to the output stream?
When I tried it, I observed a Content-Length header being added by API Gateway:
HTTP/1.1 200 OK
Date: Tue, 26 Apr 2022 06:03:31 GMT
Content-Type: application/json
Content-Length: 3024
Connection: close
x-amzn-RequestId: 41dfebb4-f63e-43bc-bed9-1bdac5759210
X-B3-SpanId: 8322f100475a424a
x-amzn-Remapped-Connection: keep-alive
x-amz-apigw-id: RLKwCFztliAFR2Q=
x-amzn-Remapped-Server: akka-http/10.1.8
X-B3-Sampled: 0
X-B3-ParentSpanId: 43e304282e2f64d1
X-B3-TraceId: d28a4653e7fca23d
x-amzn-Remapped-Date: Tue, 26 Apr 2022 06:03:31 GMT
Does this mean that API Gateway waits for the whole response/EOF from the integration?
If so, what is the maximum number of bytes the API Gateway buffer can hold?
Will API Gateway time out if the response from the integration is too large or does not end within the stipulated time?
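For context: API Gateway REST APIs document a 10 MB payload limit and a 29-second integration timeout, which suggests the gateway buffers responses rather than streaming them. One way to live within that, sketched below with made-up record shapes and a deliberately tiny byte budget, is to paginate on the backend instead of streaming:

```python
import json

# Sketch: split records into pages whose serialized JSON stays under a
# byte budget, so each page fits within API Gateway's payload limit.

MAX_PAYLOAD = 10 * 1024 * 1024  # documented REST API limit (10 MB)

def paginate(records, budget=MAX_PAYLOAD):
    pages, page, size = [], [], 2      # 2 bytes for the surrounding "[]"
    for rec in records:
        blob = json.dumps(rec)
        if page and size + len(blob) + 2 > budget:
            pages.append(page)
            page, size = [], 2
        page.append(rec)
        size += len(blob) + 2          # +2 for the ", " separator (overcounts safely)
    if page:
        pages.append(page)
    return pages

# Tiny budget to show the splitting; real callers would use MAX_PAYLOAD.
pages = paginate([{"t": i, "v": i * 0.5} for i in range(1000)], budget=2000)
assert all(len(json.dumps(p)) <= 2000 for p in pages)
```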

How can I use automation in AWS to replicate a github repo to an S3 bucket (quickstart-git2s3)?

I'd like to try and automate replicating a GitHub repo to an S3 bucket (for the sole reason that CloudFormation modules must reference templates in S3).
The quickstart I tried looked like it could do it, but it doesn't succeed for me, even though GitHub reports success in pushing via the webhook for my repository.
https://aws-quickstart.github.io/quickstart-git2s3/
I configured these parameters.
I am not sure what to configure for allowed IPs, so I tested fully open:
AllowedIps: 0.0.0.0/0
ApiSecret: ****
CustomDomainName: -
ExcludeGit: True
OutputBucketName: -
QSS3BucketName: aws-quickstart
QSS3BucketRegion: us-east-1
QSS3KeyPrefix: quickstart-git2s3/
ScmHostnameOverride: -
SubnetIds: subnet-124j124
VPCCidrRange: 172.31.0.0/16
VPCId: vpc-l1kj4lk2j1l2k4j
I tried manually executing the CodeBuild project as well, but got this error:
COMMAND_EXECUTION_ERROR: Error while executing command:
python3 - << "EOF"
from boto3 import client
import os
s3 = client('s3')
kms = client('kms')
enckey = s3.get_object(Bucket=os.getenv('KeyBucket'), Key=os.getenv('KeyObject'))['Body'].read()
privkey = kms.decrypt(CiphertextBlob=enckey)['Plaintext']
with open('enc_key.pem', 'w') as f:
    print(privkey.decode("utf-8"), file=f)
EOF
Reason: exit status 1
The github webhook page reports this response:
Headers
Content-Length: 0
Content-Type: application/json
Date: Thu, 24 Jun 2021 21:33:47 GMT
Via: 1.1 9b097dfab92228268a37145aac5629c1.cloudfront.net (CloudFront)
X-Amz-Apigw-Id: 1l4kkn14l14n=
X-Amz-Cf-Id: 1l43k135ln13lj1n3l1kn414==
X-Amz-Cf-Pop: IAD89-C1
X-Amzn-Requestid: 32kjh235-d470-1l412-bafa-l144l1
X-Amzn-Trace-Id: Root=1-60d4fa3b-73d7403073276ca306853b49;Sampled=0
X-Cache: Miss from cloudfront
Body
{}
From the following link:
https://aws-quickstart.github.io/quickstart-git2s3/
You can see the following excerpts I have included:
Allowed IP addresses (AllowedIps)
18.205.93.0/25,18.234.32.128/25,13.52.5.0/25
Comma-separated list of allowed IP CIDR blocks. The default addresses listed are BitBucket Cloud IP ranges.
As such, since you said you're using GitHub, I believe you should use this URL to determine the IP ranges:
https://api.github.com/meta
That API responds with JSON; look for the hooks attribute, which I believe lists the IP ranges GitHub uses to deliver webhooks.
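A minimal sketch of that lookup in Python. The live call is commented out since it needs network access; the sample dict only illustrates the shape of the response, and the CIDR values in it are placeholders, not authoritative GitHub ranges.

```python
import json
from urllib.request import urlopen

# Build the comma-separated CIDR list the AllowedIps parameter expects
# from the "hooks" attribute of GitHub's /meta API response.

def hook_cidrs(meta: dict) -> str:
    return ",".join(meta.get("hooks", []))

# Live call (requires network access):
# meta = json.load(urlopen("https://api.github.com/meta"))
# print(hook_cidrs(meta))

# Illustrative shape of the relevant part of the response:
sample = {"hooks": ["192.30.252.0/22", "185.199.108.0/22"]}
print(hook_cidrs(sample))  # 192.30.252.0/22,185.199.108.0/22
```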
Why don't you copy/checkout the file you want before you run your CloudFormation commands? No reason to get too fancy.
git checkout -- path/to/some/file
aws cloudformation ...
Otherwise, why not fork the repo and add your CloudFormation stuff, so it's all there? You could also delete everything you don't need and merge/pull changes in the future. That way your deploys will be reproducible, and you can easily roll back from one commit to another.

AWS API Gateway returns 200 even if Lambda returns error

I'm building an AWS API mail service that checks the body of the request before sending a mail with data from that body. It checks whether every required parameter is there and does some basic checks that the data conforms to what we require.
The problem is that even if the Lambda function throws an error (which I verified using the Lambda test interface), API Gateway returns a 200 response code with the error object as the body.
That means that I get a log like this:
Tue Apr 11 14:23:43 UTC 2017 : Method response body after transformations:
{"errorMessage":"\"[BadRequest] Missing email\""}
Tue Apr 11 14:23:43 UTC 2017 : Method response headers: {X-Amzn-Trace-Id=Root=************, Content-Type=application/json}
Tue Apr 11 14:23:43 UTC 2017 : Successfully completed execution
Tue Apr 11 14:23:43 UTC 2017 : Method completed with status: 200
Because of the last part, I believe API Gateway is returning 200.
I did a couple of things to set up error handling:
I added a second Method response for the error code:
I added a second Integration response for the error code:
At this point I'm not sure why it still fails to return a correct response. I checked various posts about this (including: Is there a way to change the http status codes returned by Amazon API Gateway? and How to return error collection/object from AWS Lambda function and map to AWS API Gateway response code) and read the documentation.
I also tried the 'Lambda proxy' way, but that didn't yield any results (the Lambda didn't perform correctly at all that way).
Does anyone see what I am missing here?
I see a couple of things that could be causing problems.
Your error message is quoted: "\"[BadRequest] Missing email\"", so the ^[BadRequest] regex won't match the error string. In a simple test I ran, I had to escape the square brackets (i.e. \[ and \]) since they are reserved for character classes.
Without changing your errorMessage formatting, a pattern like this should work:
^"\[BadRequest\].*
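You can sanity-check the pattern locally; for a simple pattern like this, Python's re behaves the same as the Java regex engine API Gateway uses:

```python
import re

# The errorMessage as it appears in the execution log includes the
# surrounding quotes, so the pattern must account for the leading quote.
pattern = r'^"\[BadRequest\].*'
error_message = '"[BadRequest] Missing email"'

print(bool(re.match(pattern, error_message)))               # True
print(bool(re.match(r'^\[BadRequest\].*', error_message)))  # False: misses the quote
```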

How to specify method request path variables with aws apigateway CLI

I have a route defined in AWS API Gateway that uses a path variable, to be accessed like so:
/route/{variable}
It is all configured properly and working as I expect, except that I cannot find how to test this route via the CLI. When I use the AWS Console's "TEST" function on that method, it prompts me to enter the desired value for my variable. I do this, and it works as expected, with the following appearing in the execution logs:
Thu Jan 07 16:30:06 UTC 2016 : Method request path: {variable=my specified value}
Thu Jan 07 16:30:06 UTC 2016 : Method request headers: {}
However when I execute it using the CLI with this command:
$ aws apigateway test-invoke-method --rest-api-id {rest-api-id} --resource-id {resource-id} --http-method GET --path-with-query-string 'variable=my specified value'
I get a 500 ISE response, with the following in the logs:
Thu Jan 07 16:38:20 UTC 2016 : Method request path: {}
Thu Jan 07 16:38:20 UTC 2016 : Method request headers: {}
Thu Jan 07 16:38:20 UTC 2016 : Execution failed due to configuration error: Unable to transform input
I've tried many variations on this theme, including using a JSON-encoded string for the --path-with-query-string value (e.g. {"variable":"my specified value"}), using a raw path (e.g. /route/my%20specified%20value), and several others.
I've also tried specifying this value using the --stage-variables switch, and making the --path-with-query-string value blank. This yields the same result.
I've been able to get my call to work by specifying --headers '{"variable":"my specified value"}' but this doesn't seem correct as it circumvents the path variable, so it isn't a completely valid test. Is there a way to specify Method request path variables using the CLI? Thanks so much for your help.
You will want to issue the command using the full path, i.e.:
$ aws apigateway test-invoke-method --rest-api-id abc123 --resource-id xyz987 --http-method GET --path-with-query-string '/route/123'
The error message above indicates an error with the request body transformation. You may need to specify a --body parameter, and/or specify accept/content-type headers via the --headers parameter.
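Since the variable's value here contains spaces, URL-encoding it when building the full path may also help. A small Python sketch; the IDs in the commented boto3 call are placeholders:

```python
from urllib.parse import quote

# Build the full resource path with the path variable's value URL-encoded,
# as expected by --path-with-query-string.
variable = "my specified value"
path = "/route/" + quote(variable, safe="")
print(path)  # /route/my%20specified%20value

# boto3 equivalent of the CLI call (requires AWS credentials; IDs are placeholders):
# import boto3
# boto3.client("apigateway").test_invoke_method(
#     restApiId="abc123", resourceId="xyz987",
#     httpMethod="GET", pathWithQueryString=path)
```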
Let me know if this helps.
Thanks,
Ryan

SignatureDoesNotMatch error while amazon web service SES through HTTP

I am stuck at a SignatureDoesNotMatch error while using AWS SES. I am creating the signature by signing the GMT date with my secret key using HMAC-SHA256 and then Base64-encoding the result.
Signature = base64(HMAC SHA256(Date,Security KEY));
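The formula above can be sketched in Python as follows. The key and date are placeholders, and the assertion only checks the shape of the result, not its correctness against AWS; note the signed string must be byte-for-byte identical to the x-amz-date header you send.

```python
import base64
import hashlib
import hmac

# Legacy AWS3-HTTPS signature for SES: HMAC-SHA256 over the exact Date
# header value, keyed with the secret key, then Base64-encoded.
def ses_signature(secret_key: str, date_header: str) -> str:
    digest = hmac.new(secret_key.encode("utf-8"),
                      date_header.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

sig = ses_signature("EXAMPLE-SECRET-KEY", "Thu, 30 Jul 2015 18:15:51 +0000")
# A well-formed signature decodes to the 32-byte HMAC-SHA256 digest:
assert len(base64.b64decode(sig)) == 32
```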
Url: https://email.us-west-2.amazonaws.com?Action=SendEmail&Source=exmaple%40gmail.com&Destination.ToAddresses.member.1=person2%40gmail.com&Message.Subject.Data=Hey&Message.Body.Text.Data=Hello
And input headers: x-amz-date: Thu, 30 Jul 2015 18:15:51 +0000
X-Amzn-Authorization: AWS3-HTTPS AWSAccessKeyId=AccessKEY,Algorithm=HmacSHA256,Signature=<signature calculated from the date and secret key>
Please tell me if I am calculating the signature in the wrong way, or if anything else is the problem.
I ran into a similar problem with a different service the other day, and the solution was that my parameters were not alphabetically ordered. Try switching the "Message.Subject.Data" parameter with "Message.Body.Text.Data", as the latter should appear earlier lexicographically. This should fix your problem.
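A sketch of building the query string with the parameters sorted lexicographically, using the values from the question's URL:

```python
from urllib.parse import quote

# Canonicalize the SES query parameters: sort by key, percent-encode
# keys and values, then join. "Message.Body.Text.Data" now precedes
# "Message.Subject.Data".
params = {
    "Action": "SendEmail",
    "Source": "exmaple@gmail.com",
    "Destination.ToAddresses.member.1": "person2@gmail.com",
    "Message.Subject.Data": "Hey",
    "Message.Body.Text.Data": "Hello",
}
canonical = "&".join(
    f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
)
print(canonical)
```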