I am trying to retrieve the request body from an API Gateway proxy request. When I pass a body, I get what looks like a random string. The request works fine in the API Gateway Tests console but not in the actual API.
The request I received was:
{
"path": "/movie",
"headers": {
"sec-fetch-mode": "cors",
"sec-fetch-site": "none",
"accept-language": "en-US,en;q=0.9",
"postman-token": "e9f9216f-850d-1037-a2c9-d6a554f55813",
"origin": "chrome-extension://fhbjgbiflinjbdggehcddcbncdddomop",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36",
"X-Forwarded-Proto": "https",
"Host": "8cfsbr5d62.execute-api.us-east-1.amazonaws.com",
"X-Forwarded-Port": "443",
"X-Amzn-Trace-Id": "Root=1-5ed9e7b8-94f205f0fed74580d6bb5bf0",
"accept": "*/*",
"X-Forwarded-For": "49.206.4.254",
"content-type": "application/json",
"cache-control": "no-cache",
"accept-encoding": "gzip, deflate, br",
"sec-fetch-dest": "empty"
},
"resource": "/movie",
"queryStringParameters": {
"movie": "ddk"
},
"httpMethod": "POST",
"body": "ewoJIm1vdmllIjoiZ3BwIgp9"
}
It is base64 encoded:
base64 -d <<< ewoJIm1vdmllIjoiZ3BwIgp9
{
"movie":"gpp"
}
Thus you have to decode it in your Lambda.
You can find more info about API Gateway base64 encoding/decoding here:
Content type conversions in API Gateway
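As an illustration, here is a minimal handler sketch in Python (the handler name and the response shape are assumptions, not code from the question):
import base64
import json

def lambda_handler(event, context):
    # With Lambda proxy integration, API Gateway sets isBase64Encoded to True
    # when it has base64 encoded the body (e.g. because of Binary Media Types).
    body = event.get("body") or ""
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body).decode("utf-8")

    payload = json.loads(body)  # for the request above: {"movie": "gpp"}
    return {
        "statusCode": 200,
        "body": json.dumps({"movie": payload.get("movie")}),
    }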
The problem was that I had the Binary Media Types configuration set to '*/*', because one of the APIs had an image payload. That configuration affected JSON payloads as well, and API Gateway started base64 encoding every request body. In my case the string was not actually random; it was a Base64-encoded string.
Two options:
1) If you want to keep the generic Binary Media Type, decode the Base64 string in the Lambda.
2) Keep a specific Binary Media Type in the API Gateway settings, e.g. image/*.
I accidentally deployed an API Gateway with:
BinaryMediaTypes:
- '*~1*'
...and noticed the request body was base64 encoded ('*~1*' is the escaped form of */*). It was unnecessary for the API I was working with, so I removed it.
However, the setting stuck: POST request bodies were still base64 encoded even after I removed BinaryMediaTypes in the AWS console.
I had to remove and re-deploy the whole stack to get rid of it.
I'm dealing with Google Enterprise reCAPTCHA v3 in invisible mode inside an iframe whose src is "about:blank".
The iframe is created empty, then the content is injected through our JavaScript library.
It works perfectly outside the iframe, but inside we get a very low score when sending the token to the API; the response is this:
{
"name": "[hidden]",
"event": {
"token": "[hidden]"
"siteKey": "[hidden]",
"userAgent": "Mozilla/5.0 ([hidden]) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.77 Safari/537.36",
"userIpAddress": "[hidden]",
"expectedAction": "[hidden]"
},
"score": 0,
"tokenProperties": {
"valid": false,
"invalidReason": "BROWSER_ERROR",
"hostname": "",
"action": ""
},
"reasons": []
}
The API docs explain that BROWSER_ERROR is related to a network failure or an attacker, but that's not our case.
Does anyone know what might be happening? Is this a bug, or am I doing something wrong?
Just learning my way through AWS - I have an API Gateway REST API set up with Lambda proxy integration. The API has a model defined and request validation set up on the body using this model.
Say the model is:
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"propertyA": {
"type": "string"
},
"propertyB": {
"type": "string"
},
"propertyC": {
"type": "string"
},
"propertyD": {
"type": "string"
}
},
"required": ["propertyA", "propertyB", "propertyC", "propertyD"]
}
Now, if I test the API via the API Gateway console and purposely give it invalid input (omitting the required property propertyD):
{
"propertyA": "valueA",
"propertyB": "valueB",
"propertyC": "valueC"
}
the request fails with a 400 error: Sun Jul 11 13:07:07 UTC 2021 : Request body does not match model schema for content type application/json: [object has missing required properties (["propertyD"])]
But when I invoke the same API (and stage) with the same invalid input from Postman, the validation does not seem to happen, and the request is proxied to Lambda, which even returns a 200 OK as long as I comment out the parts of the code that depend on propertyD.
What's the difference here? Should I be passing a particular request header from the client side? I couldn't find anything in the AWS documentation.
Answering my own question:
The issue was with the headers used in the request. Postman sent the JSON body with a Content-Type of text/plain by default; I had to switch the Body tab to JSON using the dropdown so that Postman set the Content-Type to application/json.
Following this post also seems to have fixed the problem: https://itnext.io/how-to-validate-http-requests-before-they-reach-lambda-2fff68bfe93b, although it doesn't explain why.
Apparently the magic lies in adding Content-Type under the HTTP Request Headers section of the method request, even though the header is already set correctly to application/json in Postman.
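To see the difference outside Postman, here is a minimal sketch with the Python requests library (the endpoint URL and payload are placeholders, not the real API). API Gateway only validates content types that have a model mapped, which matches the behavior described above:
import requests

url = "https://example.execute-api.us-east-1.amazonaws.com/dev/resource"  # placeholder
payload = '{"propertyA": "valueA", "propertyB": "valueB", "propertyC": "valueC"}'

# Sent as text/plain: no model is mapped to this content type, so the body
# is not validated and the request is proxied straight to Lambda.
r1 = requests.post(url, data=payload, headers={"Content-Type": "text/plain"})

# Sent as application/json: the body is validated against the model and
# API Gateway rejects it with a 400 before it reaches Lambda.
r2 = requests.post(url, data=payload, headers={"Content-Type": "application/json"})

print(r1.status_code, r2.status_code)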
I have set up an API Gateway with a JWT authorizer (the built-in one), but I cannot get it to accept tokens generated by Twitch. These are my JWT authorizer settings in AWS: https://i.stack.imgur.com/WR6Vi.png
I'm a bit confused about what 'audience' means, but I figured it has to be my Twitch extension secret, since that's what the token is signed with in the first place.
I tried verifying the token on https://jwt.io/ against the secret, and it says the token is valid after ticking the 'secret base64 encoded' box.
The problem is that every time I try to pass it in the header to the API, I get error="invalid_token" error_description="signing method HS256 is invalid".
This is the payload AWS receives:
{
version: '2.0',
routeKey: '$default',
rawPath: '/',
rawQueryString: '',
headers: {
accept: '*/*',
'accept-encoding': 'deflate, gzip',
'authorization': 'Bearer <MYTOKEN>',
'content-length': '0',
host: '<SOMETHING>.us-west-2.amazonaws.com',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36',
'x-amzn-trace-id': '<SOME ID>',
'x-forwarded-for': '<SOME IP>',
'x-forwarded-port': '443',
'x-forwarded-proto': 'https',
'x-real-ip': '<SOME IP>'
},
requestContext: {
accountId: '<ID>',
apiId: '<APP ID>',
domainName: '<SOMETHING>.us-west-2.amazonaws.com',
domainPrefix: '<SOMETHING>',
http: {
method: 'GET',
path: '/',
protocol: 'HTTP/1.1',
sourceIp: '<SOME IP>',
userAgent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36'
},
requestId: '<SOME ID>',
routeKey: '$default',
stage: '$default',
time: '26/Feb/2021:17:48:04 +0000',
timeEpoch: 1614361684261
},
isBase64Encoded: false
}
As you can see, it receives the header and token just fine.
One thing I noticed is that when I decode the token, there is no issuer. How does AWS know that Twitch is the issuer?
"alg": "HS256",
"typ": "JWT"
}
{
"exp": 1614341073,
"opaque_user_id": "U<SOME ID>",
"user_id": "<SOME ID>",
"channel_id": "<SOME ID>",
"role": "broadcaster",
"is_unlinked": false,
"pubsub_perms": {
"listen": [
"broadcast",
"whisper-<SOME ID>",
"global"
],
"send": [
"broadcast",
"whisper-*"
]
}
}
As per the exception error="invalid_token" error_description="signing method HS256 is invalid", it is clear that either the AWS service does not support the HS256 algorithm, or you have to change the configuration to tell the AWS service which algorithm it should use to validate the token.
Two ways to proceed:
Inform the AWS service of the algorithm used when the token was created, so that the AWS auth service uses the same one to verify/validate the token.
Change the algorithm on the token issuer's side, if the service allows you to do so.
A token issuer usually uses one of the following algorithms when creating a JWT:
HS256 HS384 HS512 RS256 RS384 RS512 ES256 ES384 ES512 PS256 PS384 PS512 EdDSA
Audience Claim in Token
aud (audience): Recipient for which the JWT is intended.
How does AWS know that Twitch is the issuer?
It comes from the JWT authorizer settings in AWS that you've already mentioned.
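If the built-in authorizer cannot be made to accept the algorithm, one alternative (not what the built-in JWT authorizer does, and not from the question) is a custom Lambda authorizer that verifies the HS256 token itself. A rough sketch with PyJWT, assuming the Twitch extension secret is provided via an environment variable; Twitch extension secrets are distributed base64 encoded, which is why the 'secret base64 encoded' box had to be ticked on jwt.io:
import base64
import os

import jwt  # PyJWT

def verify_twitch_token(token):
    # TWITCH_EXTENSION_SECRET is an assumed environment variable holding the
    # base64 encoded extension secret; decode it before verifying the signature.
    secret = base64.b64decode(os.environ["TWITCH_EXTENSION_SECRET"])
    # Raises jwt.InvalidTokenError (e.g. ExpiredSignatureError) if verification fails.
    return jwt.decode(token, secret, algorithms=["HS256"])

# claims = verify_twitch_token(bearer_token)
# claims["channel_id"], claims["role"], ...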
I need to build a function that takes 5 particular HTTP headers plus the request params, then aggregates, orders, encodes, and hashes them in order to validate/authenticate the overall request. However, I am unable to get the Content-Length header to come through to the Lambda.
I used Terraform to create the API Gateway (aws_api_gateway_domain_name) and then Serverless to create the endpoints:
functions:
  alerts:
    handler: src/event.handler
    role: arn:aws:iam::${env:AWS_ACCOUNT_ID}:role/alerts_lambda
    environment:
      API_TRANS_KEY: ${env:API_TRANS_KEY}
      REGION: ${self:custom.region}
      SNS_ARN: arn:aws:sns:us-east-1:${env:AWS_ACCOUNT_ID}:Transactions
      STAGE: ${self:custom.deploymentStage}
    events:
      - http:
          path: /alerts/AccountEvent
          method: post
          cors: true
          integration: lambda-proxy
However, the headers I get are:
"headers": {
"Accept": "*/*",
"accept-encoding": "gzip, deflate",
"Cache-Control": "no-cache",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-Country": "US",
"Content-Type": "application/x-www-form-urlencoded",
"Date": "20170504:141752UTC",
"Encryption-Type": "HMAC-SHA256",
"Host": "events.dev.myapi.com",
"Postman-Token": "84bd0cc3-f339-4b2a-8017-31ec9174c37e",
"User-Agent": "PostmanRuntime/7.11.0",
"User-ID": "galileo",
"Via": "1.1 50c3c79d5d7adbc8948ea11709b61d17.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "1OE1aGP_3Q-CkXFuJbRwvkGAR2ZaHAPuozckZ6747EP64zZcmXjphw==",
"X-Amzn-Trace-Id": "Root=1-5d0bf01b-8afdb9628f42a9357dbb5c68",
"X-Forwarded-For": "73.72.58.46, 70.132.57.87",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
Do I need to use a mapping template at this point? Is CloudFront/API Gateway stripping this header out for some reason? (Note: I don't set up a CloudFront distribution myself, but API Gateway creates one because of the 'edge' endpoint type; I could change it to 'regional' if that would solve this.)
Based on my tests, the Content-Length header can be passed through when making an API request. However, it won't appear in the event payload for a Lambda proxy integration.
Two alternatives that you could use to receive the Content-Length header:
Use an HTTP API with Lambda proxy integration instead of a REST API in API Gateway.
Use a Lambda non-proxy Integration for the REST API and configure it as follows:
In the Mapping Templates for Integration Request:
Set application/json as the Content-Type
Change the default 'Method Passthrough Template' by adding the statement "X-Content-Length" : $input.body.length()
In the Lambda function, event['X-Content-Length'] will then contain the Content-Length.
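For the non-proxy option, a minimal sketch of the handler side (assuming the X-Content-Length key injected by the mapping template described above):
def lambda_handler(event, context):
    # With a non-proxy integration, the event is whatever the mapping template
    # produced, so the injected key can be read directly.
    content_length = event.get("X-Content-Length", 0)
    return {"receivedContentLength": content_length}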
In the Integration Request stage of the API Gateway for a given endpoint, you can map header values to the body. You could also use the Integration Request step to move the lost headers into the body first. You may also be able to directly re-map it to the headers passed to the Lambda function as well.
More information on header/body mapping can be found here:
https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
In Django projects deployed on Heroku, I used to upload files to Google Cloud Storage via boto. However, recently I have had to upload large files, which causes Heroku to time out.
I am following Heroku's documentation about direct file upload to S3 and customizing it as follows:
Python:
conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
gs_secret_access_key=GS_SECRET_KEY)
presignedUrl = conn.generate_url(expires_in=3600, method='PUT', bucket=<bucketName>, key=<fileName>, force_http=True)
JS:
url = 'https://<bucketName>.storage.googleapis.com/<fileName>?Signature=...&Expires=1471451569&GoogleAccessId=...'; // "presignUrl"
postData = new FormData();
postData.append(...);
...
$.ajax({
url: url,
type: 'PUT',
data: postData,
processData: false,
contentType: false,
});
I got the following error message:
XMLHttpRequest cannot load http:/... Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8000' is therefore not allowed access.
EDIT:
The output of gsutil cors get gs://<bucketName>:
[{"maxAgeSeconds": 3600, "method": ["GET", "POST", "HEAD", "DELETE", "PUT"], "origin": ["*"], "responseHeader": ["Content-Type"]}]
It seems the CORS configuration is OK. So how do I solve the problem? Thanks.
EDIT 2:
The headers of the OPTIONS request from Firefox:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-TW,zh;q=0.8,en-US;q=0.5,en;q=0.3
Access-Control-Request-Method: PUT
Connection: keep-alive
Host: <bucketName>.storage.googleapis.com
Origin: http://localhost:8000
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:48.0) Gecko/20100101 Firefox/48.0
The headers of the OPTIONS request from Chrome:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:zh-TW,zh;q=0.8,en;q=0.6,en-US;q=0.4,zh-CN;q=0.2
Access-Control-Request-Headers:
Access-Control-Request-Method:PUT
Connection:keep-alive
Host:directupload.storage.googleapis.com
Origin:http://localhost:8000
Referer:http://localhost:8000/
User-Agent:Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36
X-Client-Data:CIe2yQEIprbJAQjznMoB
The header issue is not coming from your app; I think it's coming from the Cloud Storage bucket. I had the same issue when setting up an API: the resource you are posting to is missing the header.
https://cloud.google.com/storage/docs/cross-origin
While useful for preventing malicious behavior, this security measure also prevents useful and legitimate interactions between known origins. For example, a script on a page hosted from Google App Engine at example.appspot.com might want to use static resources stored in a Cloud Storage bucket at example.storage.googleapis.com. However, because these are two different origins from the perspective of the browser, the browser won't allow a script from example.appspot.com to fetch resources from example.storage.googleapis.com using XMLHttpRequest because the resource being fetched is from a different origin.
So it looks like you need to configure the bucket to allow CORS requests. The Google documentation shows the following command, run with the gsutil CLI:
https://cloud.google.com/storage/docs/cross-origin#Configuring-CORS-on-a-Bucket
gsutil cors set cors-json-file.json gs://example
where cors-json-file.json contains something like:
[
{
"origin": ["http://mysite.heroku.com"],
"responseHeader": ["Content-Type"],
"method": ["GET", "HEAD", "DELETE", "PUT"],
"maxAgeSeconds": 3600
}
]
This would allow you to get, upload, and delete content. Hope that helps.
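If you'd rather set this from code than from gsutil, a rough equivalent with the google-cloud-storage Python client (the bucket name 'example' mirrors the command above and is a placeholder) would be:
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example")  # placeholder bucket name
bucket.cors = [
    {
        "origin": ["http://mysite.heroku.com"],
        "responseHeader": ["Content-Type"],
        "method": ["GET", "HEAD", "DELETE", "PUT"],
        "maxAgeSeconds": 3600,
    }
]
bucket.patch()  # persist the new CORS configuration on the bucket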
Based on the information in EDIT 2, something is wrong with the request. The preflight (OPTIONS) request includes the header ACCESS-CONTROL-REQUEST-HEADER. This is not a valid CORS header; the correct header is ACCESS-CONTROL-REQUEST-HEADERS, note the 'S' at the end.
Even if the header were correct, it should not be requesting authorization for an access-control-allow-origin header. ACCESS-CONTROL-ALLOW-ORIGIN is not a header that is sent from the client. It is a header that the server automatically sends in its response to the client when it receives a preflight request. The client/browser will not allow a cross-origin PUT request unless the preflight response from the cross-origin server includes an ACCESS-CONTROL-ALLOW-ORIGIN header authorizing the browser document's current origin.
The presence of the bad header appears to correlate well with the error response you are receiving. However, that header was probably not in your original code; it looks like you added it later (based on your comments). Make sure to take that header config out; it is definitely not correct.
So I am a little confused about where that header is coming from, but I think it is the source of your problem.
It looks like you are using jQuery to make the AJAX PUT request. All I can really suggest is to make sure you haven't called $.ajaxSetup() somewhere in your JS code that might be configuring the bad header.
After many trials and errors, I came up with the following. It works; however, sometimes some of the uploaded images are not visible, while other times they are fine. I have no idea why this happens.
I'd welcome more ideas on why the file uploads succeed but some of the images end up corrupted.
gsutil commands:
gsutil cors set cors.json gs://<bucketName>
gsutil defacl ch -u allUsers:R gs://<bucketName>
Content of cors.json file:
[
{
"origin": ["*"],
"responseHeader": ["Content-Type"],
"method": ["GET", "POST", "HEAD", "DELETE", "PUT"],
"maxAgeSeconds": 3600
}
]
HTML:
<p id=status>Choose your avatar:</p>
<input id=fileInput type=file>
JavaScript:
$(document).on('change', '#fileInput', function() {
var $this = $(this);
var file = $this[0].files[0];
$.ajax({
url: 'upload/sign/?fileName=' + file.name + '&contentType=' + file.type,
type: 'GET'
})
.done(function(data) {
var response = JSON.parse(data);
uploadFile(file, response.presignedUrl, response.url, response.contentType)
})
.fail(function() {
alert('Unable to obtain a signed URL.');
});
});
function uploadFile(file, presignedUrl, url, contentType) {
var postData = new FormData();
postData.append('file', file);
$.ajax({
url: presignedUrl,
type: 'PUT',
data: postData,
headers: {
'Content-Type': contentType,
},
processData: false,
contentType: false
})
.done(function() {
alert('File upload successful');
})
.fail(function() {
alert('Unable to upload the file.');
});
}
Django:
Project's urls.py:
urlpatterns = [
...
url(r'upload/', include('upload.urls', namespace='upload')),
]
App's urls.py:
urlpatterns = [
url(r'^$', views.upload, name='upload'),
url(r'^sign/', views.sign, name='sign'),
]
views.py:
def upload(request):
# ... render the template
def sign(request):
fileName = request.GET.get('fileName')
contentType = request.GET.get('contentType')
conn = boto.connect_gs(gs_access_key_id=GS_ACCESS_KEY,
gs_secret_access_key=GS_SECRET_KEY)
presignedUrl = conn.generate_url(3600, 'PUT', GS_BUCKET_NAME, fileName, headers={'Content-Type':contentType})
return HttpResponse(
json.dumps({
'presignedUrl': presignedUrl,
'url': GS_URL + fileName,
'contentType': contentType
})
)
In my experience, I would note that it is not possible to get around Heroku's 30-second timeout without using the JavaScript AWS SDK. Don't use the Python AWS SDK (boto); you have to leave the back-end out of this completely. Now, as for your access origin error, the solution is your CORS configuration. You should put this in your CORS policy:
[
{
"AllowedHeaders": [
""
],
"AllowedMethods": [
"GET",
"PUT",
"POST",
"DELETE",
"HEAD"
],
"AllowedOrigins": [
""
],
"ExposeHeaders": [
"ETag"
]
}
]
Next, for the JavaScript AWS SDK, follow my answer here: Upload file to s3 in front-end with JavaScript AWS SDK on django
There's a lot missing from the answer I linked, as I had to come up with a custom solution because the JavaScript AWS SDK upload can also run past Heroku's 30-second timeout. What I did was upload the video via the JavaScript SDK, then pass the video's AWS URL to another view in a two-step Django form. By switching Django views I reset Heroku's 30-second timeout with the video already in my S3 bucket, and passed the fileKey to my URL with the redirect. In the second part of the form I gather the other information for my Django object and then submit it. It was hard going through all the documentation on direct upload to S3. If anyone is reading this and needs help, please comment for more. I'm on my phone now, but I'll gladly respond from my desktop to post code snippets ✌🏾