Getting 403 Forbidden when trying to upload a file to AWS S3 with a presigned POST using Boto3 (Django + JavaScript)

I've researched other threads here on SO and on other forums, but still can't overcome this issue. I'm generating a presigned POST to S3 and trying to upload a file to it, but I get a 403 Forbidden response.
Permissions
The IAM user whose credentials Boto3 loads has permissions to list, read, and write to S3.
CORS
The bucket's CORS configuration allows all origins and all headers:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD",
            "POST",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
The code
The code uses Python (Django) on the server and JavaScript in the browser. This is the logic:
First, the file is read from the HTML file input and passed to a function that requests the signed URL.
(function () {
    document.getElementById("file-input").onchange = function () {
        let files = document.getElementById("file-input").files;
        let file = files[0];
        if (!file) {
            return alert("No file selected");
        }
        // Rename the file to a UUID-based name. Note the property-descriptor
        // key is "writable", not "writeable".
        Object.defineProperty(file, "name", {
            writable: true,
            value: `${uuidv4()}.pdf`
        });
        getSignedRequest(file);
    };
})();
Then a GET request is sent to retrieve the signed URL, using a Django view (shown in the next section).
function getSignedRequest(file) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/sign_s3?file_name=" + file.name + "&file_type=" + file.type);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                let response = JSON.parse(xhr.responseText);
                uploadFile(file, response.data, response.url);
            }
            else {
                alert("Could not get signed URL");
            }
        }
    };
    xhr.send();
}
The Django view generating the signed URL
import os

import boto3
from django.http import JsonResponse

def Sign_s3(request):
    S3_BUCKET = os.environ.get("BUCKET_NAME")
    if request.method == "GET":
        file_name = request.GET.get('file_name')
        file_type = request.GET.get('file_type')
        s3 = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4'))
        presigned_post = s3.generate_presigned_post(
            Bucket=S3_BUCKET,
            Key=file_name,
            Fields={"acl": "public-read", "Content-Type": file_type},
            Conditions=[
                {"acl": "public-read"},
                {"Content-Type": file_type}
            ],
            ExpiresIn=3600
        )
        return JsonResponse({
            "data": presigned_post,
            "url": "https://%s.s3.amazonaws.com/%s" % (S3_BUCKET, file_name)
        })
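For reference (this matters for the answer further down), generate_presigned_post returns a plain dict containing the POST target URL and the form fields the client must submit. The values here are illustrative placeholders, not real output:
data = {
    "url": "https://YOUR_BUCKET.s3.amazonaws.com/",  # illustrative only
    "fields": {
        "acl": "public-read",
        "Content-Type": "application/pdf",
        "key": "f6b9e2a0-1234-5678-9abc-def012345678.pdf",
        "policy": "eyJjb25kaXRpb25z...",  # base64-encoded policy document
        "x-amz-algorithm": "AWS4-HMAC-SHA256",
        "x-amz-credential": "AKIA.../20210101/eu-west-1/s3/aws4_request",
        "x-amz-date": "20210101T000000Z",
        "x-amz-signature": "abc123...",
    },
}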
Finally, the file should be uploaded to the bucket (this is where I'm getting the 403 error):
function uploadFile(file, s3Data, url) {
    let xhr = new XMLHttpRequest();
    xhr.open("POST", s3Data.url);
    let postData = new FormData();
    for (let key in s3Data.fields) {
        postData.append(key, s3Data.fields[key]);
    }
    postData.append("file", file);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 || xhr.status === 204) {
                document.getElementById("cv-url").value = url;
            }
            else {
                alert("Could not upload file");
            }
        }
    };
    xhr.send(postData);
}
The network request
This is how the network request looks in the browser (screenshot omitted).

jellycsc helped me: I had to turn off the bucket's BlockPublicAcls setting for the upload to work, since the presigned POST sets acl: public-read and S3 rejects public ACLs with a 403 while that setting is enabled.
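For anyone hitting the same wall, this is roughly how that setting can be relaxed with Boto3. A minimal sketch, assuming a placeholder bucket name and credentials allowed to call s3:PutBucketPublicAccessBlock; only the ACL-related flags are opened up:
import boto3

s3 = boto3.client("s3")

# Sketch: allow public ACLs (needed because the presigned POST sets
# "acl": "public-read") while keeping the policy-related protections on.
# "YOUR_BUCKET_NAME" is a placeholder.
s3.put_public_access_block(
    Bucket="YOUR_BUCKET_NAME",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)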

The URL you use for the upload must be the one returned in the presigned response; don't just upload to whatever URL you want.
Update your response to be:
return JsonResponse({
    "data": presigned_post,
    "url": presigned_post["url"]
})
Specifically, the URL you are using looks like:
https://BUCKET_NAME.s3.amazonaws.com/KEY_PATH
when it should look like:
https://s3.REGION.amazonaws.com/BUCKET_NAME
However, looking at your code, that is what it should already be doing, yet your screenshot from the inspector says otherwise. Why does the URL in the network request NOT match the URL returned by the generate_presigned_post call?

Related

How do you determine if an upload completed successfully with an S3 presigned URL?

Using S3, I can generate a pre-signed URL to upload a file, but I can't find a way to determine from another resource whether:
the upload completed successfully or not;
another upload is in progress.
I am asking in the context of Expo's FileSystem.uploadAsync, which according to the documentation uploads in the background.
Here's the snippet of my upload code.
const startResult = await this.webClient.post("/v3/upload/start", {
    category: meta.category,
    contentType: meta.contentType,
    contentMd5Hash: fileInfo.md5,
    contentLength: fileInfo.size,
});
if (startResult.status !== 200) {
    throw new Error(
        `start result got ${startResult.status} expecting 200. ${startResult.data}`
    );
}
const {
    presignedUploadUrl,
    contentMd5,
    completionToken,
} = startResult.data;
const result = await FileSystem.uploadAsync(
    presignedUploadUrl,
    fileInfo.uri,
    {
        headers: {
            "Content-MD5": contentMd5 as string,
            "Content-Type": meta.contentType,
        },
        httpMethod: "PUT",
    }
);
// I need to do this to indicate that the upload was completed.
// All this tells the backend is that it is done; there's no other
// proof that it was done.
const completeResult = await this.webClient.post("/v3/upload/complete", {
    completionToken,
    presignedUploadUrl,
});
const { artifactId } = completeResult.data;
console.log(artifactId);
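One way to get actual proof on the server side is for the /v3/upload/complete handler to inspect the uploaded object itself rather than trust the client. A rough Boto3 sketch of that check; the bucket, key, and expected size are assumed to come from the record created at /v3/upload/start:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def upload_is_complete(bucket: str, key: str, expected_size: int) -> bool:
    """Sketch: verify a presigned upload server-side via HeadObject."""
    try:
        head = s3.head_object(Bucket=bucket, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            return False  # object not there yet: upload pending or failed
        raise
    # For single-part PUT uploads the ETag is normally the hex MD5 of the
    # body, so it could also be compared against the hash captured at
    # /v3/upload/start (not valid for multipart or KMS-encrypted objects).
    return head["ContentLength"] == expected_size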

AWS CloudFront for S3-backed website + REST API (Error: MethodNotAllowed / The specified method is not allowed against this resource)

I have an AWS S3-backed static website and a REST API, and I am configuring a single CloudFront distribution for both, with origin configs for the S3 origin and the REST API origin. I am using the AWS CDK to define the infrastructure in code.
The approach is adopted from this article: https://dev.to/evnz/single-cloudfront-distribution-for-s3-web-app-and-api-gateway-15c3
The APIs are defined under the relative paths /r/<resourcename> or /r/api/<methodname>: for example, /r/Account refers to the Account resource and /r/api/Validate to an RPC-style method called Validate (in this case an HTTP POST method). The Lambda functions that implement the resource methods handle the preflight OPTIONS request, with the static website's URL listed in the allowed origins for that resource. For example, the /r/api/Validate method lambda has:
exports.main = async function(event, context) {
    try {
        var method = event.httpMethod;
        if (method === "OPTIONS") {
            const response = {
                statusCode: 200,
                headers: {
                    "Access-Control-Allow-Headers": "*",
                    "Access-Control-Allow-Credentials": true,
                    "Access-Control-Allow-Origin": website_url,
                    "Vary": "Origin",
                    "Access-Control-Allow-Methods": "OPTIONS,POST,GET,DELETE"
                }
            };
            return response;
        } else if (method === "POST") {
            ...
        }
        ...
    }
}
The API and website are deployed fine. Here's the CDK deployment code fragment.
const string api_domain = "myrestapi.execute-api.ap-south-1.amazonaws.com";
const string api_stage = "prod";

internal WebAppStaticWebsiteStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
{
    // The S3 bucket to hold the static website contents
    var bucket = new Bucket(this, "WebAppStaticWebsiteBucket", new BucketProps {
        PublicReadAccess = false,
        BlockPublicAccess = BlockPublicAccess.BLOCK_ALL,
        RemovalPolicy = RemovalPolicy.DESTROY,
        WebsiteIndexDocument = "index.html",
        Cors = new ICorsRule[] {
            new CorsRule() {
                AllowedHeaders = new string[] { "*" },
                AllowedMethods = new HttpMethods[] { HttpMethods.GET, HttpMethods.POST, HttpMethods.PUT, HttpMethods.DELETE, HttpMethods.HEAD },
                AllowedOrigins = new string[] { "*" }
            }
        }
    });
    var cloudfrontOAI = new OriginAccessIdentity(this, "CloudfrontOAI", new OriginAccessIdentityProps() {
        Comment = "Allows cloudfront access to S3"
    });
    bucket.AddToResourcePolicy(new PolicyStatement(new PolicyStatementProps() {
        Sid = "Grant cloudfront origin access identity access to s3 bucket",
        Actions = new [] { "s3:GetObject" },
        Resources = new [] { bucket.BucketArn + "/*" },
        Principals = new [] { cloudfrontOAI.GrantPrincipal }
    }));
    // The cloudfront distribution for the website
    var distribution = new CloudFrontWebDistribution(this, "WebAppStaticWebsiteDistribution", new CloudFrontWebDistributionProps() {
        ViewerProtocolPolicy = ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
        DefaultRootObject = "index.html",
        PriceClass = PriceClass.PRICE_CLASS_ALL,
        GeoRestriction = GeoRestriction.Whitelist(new [] {
            "IN"
        }),
        OriginConfigs = new [] {
            new SourceConfiguration() {
                CustomOriginSource = new CustomOriginConfig() {
                    OriginProtocolPolicy = OriginProtocolPolicy.HTTPS_ONLY,
                    DomainName = api_domain,
                    AllowedOriginSSLVersions = new OriginSslPolicy[] { OriginSslPolicy.TLS_V1_2 },
                },
                Behaviors = new IBehavior[] {
                    new Behavior() {
                        IsDefaultBehavior = false,
                        PathPattern = $"/{api_stage}/r/*",
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS,
                        DefaultTtl = Duration.Seconds(0),
                        ForwardedValues = new CfnDistribution.ForwardedValuesProperty() {
                            QueryString = true,
                            Headers = new string[] { "Authorization" }
                        }
                    }
                }
            },
            new SourceConfiguration() {
                S3OriginSource = new S3OriginConfig() {
                    S3BucketSource = bucket,
                    OriginAccessIdentity = cloudfrontOAI
                },
                Behaviors = new [] {
                    new Behavior() {
                        IsDefaultBehavior = true,
                        //PathPattern = "/*",
                        DefaultTtl = Duration.Seconds(0),
                        Compress = false,
                        AllowedMethods = CloudFrontAllowedMethods.ALL,
                        CachedMethods = CloudFrontAllowedCachedMethods.GET_HEAD_OPTIONS
                    }
                },
            }
        }
    });
    // The distribution domain name - output
    var domainNameOutput = new CfnOutput(this, "WebAppStaticWebsiteDistributionDomainName", new CfnOutputProps() {
        Value = distribution.DistributionDomainName
    });
    // The S3 bucket deployment for the website
    var deployment = new BucketDeployment(this, "WebAppStaticWebsiteDeployment", new BucketDeploymentProps(){
        Sources = new [] { Source.Asset("./website/dist") },
        DestinationBucket = bucket,
        Distribution = distribution
    });
}
I am encountering the following error (extracted from the browser console log):
bundle.js:67 POST https://mywebapp.cloudfront.net/r/api/Validate 405
bundle.js:67
<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>MethodNotAllowed</Code>
    <Message>The specified method is not allowed against this resource.</Message>
    <Method>POST</Method>
    <ResourceType>OBJECT</ResourceType>
    <RequestId>xxxxx</RequestId>
    <HostId>xxxxxxxxxxxxxxx</HostId>
</Error>
The intended flow is that the POST call (made using the fetch() API) to https://mywebapp.cloudfront.net/r/api/Validate is forwarded to the REST API backend by CloudFront. It appears as if CloudFront is forwarding it, but the error body is in S3's XML error format (ResourceType OBJECT), which suggests the request is actually being served by the S3 origin's default behavior rather than reaching the API.
What am I missing? How do I make this work?
This was fixed by doing the following:
Moving to the Distribution construct (which, per the AWS documentation, is the one to use, as it receives the latest updates).
Adding a CachePolicy and an OriginRequestPolicy to control cookie and header forwarding (a sketch follows below).
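The answer doesn't include its final code, but a minimal sketch of that shape, written here against the CDK v2 Python bindings (the same constructs exist for C#), with construct IDs and values assumed, could look like this:
# Sketch only: inside the stack, reusing the bucket and OAI defined above.
from aws_cdk import Duration
from aws_cdk import aws_cloudfront as cloudfront
from aws_cdk import aws_cloudfront_origins as origins

api_cache_policy = cloudfront.CachePolicy(
    self, "ApiCachePolicy",
    # The Authorization header has to travel via the cache key;
    # CloudFront rejects it when listed in an origin request policy.
    header_behavior=cloudfront.CacheHeaderBehavior.allow_list("Authorization"),
    query_string_behavior=cloudfront.CacheQueryStringBehavior.all(),
    default_ttl=Duration.seconds(0),
    min_ttl=Duration.seconds(0),
    max_ttl=Duration.seconds(1),
)

api_origin_request_policy = cloudfront.OriginRequestPolicy(
    self, "ApiOriginRequestPolicy",
    cookie_behavior=cloudfront.OriginRequestCookieBehavior.all(),
    query_string_behavior=cloudfront.OriginRequestQueryStringBehavior.all(),
)

distribution = cloudfront.Distribution(
    self, "WebAppDistribution",
    default_behavior=cloudfront.BehaviorOptions(
        origin=origins.S3Origin(bucket, origin_access_identity=cloudfront_oai),
    ),
    additional_behaviors={
        # Match what the browser actually requests (/r/...) and let the
        # origin path supply the API stage prefix.
        "/r/*": cloudfront.BehaviorOptions(
            origin=origins.HttpOrigin(
                "myrestapi.execute-api.ap-south-1.amazonaws.com",
                origin_path="/prod",
            ),
            allowed_methods=cloudfront.AllowedMethods.ALLOW_ALL,
            cache_policy=api_cache_policy,
            origin_request_policy=api_origin_request_policy,
        ),
    },
)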

AWS SDK JavaScript - The request signature we calculated does not match the signature you provided

I am trying to sign my HTTP request from my Lambda function to access my Elasticsearch endpoint as described here. I don't know if there is a better way of doing this, but I am getting a 403 status with the following response. How can I troubleshoot this error and identify the problem with my signature?
{
    "message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
}
My Lambda function has an IAM role (ROLE_X) with the permissions below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "es:ESHttpPost",
                "es:ESHttpPut",
                "dynamodb:DescribeStream",
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
                "dynamodb:ListStreams",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
I am also allowing access to this role in my Elasticsearch domain by providing ROLE_X's arn as Custom Access Policy.
Here is my Lambda function, written in Node.js:
'use strict';
var AWS = require('aws-sdk');

var region = 'eu-central-1';
var domain = 'search-mydomain-XXXX.eu-central-1.es.amazonaws.com';
var index = 'images';
var type = 'image';
var credentials = new AWS.EnvironmentCredentials('AWS');

exports.handler = (event, context, callback) => {
    var endpoint = new AWS.Endpoint(domain);
    var request = new AWS.HttpRequest(endpoint, region);
    request.headers['host'] = domain;
    request.headers['Content-Type'] = 'application/json';
    // Content-Length is only needed for DELETE requests that include a request
    // body, but including it for all requests doesn't seem to hurt anything.
    request.headers['Content-Length'] = Buffer.byteLength(request.body);
    request.path += index + '/' + type + '/';
    let count = 0;
    event.Records.forEach((record) => {
        const id = JSON.stringify(record.dynamodb.Keys.id.S);
        request.path += id;
        if (record.eventName == 'REMOVE') {
            request.method = 'DELETE';
            console.log('Deleting document');
        }
        else { // record.eventName == 'INSERT'
            request.method = 'PUT';
            request.body = JSON.stringify(record.dynamodb.NewImage);
            console.log('Adding document' + request.body);
        }
        // Signing HTTP Requests to Elasticsearch Service
        var signer = new AWS.Signers.V4(request, 'es');
        signer.addAuthorization(credentials, new Date());
        // Sending HTTP Request to Elasticsearch Service
        var client = new AWS.HttpClient();
        client.handleRequest(request, null, function(response) {
            console.log('sending request to ES');
            console.log(response.statusCode + ' ' + response.statusMessage);
            var responseBody = '';
            response.on('data', function(chunk) {
                responseBody += chunk;
            });
            response.on('end', function(chunk) {
                console.log('Response body: ' + responseBody);
            });
        }, function(error) {
            console.log('ERROR: ' + error);
            callback(error);
        });
        request.path = request.path.replace(id, "");
        count += 1;
        console.log("COUNT :" + count);
    });
    callback(null, `Successfully processed ${count} records.`);
};
You can use the http-aws-es library, which uses the aws-sdk to sign requests before they reach your ES endpoint. You can try the following changes to your code using http-aws-es:
var es = require('elasticsearch');
var AWS = require('aws-sdk');

AWS.config.update({
    credentials: new AWS.EnvironmentCredentials('AWS'),
    region: 'yy-region-1'
});

const client = new es.Client({
    hosts: ['https://xxxx.yy-region-1.es.amazonaws.com/'],
    connectionClass: require('http-aws-es'),
    awsConfig: new AWS.Config({region: 'yy-region-1'})
});

// from inside an async function:
await client.search(....)

How to remove the .html extension using AWS Lambda & CloudFront

I have my website's source code stored in AWS S3, and I'm using AWS CloudFront to deliver my content.
I want to use AWS Lambda@Edge to remove the .html extension from all the web links served through CloudFront.
The required output should be www.example.com/foo instead of www.example.com/foo.html, or example.com/foo1 instead of example.com/foo1.html.
Please help me implement this, as I can't find a clear solution to use. I've referred to point 3 mentioned in this article: https://forums.aws.amazon.com/thread.jspa?messageID=796961&tstart=0, but it's not clear what I need to do.
Below is the Lambda code; how can I modify it?
const config = {
    suffix: '.html',
    appendToDirs: 'index.html',
    removeTrailingSlash: false,
};

const regexSuffixless = /\/[^/.]+$/; // e.g. "/some/page" but not "/", "/some/" or "/some.jpg"
const regexTrailingSlash = /.+\/$/; // e.g. "/some/" or "/some/page/" but not root "/"

exports.handler = function handler(event, context, callback) {
    const { request } = event.Records[0].cf;
    const { uri } = request;
    const { suffix, appendToDirs, removeTrailingSlash } = config;

    // Append ".html" to origin request
    if (suffix && uri.match(regexSuffixless)) {
        request.uri = uri + suffix;
        callback(null, request);
        return;
    }

    // Append "index.html" to origin request
    if (appendToDirs && uri.match(regexTrailingSlash)) {
        request.uri = uri + appendToDirs;
        callback(null, request);
        return;
    }

    // Redirect (301) non-root requests ending in "/" to URI without trailing slash
    if (removeTrailingSlash && uri.match(/.+\/$/)) {
        const response = {
            // body: '',
            // bodyEncoding: 'text',
            headers: {
                'location': [{
                    key: 'Location',
                    value: uri.slice(0, -1)
                }]
            },
            status: '301',
            statusDescription: 'Moved Permanently'
        };
        callback(null, response);
        return;
    }

    // If nothing matches, return request unchanged
    callback(null, request);
};
Please help me remove the .html extension from my website: what updated code do I need to paste into my AWS Lambda?
Thanks in advance!

How to set content-length-range for S3 browser upload via boto

The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range condition via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general, and even a Heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the content-length-range to the signed URL.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on maximum file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json

conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
    expires_in=300,
    method="PUT",
    bucket=settings.BUCKET_NAME,
    key=object_name,
    headers={'Content-Type': content_type, 'x-amz-acl': 'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using ReactS3Uploader, which is based on tadruj's s3upload.js script. It shouldn't affect anything, as it seems to just pass along whatever the signed URL covers, but it's copied below for completeness.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
    new S3Upload({
        fileElement: this.getDOMNode(),
        signingUrl: '/api/get_signing_url/',
        onProgress: this.props.onProgress,
        onFinishS3Put: this.props.onFinish,
        onError: this.props.onError
    });
},
render: function() {
    return this.transferPropsTo(
        React.DOM.input({type: 'file', onChange: this.uploadFile})
    );
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;

S3Upload.prototype.onFinishS3Put = function(signResult) {
    return console.log('base.onFinishS3Put()', signResult.publicUrl);
};

S3Upload.prototype.onProgress = function(percent, status) {
    return console.log('base.onProgress()', percent, status);
};

S3Upload.prototype.onError = function(status) {
    return console.log('base.onError()', status);
};

function S3Upload(options) {
    if (options == null) {
        options = {};
    }
    for (var option in options) {
        if (options.hasOwnProperty(option)) {
            this[option] = options[option];
        }
    }
    this.handleFileSelect(this.fileElement);
}

S3Upload.prototype.handleFileSelect = function(fileElement) {
    this.onProgress(0, 'Upload started.');
    var files = fileElement.files;
    var result = [];
    for (var i = 0; i < files.length; i++) {
        var f = files[i];
        result.push(this.uploadFile(f));
    }
    return result;
};

S3Upload.prototype.createCORSRequest = function(method, url) {
    var xhr = new XMLHttpRequest();
    if (xhr.withCredentials != null) {
        xhr.open(method, url, true);
    }
    else if (typeof XDomainRequest !== "undefined") {
        xhr = new XDomainRequest();
        xhr.open(method, url);
    }
    else {
        xhr = null;
    }
    return xhr;
};

S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
    xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var result;
            try {
                result = JSON.parse(xhr.responseText);
            } catch (error) {
                this.onError('Invalid signing server response JSON: ' + xhr.responseText);
                return false;
            }
            return callback(result);
        } else if (xhr.readyState === 4 && xhr.status !== 200) {
            return this.onError('Could not contact request signing server. Status = ' + xhr.status);
        }
    }.bind(this);
    return xhr.send();
};

S3Upload.prototype.uploadToS3 = function(file, signResult) {
    var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
    if (!xhr) {
        this.onError('CORS not supported');
    } else {
        xhr.onload = function() {
            if (xhr.status === 200) {
                this.onProgress(100, 'Upload completed.');
                return this.onFinishS3Put(signResult);
            } else {
                return this.onError('Upload error: ' + xhr.status);
            }
        }.bind(this);
        xhr.onerror = function() {
            return this.onError('XHR error.');
        }.bind(this);
        xhr.upload.onprogress = function(e) {
            var percentLoaded;
            if (e.lengthComputable) {
                percentLoaded = Math.round((e.loaded / e.total) * 100);
                return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
            }
        }.bind(this);
    }
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.setRequestHeader('x-amz-acl', 'public-read');
    return xhr.send(file);
};

S3Upload.prototype.uploadFile = function(file) {
    return this.executeOnSignedUrl(file, function(signResult) {
        return this.uploadToS3(file, signResult);
    }.bind(this));
};

module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
You can't add it to a signed PUT URL. This only works with the signed policy that goes along with a POST because the two mechanisms are very different.
Signing a URL is a lossy (for lack of a better term) process. You generate the string to sign, then sign it. You send the signature with the request, but you discard and do not send the string to sign. S3 then reconstructs what the string to sign should have been, for the request it receives, and generates the signature you should have sent with that request. There's only one correct answer, and S3 doesn't know what string you actually signed. The signature matches, or doesn't, either because you built the string to sign incorrectly, or your credentials don't match, and it doesn't know which of these possibilities is the case. It only knows, based on the request you sent, the string you should have signed and what the signature should have been.
With that in mind, for content-length-range to work with a signed URL, the client would need to actually send such a header with the request... which doesn't make a lot of sense.
Conversely, with POST uploads, there is more information communicated to S3. It's not only going on whether your signature is valid, it also has your policy document... so it's possible to include directives -- policies -- with the request. They are protected from alteration by the signature, but they aren't encrypted or hashed -- the entire policy is readable by S3 (so, by contrast, we'll call this the opposite, "lossless.")
This difference is why you can't do what you are trying to do with PUT while you can with POST.
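For completeness, here is what that looks like in practice with the newer boto3 (rather than boto's generate_url): content-length-range is a standard POST policy condition. The bucket, key, and limits below are placeholders:
import boto3

s3 = boto3.client("s3")

# Sketch: a presigned POST whose policy caps the upload size.
presigned = s3.generate_presigned_post(
    Bucket="YOUR_BUCKET_NAME",
    Key="uploads/example.jpg",
    Fields={"acl": "public-read", "Content-Type": "image/jpeg"},
    Conditions=[
        {"acl": "public-read"},
        {"Content-Type": "image/jpeg"},
        # Minimum and maximum allowed sizes in bytes (1 B to 10 MB here).
        ["content-length-range", 1, 10 * 1024 * 1024],
    ],
    ExpiresIn=300,
)

# The browser then POSTs a multipart form to presigned["url"] containing all
# of presigned["fields"] plus the file itself; S3 rejects out-of-range uploads
# with an EntityTooSmall / EntityTooLarge error instead of storing them.
print(presigned["url"], presigned["fields"])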