Django CSRF token failure risk

On our production server we periodically suffer from bursts of CSRF token failures. The site works fine otherwise, and I am aware CSRF failures can be caused on the user's side. However, this morning we received a flood of new failures, so we want to rule out any other possibilities.
An example failure mail today:
{
    "GET": {},
    "COOKIES": {},
    "ERROR": "Referer checking failed - no Referer.",
    "USER": "AnonymousUser",
    "META": {
        "REMOTE_ADDR": "127.0.0.1",
        "mod_wsgi.version": "(4, 5, 20)",
        "DOCUMENT_ROOT": "/usr/local/apache2/htdocs",
        "SERVER_ADDR": "127.0.0.1",
        "HTTP_ACCEPT_ENCODING": "gzip, deflate, br",
        "wsgi.multithread": "True",
        "HTTP_FORWARDED_REQUEST_URI": "/",
        "CONTEXT_DOCUMENT_ROOT": "/usr/local/apache2/htdocs",
        "wsgi.file_wrapper": "<class 'mod_wsgi.FileWrapper'>",
        "mod_wsgi.path_info": "/",
        "HTTP_ORIGIN": "chrome-extension://aegnopegbbhjeeiganiajffnalhlkkjb",
        (...)
    },
    "POST": {}
}
The HTTP_ORIGIN especially looks "interesting": why is this Chrome extension scraping/bullying us?
So essentially: Do we need to be worried about this?
Thanks!

This looks like an oddly coded "feature" in the "Browser Safety" Chrome extension. It tries to check if a URL is valid by sending an empty POST request to it (why?!).
var checkUrlState = function (url) {
    var urlState = null;
    if (blacklists.indexOf(domainFromUrl((url).toString())) < 0) {
        var xhr = new XMLHttpRequest();
        try {
            xhr.open("POST", url, true);
            xhr.timeout = 5000; // time in milliseconds
            xhr.onreadystatechange = function() {
                if (xhr.readyState == 4) {
                    urlState = xhr.status;
                } else {
                    urlState = null;
                }
            }
            xhr.ontimeout = function () {
            }
            xhr.send();
        } catch (e) {
            onErrorReceived.call(xhr);
        }
    }
    return urlState;
}
I'm also seeing this on my sites. I would recommend filtering these requests out at the front end (your web server or a middleware) based on the Origin header, as sketched below.
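A minimal sketch of such a filter as Django middleware; the class name is my own invention, and you could equally do this in the Apache config:

from django.http import HttpResponseForbidden

class BlockExtensionOriginMiddleware:
    """Hypothetical middleware: reject requests whose Origin is a browser extension."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        origin = request.META.get("HTTP_ORIGIN", "")
        # chrome-extension:// (or moz-extension://) origins never come from
        # a page we serve, so drop them before they reach the CSRF check.
        if origin.startswith(("chrome-extension://", "moz-extension://")):
            return HttpResponseForbidden("Origin not allowed")
        return self.get_response(request)

Register it near the top of MIDDLEWARE in settings.py so it runs before the CSRF middleware.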


Getting 403 Forbidden when trying to upload file to AWS S3 with presigned post using Boto3 (Django + Javascript)

I've tried researching other threads here on SO and other forums, but still can't overcome this issue. I'm generating a presigned post to S3 and trying to upload a file to it using these headers, but getting a 403: Forbidden.
Permissions
The IAM user loaded in with Boto3 has permissions to list, read and write to S3.
CORS
CORS is configured to allow all origins and all headers:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD",
            "POST",
            "PUT"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
The code
The code is based on Python (Django) as well as JavaScript. This is the logic:
First the file is retrieved from the HTML, and used to call a function for retrieving the signed URL.
(function () {
    document.getElementById("file-input").onchange = function () {
        let files = document.getElementById("file-input").files;
        let file = files[0];
        Object.defineProperty(file, "name", {
            writeable: true,
            value: `${uuidv4()}.pdf`
        })
        if (!file) {
            return alert("No file selected");
        }
        getSignedRequest(file);
    }
})();
Then a GET request is sent to retrieve the signed URL, using a Django view (described in the next section):
function getSignedRequest(file) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/sign_s3?file_name=" + file.name + "&file_type=" + file.type)
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200) {
                let response = JSON.parse(xhr.responseText)
                uploadFile(file, response.data, response.url)
            }
            else {
                alert("Could not get signed URL")
            }
        }
    };
    xhr.send()
}
The Django view generating the signed URL
def Sign_s3(request):
    S3_BUCKET = os.environ.get("BUCKET_NAME")
    if (request.method == "GET"):
        file_name = request.GET.get('file_name')
        file_type = request.GET.get('file_type')
        s3 = boto3.client('s3', config = boto3.session.Config(signature_version = 's3v4'))
        presigned_post = s3.generate_presigned_post(
            Bucket = S3_BUCKET,
            Key = file_name,
            Fields = {"acl": "public-read", "Content-Type": file_type},
            Conditions = [
                {"acl": "public-read"},
                {"Content-Type": file_type}
            ],
            ExpiresIn = 3600
        )
        return JsonResponse({
            "data": presigned_post,
            "url": "https://%s.s3.amazonaws.com/%s" % (S3_BUCKET, file_name)
        })
Finally the file should be uploaded to the bucket (this is where I'm getting the 403 error)
function uploadFile(file, s3Data, url) {
    let xhr = new XMLHttpRequest();
    xhr.open("POST", s3Data.url)
    let postData = new FormData()
    for (key in s3Data.fields) {
        postData.append(key, s3Data.fields[key])
    }
    postData.append("file", file)
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 || xhr.status === 204) {
                document.getElementById("cv-url").value = url
            }
            else {
                alert("Could not upload file")
            }
        }
    };
    xhr.send(postData)
}
The network request
(Screenshot of the request from the browser's network inspector omitted.)
@jellycsc helped me. I had to open up the BlockPublicAcls option for the bucket for it to work.
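For reference, a minimal sketch of switching that off with boto3 (the bucket name is a placeholder; only loosen the ACL-related flags you actually need):

import boto3

s3 = boto3.client("s3")
# Stop blocking public ACLs so the "acl": "public-read" field in the
# POST policy can succeed.
s3.put_public_access_block(
    Bucket="YOUR_BUCKET_NAME",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)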
The URL that you should be using for the upload is the one contained in the presigned response. Don't just upload to whatever URL you want.
Update your response to be:
return JsonResponse({
    "data": presigned_post,
    # generate_presigned_post returns a dict, so index into it
    "url": presigned_post["url"]
})
Specifically, the URL you are using looks like:
https://BUCKET_NAME.s3.amazonaws.com/KEY_PATH
When it should look like:
https://s3.REGION.amazonaws.com/BUCKET_NAME
However, looking at your code, that is what it should already be doing; your screenshot from the inspector says otherwise. Why does the URL in the network request NOT match the URL that was returned by the generate_presigned_post request?

How to properly set up an API call in QML using XMLHttpRequest

I am building a small weather app as an exercise to use QML and properly make an API call using OpenWeather, where you can see a typical API response.
The problem I am having is that I can't get the API call to work. After setting up the minimal example with some cities that you can see below, the weather symbol should appear right next to each city, but it does not happen. The list of icons can be found here. The source code of the MVE can be found here for completeness.
The error reported at runtime: qrc:/main.qml:282: SyntaxError: JSON.parse: Parse error
(Screenshots comparing what is happening with what is expected are omitted here.)
Typical API JSON response can be found both here and below:
{
    "coord": {
        "lon": -122.08,
        "lat": 37.39
    },
    "weather": [
        {
            "id": 800,
            "main": "Clear",
            "description": "clear sky",
            "icon": "01d"
        }
    ],
    "base": "stations",
    "main": {
        "temp": 282.55,
        "feels_like": 281.86,
        "temp_min": 280.37,
        "temp_max": 284.26,
        "pressure": 1023,
        "humidity": 100
    },
    "visibility": 16093,
    "wind": {
        "speed": 1.5,
        "deg": 350
    },
    "clouds": {
        "all": 1
    },
    "dt": 1560350645,
    "sys": {
        "type": 1,
        "id": 5122,
        "message": 0.0139,
        "country": "US",
        "sunrise": 1560343627,
        "sunset": 1560396563
    },
    "timezone": -25200,
    "id": 420006353,
    "name": "Mountain View",
    "cod": 200
}
Below a snippet of code related to the API call:
main.qml
// Create the API getCondition to get JSON data of weather
function getCondition(location, index) {
    var res
    var url = "api.openweathermap.org/data/2.5/weather?id={city id}&appid={your api key}"
    var doc = new XMLHttpRequest()
    // parse JSON data and put code result into codeList
    doc.onreadystatechange = function() {
        if (doc.readyState === XMLHttpRequest.DONE) {
            res = doc.responseText
            // parse data
            var obj = JSON.parse(res) // <-- Error Here
            if (typeof(obj) == 'object') {
                if (obj.hasOwnProperty('query')) {
                    var ch = obj.query.results.channel
                    var item = ch.item
                    codeList[index] = item.condition["code"]
                }
            }
        }
    }
    doc.open('GET', url, true)
    doc.send()
}
In order to solve this problem I consulted several sources, first of all the official documentation and the related function. I believe the call is correctly set up, but I added the reference for completeness.
I also came across this one, which explained how to simply apply XMLHttpRequest.
Digging further into the problem, I also consulted this one, which explains how to apply the JSON parsing function. But still something is not correct.
Thanks for pointing me in the right direction to solve this problem.
Below is the answer to my question. I was not reading the JSON response properly; after console-logging it, the solution became clear. The code was correct from the beginning: only the response needed to be reviewed properly and in great detail, the JSON response being a bit confusing:
function getCondition() {
    var request = new XMLHttpRequest()
    request.open('GET', 'http://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=key', true);
    request.onreadystatechange = function() {
        if (request.readyState === XMLHttpRequest.DONE) {
            if (request.status && request.status === 200) {
                console.log("response", request.responseText)
                var result = JSON.parse(request.responseText)
            } else {
                console.log("HTTP:", request.status, request.statusText)
            }
        }
    }
    request.send()
}
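From there, reading the icon is just a matter of indexing into the parsed object. A small illustrative sketch (the icon URL pattern is the documented OpenWeather one; weatherIcon stands in for whatever Image element you bind to):

var result = JSON.parse(request.responseText)
// "weather" is an array; the first entry carries the icon code, e.g. "01d"
var iconCode = result.weather[0].icon
// OpenWeather serves the matching symbol at this URL pattern
weatherIcon.source = "http://openweathermap.org/img/wn/" + iconCode + "@2x.png"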
Hope that helps!
In your code, your URL is "api.openweathermap.org/data/2.5/weather?id={city id}&appid={your api key}". You need to replace {city id} and {your api key} with real values (and note that the URL should include the scheme, e.g. http://).
You can solve it by providing an actual city ID and API key in your request URL.

LoopBack - How to extend an API using LoopBack

I want to extend my API using LoopBack. I have read the documentation.
'use strict';
module.exports = function(Meetups, pusher) {
    Meetups.status = function(cb) {
        var currentDate = new Date();
        var currentHour = currentDate.getHours();
        var OPEN_HOUR = 6;
        var CLOSE_HOUR = 20;
        console.log('Current hour is %d', currentHour);
        var response;
        if (currentHour >= OPEN_HOUR && currentHour < CLOSE_HOUR) {
            response = 'We are open yeah!!! for business.';
        } else {
            response = 'Sorry, we are closed. Open daily from 6am to 8pm.';
        }
        cb(null, response);
    };
    Meetups.remoteMethod(
        'status', {
            http: {
                path: '/status',
                verb: 'get'
            },
            returns: {
                arg: 'status',
                type: 'string'
            }
        }
    );
    Meetups.pusher = function(cb) {
        if (2 > 1) {
            response = 'sending something';
        } else {
            response = 'mont blanc';
        }
        cb(null, response);
    };
    Meetups.remoteMethod(
        'pusher', {
            http: {
                path: '/pusher',
                verb: 'get'
            },
            returns: {
                arg: 'pusher',
                type: 'string'
            }
        }
    );
};
First, I added the /status route and it worked fine. But when I tried to add /pusher, it just didn't work. I am getting an error:
{
    "error": {
        "statusCode": 500,
        "name": "ReferenceError",
        "message": "response is not defined",
"stack": "ReferenceError: response is not defined\n at Function.Meetups.pusher (/Users/ankursharma/Documents/projects/meetupz/common/models/meetups.js:34:20)\n at SharedMethod.invoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/shared-method.js:270:25)\n at HttpContext.invoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/http-context.js:297:12)\n at phaseInvoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:677:9)\n at runHandler (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:135:5)\n at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)\n at Object.async.eachSeries (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:162:9)\n at runHandlers (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:144:13)\n at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)\n at /Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:157:25\n at /Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:154:25\n at execStack (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:522:7)\n at RemoteObjects.execHooks (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:526:10)\n at phaseBeforeInvoke (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/lib/remote-objects.js:673:10)\n at runHandler (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/lib/phase.js:135:5)\n at iterate (/Users/ankursharma/Documents/projects/meetupz/node_modules/strong-remoting/node_modules/loopback-phase/node_modules/async/lib/async.js:146:13)"
    }
}
I am pretty sure it's a very small mistake. I am a beginner in LoopBack and trying to use it in my project.
In the example, they define response as a local variable of the remote method; you did not. Secondly, regarding (Meetups, pusher): you do not need the pusher parameter in the export here. You are adding pusher to Meetups.
You have to declare response in your pusher remote method.
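A minimal sketch of that fix, just adding the missing declaration:

Meetups.pusher = function(cb) {
    var response; // the missing local declaration
    if (2 > 1) {
        response = 'sending something';
    } else {
        response = 'mont blanc';
    }
    cb(null, response);
};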
An alternative way, without declaring response, is simply returning the value.
Example:
Meetups.pusher = function(cb) {
    if (2 > 1) {
        return 'sending something';
    } else {
        return 'mont blanc';
    }
};
Define the variable and return it, or you can directly call the cb in the if and else branches, like:
Meetups.pusher = function(cb) {
    if (2 > 1) {
        cb(null, 'sending something');
    } else {
        cb(null, 'mont blanc');
    }
};
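If your LoopBack version supports promise-based remote methods (LoopBack 3 does), returning a promise is another option; a small sketch under that assumption:

Meetups.pusher = function() {
    // Resolving the promise plays the role of cb(null, value)
    return Promise.resolve(2 > 1 ? 'sending something' : 'mont blanc');
};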

How to handle expired access token in asp.net core using refresh token with OpenId Connect

I have configured an ASOS OpenIdConnect server and an ASP.NET Core MVC app that uses "Microsoft.AspNetCore.Authentication.OpenIdConnect": "1.0.0" and "Microsoft.AspNetCore.Authentication.Cookies": "1.0.0". I have tested the "Authorization Code" workflow and everything works.
The client web app processes the authentication as expected and creates a cookie storing the id_token, access_token, and refresh_token.
How do I force Microsoft.AspNetCore.Authentication.OpenIdConnect to request a new access_token when it expires?
The asp.net core mvc app ignores the expired access_token.
I would like to have openidconnect see the expired access_token then make a call using the refresh token to get a new access_token. It should also update the cookie values. If the refresh token request fails I would expect openidconnect to "sign out" the cookie (remove it or something).
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    AuthenticationScheme = "Cookies"
});
app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    ClientId = "myClient",
    ClientSecret = "secret_secret_secret",
    PostLogoutRedirectUri = "http://localhost:27933/",
    RequireHttpsMetadata = false,
    GetClaimsFromUserInfoEndpoint = true,
    SaveTokens = true,
    ResponseType = OpenIdConnectResponseType.Code,
    AuthenticationMethod = OpenIdConnectRedirectBehavior.RedirectGet,
    Authority = "http://localhost:27933",
    MetadataAddress = "http://localhost:27933/connect/config",
    Scope = { "email", "roles", "offline_access" },
});
It seems the OpenIdConnect authentication middleware for ASP.NET Core has no built-in handling to manage the access_token on the server after it is received.
I found that I can intercept the cookie validation event and check if the access token has expired. If so, make a manual HTTP call to the token endpoint with the grant_type=refresh_token.
Calling context.ShouldRenew = true; causes the cookie to be updated and sent back to the client in the response.
I have provided the basis of what I have done below and will update this answer once all the work has been resolved.
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    AuthenticationScheme = "Cookies",
    ExpireTimeSpan = new TimeSpan(0, 0, 20),
    SlidingExpiration = false,
    CookieName = "WebAuth",
    Events = new CookieAuthenticationEvents()
    {
        OnValidatePrincipal = context =>
        {
            if (context.Properties.Items.ContainsKey(".Token.expires_at"))
            {
                var expire = DateTime.Parse(context.Properties.Items[".Token.expires_at"]);
                if (expire < DateTime.Now) // TODO: change to check expires in next 5 minutes.
                {
                    logger.Warn($"Access token has expired, user: {context.HttpContext.User.Identity.Name}");
                    // TODO: send refresh token to ASOS. Update tokens in context.Properties.Items
                    // context.Properties.Items["Token.access_token"] = newToken;
                    context.ShouldRenew = true;
                }
            }
            return Task.FromResult(0);
        }
    }
});
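A rough sketch of what that TODO refresh call could look like, assuming a standard OAuth2 token endpoint; the endpoint path, client values, and helper name here are illustrative, not ASOS specifics:

// Hypothetical helper: exchange a refresh_token for fresh tokens.
private static async Task<string> RefreshAccessTokenAsync(string refreshToken)
{
    using (var client = new HttpClient())
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "refresh_token",
            ["refresh_token"] = refreshToken,
            ["client_id"] = "myClient",                 // your client id
            ["client_secret"] = "secret_secret_secret"  // your client secret
        });
        var response = await client.PostAsync("http://localhost:27933/connect/token", form);
        response.EnsureSuccessStatusCode();
        // The JSON body contains access_token, refresh_token, expires_in, ...
        return await response.Content.ReadAsStringAsync();
    }
}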
You must enable the generation of refresh tokens by setting the following in startup.cs:
AuthorizationEndpointPath = "/connect/authorize"; // needed for refresh token
TokenEndpointPath = "/connect/token"; // standard token endpoint name
In your token provider, before validating the token request at the end of the HandleTokenRequest method, make sure you have set the offline scope:
// Call SetScopes with the list of scopes you want to grant
// (specify offline_access to issue a refresh token).
ticket.SetScopes(
    OpenIdConnectConstants.Scopes.Profile,
    OpenIdConnectConstants.Scopes.OfflineAccess);
If that is set up properly, you should receive a refresh_token back when you log in with a password grant_type.
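For reference, a typical token endpoint response then looks something like this (values are illustrative):

{
    "access_token": "eyJhbGciOi...",
    "token_type": "Bearer",
    "expires_in": 3600,
    "refresh_token": "CfDJ8E2b..."
}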
Then from your client you must issue the following request (I'm using Aurelia):
refreshToken() {
    let baseUrl = yourbaseUrl;
    let data = "client_id=" + this.appState.clientId
        + "&grant_type=refresh_token"
        + "&refresh_token=myRefreshToken";
    return this.http.fetch(baseUrl + 'connect/token', {
        method: 'post',
        body: data,
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Accept': 'application/json'
        }
    });
}
And that's it. Make sure that your auth provider's HandleTokenRequest is not trying to manipulate requests of grant type refresh_token:
public override async Task HandleTokenRequest(HandleTokenRequestContext context)
{
    if (context.Request.IsPasswordGrantType())
    {
        // Password type request processing only
        // code that shall not touch any refresh_token request
    }
    else if (!context.Request.IsRefreshTokenGrantType())
    {
        context.Reject(
            error: OpenIdConnectConstants.Errors.InvalidGrant,
            description: "Invalid grant type.");
        return;
    }
    return;
}
The refresh_token request should simply pass through this method; it is handled by another piece of middleware that takes care of refresh_token grants.
If you want more in depth knowledge about what the auth server is doing, you can have a look at the code of the OpenIdConnectServerHandler:
https://github.com/aspnet-contrib/AspNet.Security.OpenIdConnect.Server/blob/master/src/AspNet.Security.OpenIdConnect.Server/OpenIdConnectServerHandler.Exchange.cs
On the client side you must also be able to handle the auto-refresh of the token. Here is an example of an HTTP interceptor for Angular 1.x, which handles 401 responses, refreshes the token, then retries the request:
'use strict';
app.factory('authInterceptorService',
    ['$q', '$injector', '$location', 'localStorageService',
    function ($q, $injector, $location, localStorageService) {
        var authInterceptorServiceFactory = {};
        var $http;
        var _request = function (config) {
            config.headers = config.headers || {};
            var authData = localStorageService.get('authorizationData');
            if (authData) {
                config.headers.Authorization = 'Bearer ' + authData.token;
            }
            return config;
        };
        var _responseError = function (rejection) {
            var deferred = $q.defer();
            if (rejection.status === 401) {
                var authService = $injector.get('authService');
                console.log("calling authService.refreshToken()");
                authService.refreshToken().then(function (response) {
                    console.log("token refreshed, retrying to connect");
                    _retryHttpRequest(rejection.config, deferred);
                }, function () {
                    console.log("that didn't work, logging out.");
                    authService.logOut();
                    $location.path('/login');
                    deferred.reject(rejection);
                });
            } else {
                deferred.reject(rejection);
            }
            return deferred.promise;
        };
        var _retryHttpRequest = function (config, deferred) {
            console.log('autorefresh');
            $http = $http || $injector.get('$http');
            $http(config).then(function (response) {
                deferred.resolve(response);
            },
            function (response) {
                deferred.reject(response);
            });
        }
        authInterceptorServiceFactory.request = _request;
        authInterceptorServiceFactory.responseError = _responseError;
        authInterceptorServiceFactory.retryHttpRequest = _retryHttpRequest;
        return authInterceptorServiceFactory;
    }]);
And here is an example I just did for Aurelia, this time I wrapped my http client into an http handler that checks if the token is expired or not. If it is expired it will first refresh the token, then perform the request. It uses a promise to keep the interface with the client-side data services consistent. This handler exposes the same interface as the aurelia-fetch client.
import {inject} from 'aurelia-framework';
import {HttpClient} from 'aurelia-fetch-client';
import {AuthService} from './authService';

@inject(HttpClient, AuthService)
export class HttpHandler {
    constructor(httpClient, authService) {
        this.http = httpClient;
        this.authService = authService;
    }
    fetch(url, options) {
        let _this = this;
        if (this.authService.tokenExpired()) {
            console.log("token expired");
            return new Promise(
                function(resolve, reject) {
                    console.log("refreshing");
                    _this.authService.refreshToken()
                        .then(
                            function (response) {
                                console.log("token refreshed");
                                _this.http.fetch(url, options).then(
                                    function (success) {
                                        console.log("call success", url);
                                        resolve(success);
                                    },
                                    function (error) {
                                        console.log("call failed", url);
                                        reject(error);
                                    });
                            }, function (error) {
                                console.log("token refresh failed");
                                reject(error);
                            });
                }
            );
        }
        else {
            // token is not expired, we return the promise from the fetch client
            return this.http.fetch(url, options);
        }
    }
}
For jQuery you can have a look at jquery-oauth:
https://github.com/esbenp/jquery-oauth
Hope this helps.
Following on from @longday's answer, I have had success in using this code to force a client refresh without having to manually query an OpenID endpoint:
OnValidatePrincipal = context =>
{
    if (context.Properties.Items.ContainsKey(".Token.expires_at"))
    {
        var expire = DateTime.Parse(context.Properties.Items[".Token.expires_at"]);
        if (expire < DateTime.Now) // TODO: change to check expires in next 5 minutes.
        {
            context.ShouldRenew = true;
            context.RejectPrincipal();
        }
    }
    return Task.FromResult(0);
}

How to set content-length-range for s3 browser upload via boto

The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range permission via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general and even a heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the "content-length-range" to the signed url.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on max file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json

conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
    expires_in = 300,
    method = "PUT",
    bucket = settings.BUCKET_NAME,
    key = object_name,
    headers = {'Content-Type': content_type, 'x-amz-acl': 'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using the ReactS3Uploader, which is based on tadruj's s3upload.js script. It shouldn't be affecting anything, as it seems to just pass along whatever the signed URL covers, but it is copied below for completeness.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
    new S3Upload({
        fileElement: this.getDOMNode(),
        signingUrl: '/api/get_signing_url/',
        onProgress: this.props.onProgress,
        onFinishS3Put: this.props.onFinish,
        onError: this.props.onError
    });
},
render: function() {
    return this.transferPropsTo(
        React.DOM.input({type: 'file', onChange: this.uploadFile})
    );
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;
S3Upload.prototype.onFinishS3Put = function(signResult) {
    return console.log('base.onFinishS3Put()', signResult.publicUrl);
};
S3Upload.prototype.onProgress = function(percent, status) {
    return console.log('base.onProgress()', percent, status);
};
S3Upload.prototype.onError = function(status) {
    return console.log('base.onError()', status);
};

function S3Upload(options) {
    if (options == null) {
        options = {};
    }
    for (option in options) {
        if (options.hasOwnProperty(option)) {
            this[option] = options[option];
        }
    }
    this.handleFileSelect(this.fileElement);
}

S3Upload.prototype.handleFileSelect = function(fileElement) {
    this.onProgress(0, 'Upload started.');
    var files = fileElement.files;
    var result = [];
    for (var i = 0; i < files.length; i++) {
        var f = files[i];
        result.push(this.uploadFile(f));
    }
    return result;
};

S3Upload.prototype.createCORSRequest = function(method, url) {
    var xhr = new XMLHttpRequest();
    if (xhr.withCredentials != null) {
        xhr.open(method, url, true);
    }
    else if (typeof XDomainRequest !== "undefined") {
        xhr = new XDomainRequest();
        xhr.open(method, url);
    }
    else {
        xhr = null;
    }
    return xhr;
};

S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
    xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var result;
            try {
                result = JSON.parse(xhr.responseText);
            } catch (error) {
                this.onError('Invalid signing server response JSON: ' + xhr.responseText);
                return false;
            }
            return callback(result);
        } else if (xhr.readyState === 4 && xhr.status !== 200) {
            return this.onError('Could not contact request signing server. Status = ' + xhr.status);
        }
    }.bind(this);
    return xhr.send();
};

S3Upload.prototype.uploadToS3 = function(file, signResult) {
    var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
    if (!xhr) {
        this.onError('CORS not supported');
    } else {
        xhr.onload = function() {
            if (xhr.status === 200) {
                this.onProgress(100, 'Upload completed.');
                return this.onFinishS3Put(signResult);
            } else {
                return this.onError('Upload error: ' + xhr.status);
            }
        }.bind(this);
        xhr.onerror = function() {
            return this.onError('XHR error.');
        }.bind(this);
        xhr.upload.onprogress = function(e) {
            var percentLoaded;
            if (e.lengthComputable) {
                percentLoaded = Math.round((e.loaded / e.total) * 100);
                return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
            }
        }.bind(this);
    }
    xhr.setRequestHeader('Content-Type', file.type);
    xhr.setRequestHeader('x-amz-acl', 'public-read');
    return xhr.send(file);
};

S3Upload.prototype.uploadFile = function(file) {
    return this.executeOnSignedUrl(file, function(signResult) {
        return this.uploadToS3(file, signResult);
    }.bind(this));
};

module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
You can't add it to a signed PUT URL. This only works with the signed policy that goes along with a POST because the two mechanisms are very different.
Signing a URL is a lossy (for lack of a better term) process. You generate the string to sign, then sign it. You send the signature with the request, but you discard and do not send the string to sign. S3 then reconstructs what the string to sign should have been, for the request it receives, and generates the signature you should have sent with that request. There's only one correct answer, and S3 doesn't know what string you actually signed. The signature matches, or doesn't, either because you built the string to sign incorrectly, or your credentials don't match, and it doesn't know which of these possibilities is the case. It only knows, based on the request you sent, the string you should have signed and what the signature should have been.
With that in mind, for content-length-range to work with a signed URL, the client would need to actually send such a header with the request... which doesn't make a lot of sense.
Conversely, with POST uploads, there is more information communicated to S3. It's not only going on whether your signature is valid, it also has your policy document... so it's possible to include directives -- policies -- with the request. They are protected from alteration by the signature, but they aren't encrypted or hashed -- the entire policy is readable by S3 (so, by contrast, we'll call this the opposite, "lossless.")
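For illustration, a schematic POST policy document with such a condition could look like this (all values are placeholders):

{
    "expiration": "2016-01-01T12:00:00.000Z",
    "conditions": [
        {"bucket": "my-bucket"},
        {"acl": "public-read"},
        ["starts-with", "$key", "uploads/"],
        ["content-length-range", 1, 10485760]
    ]
}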
This difference is why you can't do what you are trying to do with PUT while you can with POST.
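For anyone reading this later: a minimal sketch of the POST-policy route using boto3 (not the boto 2 generate_url from the question), where content-length-range is a supported condition:

import boto3

s3 = boto3.client("s3")
# POST policy upload with a 1 byte to 10 MB size limit enforced by S3
presigned = s3.generate_presigned_post(
    Bucket="my-bucket",          # placeholder bucket name
    Key="uploads/example.jpg",   # placeholder key
    Conditions=[
        ["content-length-range", 1, 10 * 1024 * 1024],
    ],
    ExpiresIn=300,
)
# presigned["url"] and presigned["fields"] become the browser's POST form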