How to access authorizer.claims from an API Gateway Lambda authorizer with simple response mode in a Flask API - flask

Hi guys, I've been stuck on this for days and I hope some of you can figure it out.
I have a Lambda authorizer with simple response mode, like this.
I want to forward a value inside the JWT token (user_id) to my Flask API.
My authorizer works fine, but I don't know how to access authorizer.claims in the Flask API.
exports.handler = async (event) => {
    const method = 'main';
    log.info(`[${method}] BEGIN`);
    log.info(`[${method}] incoming event= ${JSON.stringify(event)}`);
    const secret = await secretManager(secretName);
    const response = {
        isAuthorized: false,
        context: {
            authorizer: {
                claims: {}
            },
            error: {}
        }
    };
    const authorization = event.headers.authorization;
    if (!authorization) {
        log.error(`[${method}] No authorization found`);
        response.context.error.message = 'No authorization found';
        response.context.error.responseType = 401;
        return response;
    }
    const auth = authorization.split(' ');
    if (auth[0] !== 'Bearer' || auth.length !== 2) {
        log.error(`[${method}] Invalid authorization`);
        response.context.error.message = 'Invalid authorization';
        response.context.error.responseType = 401;
        return response;
    }
    try {
        const decoded = verify(auth[1], secret.public_key, {
            algorithms: ['RS512'],
        });
        response.isAuthorized = true;
        response.context.authorizer.claims = decoded;
    } catch (error) {
        log.error(`[${method}] Error occurred: ${error.message}`);
        response.isAuthorized = false;
        response.context.error.message = error.message;
        response.context.error.responseType = 401;
    }
    log.info(`[${method}] END`);
    return response;
};
And my Flask application looks like this.
I used a blueprint and app.before_request_funcs as middleware.
from api.middlewares.fetchUser import fetchUser
from api.routes.blueprint import myBlueprint
app.before_request_funcs = {
    myBlueprint.name: [fetchUser]
}
app.register_blueprint(myBlueprint)
Here is where I want to access authorizer.claims to fetch the user before my API starts.
from flask import request, g
from api.utils.logger import logger
import json
def fetchUser():
    logger.warn('FETCH USER')
    logger.warn(json.dumps(request.headers.to_wsgi_list()))
    logger.warn(json.dumps(request.authorization))
How can I access authorizer.claims?
Thank you.
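One way to reach those values (a minimal sketch, not verified against this exact stack): with an HTTP API Lambda authorizer in simple response mode, the context object you return is surfaced to a Lambda proxy integration under event.requestContext.authorizer.lambda. Assuming the Flask app runs behind a WSGI adapter such as serverless-wsgi, which in the versions I have used exposes the raw event as request.environ['serverless.event'] (other adapters use other keys), and assuming the nested context you return survives the trip (if it does not, flatten it to top-level string values in the authorizer), the middleware could look like this:
# sketch only -- the 'serverless.event' environ key is an assumption about the
# WSGI adapter in use; swap in whatever key your adapter provides
from flask import request, g
from api.utils.logger import logger

def fetchUser():
    event = request.environ.get('serverless.event', {})
    # simple-response authorizer context lands under requestContext.authorizer.lambda
    lambda_ctx = event.get('requestContext', {}).get('authorizer', {}).get('lambda', {})
    claims = lambda_ctx.get('authorizer', {}).get('claims', {})
    logger.warn('claims: %s' % claims)
    g.user_id = claims.get('user_id')
Another route some setups use is API Gateway parameter mapping, copying authorizer context properties into request headers so the values arrive via request.headers instead; that trades code for API Gateway configuration.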

Related

Flutter sending a post request to a Django API with a file as the body

I have a Django API where a user is able to upload a file through a post request with the following body:
{
"file": *attached file*
}
In Django, the file is gathered from the request with request.FILES['file']
The request has to be sent from Flutter (Dart) code. I have tried a few ways; this is the function from my latest attempt, which fails because the "file" is not in the correct format.
static void uploadProfilePhoto(File file, String fileId) async {
    Uint8List fileBytes = file.readAsBytesSync();
    var response = http.post(
        globals.baseURL() + "/upload/",
        //headers: {"Content-Type": "application/json"},
        body: {
            "file": base64Encode(fileBytes)
        }
    ).then((v) => print("v: " + v.body));
}
Any idea in what format the "file" should be sent from Flutter? Or is there any other method that might work? Thank you in advance.
In Flutter, use:
import 'dart:convert' as convert;
import 'dart:io';
import 'package:http/http.dart' as http;

@override
Future<Map<String, dynamic>> sendFiletodjango({
    File file,
}) async {
    var endPoint = url;
    Map data = {};
    String base64file = base64Encode(file.readAsBytesSync());
    String fileName = file.path.split("/").last;
    data['name'] = fileName;
    data['file'] = base64file;
    try {
        var response = await http.post(endPoint, headers: yourRequestHeaders, body: convert.json.encode(data));
    } catch (e) {
        throw (e.toString());
    }
}
In Python (Django), use:
import base64
from django.core.files.base import ContentFile

file = response.data.get("file")
name = response.data.get("name")
your_file = ContentFile(base64.b64decode(file), name)
model.fileField = your_file
model.save()
You can try multipart/form-data to upload files from Flutter to the Django server using an HTTP POST request.
From Flutter, you can send a multipart/form-data request as shown below.
Future<http.StreamedResponse> uploadFile(File file) async {
    var uri = Uri.parse(url);
    var request = http.MultipartRequest('POST', uri);
    request.files.add(await http.MultipartFile.fromPath('file', file.path));
    // request.send() returns a StreamedResponse, so declare the variable once with that type
    var response = await request.send();
    if (response.statusCode == 200 || response.statusCode == 201) {
        print('Uploaded!');
    }
    return response;
}
You can find more about it here Dart MultipartRequest.
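On the Django side, a multipart/form-data upload like the one above can be read straight from request.FILES. A minimal sketch (the view name and response shape are illustrative placeholders, not from the question):
# minimal Django view sketch for receiving the multipart upload above;
# the view name and response fields are placeholders
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def upload_file(request):
    if request.method != 'POST':
        return JsonResponse({'error': 'POST required'}, status=405)
    uploaded = request.FILES.get('file')  # matches the 'file' field added by MultipartRequest
    if uploaded is None:
        return JsonResponse({'error': 'no file provided'}, status=400)
    # attach to a model FileField, or save to storage, as needed
    return JsonResponse({'name': uploaded.name, 'size': uploaded.size})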

google cloud authentication with bearer token via nodejs

My client has a GraphQL API running on Google Cloud Run.
I have received a service account for authentication as well as access to the gcloud command-line tool.
When using gcloud command line like so:
gcloud auth print-identity-token
I can generate a token that can be used to make POST requests to the API. This works and I can make successful POST requests to the API from Postman, Insomnia, and from my Node.js app.
However, when I use JWT authentication with the "googleapis" or "google-auth" npm libraries like so:
var { google } = require('googleapis')
let privatekey = require('./auth/google/service-account.json')
let jwtClient = new google.auth.JWT(
    privatekey.client_email,
    null,
    privatekey.private_key,
    ['https://www.googleapis.com/auth/cloud-platform']
)
jwtClient.authorize(function(err, _token) {
    if (err) {
        console.log(err)
        return err
    } else {
        console.log('token obj:', _token)
    }
})
This outputs a "bearer" token:
token obj: {
access_token: 'ya29.c.Ko8BvQcMD5zU-0raojM_u2FZooWMyhB9Ni0Yv2_dsGdjuIDeL1tftPg0O17uFrdtkCuJrupBBBK2IGfUW0HGtgkYk-DZiS1aKyeY9wpXTwvbinGe9sud0k1POA2vEKiGONRqFBSh9-xms3JhZVdCmpBi5EO5aGjkkJeFI_EBry0E12m2DTm0T_7izJTuGQ9hmyw',
token_type: 'Bearer',
expiry_date: 1581954138000,
id_token: undefined,
refresh_token: 'jwt-placeholder'
}
However, this bearer token does not work like the one above and always gives a 401 Unauthorized error when making the same requests that succeed with the token from "gcloud auth print-identity-token".
Please help, I am not sure why the first bearer token works but the one generated with the JWT client does not.
EDIT
I have also tried to get an identity token instead of an access token like so :
let privatekey = require('./auth/google/service-account.json')
let jwtClient = new google.auth.JWT(
    privatekey.client_email,
    null,
    privatekey.private_key,
    []
)
jwtClient
    .fetchIdToken('https://my.audience.url')
    .then((res) => console.log('res:', res))
    .catch((err) => console.log('err', err))
This prints an identity token; however, using it also just gives a 401 Unauthorized message.
Edit to show how I am calling the endpoint
Just a side note: any of the methods below work with the command-line identity token, but when the token is generated via JWT, they return a 401.
Method 1:
const client = new GraphQLClient(baseUrl, {
    headers: {
        Authorization: 'Bearer ' + _token.id_token
    }
})
const query = `{
    ... my graphql query goes here ...
}`
client
    .request(query)
    .then((data) => {
        console.log('result from query:', data)
        res.send({ data })
        return 0
    })
    .catch((err) => {
        res.send({ message: 'error ' + err })
        return 0
    })
}
Method 2 (using the "authorized" client I have created with google-auth):
const res = await client.request({
    url: url,
    method: 'post',
    data: `{
        My graphQL query goes here ...
    }`
})
console.log(res.data)
}
Here is an example in node.js that correctly creates an Identity Token with the correct audience for calling a Cloud Run or Cloud Functions service.
Modify this example to fit the GraphQLClient. Don't forget to include the Authorization header in each call.
// This program creates an OIDC Identity Token from a service account
// and calls an HTTP endpoint with the Identity Token as the authorization
var { google } = require('googleapis')
const request = require('request')
// The service account JSON key file to use to create the Identity Token
let privatekey = require('/config/service-account.json')
// The HTTP endpoint to call with an Identity Token for authorization
// Note: This url is using a custom domain. Do not use the same domain for the audience
let url = 'https://example.jhanley.dev'
// The audience that this ID token is intended for (example Google Cloud Run service URL)
// Do not use a custom domain name, use the Assigned by Cloud Run url
let audience = 'https://example-ylabperdfq-uc.a.run.app'
let jwtClient = new google.auth.JWT(
    privatekey.client_email,
    null,
    privatekey.private_key,
    audience
)
jwtClient.authorize(function(err, _token) {
    if (err) {
        console.log(err)
        return err
    } else {
        // console.log('token obj:', _token)
        request(
            {
                url: url,
                headers: {
                    "Authorization": "Bearer " + _token.id_token
                }
            },
            function(err, response, body) {
                if (err) {
                    console.log(err)
                    return err
                } else {
                    // console.log('Response:', response)
                    console.log(body)
                }
            }
        );
    }
})
You can find the official documentation for the Node OAuth2 client.
A complete OAuth2 example:
const {OAuth2Client} = require('google-auth-library');
const http = require('http');
const url = require('url');
const open = require('open');
const destroyer = require('server-destroy');
// Download your OAuth2 configuration from the Google
const keys = require('./oauth2.keys.json');
/**
* Start by acquiring a pre-authenticated oAuth2 client.
*/
async function main() {
const oAuth2Client = await getAuthenticatedClient();
// Make a simple request to the People API using our pre-authenticated client. The `request()` method
// takes an GaxiosOptions object. Visit https://github.com/JustinBeckwith/gaxios.
const url = 'https://people.googleapis.com/v1/people/me?personFields=names';
const res = await oAuth2Client.request({url});
console.log(res.data);
// After acquiring an access_token, you may want to check on the audience, expiration,
// or original scopes requested. You can do that with the `getTokenInfo` method.
const tokenInfo = await oAuth2Client.getTokenInfo(
oAuth2Client.credentials.access_token
);
console.log(tokenInfo);
}
/**
* Create a new OAuth2Client, and go through the OAuth2 content
* workflow. Return the full client to the callback.
*/
function getAuthenticatedClient() {
return new Promise((resolve, reject) => {
// create an oAuth client to authorize the API call. Secrets are kept in a `keys.json` file,
// which should be downloaded from the Google Developers Console.
const oAuth2Client = new OAuth2Client(
keys.web.client_id,
keys.web.client_secret,
keys.web.redirect_uris[0]
);
// Generate the url that will be used for the consent dialog.
const authorizeUrl = oAuth2Client.generateAuthUrl({
access_type: 'offline',
scope: 'https://www.googleapis.com/auth/userinfo.profile',
});
// Open an http server to accept the oauth callback. In this simple example, the
// only request to our webserver is to /oauth2callback?code=<code>
const server = http
.createServer(async (req, res) => {
try {
if (req.url.indexOf('/oauth2callback') > -1) {
// acquire the code from the querystring, and close the web server.
const qs = new url.URL(req.url, 'http://localhost:3000')
.searchParams;
const code = qs.get('code');
console.log(`Code is ${code}`);
res.end('Authentication successful! Please return to the console.');
server.destroy();
// Now that we have the code, use that to acquire tokens.
const r = await oAuth2Client.getToken(code);
// Make sure to set the credentials on the OAuth2 client.
oAuth2Client.setCredentials(r.tokens);
console.info('Tokens acquired.');
resolve(oAuth2Client);
}
} catch (e) {
reject(e);
}
})
.listen(3000, () => {
// open the browser to the authorize url to start the workflow
open(authorizeUrl, {wait: false}).then(cp => cp.unref());
});
destroyer(server);
});
}
main().catch(console.error);
Edit
Another example, for Cloud Run.
// sample-metadata:
// title: ID Tokens for Cloud Run
// description: Requests a Cloud Run URL with an ID Token.
// usage: node idtokens-cloudrun.js <url> [<target-audience>]
'use strict';
function main(
url = 'https://service-1234-uc.a.run.app',
targetAudience = null
) {
// [START google_auth_idtoken_cloudrun]
/**
* TODO(developer): Uncomment these variables before running the sample.
*/
// const url = 'https://YOUR_CLOUD_RUN_URL.run.app';
const {GoogleAuth} = require('google-auth-library');
const auth = new GoogleAuth();
async function request() {
if (!targetAudience) {
// Use the request URL hostname as the target audience for Cloud Run requests
const {URL} = require('url');
targetAudience = new URL(url).origin;
}
console.info(
`request Cloud Run ${url} with target audience ${targetAudience}`
);
const client = await auth.getIdTokenClient(targetAudience);
const res = await client.request({url});
console.info(res.data);
}
request().catch(err => {
console.error(err.message);
process.exitCode = 1;
});
// [END google_auth_idtoken_cloudrun]
}
const args = process.argv.slice(2);
main(...args);
For those of you who do not want to waste a full day's worth of work because of the lack of documentation: here is the accepted answer updated for today, since the JWT class no longer accepts an audience in the constructor.
import { JWT } from "google-auth-library"
import axios from "axios" // axios is used below but was not imported in the original snippet

const client = new JWT({
    forceRefreshOnFailure: true,
    key: service_account.private_key,
    email: service_account.client_email,
})
const token = await client.fetchIdToken("cloud run endpoint")
const { data } = await axios.post("cloud run endpoint/path", payload, {
    headers: {
        Authorization: `Bearer ${token}`
    }
})
return data
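If you need to do the same thing from Python rather than Node, the google-auth library has an equivalent; a hedged sketch using service-account ID token credentials (the key file path is a placeholder, and the Cloud Run URL is the assigned one from the example above):
# sketch: mint an ID token for a Cloud Run audience from a service-account key
# using google-auth for Python; file path and URL are placeholders
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

run_url = 'https://example-ylabperdfq-uc.a.run.app'   # the assigned Cloud Run URL (the audience)
creds = service_account.IDTokenCredentials.from_service_account_file(
    'service-account.json', target_audience=run_url)

session = AuthorizedSession(creds)   # adds "Authorization: Bearer <id_token>" automatically
resp = session.post(run_url, json={'query': '{ your GraphQL query here }'})
print(resp.status_code, resp.text)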

Connecting to Google API with Service Account and OAuth

I'm using an environment that doesn't have native support for a GCP client library. So I'm trying to figure out how to authenticate directly using manually crafted JWT token.
I've adapted the tasks from here, using a Node.js test environment with jwa to implement the algorithm.
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
The private key is taken from a JSON version of the service account file.
When the test runs, it catches a very basic 400 error, that just says "invalid request". I'm not sure how to troubleshoot it.
Could someone please help identify what I'm doing wrong?
var assert = require('assert');
const jwa = require('jwa');
const request = require('request-promise');
const pk = require('../auth/tradestate-2-tw').private_key;
const authEndpoint = 'https://www.googleapis.com/oauth2/v4/token';

describe('Connecting to Google API', function() {
    it('should be able to get an auth token for Google Access', async () => {
        assert(pk && pk.length, 'PK exists');
        const header = { alg: "RS256", typ: "JWT" };
        const body = {
            "iss": "salesforce-treasury-wine@tradestate-2.iam.gserviceaccount.com",
            "scope": "https://www.googleapis.com/auth/devstorage.readonly",
            "aud": "https://www.googleapis.com/oauth2/v4/token",
            "exp": new Date().getTime() + 3600 * 1000,
            "iat": new Date().getTime()
        };
        console.log(JSON.stringify(body, null, 2));
        const encodedHeader = Buffer.from(JSON.toString(header)).toString('base64')
        const encodedBody = Buffer.from(JSON.toString(body)).toString('base64');
        const cryptoString = `${encodedHeader}.${encodedBody}`;
        const algo = jwa('RS256');
        const signature = algo.sign(cryptoString, pk);
        const jwt = `${encodedHeader}.${encodedBody}.${signature}`;
        console.log('jwt', jwt);
        const headers = {'Content-Type': 'application/x-www-form-urlencoded'};
        const form = {
            grant_type: 'urn:ietf:params:oauth:grant-type:jwt-bearer',
            assertion: jwt
        };
        try {
            const result = await request.post({url: authEndpoint, form, headers});
            assert(result, 'Reached result');
            console.log('Got result', JSON.stringify(result, null, 2));
        } catch (err) {
            console.log(JSON.stringify(err, null, 2));
            throw (err);
        }
    });
});
Use JSON.stringify instead of JSON.toString. From the link in your question:
{"alg":"RS256","typ":"JWT"}
The Base64url representation of this is as follows:
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9
When using JSON.toString() and then base64 encoding, you get W29iamVjdCBKU09OXQ== (the encoding of "[object JSON]"), which explains the 400 invalid request, as the server can't decode anything meaningful from what you're sending it.
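If it helps to see the whole exchange in another language, here is the same JWT-bearer flow sketched in Python with PyJWT and requests, which take care of the base64url encoding and signing; the service-account file name is a placeholder, and note that iat/exp are seconds since the epoch:
# sketch of the same OAuth2 JWT-bearer exchange using PyJWT + requests
import json
import time
import jwt        # PyJWT
import requests

with open('service-account.json') as f:
    sa = json.load(f)

now = int(time.time())   # NumericDate claims are in seconds, not milliseconds
assertion = jwt.encode(
    {
        'iss': sa['client_email'],
        'scope': 'https://www.googleapis.com/auth/devstorage.readonly',
        'aud': 'https://www.googleapis.com/oauth2/v4/token',
        'iat': now,
        'exp': now + 3600,
    },
    sa['private_key'],
    algorithm='RS256',
)

resp = requests.post(
    'https://www.googleapis.com/oauth2/v4/token',
    data={
        'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
        'assertion': assertion,
    },
)
print(resp.json())   # access_token on success, error details otherwise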

How to handle expired access token in asp.net core using refresh token with OpenId Connect

I have configured an ASOS OpenIdConnect server and an ASP.NET Core MVC app that uses "Microsoft.AspNetCore.Authentication.OpenIdConnect": "1.0.0" and "Microsoft.AspNetCore.Authentication.Cookies": "1.0.0". I have tested the "Authorization Code" workflow and everything works.
The client web app processes the authentication as expected and creates a cookie storing the id_token, access_token, and refresh_token.
How do I force Microsoft.AspNetCore.Authentication.OpenIdConnect to request a new access_token when it expires?
The asp.net core mvc app ignores the expired access_token.
I would like to have openidconnect see the expired access_token then make a call using the refresh token to get a new access_token. It should also update the cookie values. If the refresh token request fails I would expect openidconnect to "sign out" the cookie (remove it or something).
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    AuthenticationScheme = "Cookies"
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    ClientId = "myClient",
    ClientSecret = "secret_secret_secret",
    PostLogoutRedirectUri = "http://localhost:27933/",
    RequireHttpsMetadata = false,
    GetClaimsFromUserInfoEndpoint = true,
    SaveTokens = true,
    ResponseType = OpenIdConnectResponseType.Code,
    AuthenticationMethod = OpenIdConnectRedirectBehavior.RedirectGet,
    Authority = "http://localhost:27933",
    MetadataAddress = "http://localhost:27933/connect/config",
    Scope = { "email", "roles", "offline_access" },
});
It seems there is no built-in logic in the OpenIdConnect authentication handler for ASP.NET Core to manage the access_token on the server after it is received.
I found that I can intercept the cookie validation event and check whether the access token has expired. If so, make a manual HTTP call to the token endpoint with grant_type=refresh_token.
By calling context.ShouldRenew = true; the cookie will be updated and sent back to the client in the response.
I have provided the basis of what I have done and will update this answer once everything has been resolved.
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    AuthenticationScheme = "Cookies",
    ExpireTimeSpan = new TimeSpan(0, 0, 20),
    SlidingExpiration = false,
    CookieName = "WebAuth",
    Events = new CookieAuthenticationEvents()
    {
        OnValidatePrincipal = context =>
        {
            if (context.Properties.Items.ContainsKey(".Token.expires_at"))
            {
                var expire = DateTime.Parse(context.Properties.Items[".Token.expires_at"]);
                if (expire > DateTime.Now) //TODO: change to check expires in next 5 minutes.
                {
                    logger.Warn($"Access token has expired, user: {context.HttpContext.User.Identity.Name}");
                    //TODO: send refresh token to ASOS. Update tokens in context.Properties.Items
                    //context.Properties.Items["Token.access_token"] = newToken;
                    context.ShouldRenew = true;
                }
            }
            return Task.FromResult(0);
        }
    }
});
You must enable the generation of the refresh_token by setting, in startup.cs:
AuthorizationEndpointPath = "/connect/authorize"; // needed for refresh token
TokenEndpointPath = "/connect/token"; // standard token endpoint name
In your token provider, before validating the token request at the end of the HandleTokenRequest method, make sure you have set the offline scope:
// Call SetScopes with the list of scopes you want to grant
// (specify offline_access to issue a refresh token).
ticket.SetScopes(
    OpenIdConnectConstants.Scopes.Profile,
    OpenIdConnectConstants.Scopes.OfflineAccess);
If that is set up properly, you should receive a refresh_token back when you log in with a password grant_type.
Then from your client you must issue the following request (I'm using Aurelia):
refreshToken() {
    let baseUrl = yourbaseUrl;
    let data = "client_id=" + this.appState.clientId
        + "&grant_type=refresh_token"
        + "&refresh_token=myRefreshToken";
    return this.http.fetch(baseUrl + 'connect/token', {
        method: 'post',
        body: data,
        headers: {
            'Content-Type': 'application/x-www-form-urlencoded',
            'Accept': 'application/json'
        }
    });
}
And that's it. Make sure that your auth provider's HandleTokenRequest is not trying to manipulate requests of type refresh_token:
public override async Task HandleTokenRequest(HandleTokenRequestContext context)
{
    if (context.Request.IsPasswordGrantType())
    {
        // Password type request processing only
        // code that shall not touch any refresh_token request
    }
    else if (!context.Request.IsRefreshTokenGrantType())
    {
        context.Reject(
            error: OpenIdConnectConstants.Errors.InvalidGrant,
            description: "Invalid grant type.");
        return;
    }
    return;
}
The refresh_token should just be able to pass through this method; it is handled by another piece of middleware that handles refresh tokens.
If you want more in depth knowledge about what the auth server is doing, you can have a look at the code of the OpenIdConnectServerHandler:
https://github.com/aspnet-contrib/AspNet.Security.OpenIdConnect.Server/blob/master/src/AspNet.Security.OpenIdConnect.Server/OpenIdConnectServerHandler.Exchange.cs
On the client side you must also be able to handle the auto refresh of the token. Here is an example of an HTTP interceptor for Angular 1.x that handles 401 responses, refreshes the token, then retries the request:
'use strict';
app.factory('authInterceptorService',
['$q', '$injector', '$location', 'localStorageService',
function ($q, $injector, $location, localStorageService) {
var authInterceptorServiceFactory = {};
var $http;
var _request = function (config) {
config.headers = config.headers || {};
var authData = localStorageService.get('authorizationData');
if (authData) {
config.headers.Authorization = 'Bearer ' + authData.token;
}
return config;
};
var _responseError = function (rejection) {
var deferred = $q.defer();
if (rejection.status === 401) {
var authService = $injector.get('authService');
console.log("calling authService.refreshToken()");
authService.refreshToken().then(function (response) {
console.log("token refreshed, retrying to connect");
_retryHttpRequest(rejection.config, deferred);
}, function () {
console.log("that didn't work, logging out.");
authService.logOut();
$location.path('/login');
deferred.reject(rejection);
});
} else {
deferred.reject(rejection);
}
return deferred.promise;
};
var _retryHttpRequest = function (config, deferred) {
console.log('autorefresh');
$http = $http || $injector.get('$http');
$http(config).then(function (response) {
deferred.resolve(response);
},
function (response) {
deferred.reject(response);
});
}
authInterceptorServiceFactory.request = _request;
authInterceptorServiceFactory.responseError = _responseError;
authInterceptorServiceFactory.retryHttpRequest = _retryHttpRequest;
return authInterceptorServiceFactory;
}]);
And here is an example I just did for Aurelia; this time I wrapped my HTTP client in a handler that checks whether the token is expired. If it is, it first refreshes the token, then performs the request. It uses a promise to keep the interface with the client-side data services consistent. This handler exposes the same interface as the aurelia-fetch client.
import {inject} from 'aurelia-framework';
import {HttpClient} from 'aurelia-fetch-client';
import {AuthService} from './authService';
@inject(HttpClient, AuthService)
export class HttpHandler {
constructor(httpClient, authService) {
this.http = httpClient;
this.authService = authService;
}
fetch(url, options){
let _this = this;
if(this.authService.tokenExpired()){
console.log("token expired");
return new Promise(
function(resolve, reject) {
console.log("refreshing");
_this.authService.refreshToken()
.then(
function (response) {
console.log("token refreshed");
_this.http.fetch(url, options).then(
function (success) {
console.log("call success", url);
resolve(success);
},
function (error) {
console.log("call failed", url);
reject(error);
});
}, function (error) {
console.log("token refresh failed");
reject(error);
});
}
);
}
else {
// token is not expired, we return the promise from the fetch client
return this.http.fetch(url, options);
}
}
}
For jQuery you can look at jquery-oauth:
https://github.com/esbenp/jquery-oauth
Hope this helps.
Following on from @longday's answer, I have had success using this code to force a client refresh without having to manually query an OpenID endpoint:
OnValidatePrincipal = context =>
{
    if (context.Properties.Items.ContainsKey(".Token.expires_at"))
    {
        var expire = DateTime.Parse(context.Properties.Items[".Token.expires_at"]);
        if (expire > DateTime.Now) //TODO: change to check expires in next 5 minutes.
        {
            context.ShouldRenew = true;
            context.RejectPrincipal();
        }
    }
    return Task.FromResult(0);
}

How to set content-length-range for s3 browser upload via boto

The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range permission via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general and even a heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the "content-length-range" to the signed url.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on max file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json
conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
    expires_in=300,
    method="PUT",
    bucket=settings.BUCKET_NAME,
    key=object_name,
    headers={'Content-Type': content_type, 'x-amz-acl': 'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using ReactS3Uploader, which is based on tadruj's s3upload.js script. It shouldn't affect anything, since it just passes along whatever the signed URL covers, but it's copied below for completeness.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
    new S3Upload({
        fileElement: this.getDOMNode(),
        signingUrl: '/api/get_signing_url/',
        onProgress: this.props.onProgress,
        onFinishS3Put: this.props.onFinish,
        onError: this.props.onError
    });
},
render: function() {
    return this.transferPropsTo(
        React.DOM.input({type: 'file', onChange: this.uploadFile})
    );
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;
S3Upload.prototype.onFinishS3Put = function(signResult) {
return console.log('base.onFinishS3Put()', signResult.publicUrl);
};
S3Upload.prototype.onProgress = function(percent, status) {
return console.log('base.onProgress()', percent, status);
};
S3Upload.prototype.onError = function(status) {
return console.log('base.onError()', status);
};
function S3Upload(options) {
if (options == null) {
options = {};
}
for (option in options) {
if (options.hasOwnProperty(option)) {
this[option] = options[option];
}
}
this.handleFileSelect(this.fileElement);
}
S3Upload.prototype.handleFileSelect = function(fileElement) {
this.onProgress(0, 'Upload started.');
var files = fileElement.files;
var result = [];
for (var i=0; i < files.length; i++) {
var f = files[i];
result.push(this.uploadFile(f));
}
return result;
};
S3Upload.prototype.createCORSRequest = function(method, url) {
var xhr = new XMLHttpRequest();
if (xhr.withCredentials != null) {
xhr.open(method, url, true);
}
else if (typeof XDomainRequest !== "undefined") {
xhr = new XDomainRequest();
xhr.open(method, url);
}
else {
xhr = null;
}
return xhr;
};
S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
var xhr = new XMLHttpRequest();
xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onreadystatechange = function() {
if (xhr.readyState === 4 && xhr.status === 200) {
var result;
try {
result = JSON.parse(xhr.responseText);
} catch (error) {
this.onError('Invalid signing server response JSON: ' + xhr.responseText);
return false;
}
return callback(result);
} else if (xhr.readyState === 4 && xhr.status !== 200) {
return this.onError('Could not contact request signing server. Status = ' + xhr.status);
}
}.bind(this);
return xhr.send();
};
S3Upload.prototype.uploadToS3 = function(file, signResult) {
var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
if (!xhr) {
this.onError('CORS not supported');
} else {
xhr.onload = function() {
if (xhr.status === 200) {
this.onProgress(100, 'Upload completed.');
return this.onFinishS3Put(signResult);
} else {
return this.onError('Upload error: ' + xhr.status);
}
}.bind(this);
xhr.onerror = function() {
return this.onError('XHR error.');
}.bind(this);
xhr.upload.onprogress = function(e) {
var percentLoaded;
if (e.lengthComputable) {
percentLoaded = Math.round((e.loaded / e.total) * 100);
return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
}
}.bind(this);
}
xhr.setRequestHeader('Content-Type', file.type);
xhr.setRequestHeader('x-amz-acl', 'public-read');
return xhr.send(file);
};
S3Upload.prototype.uploadFile = function(file) {
return this.executeOnSignedUrl(file, function(signResult) {
return this.uploadToS3(file, signResult);
}.bind(this));
};
module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
You can't add it to a signed PUT URL. This only works with the signed policy that goes along with a POST because the two mechanisms are very different.
Signing a URL is a lossy (for lack of a better term) process. You generate the string to sign, then sign it. You send the signature with the request, but you discard and do not send the string to sign. S3 then reconstructs what the string to sign should have been, for the request it receives, and generates the signature you should have sent with that request. There's only one correct answer, and S3 doesn't know what string you actually signed. The signature matches, or doesn't, either because you built the string to sign incorrectly, or your credentials don't match, and it doesn't know which of these possibilities is the case. It only knows, based on the request you sent, the string you should have signed and what the signature should have been.
With that in mind, for content-length-range to work with a signed URL, the client would need to actually send such a header with the request... which doesn't make a lot of sense.
Conversely, with POST uploads, there is more information communicated to S3. It's not only going on whether your signature is valid, it also has your policy document... so it's possible to include directives -- policies -- with the request. They are protected from alteration by the signature, but they aren't encrypted or hashed -- the entire policy is readable by S3 (so, by contrast, we'll call this the opposite, "lossless.")
This difference is why you can't do what you are trying to do with PUT while you can with POST.
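For completeness, this is roughly what the POST route looks like in code. The question uses boto 2's generate_url; the sketch below uses boto3's generate_presigned_post instead (bucket name, key, and size limits are placeholders), since the POST policy is where a content-length-range condition can live:
# sketch: browser POST upload with a content-length-range policy (boto3, not boto 2)
import boto3

s3 = boto3.client('s3')
post = s3.generate_presigned_post(
    Bucket='my-bucket',                 # placeholder bucket name
    Key='uploads/my-image.png',         # placeholder object key
    Fields={'acl': 'public-read'},
    Conditions=[
        {'acl': 'public-read'},
        ['content-length-range', 1, 10 * 1024 * 1024],  # 1 byte .. 10 MB
    ],
    ExpiresIn=300,
)
# post['url'] is the form action; post['fields'] must be sent as form fields
# alongside the file in the multipart/form-data POST from the browser
print(post['url'], post['fields'])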