User logging automation via CloudWatch - amazon-web-services

I have a task at my company where I have to do a monthly user access review via CloudWatch.
This is a manual process: I go to cloudwatch > cloudwatch_logs > log_groups > /var/log/example_access > example-instance and then document the logs for a list of users from a randomly generated date. The example instance is a certificate manager box that is linked to all of our production fleet nodes. I also have to document which commands each user ran on specific nodes.
Is there any way I can automate this process and dump the results into Word docs? It's getting painful as the list of users/employees keeps growing. Thanks

Sure there is, though I don't reckon you want Word docs. I'd launch an Elasticsearch instance on AWS and then give the users who want the data Kibana access.
Also, circulating Word docs in an org is bad juju; depending on your Windows/Office version it carries risks.
Add this Lambda function, then go into CloudWatch and add it as a subscription filter on the right log groups.
Note you may get missing log entries if they're not logged in JSON format or have funky formatting; if you're using a standard log format it should work.
/* eslint-disable */
// Eslint disabled as this is adapted AWS code.
const zlib = require('zlib')
const elasticsearch = require('elasticsearch')

/**
 * This is an example function to stream CloudWatch Logs to Elasticsearch.
 * @param event
 * @param context
 * @param callback
 */
exports.handler = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = true
  const payload = Buffer.from(event.awslogs.data, 'base64')
  const esClient = new elasticsearch.Client({
    httpAuth: process.env.esAuth, // your params here
    host: process.env.esEndpoint, // your params here.
  })
  zlib.gunzip(payload, (err, result) => {
    if (err) {
      return callback(err)
    }
    const logObject = JSON.parse(result.toString('utf8'))
    const elasticsearchBulkData = transform(logObject)
    if (!elasticsearchBulkData) {
      // Control messages (e.g. the subscription test event) carry no log events to index.
      return callback(null, 'skipped control message')
    }
    const params = { body: [elasticsearchBulkData] }
    esClient.bulk(params, (bulkErr) => {
      if (bulkErr) {
        return callback(bulkErr)
      }
      callback(null, 'success')
    })
  })
}
// Turn a decoded CloudWatch Logs payload into an Elasticsearch bulk request body.
function transform(payload) {
  if (payload.messageType === 'CONTROL_MESSAGE') {
    return null
  }
  let bulkRequestBody = ''
  payload.logEvents.forEach((logEvent) => {
    const timestamp = new Date(1 * logEvent.timestamp)
    // index name format: cwl-<env>-YYYY.MM.DD
    const indexName = [
      `cwl-${process.env.NODE_ENV}-${timestamp.getUTCFullYear()}`, // year
      (`0${timestamp.getUTCMonth() + 1}`).slice(-2), // month
      (`0${timestamp.getUTCDate()}`).slice(-2), // day
    ].join('.')
    const source = buildSource(logEvent.message, logEvent.extractedFields)
    source['@id'] = logEvent.id
    source['@timestamp'] = new Date(1 * logEvent.timestamp).toISOString()
    source['@message'] = logEvent.message
    source['@owner'] = payload.owner
    source['@log_group'] = payload.logGroup
    source['@log_stream'] = payload.logStream
    const action = { index: {} }
    action.index._index = indexName
    action.index._type = 'lambdaLogs'
    action.index._id = logEvent.id
    bulkRequestBody += `${[
      JSON.stringify(action),
      JSON.stringify(source),
    ].join('\n')}\n`
  })
  return bulkRequestBody
}
// Build the document body for one log event. If CloudWatch extracted fields
// (via the subscription filter pattern), index those; otherwise try to parse
// the raw message as JSON.
function buildSource(message, extractedFields) {
  if (extractedFields) {
    const source = {}
    for (const key in extractedFields) {
      if (extractedFields.hasOwnProperty(key) && extractedFields[key]) {
        const value = extractedFields[key]
        if (isNumeric(value)) {
          source[key] = 1 * value
          continue
        }
        const jsonSubString = extractJson(value)
        if (jsonSubString !== null) {
          source[`$${key}`] = JSON.parse(jsonSubString)
        }
        source[key] = value
      }
    }
    return source
  }
  const jsonSubString = extractJson(message)
  if (jsonSubString !== null) {
    return JSON.parse(jsonSubString)
  }
  return {}
}

// Return the JSON substring of a message, or null if it doesn't contain valid JSON.
function extractJson(message) {
  const jsonStart = message.indexOf('{')
  if (jsonStart < 0) return null
  const jsonSubString = message.substring(jsonStart)
  return isValidJson(jsonSubString) ? jsonSubString : null
}

function isValidJson(message) {
  try {
    JSON.parse(message)
  } catch (e) { return false }
  return true
}

function isNumeric(n) {
  return !isNaN(parseFloat(n)) && isFinite(n)
}
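If you'd rather script the subscription filter instead of clicking through the CloudWatch console, here's a minimal sketch with the AWS SDK for JavaScript; the log group name, filter name, region, and function ARN are placeholders for your own values:
// Sketch: subscribe the Lambda above to a log group (all names/ARNs below are placeholders).
const AWS = require('aws-sdk')
const logs = new AWS.CloudWatchLogs({ region: 'us-east-1' })

logs.putSubscriptionFilter({
  logGroupName: '/var/log/example_access',
  filterName: 'stream-to-elasticsearch',
  filterPattern: '', // an empty pattern forwards every log event
  destinationArn: 'arn:aws:lambda:us-east-1:123456789012:function:cwl-to-es',
}, (err, data) => {
  if (err) console.error(err)
  else console.log('Subscription filter created', data)
})
Note that CloudWatch Logs also needs permission to invoke the function (lambda add-permission with the logs service principal), otherwise the filter gets created but nothing arrives.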
Now you should have your logs going into Elasticsearch; go into Kibana and you can search by date, and even write endpoints to allow people to query their own data!
The easy way is to just give stakeholders Kibana access and let them check it out.
Might not be exactly what ya wanted, but I reckon it'll work better.
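If you do end up writing an endpoint for self-service queries, here's a minimal sketch against the cwl-* indices created by the function above; the match on @message is a placeholder, since the exact field to filter on depends on how your access log is parsed:
// Sketch: return one user's log entries in a date range from the cwl-* indices.
const elasticsearch = require('elasticsearch')

const esClient = new elasticsearch.Client({
  httpAuth: process.env.esAuth,
  host: process.env.esEndpoint,
})

async function userLogEntries(user, from, to) {
  const resp = await esClient.search({
    index: 'cwl-*',
    body: {
      query: {
        bool: {
          must: [{ match: { '@message': user } }],
          filter: [{ range: { '@timestamp': { gte: from, lte: to } } }],
        },
      },
      sort: [{ '@timestamp': 'asc' }],
      size: 1000,
    },
  })
  return resp.hits.hits.map((hit) => hit._source)
}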

Related

Flutter AWS Amplify not returning data when calling GraphQL API

On button click I have programmed it to call a GraphQL API which is connected to a Lambda function, and the function pulls data from a DynamoDB table. The query does not produce any error, but it doesn't give me any results either. I have also checked the CloudWatch logs and I don't see any trace of the function being called. I'm not sure what careless mistake I am making here.
Here is my api
void findUser() async {
  try {
    String graphQLDocument = '''query getUserById(\$userId: ID!) {
      getUserById(userId: \$id) {
        id
        name
      }
    }''';
    var operation = Amplify.API.query(
        request: GraphQLRequest<String>(
            document: graphQLDocument,
            variables: {'id': 'USER-14160000000'}));
    var response = await operation.response;
    var data = response.data;
    print('Query result: ' + data);
  } on ApiException catch (e) {
    print('Query failed: $e');
  }
}
Here is my lambda function -
const getUserById = require('./user-queries/getUserById');

exports.handler = async (event) => {
  var userId = event.arguments.userId;
  var name = event.arguments.name;
  var avatarUrl = event.arguments.avatarUrl;
  //console.log('Received Event - ', JSON.stringify(event,3));
  console.log(userId);
  switch(event.info.fieldName) {
    case "getUserById":
      return getUserById(userId);
  }
};

// ./user-queries/getUserById.js
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: 'ca-central-1'});

async function getUserById(userId) {
  const params = {
    TableName: "Bol-Table",
    KeyConditionExpression: 'pk = :hashKey and sk = :sortKey',
    ExpressionAttributeValues: {
      ':hashKey': userId,
      ':sortKey': 'USER'
    }
  };
  try {
    const Item = await docClient.query(params).promise();
    console.log(Item);
    return {
      id: Item.Items[0].pk,
      name: Item.Items[0].details.displayName,
      avatarUrl: Item.Items[0].details.avatarUrl,
      createdAt: Item.Items[0].details.createdAt,
      updatedAt: Item.Items[0].details.updatedAt
    };
  } catch(err) {
    console.log("BOL Error: ", err);
  }
}

module.exports = getUserById;
Upon button click I get this
Moving my comment to an answer:
Can you try changing your graphQLDocument to
String graphQLDocument = '''query getUserById(\$id: ID!) {
  getUserById(userId: \$id) {
    id
    name
  }
}''';
Your variable is declared as $userId but then used as $id. Try calling it $id in both places, matching the key in your variables object.
Your Flutter code is working fine, but the Lambda on the AWS side is returning a blank string "", so there is nothing to print.

Lambda not deleting DynamoDB records when triggered with CloudWatch events

I am trying to delete items in my DynamoDB table using a CloudWatch-event-triggered Lambda. This Lambda scans the table and deletes all expired items. My code seems to be working when I test it using the test event in the console (i.e. it deletes all the expired items). But when the Lambda gets triggered automatically by the CloudWatch event it does not delete anything, even though I can see that the Lambda is being triggered.
exports.handler = async function () {
  var params = {
    TableName: TABLE_NAME
  }
  try {
    const data = await docClient.scan(params).promise();
    const items = data.Items;
    if (items.length != 0) {
      Promise.all(items.map(async (item) => {
        const expirationDT = new Date(item.ExpiresAt);
        const now = new Date();
        if (now > expirationDT) {
          console.log("Deleting item with otc: " + item.Otc + " and name: " + item.SecretName);
          const deleteParams = {
            TableName: TABLE_NAME,
            Key: {
              "Otc": item.Otc,
              "SecretName": item.SecretName,
            },
          };
          try {
            await docClient.delete(deleteParams).promise();
          } catch (err) {
            console.log("The Secret was not deleted due to: ", err.message);
          }
        }
      }))
    }
  } catch (err) {
    console.log("The items were not able to be scanned due to : ", err.message)
  }
}
I know DynamoDB TTL is an option, but I need these deletions to be somewhat precise, and TTL can sometimes take up to 48 hours; I am aware I can use a filter when retrieving records to counteract that. I'm just wondering what's wrong with my code here.
You need to await Promise.all, or your Lambda will end execution before it resolves:
await Promise.all(items.map(async (item) => {
const expirationDT = new Date(item.ExpiresAt);
const now = new Date();
// ...
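For completeness, a minimal sketch of the handler with that fix applied (docClient, TABLE_NAME, and the attribute names are the same as in the question):
// Sketch: same scan-and-delete logic, but the handler awaits every delete
// before returning, so Lambda doesn't end execution early.
exports.handler = async function () {
  const data = await docClient.scan({ TableName: TABLE_NAME }).promise();
  const expired = data.Items.filter((item) => new Date() > new Date(item.ExpiresAt));
  await Promise.all(expired.map((item) =>
    docClient.delete({
      TableName: TABLE_NAME,
      Key: { Otc: item.Otc, SecretName: item.SecretName },
    }).promise()
  ));
};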

Terraform Lambda invocation probable timeout

After having spent 7 hours on this, I decided to reach out to you.
I need to update credentials within a Terraform flow. Since the secrets must not end up in the state file, I use an AWS Lambda function to update the secret of the RDS instance. The password is passed via the CLI.
locals {
  db_password = tostring(var.db_user_password)
}

data "aws_lambda_invocation" "credentials_manager" {
  function_name = "credentials-manager"

  input = <<JSON
{
  "secretString": "{\"username\":\"${module.db_instance.db_user}\",\"password\":\"${local.db_password}\",\"dbInstanceIdentifier\":\"${module.db_instance.db_identifier}\"}",
  "secretId": "${module.db_instance.db_secret_id}",
  "storageId": "${module.db_instance.db_identifier}",
  "forcedMod": "${var.forced_mod}"
}
JSON

  depends_on = [
    module.db_instance.db_secret_id,
  ]
}

output "result" {
  description = "String result of Lambda execution"
  value       = jsondecode(data.aws_lambda_invocation.credentials_manager.result)
}
In order to make sure that the RDS instance status is 'available', the Lambda function also contains a waiter.
When I manually execute the function everything works like a charm.
But within Terraform it does not proceed past this point:
data.aws_lambda_invocation.credentials_manager: Refreshing state...
However, when I look into AWS CloudWatch I can see that the Lambda function is being invoked by Terraform over and over again.
This is the lambda policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1589715377799",
      "Action": [
        "rds:ModifyDBInstance",
        "rds:DescribeDBInstances"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
The lambda function looks like this:
const secretsManager = require('aws-sdk/clients/secretsmanager')
const rds = require('aws-sdk/clients/rds')
const elastiCache = require('aws-sdk/clients/elasticache')
const log = require('loglevel')
/////////////////////////////////////////
// ENVIRONMENT VARIABLES
/////////////////////////////////////////
const logLevel = process.env["LOG_LEVEL"];
const region = process.env["REGION"]
/////////////////////////////////////////
// CONFIGURE LOGGER
log.setLevel(logLevel);
let protocol = []
/////////////////////////////////////////
/////////////////////////////////////////
// DEFINE THE CLIENTS
const SM = new secretsManager({ region })
const RDS = new rds({ region })
const ELC = new elastiCache({region})
/////////////////////////////////////////
/////////////////////////////////////////
// FUNCTION DEFINITIONS
/////////////////////////////////////////
// HELPERS
/**
 * @function waitForSeconds
 * Set a custom waiter.
 *
 * @param {int} ms - the milliseconds to set as the timeout.
 *
 */
const waitForSeconds = (ms) => {
return new Promise(resolve => setTimeout(resolve, ms))
}
// AWS SECRETS MANAGER FUNCTIONS
/**
 * @function UpdateSecretInSM
 * The function updates the secret value in the corresponding secret.
 *
 * @param {string} secretId - The id of the secret located in AWS SecretsManager
 * @param {string} secretString - The value of the new secret
 *
 */
const UpdateSecretInSM = async (secretId,secretString) => {
const params = {SecretId: secretId, SecretString: secretString}
try {
const data = await SM.updateSecret(params).promise()
log.info(`[INFO]: Password for ${secretId} successfully changed in Secrets Manager!`)
let success = {Timestamp: new Date().toISOString(),Func: 'UpdateSecretInSM', Message: `Secret for ${secretId} successfully changed!`}
protocol.push(success)
return
} catch (err) {
log.debug("[DEBUG]: Error: ", err.stack);
let error = {Timestamp: new Date().toISOString(),Func: 'UpdateSecretInSM', Error: err.stack}
protocol.push(error)
return
}
}
/**
 * @function GetSecretFromSM
 * The function retrieves the specified secret from AWS SecretsManager.
 * Returns the password.
 *
 * @param {string} secretId - secretId that is available in AWS SecretsManager
 *
 */
const GetSecretFromSM = async (secretId) => {
try {
const data = await SM.getSecretValue({SecretId: secretId}).promise()
log.debug("[DEBUG]: Secret: ", data);
let success = {Timestamp: new Date().toISOString(),Func: 'GetSecretFromSM', Message: 'Secret from SecretsManager successfully received!'}
protocol.push(success)
const { SecretString } = data
const password = JSON.parse(SecretString)
return password.password
} catch (err) {
log.debug("[DEBUG]: Error: ", err.stack);
let error = {Timestamp: new Date().toISOString(),Func: 'GetSecretFromSM', Error: err.stack}
protocol.push(error)
return
}
}
// AWS RDS FUNCTIONS
/**
 * @function ChangeRDSSecret
 * Change the secret of the specified RDS instance.
 *
 * @param {string} rdsId - id of the RDS instance
 * @param {string} password - new password
 *
 */
const ChangeRDSSecret = async (rdsId,password) => {
const params = {
DBInstanceIdentifier: rdsId,
MasterUserPassword: password
}
try {
await RDS.modifyDBInstance(params).promise()
log.info(`[INFO]: Password for ${rdsId} successfully changed!`)
let success = {Timestamp: new Date().toISOString(), Func: 'ChangeRDSSecret', Message: `Secret for ${rdsId} successfully changed!`}
protocol.push(success)
return
} catch (err) {
log.debug("[DEBUG]: Error: ", err.stack);
let error = {Timestamp: new Date().toISOString(), Func: 'ChangeRDSSecret', Error: err.stack}
protocol.push(error)
return
}
}
const DescribeRDSInstance = async(id) => {
const params = { DBInstanceIdentifier : id }
const secondsToWait = 10000
try {
let pendingModifications = true
while (pendingModifications == true) {
log.info(`[INFO]: Checking modified values for ${id}`)
let data = await RDS.describeDBInstances(params).promise()
console.log(data)
// Extract the 'PendingModifiedValues' object
let myInstance = data['DBInstances']
myInstance = myInstance[0]
if (myInstance.DBInstanceStatus === "resetting-master-credentials") {
log.info(`[INFO]:Password change is being processed!`)
pendingModifications = false
}
log.info(`[INFO]: Waiting for ${secondsToWait/1000} seconds!`)
await waitForSeconds(secondsToWait)
}
let success = {Timestamp: new Date().toISOString(), Func: 'DescribeRDSInstance', Message: `${id} available again!`}
protocol.push(success)
return
} catch (err) {
log.debug("[DEBUG]: Error: ", err.stack);
let error = {Timestamp: new Date().toISOString(), Func: 'DescribeRDSInstance', Error: err.stack}
protocol.push(error)
return
}
}
/**
 * @function WaitRDSForAvailableState
 * Wait for the instance to be available again.
 *
 * @param {string} id - id of the RDS instance
 *
 */
const WaitRDSForAvailableState = async(id) => {
const params = { DBInstanceIdentifier: id}
try {
log.info(`[INFO]: Waiting for ${id} to be available again!`)
const data = await RDS.waitFor('dBInstanceAvailable', params).promise()
log.info(`[INFO]: ${id} available again!`)
let success = {Timestamp: new Date().toISOString(), Func: 'WaitRDSForAvailableState', Message: `${id} available again!`}
protocol.push(success)
return
} catch (err) {
log.debug("[DEBUG]: Error: ", err.stack);
let error = {Timestamp: new Date().toISOString(), Func: 'WaitRDSForAvailableState', Error: err.stack}
protocol.push(error)
return
}
}
// AWS ELASTICACHE FUNCTIONS
// ... removed since they follow the same principle as RDS
/////////////////////////////////////////
// Lambda Handler
/////////////////////////////////////////
exports.handler = async (event,context,callback) => {
protocol = []
log.debug("[DEBUG]: Event:", event)
log.debug("[DEBUG]: Context:", context)
// Variable for the final message the lambda function will return
let finalValue
// Get the password and rds from terraform output
const secretString = event.secretString // manual input
const secretId = event.secretId // coming from secretesmanager
const storageId = event.storageId // coming from db identifier
const forcedMod = event.forcedMod // manual input
// Extract the password from the passed secretString for comparison
const passedSecretStringJSON = JSON.parse(secretString)
const passedSecretString = passedSecretStringJSON.password
const currentSecret = await GetSecretFromSM(secretId)
// Case if the password has already been updated
if (currentSecret !== "ChangeMeViaScriptOrConsole" && passedSecretString === "ChangeMeViaScriptOrConsole") {
log.debug("[DEBUG]: No change necessary.")
finalValue = {timestamp: new Date().toISOString(),
message: 'Lambda function execution finished!',
summary: 'Password already updated. It is not "ChangeMeViaScriptOrConsole."'}
return finalValue
}
// Case if a new password has not been set yet
if (currentSecret === "ChangeMeViaScriptOrConsole" && passedSecretString === "ChangeMeViaScriptOrConsole") {
finalValue = {timestamp: new Date().toISOString(),
message: 'Lambda function execution finished!',
summary: 'Password still "ChangeMeViaScriptOrConsole". Please change me!'}
return finalValue
}
// Case if the passed password equals the stored password and modification is not forced
if (currentSecret === passedSecretString && forcedMod === "no") {
finalValue = {timestamp: new Date().toISOString(),
message: 'Lambda function execution finished!',
summary: 'Stored password is the same as the passed one. No changes made!'}
return finalValue
}
// Case for changing the password
if (passedSecretString !== "ChangeMeViaScriptOrConsole") {
// Update the secret in SM for the specified RDS Instances
await UpdateSecretInSM(secretId,secretString)
log.debug("[DEBUG]: Secret updated for: ", secretId)
// Read the new secret back from SM
const updatedSecret = await GetSecretFromSM(secretId)
log.debug("[DEBUG]: Updated secret: ", updatedSecret)
if (secretId.includes("rds")) {
// Update RDS instance with new secret and wait for it to be available again
await ChangeRDSSecret(storageId, updatedSecret)
await DescribeRDSInstance(storageId)
await WaitRDSForAvailableState(storageId)
} else if (secretId.includes("elasticache")) {
// ... removed since it is analogous to RDS
} else {
protocol.push(`No corresponding Storage Id exists for ${secretId}. Please check the Secret Id/Name in the terraform configuration.`)
}
finalValue ={timestamp: new Date().toISOString(),
message: 'Lambda function execution finished!',
summary: protocol}
return finalValue
} else {
finalValue = {timestamp: new Date().toISOString(),
message: 'Lambda function execution finished!',
summary: 'Nothing changed'}
return finalValue
}
}
Does anyone have an idea how to solve or mitigate this behaviour?
Can you please show the IAM policy for your Lambda function? From my understanding, you might be missing the aws_lambda_permission resource for your Lambda function: https://www.terraform.io/docs/providers/aws/r/lambda_permission.html

TP not receiving transactions after block rejected due to state root hash mismatch - Hyperledger Sawtooth

I have set up a Hyperledger Sawtooth network from the Sawtooth docs; you can find the docker-compose.yaml I used to set up the network here:
https://sawtooth.hyperledger.org/docs/core/releases/1.0/app_developers_guide/sawtooth-default.yaml
Transaction processor code:
const { TransactionProcessor } = require('sawtooth-sdk/processor'); // needed for new TransactionProcessor(...) below
const { TransactionHandler } = require('sawtooth-sdk/processor/handler');
const { InvalidTransaction } = require('sawtooth-sdk/processor/exceptions');
const { TextEncoder, TextDecoder } = require('text-encoding/lib/encoding');
const crypto = require('crypto');
const _hash = (x) => {
return crypto.createHash('sha512').update(x).digest('hex').toLowerCase();
}
const encoder = new TextEncoder('utf8');
const decoder = new TextDecoder('utf8');
const TP_FAMILY = 'grocery';
const TP_NAMESPACE = _hash(TP_FAMILY).substring(0, 6);
class GroceryHandler extends TransactionHandler {
constructor() {
super(TP_FAMILY, ['1.0.0'], [TP_NAMESPACE]);
this.timeout = 500;
}
apply(request, context) {
console.log('Transaction Processor Called!');
this._context = context;
this._request = request;
const actions = ['createOrder'];
try {
let payload = JSON.parse(decoder.decode(request.payload));
let action = payload.action
if(!action || !actions.includes(action)) {
throw new InvalidTransaction(`Unsupported action "${action}"!`);
}
try {
return this[action](payload.data);
} catch(e) {
console.log(e);
}
} catch(e) {
throw new InvalidTransaction('Pass a valid json string.');
}
}
createOrder(payload) {
console.log('Creating order!');
let data = {
id: payload.id,
status: payload.status,
created_at: Math.floor((new Date()).getTime() / 1000)
};
return this._setEntry(this._makeAddress(payload.id), data);
}
_setEntry(address, payload) {
let dataBytes = encoder.encode(JSON.stringify(payload));
let entries = {
[address]: dataBytes
}
return this._context.setState(entries);
}
_makeAddress(id) {
return TP_NAMESPACE + _hash(id).substr(0,64);
}
}
const transactionProcessor = new TransactionProcessor('tcp://validator:4004');
transactionProcessor.addHandler(new GroceryHandler());
transactionProcessor.start();
Client code:
const { createContext, CryptoFactory } = require('sawtooth-sdk/signing');
const { protobuf } = require('sawtooth-sdk');
const { TextEncoder } = require('text-encoding/lib/encoding');
const request = require('request');
const crypto = require('crypto');
const encoder = new TextEncoder('utf8');
const _hash = (x) => {
return crypto.createHash('sha512').update(x).digest('hex').toLowerCase();
}
const TP_FAMILY = 'grocery';
const TP_NAMESPACE = _hash(TP_FAMILY).substr(0, 6);
const context = createContext('secp256k1');
const privateKey = context.newRandomPrivateKey();
const signer = new CryptoFactory(context).newSigner(privateKey);
let payload = {
action: 'create_order',
data: {
id: '1'
}
};
const address = TP_NAMESPACE + _hash(payload.id).substr(0, 64);
const payloadBytes = encoder.encode(JSON.stringify(payload));
const transactionHeaderBytes = protobuf.TransactionHeader.encode({
familyName: TP_FAMILY,
familyVersion: '1.0.0',
inputs: [address],
outputs: [address],
signerPublicKey: signer.getPublicKey().asHex(),
batcherPublicKey: signer.getPublicKey().asHex(),
dependencies: [],
payloadSha512: _hash(payloadBytes)
}).finish();
const transactionHeaderSignature = signer.sign(transactionHeaderBytes);
const transaction = protobuf.Transaction.create({
header: transactionHeaderBytes,
headerSignature: transactionHeaderSignature,
payload: payloadBytes
});
const transactions = [transaction]
const batchHeaderBytes = protobuf.BatchHeader.encode({
signerPublicKey: signer.getPublicKey().asHex(),
transactionIds: transactions.map((txn) => txn.headerSignature),
}).finish();
const batchHeaderSignature = signer.sign(batchHeaderBytes)
const batch = protobuf.Batch.create({
header: batchHeaderBytes,
headerSignature: batchHeaderSignature,
transactions: transactions
});
const batchListBytes = protobuf.BatchList.encode({
batches: [batch]
}).finish();
request.post({
url: 'http://localhost:8008/batches',
body: batchListBytes,
headers: { 'Content-Type': 'application/octet-stream' }
}, (err, response) => {
if (err) {
return console.log(err);
}
console.log(response.body);
});
Validator log: https://justpaste.it/74y5g
Transaction processor log: https://justpaste.it/5ayn6
> grocery-tp#1.0.0 start /processor
> node index.js tcp://validator:4004
Connected to tcp://validator:4004
Registration of [grocery 1.0.0] succeeded
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
Transaction Processor Called!
Creating order!
After the below entry in validator logs, I don't receive any transactions to the processor.
[2018-07-04 10:39:18.026 DEBUG block_validator] Block(c9636780f4babea6b8103665bc1fb19a59ce0ba66289494fc61972e97423a3273dd1d41e93ddf90c933809dab5350a0a83b282aaf25ebdcc6619735e25d8b337 (block_num:75, state:00704f66a517e79dc064e63586b12d677a3b60ce25363a4654fa819a59e4132c, previous_block_id:32b07cd79093aee0b7833b8924c8fef01fce798f3d58560c83c9891b2c05c02f2a4b894de43503fdcb0f129e9f365cfbdc415b798877393f7e75598195ad3c94)) rejected due to state root hash mismatch: 00704f66a517e79dc064e63586b12d677a3b60ce25363a4654fa819a59e4132c != e52737049078b9e0f149bb58fc4938473a5e889fa427536b0e862c4728df5004
When Sawtooth processes a transaction it will send it to your TP more than once and then compare the hashes between the multiple invocations to ensure the same result is returned. If, within the TP, you are generating a different address or a variation of the data stored at an address, it will fail the transaction.
The famous saying in Sawtooth is that the TP must be deterministic for each transaction; in other words, it is similar to the rule in functional programming: the same TP called with the same transaction should produce the same result.
Things to watch for:
Be careful not to construct an address that incorporates timestamp elements, incremental counts or other random bits of information.
Be careful not to do the same for the data you are storing at an address (see the sketch below).
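In the handler above, createOrder sets created_at with new Date() inside the TP, which will differ between invocations. A minimal sketch of a deterministic variant of that method, assuming the client supplies the timestamp in the payload instead (a drop-in replacement for createOrder in the GroceryHandler class):
// Sketch: take every stored value, including the timestamp, from the transaction payload
// so repeated invocations of the same transaction write identical state.
createOrder(payload) {
  console.log('Creating order!');
  const data = {
    id: payload.id,
    status: payload.status,
    created_at: payload.created_at, // supplied by the client, not generated in the TP
  };
  return this._setEntry(this._makeAddress(payload.id), data);
}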

How to set content-length-range for S3 browser upload via boto

The Issue
I'm trying to upload images directly to S3 from the browser and am getting stuck applying the content-length-range permission via boto's S3Connection.generate_url method.
There's plenty of information about signing POST forms, setting policies in general, and even a Heroku method for doing a similar submission. What I can't figure out for the life of me is how to add the "content-length-range" condition to the signed URL.
With boto's generate_url method (example below), I can specify policy headers and have got it working for normal uploads. What I can't seem to add is a policy restriction on max file size.
Server Signing Code
## django request handler
from boto.s3.connection import S3Connection
from django.conf import settings
from django.http import HttpResponse
import mimetypes
import json

conn = S3Connection(settings.S3_ACCESS_KEY, settings.S3_SECRET_KEY)
object_name = request.GET['objectName']
content_type = mimetypes.guess_type(object_name)[0]
signed_url = conn.generate_url(
    expires_in = 300,
    method = "PUT",
    bucket = settings.BUCKET_NAME,
    key = object_name,
    headers = {'Content-Type': content_type, 'x-amz-acl':'public-read'})
return HttpResponse(json.dumps({'signedUrl': signed_url}))
On the client, I'm using ReactS3Uploader, which is based on tadruj's s3upload.js script. It shouldn't be affecting anything, as it seems to just pass along whatever the signed URL covers, but it's copied below for completeness.
ReactS3Uploader JS Code (simplified)
uploadFile: function() {
  new S3Upload({
    fileElement: this.getDOMNode(),
    signingUrl: '/api/get_signing_url/',
    onProgress: this.props.onProgress,
    onFinishS3Put: this.props.onFinish,
    onError: this.props.onError
  });
},
render: function() {
  return this.transferPropsTo(
    React.DOM.input({type: 'file', onChange: this.uploadFile})
  );
}
S3upload.js
S3Upload.prototype.signingUrl = '/sign-s3';
S3Upload.prototype.fileElement = null;
S3Upload.prototype.onFinishS3Put = function(signResult) {
return console.log('base.onFinishS3Put()', signResult.publicUrl);
};
S3Upload.prototype.onProgress = function(percent, status) {
return console.log('base.onProgress()', percent, status);
};
S3Upload.prototype.onError = function(status) {
return console.log('base.onError()', status);
};
function S3Upload(options) {
if (options == null) {
options = {};
}
for (var option in options) {
if (options.hasOwnProperty(option)) {
this[option] = options[option];
}
}
this.handleFileSelect(this.fileElement);
}
S3Upload.prototype.handleFileSelect = function(fileElement) {
this.onProgress(0, 'Upload started.');
var files = fileElement.files;
var result = [];
for (var i=0; i < files.length; i++) {
var f = files[i];
result.push(this.uploadFile(f));
}
return result;
};
S3Upload.prototype.createCORSRequest = function(method, url) {
var xhr = new XMLHttpRequest();
if (xhr.withCredentials != null) {
xhr.open(method, url, true);
}
else if (typeof XDomainRequest !== "undefined") {
xhr = new XDomainRequest();
xhr.open(method, url);
}
else {
xhr = null;
}
return xhr;
};
S3Upload.prototype.executeOnSignedUrl = function(file, callback) {
var xhr = new XMLHttpRequest();
xhr.open('GET', this.signingUrl + '&objectName=' + file.name, true);
xhr.overrideMimeType && xhr.overrideMimeType('text/plain; charset=x-user-defined');
xhr.onreadystatechange = function() {
if (xhr.readyState === 4 && xhr.status === 200) {
var result;
try {
result = JSON.parse(xhr.responseText);
} catch (error) {
this.onError('Invalid signing server response JSON: ' + xhr.responseText);
return false;
}
return callback(result);
} else if (xhr.readyState === 4 && xhr.status !== 200) {
return this.onError('Could not contact request signing server. Status = ' + xhr.status);
}
}.bind(this);
return xhr.send();
};
S3Upload.prototype.uploadToS3 = function(file, signResult) {
var xhr = this.createCORSRequest('PUT', signResult.signedUrl);
if (!xhr) {
this.onError('CORS not supported');
} else {
xhr.onload = function() {
if (xhr.status === 200) {
this.onProgress(100, 'Upload completed.');
return this.onFinishS3Put(signResult);
} else {
return this.onError('Upload error: ' + xhr.status);
}
}.bind(this);
xhr.onerror = function() {
return this.onError('XHR error.');
}.bind(this);
xhr.upload.onprogress = function(e) {
var percentLoaded;
if (e.lengthComputable) {
percentLoaded = Math.round((e.loaded / e.total) * 100);
return this.onProgress(percentLoaded, percentLoaded === 100 ? 'Finalizing.' : 'Uploading.');
}
}.bind(this);
}
xhr.setRequestHeader('Content-Type', file.type);
xhr.setRequestHeader('x-amz-acl', 'public-read');
return xhr.send(file);
};
S3Upload.prototype.uploadFile = function(file) {
return this.executeOnSignedUrl(file, function(signResult) {
return this.uploadToS3(file, signResult);
}.bind(this));
};
module.exports = S3Upload;
Any help would be greatly appreciated here as I've been banging my head against the wall for quite a few hours now.
You can't add it to a signed PUT URL. This only works with the signed policy that goes along with a POST because the two mechanisms are very different.
Signing a URL is a lossy (for lack of a better term) process. You generate the string to sign, then sign it. You send the signature with the request, but you discard and do not send the string to sign. S3 then reconstructs what the string to sign should have been, for the request it receives, and generates the signature you should have sent with that request. There's only one correct answer, and S3 doesn't know what string you actually signed. The signature matches, or doesn't, either because you built the string to sign incorrectly, or your credentials don't match, and it doesn't know which of these possibilities is the case. It only knows, based on the request you sent, the string you should have signed and what the signature should have been.
With that in mind, for content-length-range to work with a signed URL, the client would need to actually send such a header with the request... which doesn't make a lot of sense.
Conversely, with POST uploads, there is more information communicated to S3. It's not only going on whether your signature is valid, it also has your policy document... so it's possible to include directives -- policies -- with the request. They are protected from alteration by the signature, but they aren't encrypted or hashed -- the entire policy is readable by S3 (so, by contrast, we'll call this the opposite, "lossless.")
This difference is why you can't do what you are trying to do with PUT while you can with POST.
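For reference, this is roughly what the policy document that accompanies a browser POST upload looks like; the content-length-range restriction is just one more entry in the conditions list (the bucket name, key prefix, and byte limits below are placeholders):
{
  "expiration": "2015-12-31T12:00:00.000Z",
  "conditions": [
    {"bucket": "example-bucket"},
    ["starts-with", "$key", "uploads/"],
    {"acl": "public-read"},
    ["starts-with", "$Content-Type", "image/"],
    ["content-length-range", 0, 10485760]
  ]
}
The signature is computed over the base64-encoded policy, so the size limit is protected from tampering in the same way as the other conditions.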