s3 SignedUrl x-amz-security-token - amazon-web-services

const AWS = require('aws-sdk');

export function main(event, context, callback) {
  const s3 = new AWS.S3();
  const data = JSON.parse(event.body);
  const s3Params = {
    Bucket: process.env.mediaFilesBucket,
    Key: data.name,
    ContentType: data.type,
    ACL: 'public-read',
  };
  const uploadURL = s3.getSignedUrl('putObject', s3Params);
  callback(null, {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({ uploadURL: uploadURL }),
  });
}
When I test it locally it works fine, but after deployment the generated URL contains an x-amz-security-token query parameter, and I get an access denied response. How can I get rid of this x-amz-security-token?

I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
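For reference, the fix amounts to attaching an S3 policy to the Lambda's execution role. A minimal sketch of such a policy follows; the bucket name is a placeholder, and since the handler above sets ACL: 'public-read', s3:PutObjectAcl is included alongside s3:PutObject:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::your-media-files-bucket/*"
    }
  ]
}
```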

I just solved a very similar, probably the same, issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but if you are using temporary credentials anywhere, this will apply.
I initially used the method you use above, but then tried the npm module aws-signature-v4 to see if it behaved differently, and got the same error you are seeing.
You will need the token: it is required whenever you have signed a request with temporary credentials. In Lambda's case the credentials are in the runtime, including the session token, which you need to pass along. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
Buried in the docs (and sorry, I cannot find the place this is stated) it is pointed out that some services require the session token to be processed with the other canonical query parameters. The module I'm using was tacking it on at the end, as the SigV4 instructions seem to imply, so I modified it so the token is part of the canonical query string, and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change and now it works nicely for signing your s3 requests.
Signing is discussed here.
I would use the module I did as I have a feeling the sdk is doing the wrong thing for some reason.
usage example (this is wrapped in a multipart upload, hence the part number and upload ID):
function createBaseUrl(bucketName, uploadId, partNumber, objectKey) {
  let url = sig4.createPresignedS3URL(objectKey, {
    method: "PUT",
    bucket: bucketName,
    expires: 21600,
    query: `partNumber=${partNumber}&uploadId=${uploadId}`
  });
  return url;
}
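The canonical-ordering point above can be illustrated with a small sketch. This is not the full SigV4 algorithm, only the canonical query-string step it describes: every parameter, including X-Amz-Security-Token, must be URI-encoded and sorted by name before signing, rather than appended afterwards. Parameter values here are placeholders.

```javascript
// Sketch of SigV4 canonical query construction: all parameters -- including
// X-Amz-Security-Token -- are sorted by name before the string-to-sign is
// built. Tacking the token on after signing invalidates the signature.
function canonicalQueryString(params) {
  return Object.keys(params)
    .sort() // canonical order: ascending by parameter name
    .map(k => `${encodeURIComponent(k)}=${encodeURIComponent(params[k])}`)
    .join('&');
}

const qs = canonicalQueryString({
  'X-Amz-Expires': '21600',
  'X-Amz-Security-Token': 'FwoGZXIvYXdzEXAMPLE',
  'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
});
// X-Amz-Algorithm sorts first; the security token lands in its sorted
// position among the other parameters, not at the end as an afterthought.
```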

I was facing the same issue while creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST, with content-type 'multipart/form-data'.
Create a client like this:
import boto3

client = boto3.client(
    's3',
    # Hard-coded strings as credentials, not recommended
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Return the response:
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # file you want to upload
response = client.generate_presigned_post(
    bucket_name,
    file_path,
    Fields={"Content-Type": ""},
    Conditions=[
        acl,
        {"Content-Type": ""},
        ["starts-with", "$success_action_status", ""],
    ],
    ExpiresIn=3600
)

Related

Cloudfront Malformed Policy Error with AWS Cloudfront-Signer V3

I'm having an issue with the AWS Cookie-Signer V3 and Custom Policies. I'm currently using @aws-sdk/cloudfront-signer v3.254.0. I have followed the official docs of how to create and handle signed cookies - it works as long as I don't use custom policies.
Setup
I use a custom lambda via an API Gateway to obtain the Set-Cookie header with my signed cookies. These cookies will be attached to a further file-request via my AWS Cloudfront instance. In order to avoid CORS errors, I have set up custom domains for the API Gateway as well as for the Cloudfront instance.
A minified example of the signing and the return value looks as follows:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).toISOString();

// Cookie-Signer
const signedCookie = getSignedCookies({
  keyPairId: "MY-KEYPAIR-ID",
  privateKey: "MY-PRIVATE-KEY",
  url: "https://cloudfront.example.com/path-to-file/file.m3u8",
  dateLessThan: getExpTime,
});

// Response
const response = {
  statusCode: 200,
  isBase64Encoded: false,
  body: JSON.stringify({ url: url, bucket: bucket, key: key }),
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "https://example.com",
    "Access-Control-Allow-Credentials": true,
    "Access-Control-Allow-Methods": "OPTIONS,POST,GET",
  },
  multiValueHeaders: {
    "Set-Cookie": [
      `CloudFront-Expires=${signedCookie["CloudFront-Expires"]}; Domain=example.com; Path=/${path}/`,
      `CloudFront-Signature=${signedCookie["CloudFront-Signature"]}; Domain=example.com; Path=/${path}/`,
      `CloudFront-Key-Pair-Id=${signedCookie["CloudFront-Key-Pair-Id"]}; Domain=example.com; Path=/${path}/`,
    ],
  },
};
This works well if I request a single file from my S3 bucket. However, I want to stream video files from my S3 via CloudFront and, according to the AWS docs, wildcard characters are only allowed with custom policies. I need this wildcard to give access to the entire video folder with my video chunks. Again following the official docs, I have updated my lambda with:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).getTime();

// Custom Policy
const policyString = JSON.stringify({
  Statement: [
    {
      Resource: "https://cloudfront.example.com/path-to-file/*",
      Condition: {
        DateLessThan: { "AWS:EpochTime": getExpTime },
      },
    },
  ],
});

// Cookie signing
const signedCookie = getSignedCookies({
  keyPairId: "MY-KEYPAIR-ID",
  privateKey: "MY-PRIVATE-KEY",
  policy: policyString,
  url: "https://cloudfront.example.com/path-to-file/*",
});
which results in a Malformed Policy error.
What confuses me is that the getSignedCookies() method requires the url property even though I'm using a custom policy with the Resource parameter. Since the Resource parameter is optional, I've also tried without it, which led to the same error.
To rule out that something is wrong with the wildcard character, I've also run a test pointing to the exact file but using the custom policy. Although this works without the custom policy, it fails with the Malformed Policy error when the custom policy is used.
Since there is also no example of how to use the Cloudfront Cookie-Signer V3 with custom policies, I'd be very grateful if someone can tell me how I'm supposed to type this out!
Cheers! 🙌

Uploading item in Amazon s3 bucket from React Native with user's info

I am uploading an image to AWS S3 using React Native with AWS Amplify for mobile app development. Many users use my app.
Whenever any user uploads an image to S3 through the mobile app, I want to get the user's ID along with that image, so that later I can recognise which image on S3 belongs to which user. How can I achieve this?
I am using AWS Cognito Auth for user registration/sign-in. I came to know that whenever a user is registered in AWS Cognito (for the first time), the user gets a unique ID in the pool. Can I pass this user ID along with the image whenever the user uploads one?
Basically, I want some functionality that lets me trace back to the user who uploaded an image to S3. This is because after the image is uploaded to S3, I later want to process it and send the result back ONLY to the user who uploaded it.
You can store the data in S3 in structure similar to the one below:
users/
  123userId/
    image1.jpeg
    image2.jpeg
  anotherUserId456/
    image1.png
    image2.png
Then, if you need all files for a given user, you can use the ListObjectsV2 API in your S3 lambda - docs here
// for lambda function
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const response = await s3.listObjectsV2({
  Bucket: 'STRING_VALUE', /* required */
  Delimiter: 'STRING_VALUE',
  EncodingType: 'url',
  ExpectedBucketOwner: 'STRING_VALUE',
  ContinuationToken: 'STRING_VALUE',
  MaxKeys: 1000,
  Prefix: 'STRING_VALUE',
  RequestPayer: 'requester'
}).promise();
response.Contents.forEach(item => {
  console.log(item);
});
Or if you are using S3 Lambda trigger, you can parse userId from "key" / filename in received event in S3 lambda (in case you used structure above).
{
  "key": "public/users/e1e0858f-2ea1-90f892b68e0c/item.jpg",
  "size": 269582,
  "eTag": "db8aafcca5786b62966073f59152de9d",
  "sequencer": "006068DC0B344DA9E9"
}
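With the folder structure above, pulling the user ID out of the event's key is just a matter of splitting the path. A minimal sketch, assuming keys follow the `public/users/<userId>/...` convention from the example event (the helper name is mine, not from any SDK):

```javascript
// Given a key like "public/users/e1e0858f-2ea1-90f892b68e0c/item.jpg",
// the userId is the path segment immediately after "users".
function userIdFromKey(key) {
  const parts = key.split('/');
  const idx = parts.indexOf('users');
  return idx >= 0 && idx + 1 < parts.length ? parts[idx + 1] : null;
}

const userId = userIdFromKey('public/users/e1e0858f-2ea1-90f892b68e0c/item.jpg');
// → "e1e0858f-2ea1-90f892b68e0c"
```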
Another option is to write "userId" into metadata of the file that will be uploaded to S3.
You can pass "sub" property from Cognito's currently logged user, so in S3 Lambda trigger function you will get the userId from metadata.
import Auth from "@aws-amplify/auth";

const user = await Auth.currentUserInfo();
const userId = user.attributes.sub;
return userId;

// use userId from Cognito and put it into custom metadata
import { Storage } from "aws-amplify";

const userId = "userIdHere";
const filename = "filename"; // or use uuid()
const ref = `public/users/${userId}/${filename}`;
const response = await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
});
AWS Amplify can do all of the above automatically (create folder structures, etc.) if you do not need any special structure for how files are stored - docs here.
You only need to configure Storage ('globally' or per action) with the "level" property.
Storage.configure({ level: 'private' });

await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
});

// or set the level only for a given action
const ref = "userCollection";
await Storage.put(ref, blob, {
  contentType: "image/jpeg",
  metadata: { userId: userId },
  level: "private"
});
So, for example, if you use level "private", file "124.jpeg" will be stored in S3 at
"private/us-east-1:6419087f-d13e-4581-b72e-7a7b32d7c7c1/userCollection/124.jpeg"
However, as you can see, "us-east-1:6419087f-d13e-4581-b72e-7a7b32d7c7c1" looks different than the "sub" in Cognito ("sub" property does not contain regions).
The related discussion is here, also with a few workarounds, but basically you need to decide on your own how you will manage user identification in your project: whether you use "sub" everywhere as the userId, or go with another ID (I think it is called identityId) and treat that as the userId.
PS: If you are using React Native, I guess you will go with push notifications for sending updates from the backend. If that is the case - I was doing something similar ("moderation control") - I added another Lambda function, Cognito's Post-Confirmation Lambda, which creates the user in DynamoDB with the ID from Cognito's "sub" property.
The user can then save the device token needed for push notifications, so when AWS Rekognition finished detection on the image the user uploaded, I queried DynamoDB and used SNS to send the notification to the end user.

s3.putObject(params).promise() does not upload file, but successfully executes then() callback

I made a fairly long series of attempts to put a file in an S3 bucket, after which I have to update my model.
I have the following code (note that I have tried the commented lines too; it works neither with nor without them).
The problem observed:
Everything in the first .then() block (successCallBack()) gets successfully executed, but I do not see the result of s3.putObject().
The bucket in question is public, with no access restrictions. It used to work with the sls offline option; then, because it was not working on AWS, I had to make a lot of changes and managed to get successCallBack() working, which does the database work successfully. However, the file upload still doesn't work.
Some questions:
While solving this, the real questions I am pondering / searching are:
Is a lambda supposed to return something? I saw the AWS docs, but they have fragmented code snippets.
Putting await in front of s3.putObject(params).promise() does not help. I see samples both with and without await in front of calls that return an AWS promise; I am not sure which ones are correct.
What is the correct way to accomplish chained async functions within one lambda function?
UPDATE:
var myJSON = {}

const createBook = async (event) => {
  let bucketPath = "https://com.xxxx.yyyy.aa-bb-zzzzzz-1.amazonaws.com"
  let fileKey = //file key
  let path = bucketPath + "/" + fileKey;
  myJSON = {
    //JSON from headers
  }
  var s3 = new AWS.S3();
  let buffer = Buffer.from(event.body, 'utf8');
  var params = { Bucket: 'com.xxxx.yyyy', Key: fileKey, Body: buffer, ContentEncoding: 'utf8' };
  let putObjPromise = s3.putObject(params).promise();
  putObjPromise
    .then(successCallBack())
    .then(c => {
      console.log('File upload Success!');
      return {
        statusCode: 200,
        headers: { 'Content-Type': 'text/plain' },
        body: "Success!!!"
      }
    })
    .catch(err => {
      let str = "File upload / Creation error:" + err;
      console.log(str);
      return {
        statusCode: err.statusCode || 500,
        headers: { 'Content-Type': 'text/plain' },
        body: str
      }
    });
}

const successCallBack = async () => {
  console.log("Inside success callback - " + JSON.stringify(myJSON));
  const { myModel } = await connectToDatabase()
  console.log("After connectToDatabase")
  const book = await myModel.create(myJSON)
  console.log(msg);
}
Finally, I got this to work. My code worked already in sls offline setup.
What was different on AWS endpoint?
What I observed on the console was that my lambda was set to run under a VPC.
When I chose No VPC, it worked. I do not know if this is the best practice; there must be some security advantage to functions running under a VPC.
I came across this huge explanation about VPCs, but I could not find anything related to S3.
The code posted in the question currently runs fine on AWS endpoint.
If the lambda is running in a VPC, then you need a VPC endpoint to access a service outside the VPC, and S3 is outside the VPC. If security is a concern, creating a VPC endpoint would solve the issue in a better way than disabling the VPC. Also, if security is a concern, adding a policy (or the managed AmazonS3FullAccess policy) to the role that the lambda is using means the S3 bucket wouldn't need to be public.
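For reference, a gateway endpoint for S3 can be declared in CloudFormation along these lines. This is a minimal sketch: the VPC and route table IDs are placeholders, and you would substitute your own.

```yaml
# Minimal sketch of an S3 gateway endpoint; IDs are placeholders.
S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: !Sub com.amazonaws.${AWS::Region}.s3
    VpcEndpointType: Gateway
    VpcId: vpc-0123456789abcdef0
    RouteTableIds:
      - rtb-0123456789abcdef0
```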

Bypassing need for x-amz-cf-id header inclusion in S3 auth in cloudfront

I have a not completely orthodox CF->S3 setup. The relevant components here are:
Cloudfront distribution with origin s3.ap-southeast-2.amazonaws.com
Lambda#Edge function (Origin Request) that adds a S3 authorisation (version 2) query string (Signed using the S3 policy the function uses).
The request returned from Lambda is completely correct. If I log the uri, host and query string, I get the file I am requesting. However, if I access it through the CloudFront link directly, the request fails because it no longer uses the AWSAccessKeyId; instead it opts to use x-amz-cf-id (but uses the same Signature, Amz-Security-Token, etc.). CORRECTION: it may not replace it, but be required in addition to it.
I know this is the case because I have returned both the StringToSign and the SignatureProvided. These both match the Lambda response, except for the AWSAccessKeyId, which has been replaced with the x-amz-cf-id.
This is a very specific question, obviously. I may have to look at remodelling this architecture, but I would prefer not to. Several requirements have led me down this not completely regular setup.
I believe the AWSAccessKeyID => x-amz-cf-id replacement is the result of two mechanisms:
First, you need to configure CloudFront to forward the query parameters to the origin. Without that, it will strip all parameters. If you use S3 signed URLs, make sure to also cache based on all parameters as otherwise you'll end up without any access control.
Second, CloudFront attaches the x-amz-cf-id to the requests that are not going to an S3 origin. You can double-check at the CloudFront console the origin type and you need to make sure it is reported as S3. I have a blog post describing it in detail.
But adding the S3 signature to all requests with Lambda#Edge defeats the purpose. If you want to keep the bucket private and only allow CloudFront to access it, use an Origin Access Identity; that is precisely the use-case it exists for.
So it seems that with authentication V2 or V4, the x-amz-cf-id header that is appended to the origin request, and is inaccessible to the Lambda#Edge origin-request function, must be included in the authentication string. This is not possible.
The simple solution is to use the built-in S3 integration in CloudFront, with a Lambda#Edge origin-request function that switches the bucket if, like me, that's your desired goal. For each bucket you want to use, add the following policy to allow your CF distribution to access the objects within the bucket.
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <CloudfrontID>"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
CloudfrontID refers to the ID under Origin Access Identity, not the Amazon S3 Canonical ID.
X-amz-cf-id is a reserved header of CloudFront, and it can be obtained from the event as event['Records'][0]['cf']['config']['requestId']. You don't have to calculate the V4 authentication value for X-amz-cf-id yourself.
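For illustration, here is how that requestId could be read from the Lambda#Edge event. The mock event below contains only the relevant fields and uses a made-up requestId value:

```javascript
// Minimal sketch: the x-amz-cf-id value CloudFront sends to the origin is
// available to the origin-request function as cf.config.requestId.
function getCfRequestId(event) {
  return event.Records[0].cf.config.requestId;
}

// Mock event with only the fields this sketch needs
const mockEvent = {
  Records: [{ cf: { config: { requestId: 'ExampleRequestId123==' } } }]
};
const cfId = getCfRequestId(mockEvent);
// cfId can now be used when signing the x-amz-cf-id header value
```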
I had a similar task of returning an S3 signed URL from a CloudFront origin-request Lambda#Edge. Here is what I found:
If your S3 bucket does not have dots in the name, you can use an S3 origin in CloudFront, use a domain name in the form <bucket_name>.s3.<region>.amazonaws.com, and generate the signed URL e.g. via getSignedUrl from @aws-sdk/s3-request-presigner. CloudFront should be configured to pass the URL query to the origin. Do not grant CloudFront access to the S3 bucket in this case: the presigned URL will grant access to the bucket.
However, when your bucket does have dots in the name, the signed URL produced by the function will be a path-style URL and you will need to use a CloudFront custom origin with the s3.<region>.amazonaws.com domain. When using a custom origin, CloudFront adds the "x-amz-cf-id" header to the request to the origin. Quite inconveniently, the value of that header must be signed. However, provided you do not change the origin domain in the Lambda#Edge return value, CloudFront seems to use the same value for the "x-amz-cf-id" header as is passed to the lambda event in the event.Records[0].cf.config.requestId field. You can then generate the S3 signed URL with the value of the header. With AWS JavaScript SDK v3 this can be done using S3Client.middlewareStack.add.
Here is an example of a JavaScript Lambda#Edge producing S3 signed URL with "x-amz-cf-id" header:
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

exports.handler = async function handler(event, context) {
  console.log('Request: ', JSON.stringify(event));
  let bucketName = 'XXX';
  let fileName = 'XXX';
  let bucketRegion = 'XXX';
  // Pre-requisite: this Lambda#Edge function has 's3:GetObject' permission for the bucket, otherwise you will get AccessDenied
  const command = new GetObjectCommand({
    Bucket: bucketName, Key: fileName,
  });
  const s3Client = new S3Client({ region: bucketRegion });
  s3Client.middlewareStack.add((next, context) => async (args) => {
    args.request.headers["x-amz-cf-id"] = event.Records[0].cf.config.requestId;
    return await next(args);
  }, {
    step: "build", name: "addXAmzCfIdHeaderMiddleware",
  });
  let signedS3Url = await getSignedUrl(s3Client, command, {
    signableHeaders: new Set(["x-amz-cf-id"]), unhoistableHeaders: new Set(["x-amz-cf-id"])
  });
  let parsedUrl = new URL(signedS3Url);
  const request = event.Records[0].cf.request;
  if (!request.origin.custom || request.origin.custom.domainName !== parsedUrl.hostname) {
    return {
      status: '500',
      body: `CloudFront should use a custom origin configured to the matching domain '${parsedUrl.hostname}'.`,
      headers: {
        'content-type': [{ key: 'Content-Type', value: 'text/plain; charset=UTF-8' }]
      }
    };
  }
  request.querystring = parsedUrl.search.substring(1); // drop '?'
  request.uri = parsedUrl.pathname;
  console.log('Response: ', JSON.stringify(request));
  return request;
}

Cloudfront signed cookies issue, getting 403

We have used CloudFront to store image URLs and are using signed cookies to provide access only through our application. Without signed cookies we are able to access content, but after enabling signed cookies we get HTTP 403.
Below is configuration/cookies we are sending:
Cookies going with the request:
CloudFront-Expires: 1522454400
CloudFront-Key-Pair-Id: xyz...
CloudFront-Policy: abcde...
CloudFront-Signature: abce...
Here is our CloudFront policy:
{
  "Statement": [
    {
      "Resource": "https://*.abc.com/*",
      "Condition": {
        "DateLessThan": { "AWS:EpochTime": 1522454400 }
      }
    }
  ]
}
The cookie domain is .abc.com, and the resource path is https://*.abc.com/*.
We are using CannedPolicy to create CloudFront cookies.
Why isn't this working as expected?
I have got the solution. Our requirement was wildcard access.
CloudFrontCookieSigner.getCookiesForCustomPolicy(
  this.resourcePath,
  pk,
  this.keyPairId,
  expiresOn,
  null,
  "0.0.0.0/0"
);
Where:
resourcePath = "https://" + distribution name + "/*"
activeFrom = it is optional, so pass null
pk = the private key (a few APIs also take a file, but that didn't work, so get the private key from the file and use the function above)
We wanted to access all contents under the distribution, and a canned policy doesn't allow wildcards, so we changed it to a custom policy and it worked.
Review the documentation again
There are only 3 cookies, with the last being either CloudFront-Expires for a canned policy, or CloudFront-Policy for a custom policy.
We are using CannedPolicy
A canned policy has an implicit resource of *, so a canned policy statement cannot have an explicit Resource; you are therefore in fact using a custom policy. If all else is implemented correctly, your solution may simply be to remove the CloudFront-Expires cookie, which isn't used with a custom policy.
"Canned" (bottled, jugged, pre-packaged) policies are used in cases where the only unique information in the policy is the expiration. Their advantage is that they require marginally less bandwidth (and make shorter URLs when creating signed URLs). Their disadvantage is that they are wildcards by design, which is not always what you want.
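To make the contrast concrete, here is a hedged sketch of the two inputs. With the cloudfront-signer, a canned signature is driven by just a URL and an expiry, while a custom policy is an explicit JSON document you build yourself; the domain, paths, and epoch times below are placeholders:

```javascript
// Canned case: the only variable inputs are the exact resource URL and the
// expiry -- no policy document is written by hand.
const cannedInput = {
  url: 'https://d111111abcdef8.cloudfront.net/file.m3u8',
  dateLessThan: 1522454400,
};

// Custom case: the Resource may carry a wildcard, and additional conditions
// (DateGreaterThan, IpAddress) are allowed alongside DateLessThan.
const customPolicy = JSON.stringify({
  Statement: [{
    Resource: 'https://d111111abcdef8.cloudfront.net/videos/*',
    Condition: {
      DateLessThan: { 'AWS:EpochTime': 1522454400 },
      IpAddress: { 'AWS:SourceIp': '0.0.0.0/0' },
    },
  }],
});
```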
While there can be multiple reasons for a 403 AccessDenied response, in our case, after debugging, we learnt that when using signed cookies the CloudFront-Key-Pair-Id cookie remains the same for every request, while the CloudFront-Policy and CloudFront-Signature cookies change values per request; otherwise, 403 Access Denied will occur.
For anyone still struggling with this today or looking for clarification: You need to generate a custom policy if you would like to use wildcards in the resource url i.e. https://mycdn.abc.com/protected-content-folder/*
The AWS CloudFront API has changed over the years; currently, the easiest way to generate signed CloudFront cookies or URLs is via the AWS SDK, if available in your language of choice. Here is an example in Node.js using the JavaScript v3 AWS SDK:
const fs = require("fs");
const { getSignedCookies } = require("@aws-sdk/cloudfront-signer");

// Read in your private_key.pem that you generated.
const privateKey = fs.readFileSync("./.secrets/private_key.pem", {
  encoding: "utf8",
});
const resource = `https://mycdn.abc.com/protected-content-folder/*`;
const dateLessThan = 1658593534;
const policyStr = JSON.stringify({
  Statement: [
    {
      Resource: resource,
      Condition: {
        DateLessThan: {
          "AWS:EpochTime": dateLessThan,
        },
      },
    },
  ],
});
const cfCookies = getSignedCookies({
  keyPairId,
  privateKey,
  policy: policyStr,
});

// Set CloudFront cookies using your web framework of choice
const cfCookieConfig = {
  httpOnly: true,
  secure: process.env.NODE_ENV === "production" ? true : false,
  sameSite: "lax",
  signed: false,
  expires: expiryDate,
  domain: ".abc.com",
};
for (let cookie in cfCookies) {
  ctx.cookies.set(cookie, cfCookies[cookie], { ...cfCookieConfig });
}
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html