We are using CloudFront to serve images and signed cookies to provide access only through our application. Without signed cookies we can access the content, but after enabling signed cookies we get HTTP 403.
These are the cookies we are sending with the request:
CloudFront-Expires: 1522454400
CloudFront-Key-Pair-Id: xyz...
CloudFront-Policy: abcde...
CloudFront-Signature: abce...
Here is our CloudFront policy:
{
"Statement": [
{
"Resource":"https://*.abc.com/*",
"Condition":{
"DateLessThan":{"AWS:EpochTime":1522454400}
}
}
]
}
The cookie domain is .abc.com, and the resource path is https://*.abc.com/*.
We are using CannedPolicy to create CloudFront cookies.
Why isn't this working as expected?
I found the solution. Our requirement was wildcard access.
CloudFrontCookieSigner.getCookiesForCustomPolicy(
        this.resourcePath,
        pk,
        this.keyPairId,
        expiresOn,
        null,
        "0.0.0.0/0"
);
Where:
resourcePath = "https://" + distribution domain + "/*"
activeFrom = optional, so pass null
pk = the private key (a few APIs also take a file, but that didn't work, so load the private key from the file and pass it to the function above)
We wanted to access all content under the distribution, and a canned policy doesn't allow wildcards, so we changed it to a custom policy and it worked.
Review the documentation again
There are only 3 cookies, with the last being either CloudFront-Expires for a canned policy, or CloudFront-Policy for a custom policy.
We are using CannedPolicy
A canned policy has an implicit resource of *, so a canned policy statement cannot have an explicit Resource; you are in fact using a custom policy. If all else is implemented correctly, your solution may simply be to remove the CloudFront-Expires cookie, which isn't used with a custom policy.
"Canned" (bottled, jugged, pre-packaged) policies are used in cases where the only unique information in the policy is the expiration. Their advantage is that they require marginally less bandwidth (and make shorter URLs when creating signed URLs). Their disadvantage is that they are wildcards by design, which is not always what you want.
There can be multiple reasons for a 403 AccessDenied response. In our case, after debugging, we learnt that when using signed cookies the CloudFront-Key-Pair-Id cookie remains the same for every request, while the CloudFront-Policy and CloudFront-Signature cookies change values per request; otherwise 403 Access Denied will occur.
For anyone still struggling with this today or looking for clarification: you need to generate a custom policy if you would like to use wildcards in the resource URL, e.g. https://mycdn.abc.com/protected-content-folder/*
The AWS CloudFront API has changed over the years; currently, the easiest way to generate signed CloudFront cookies or URLs is via the AWS SDK, if available in your language of choice. Here is an example in Node.js using the JavaScript v3 AWS SDK:
const { getSignedCookies } = require("#aws-sdk/cloudfront-signer");
// Read in your private_key.pem that you generated.
const privateKey = fs.readFileSync("./.secrets/private_key.pem", {
encoding: "utf8",
});
const resource = `https://mycdn.abc.com/protected-content-folder/*`;
const dateLessThan = 1658593534;
const policyStr = JSON.stringify({
Statement: [
{
Resource: resource,
Condition: {
DateLessThan: {
"AWS:EpochTime": dateLessThan,
},
},
},
],
});
const cfCookies = getSignedCookies({
keyPairId,
privateKey,
policy: policyStr,
});
// Set CloudFront cookies using your web framework of choice
const cfCookieConfig = {
  httpOnly: true,
  secure: process.env.NODE_ENV === "production" ? true : false,
  sameSite: "lax",
  signed: false,
  expires: expiryDate, // a Date for the browser cookie expiry, e.g. new Date(dateLessThan * 1000)
  domain: ".abc.com",
};
for (let cookie in cfCookies) {
ctx.cookies.set(cookie, cfCookies[cookie], { ...cfCookieConfig });
}
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html
Related
I'm having an issue with the AWS Cookie-Signer V3 and custom policies. I'm currently using @aws-sdk/cloudfront-signer v3.254.0. I have followed the official docs on how to create and handle signed cookies - it works as long as I don't use custom policies.
Setup
I use a custom Lambda behind an API Gateway to obtain the Set-Cookie header with my signed cookies. These cookies are then attached to a subsequent file request via my AWS CloudFront distribution. In order to avoid CORS errors, I have set up custom domains for both the API Gateway and the CloudFront distribution.
A minified example of the signing and the return value looks as follows:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).toISOString();
// Cookie-Signer
const signedCookie = getSignedCookies({
keyPairId: "MY-KEYPAIR-ID",
privateKey: "MY-PRIVATE-KEY",
url: "https://cloudfront.example.com/path-to-file/file.m3u8",
dateLessThan: getExpTime,
});
// Response
const response = {
statusCode: 200,
isBase64Encoded: false,
body: JSON.stringify({ url: url, bucket: bucket, key: key }),
headers: {
"Content-Type": "application/json",
"Access-Control-Allow-Origin": "https://example.com",
"Access-Control-Allow-Credentials": true,
"Access-Control-Allow-Methods": "OPTIONS,POST,GET",
},
multiValueHeaders: {
"Set-Cookie": [
`CloudFront-Expires=${signedCookie["CloudFront-Expires"]}; Domain=example.com; Path=/${path}/`,
`CloudFront-Signature=${signedCookie["CloudFront-Signature"]}; Domain=example.com; Path=/${path}/`,
`CloudFront-Key-Pair-Id=${signedCookie["CloudFront-Key-Pair-Id"]}; Domain=example.com; Path=/${path}/`,
],
},
};
This works well if I request a single file from my S3 bucket. However, I want to stream video files from S3 via CloudFront, and according to the AWS docs, wildcard characters are only allowed with custom policies. I need this wildcard to give access to the entire video folder with my video chunks. Again following the official docs, I have updated my lambda with:
// Expiration time
const getExpTime = new Date(Date.now() + 5 * (60 * 60 * 1000)).getTime();
// Custom Policy
const policyString = JSON.stringify({
Statement: [
{
Resource: "https://cloudfront.example.com/path-to-file/*",
Condition: {
DateLessThan: { "AWS:EpochTime": getExpTime },
},
},
],
});
// Cookie signing
const signedCookie = getSignedCookies({
keyPairId: "MY-KEYPAIR-ID",
privateKey: "MY-PRIVATE-KEY",
policy: policyString,
url: "https://cloudfront.example.com/path-to-file/*",
});
which results in a Malformed Policy error.
What confuses me is that the getSignedCookies() method requires the url property even though I'm using a custom policy with the Resource parameter. Since the Resource parameter is optional, I've also tried without it, which led to the same error.
To rule out that something is wrong with the wildcard character, I've also run a test where I pointed to the exact file but used the custom policy. Although this works without the custom policy, it fails with the Malformed Policy error when using the custom policy.
Since there is also no example of how to use the CloudFront Cookie-Signer V3 with custom policies, I'd be very grateful if someone could tell me how I'm supposed to type this out!
Cheers! 🙌
The question heading is broad but my question is not. I just want clarification on my approach. I have an S3 bucket with public access blocked. The bucket policy restricts access by HTTP referer. This is how it looks:
{
    "Version": "2008-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.mysite.com and mysite.com",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::storage/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://www.example.com/*",
                        "https://example.com/*"
                    ]
                }
            }
        }
    ]
}
I still get an error if my frontend (on my TLD) tries to access the S3 resource through the URL (for example, https://storage.s3.amazonaws.com/path/to/my/file.png).
I dropped the approach of hitting the S3 URL directly and decided to build a backend utility on my TLD that fetches the S3 resource in question and sends it back to the frontend. The URL would then look something like this: https://<tld>/fetch-s3-resource/path/to/file.png.
I want to know if this approach is correct or if there's a better one out there. In my mind, even setting an http-referer policy doesn't make sense, because anyone can make a call to my bucket with the Referer header manually set to my TLD.
UPDATE - I found out about signed URLs, which should supposedly allow users to publicly access a resource through the URL. This should solve my problem, but I still have public access blocked and I don't really know which switch to toggle in order to allow users with signed URLs to access the resource.
Here's the sample code for an S3 presigned URL. Here's the link to the doc:
import logging
import boto3
from botocore.exceptions import ClientError


def create_presigned_url(bucket_name, object_name, expiration=3600):
    """Generate a presigned URL to share an S3 object

    :param bucket_name: string
    :param object_name: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Presigned URL as string. If error, returns None.
    """
    # Generate a presigned URL for the S3 object
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_url('get_object',
                                                    Params={'Bucket': bucket_name,
                                                            'Key': object_name},
                                                    ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL
    return response
UPDATE UPDATE - Since the question apparently isn't clear enough, and it seems like I took some liberties with my "loose" language, let me clarify some things.
1 - What am I actually trying to do?
I want to keep my S3 bucket secure in such a way that only users with presigned URLs generated by "me" can access whatever resource is there.
2 - When I ask if "my approach" is better or if there's any other approach, what do I mean by that?
I want to know if there's a "native" / AWS-provided way of accessing the bucket without having to write a backend endpoint that fetches the resource and throws it back to the frontend.
3 - How do I measure one approach against another?
I think this one is quite obvious: you don't try to write an authentication flow from scratch if there's one provided by your framework. The same logic applies here; if there's a way to access the objects that's documented by AWS, then I probably shouldn't go about writing my own "hack".
Presigned URLs came through. You can have all public access blocked and still be able to generate signed URLs and serve them to the frontend. I've already linked the official documentation in my question; here's the final piece of code I ended up with.
import logging

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# `settings` is assumed to be wherever you keep your AWS credentials (e.g. Django settings).


def create_presigned_url(bucket_name, bucket_key, expiration=3600, signature_version='s3v4'):
    """Generate a presigned URL for the S3 object

    :param bucket_name: string
    :param bucket_key: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :param signature_version: string
    :return: Presigned URL as string. If error, returns None.
    """
    s3_client = boto3.client('s3',
                             aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                             aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
                             config=Config(signature_version=signature_version),
                             region_name='us-east-1'
                             )
    try:
        response = s3_client.generate_presigned_url('get_object',
                                                    Params={'Bucket': bucket_name,
                                                            'Key': bucket_key},
                                                    ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the pre-signed URL
    return response
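As a rough sketch of how such a URL then gets consumed (the /api/presign route and the element id here are made up for illustration, not part of the answer):
// Hypothetical frontend usage: the backend calls create_presigned_url() and returns the string.
async function showProtectedImage() {
  const res = await fetch("/api/presign?key=path/to/file.png"); // made-up backend route
  const { url } = await res.json();
  // The browser fetches the object directly from S3; the signature lives in the query string,
  // so the bucket can keep "block all public access" turned on.
  document.querySelector("#preview").src = url;
}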
I created a Cognito user pool
Created users
Created 2 groups WITHOUT any IAM roles
Assigned the users to 2 different groups.
I store the policies for a group in a database and cache them.
In the Lambda authorizer that has been configured, the deny policy works with principalId set to a random string.
For allowing access, I set the principalId to the Cognito user name. I get the policy from the database with permissions allowed for all API Gateway endpoints (for testing).
But even after this I get the "User is not authorized" message.
Is my understanding wrong? What am I doing wrong?
This is my policy for allowing access, with userId being the Cognito user name.
authResponse = {}
authResponse['principalId'] = userId
authResponse['policyDocument'] = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'FirstStatement',
            'Action': 'execute-api:Invoke',
            'Effect': 'Allow',
            'Resource': 'arn:aws:execute-api:us-east-1:*:ppg7tavcld/test/GET/test-api-1/users/*'
        }
    ]
}
return authResponse
Sorry, this was a mistake on my part.
It was solved; I had mixed up the stage position in the Resource ARN.
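To spell out what the stage position means here (a hypothetical illustration of the general API Gateway method ARN layout, not code from the answer):
// arn:aws:execute-api:<region>:<account-id>:<api-id>/<stage>/<HTTP-METHOD>/<resource-path>
// The stage ("test" in the policy above) sits between the API id and the HTTP method; putting it
// anywhere else means the Allow statement never matches the invoked method, and API Gateway
// responds with "User is not authorized".
const allowedResource = "arn:aws:execute-api:us-east-1:*:ppg7tavcld/test/GET/test-api-1/users/*";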
const AWS = require('aws-sdk');

export function main(event, context, callback) {
  const s3 = new AWS.S3();
  const data = JSON.parse(event.body);
  const s3Params = {
    Bucket: process.env.mediaFilesBucket,
    Key: data.name,
    ContentType: data.type,
    ACL: 'public-read',
  };
  const uploadURL = s3.getSignedUrl('putObject', s3Params);
  callback(null, {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify({ uploadURL: uploadURL }),
  });
}
When I test it locally it works fine, but after deployment the generated URL includes an x-amz-security-token query parameter, and then I get an access denied response. How can I get rid of this x-amz-security-token?
I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
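For illustration, the missing permission boils down to a statement along these lines on the function's execution role (sketched here as a plain object; the bucket name is a placeholder):
// Hypothetical execution-role policy statement; adjust the bucket/prefix to your own.
const lambdaRoleS3Statement = {
  Effect: "Allow",
  Action: ["s3:PutObject", "s3:PutObjectAcl"], // PutObjectAcl because the handler sets ACL: 'public-read'
  Resource: "arn:aws:s3:::<mediaFilesBucket>/*",
};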
I just solved a very similar, probably the same issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but you may not be; this may or may not apply, but if you are using temporary credentials it will.
I initially used the method you use above, then tried the npm module aws-signature-v4 to see if it was different, and was getting the same error you are.
You will need the token; it is needed when you have signed a request with temporary credentials. In Lambda's case the credentials are in the runtime, including the session token, which you need to pass. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
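Concretely (a sketch, not from the original answer): the Lambda runtime exposes the function's temporary credentials as standard environment variables, and all three values have to reach whatever signer you use:
// Standard Lambda runtime environment variables for the function's assumed-role credentials.
const credentials = {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  sessionToken: process.env.AWS_SESSION_TOKEN, // this is what ends up as X-Amz-Security-Token in the URL
};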
Buried in the docs (and sorry, I cannot find the place this is stated) it is pointed out that some services require the session token to be processed with the other canonical query params. The module I was using was tacking it on at the end, as the SigV4 instructions seem to imply, so I modified it so the token is canonical and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change, and now it works nicely for signing your S3 requests.
Signing is discussed here.
I would use the module I did, as I have a feeling the SDK is doing the wrong thing for some reason.
Usage example (this is wrapped in a multipart upload, hence the part number and upload ID):
// sig4 is the aws-signature-v4 module mentioned above
function createBaseUrl(bucketName, uploadId, partNumber, objectKey) {
  let url = sig4.createPresignedS3URL(objectKey, {
    method: "PUT",
    bucket: bucketName,
    expires: 21600,
    query: `partNumber=${partNumber}&uploadId=${uploadId}`
  });
  return url;
}
I was facing the same issue. I'm creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST with content-type multipart/form-data.
Create a client like this:
# Do not hard code credentials
s3_client = boto3.client(
    's3',
    # Hard coded strings as credentials, not recommended.
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Then generate and return the presigned POST response:
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # key of the file you want to upload
response = s3_client.generate_presigned_post(bucket_name,
                                             file_path,
                                             Fields={"Content-Type": ""},
                                             Conditions=[acl,
                                                         {"Content-Type": ""},
                                                         ["starts-with", "$success_action_status", ""],
                                                         ],
                                             ExpiresIn=3600)
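For completeness, a hypothetical client-side sketch of how the generate_presigned_post response (its url and fields keys) could be used; presigned and fileInput are assumed names, not from the answer above:
// presigned = the JSON returned by generate_presigned_post: { "url": ..., "fields": {...} }
const formData = new FormData();
Object.entries(presigned.fields).forEach(([name, value]) => formData.append(name, value));
formData.append("acl", "public-read-write"); // must satisfy the acl condition above
formData.append("success_action_status", "201"); // satisfies the "starts-with" condition above
formData.append("file", fileInput.files[0]); // the file must be the last form field
fetch(presigned.url, { method: "POST", body: formData });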
I have a not completely orthodox CF->S3 setup. The relevant components here are:
A CloudFront distribution with origin s3.ap-southeast-2.amazonaws.com
A Lambda@Edge function (Origin Request) that adds an S3 authorisation (version 2) query string (signed using the S3 policy the function uses).
The request returned from Lambda is completely correct. If I log the URI, host and query string, I get the file I am requesting. However, if I access it through the CloudFront link directly, the request fails because it no longer uses the AWSAccessKeyId; instead it opts to use x-amz-cf-id (but uses the same Signature, Amz-Security-Token, etc.). CORRECTION: it may not replace it, but be required in addition to it.
I know this is the case because I have returned both the StringToSign and the SignatureProvided. These both match the Lambda response except for the AWSAccessKeyId, which has been replaced with the x-amz-cf-id.
This is a very specific question, obviously. I may have to look at remodelling this architecture, but I would prefer not to. There are several requirements which have led me down this not completely regular setup.
I believe the AWSAccessKeyId => x-amz-cf-id replacement is the result of two mechanisms:
First, you need to configure CloudFront to forward the query parameters to the origin. Without that, it will strip all parameters. If you use S3 signed URLs, make sure to also cache based on all parameters as otherwise you'll end up without any access control.
Second, CloudFront attaches the x-amz-cf-id to requests that are not going to an S3 origin. You can double-check the origin type in the CloudFront console; you need to make sure it is reported as S3. I have a blog post describing it in detail.
But adding the S3 signature to all the requests with Lambda@Edge defeats the purpose. If you want to keep the bucket private and only allow CloudFront to access it, then use an Origin Access Identity; that is precisely the use case it is for.
So it seems that with Authentication V2 or V4, the x-amz-cf-id header that's appended to the origin request and is inaccessible to the Lambda@Edge origin request function must be included in the authentication string. This is not possible.
The simple solution is to use the built-in S3 origin integration in CloudFront, with a Lambda@Edge origin request function that switches the bucket if, like me, that's your desired goal. For each bucket you want to use, add the following policy to allow your CloudFront distribution to access the objects within the bucket.
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <CloudfrontID>"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<bucket-name>/*"
}
]
}
CloudfrontID refers to the ID under Origin Access Identity, not the Amazon S3 Canonical ID.
X-amz-cf-id is a reserved CloudFront header, and it can be obtained from the event as event['Records'][0]['cf']['config']['requestId']. You don't have to calculate the Authentication V4 signature with X-amz-cf-id.
I had a similar task of returning an S3 signed URL from a CloudFront origin request Lambda@Edge function. Here is what I found:
If your S3 bucket does not have dots in the name, you can use an S3 origin in CloudFront, use a domain name in the form <bucket_name>.s3.<region>.amazonaws.com, and generate the signed URL e.g. via getSignedUrl from @aws-sdk/s3-request-presigner. CloudFront should be configured to pass the URL query string to the origin. Do not grant CloudFront access to the S3 bucket in this case: the presigned URL will grant access to the bucket.
However, when your bucket does have dots in the name, the signed URL produced by the function will be a path-style URL and you will need to use a CloudFront custom origin with the s3.<region>.amazonaws.com domain. When using a custom origin, CloudFront adds an "x-amz-cf-id" header to the request to the origin. Quite inconveniently, the value of the header has to be signed. However, provided you do not change the origin domain in the Lambda@Edge return value, CloudFront seems to use the same value for the "x-amz-cf-id" header as is passed to the lambda event in the event.Records[0].cf.config.requestId field. You can then generate the S3 signed URL with the value of the header. With AWS JavaScript SDK v3 this can be done using S3Client.middlewareStack.add.
Here is an example of a JavaScript Lambda@Edge function producing an S3 signed URL with the "x-amz-cf-id" header:
const {S3Client, GetObjectCommand} = require("@aws-sdk/client-s3");
const {getSignedUrl} = require("@aws-sdk/s3-request-presigner");
exports.handler = async function handler(event, context) {
console.log('Request: ', JSON.stringify(event));
let bucketName = 'XXX';
let fileName = 'XXX';
let bucketRegion = 'XXX';
// Pre-requisite: this Lambda#Edge function has 's3:GetObject' permission for bucket ${bucketName}, otherwise you will get AccessDenied
const command = new GetObjectCommand({
Bucket: bucketName, Key: fileName,
});
const s3Client = new S3Client({region: bucketRegion});
s3Client.middlewareStack.add((next, context) => async (args) => {
args.request.headers["x-amz-cf-id"] = event.Records[0].cf.config.requestId;
return await next(args);
}, {
step: "build", name: "addXAmzCfIdHeaderMiddleware",
});
let signedS3Url = await getSignedUrl(s3Client, command, {
signableHeaders: new Set(["x-amz-cf-id"]), unhoistableHeaders: new Set(["x-amz-cf-id"])
});
let parsedUrl = new URL(signedS3Url);
const request = event.Records[0].cf.request;
if (!request.origin.custom || request.origin.custom.domainName != parsedUrl.hostname) {
return {
status: '500',
body: `CloudFront should use custom origin configured to the matching domain '${parsedUrl.hostname}'.`,
headers: {
'content-type': [{key: 'Content-Type', value: 'text/plain; charset=UTF-8',}]
}
};
}
request.querystring = parsedUrl.search.substring(1); //drop '?'
request.uri = parsedUrl.pathname;
console.log('Response: ', JSON.stringify(request));
return request;
}