I'm in a situation where I need a pre-signed URL to live for around a month. Since Signature Version 4 can't deliver this, I've decided to use V2 for now.
I have set the expiration to one month, but for some reason it expires
after about a day? (I don't know the exact time it expires; it could be within the same day.)
<Code>ExpiredToken</Code>
<Message>The provided token has expired.</Message>
As I dug further into this, it looked like the issue could be the X-Amz-Security-Token, which expires too early. But I have no idea how to set a value for this header (I couldn't find anything about it).
Setup:
It's a Lambda function that generates a signed URL to fetch a file from S3. Everything is deployed through CloudFormation and written with the JavaScript SDK.
const s3 = new AWS.S3({
  signatureVersion: 'v2',
  region: 'eu-west-1'
});
const bucketParam = {
  Bucket: 'test-bucket',
  Key: 'testFile-1111-2222',
  Expires: 2592000
};
Any help would be appreciated
I believe the IAM role used by Lambda is using temporary credentials, which expire before the link. According to AWS, you need to generate the presigned URL with an IAM user and signatureVersion = 4 for the link to expire after 7 days:
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
See Why is my presigned URL for an Amazon S3 bucket expiring before the expiration time that I specified? for more details
You should try creating an IAM user to generate those URLs, and actually use its long-lived credentials in the Lambda function (rather than the temporary STS/role credentials) in order to generate the URL. And don't forget to use signatureVersion: 'v4' (signature_version='s3v4' in boto3).
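Something along these lines inside the Lambda handler, for example. This is just a sketch; the environment variable names are placeholders for wherever you decide to store the IAM user's keys:
// Sketch only: sign with long-lived IAM user keys instead of the Lambda
// role, so the URL is not tied to a temporary session token.
// SIGNER_ACCESS_KEY_ID / SIGNER_SECRET_ACCESS_KEY are placeholder
// environment variable names, not anything the SDK defines.
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  region: 'eu-west-1',
  signatureVersion: 'v4',
  accessKeyId: process.env.SIGNER_ACCESS_KEY_ID,
  secretAccessKey: process.env.SIGNER_SECRET_ACCESS_KEY
});

// SigV4 caps Expires at 604800 seconds (7 days).
const url = s3.getSignedUrl('getObject', {
  Bucket: 'test-bucket',
  Key: 'testFile-1111-2222',
  Expires: 604800
});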
Hope this helps
The policy "expiration" field cannot be more than 7 days beyond the "x-amz-date" field.
I found no way around this. This seems broken or at least poorly documented.
The workaround seems to be to set "x-amz-date" in the future. While not intuitive this seems to be allowed, which enables you to set the expiration further in the future.
Related
I am sending an S3 signed URL via the SES service in Lambda code and set the expiration time to 1 day or 1 week, but it still expires before 1 day is up. I am not sure exactly how long it stays valid, but for the first few hours I am able to download the object.
Any suggestions on what other changes I'm supposed to make?
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=604800  # 1 week
)
https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html#permissions-executionrole-session
Session duration for temporary security credentials
Lambda assumes the execution role associated with your function to fetch temporary security credentials which are then available as environment variables during a function's invocation. If you use these temporary credentials outside of Lambda, such as to create a presigned Amazon S3 URL, you can't control the session duration. The IAM maximum session duration setting doesn't apply to sessions that are assumed by AWS services such as Lambda. Use the sts:AssumeRole action if you need control over session duration.
If a presigned URL is created using a temporary token, then the URL expires when the token expires, even if the URL was created with a later expiration time.
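If you do want some control over the session from inside the function, a rough sketch of the sts:AssumeRole route could look like the following (the role ARN is a placeholder; the role must trust the Lambda execution role and have its maximum session duration raised, and even then the presigned URL cannot outlive the assumed session):
// Sketch: assume a dedicated role explicitly so the session duration is
// under your control (up to that role's configured maximum, at most 12h).
// Assumes the Lambda execution role is allowed to call sts:AssumeRole on it.
const AWS = require('aws-sdk');
const sts = new AWS.STS();

exports.getUrlWithAssumedRole = async () => {
  const { Credentials } = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::123456789012:role/presign-role', // placeholder
    RoleSessionName: 'presign-session',
    DurationSeconds: 43200 // requires the role's MaxSessionDuration >= 12 hours
  }).promise();

  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4',
    accessKeyId: Credentials.AccessKeyId,
    secretAccessKey: Credentials.SecretAccessKey,
    sessionToken: Credentials.SessionToken
  });

  // The URL stops working when this session expires, even if Expires is larger.
  return s3.getSignedUrl('getObject', {
    Bucket: 'bucket-name',
    Key: 'key-name',
    Expires: 43200
  });
};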
I am trying to generate a signed URL for S3 bucket objects with the maximum expiration of 604800 seconds or 7 days. However, after testing I discovered that the links expire in under 24hrs. Doing some digging I came across this article claiming that the 7 day expiration is only available if the aws-sdk is authorized with an IAM user and the s3 library is making use of AWS Signature v4.
I am definitely using v4: exports.getS3 = () => new AWS.S3({region : 'us-east-1', signatureVersion: 'v4'})
Additionally, as far as I can tell, the lambdas deployed via serverless should default to my IAM user credentials when making use of the sdk without any other manipulation: const AWS = require('aws-sdk')
Here is the aforementioned article: https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/
I also defined the IAM role delegated to my user to enable access to S3:
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:*
    Resource: "*"
  - Effect: Allow
    Action:
      - ssm:*
    Resource: "*"
  - Effect: Allow
    Action:
      - s3:*
    Resource: "*"
I've verified that it is not something as asinine as passing the wrong argument:
exports.getSignedURL = (key, bucket, method, expiration) => {
  console.log(`GETTING SIGNED URL WITH EXPIRATION ${expiration}`)
  return new Promise((resolve, reject) => {
    exports.getS3().getSignedUrl(method, {
      Bucket: bucket,
      Key: key,
      Expires: expiration
    }, (err, url) => err ? reject(err) : resolve(url))
  });
}
Has anybody encountered this issue or have any ideas what may be causing my problem? Is there some configuration I am missing?
Lambda functions deployed with serverless do not default to your IAM user credentials, as far as I know. They use the IAM role/policy that you supply in serverless.yml, plus basic CloudWatch Logs permissions which are auto-generated by serverless
The problem is that your Lambda function is using temporary credentials from STS (via an assumed IAM role) to generate the pre-signed URL. The URL will expire when the temporary session token expires (or earlier if you explicitly indicate an earlier timeout).
If you use IAM user credentials, rather than temporary credentials via an IAM role, you can extend the expiration to 7 days (with signature v4) or end of epoch (with the potentially deprecated signature v2). So, you need to supply your Lambda function with IAM user credentials, possibly through environment variables or AWS Parameter Store or AWS Secrets Manager.
For more, see Why is my presigned URL for an Amazon S3 bucket expiring before the expiration time that I specified?
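As an illustration, here is a sketch of pulling the IAM user's keys from Parameter Store at cold start (the parameter names below are made up for this example):
// Sketch: fetch long-lived IAM user keys from SSM Parameter Store once,
// then build an S3 client from them. Parameter names are illustrative only.
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

let s3Signer; // cached across warm invocations

exports.getSignerClient = async () => {
  if (s3Signer) return s3Signer;

  const { Parameters } = await ssm.getParameters({
    Names: ['/presign/access-key-id', '/presign/secret-access-key'],
    WithDecryption: true
  }).promise();
  const byName = Object.fromEntries(Parameters.map(p => [p.Name, p.Value]));

  s3Signer = new AWS.S3({
    region: 'us-east-1',
    signatureVersion: 'v4',
    accessKeyId: byName['/presign/access-key-id'],
    secretAccessKey: byName['/presign/secret-access-key']
  });
  return s3Signer;
};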
Also, there are a couple of minor coding issues here:
all AWS methods have a .promise() option to return a promise, so no need to use callbacks and no need to manually create Promise objects
while the getSignedUrl method offers an asynchronous option, the operation itself is synchronous so you should simply run const url = s3.getSignedUrl(...)
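Putting those together, a simplified version of the helper might look like this (assuming exports.getS3() now returns a client backed by IAM user credentials, e.g. from environment variables or the Parameter Store sketch above):
// Sketch of the simplified helper: with static credentials getSignedUrl is
// synchronous, so there is no callback and no hand-rolled Promise.
exports.getSignedURL = (key, bucket, method, expiration) => {
  const s3 = exports.getS3(); // must be backed by the IAM user credentials
  return s3.getSignedUrl(method, {
    Bucket: bucket,
    Key: key,
    Expires: expiration
  });
};
If the client's credentials have to be resolved asynchronously, the promise-returning s3.getSignedUrlPromise(...) is the safer variant.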
There is a requirement to generate pre-signed URLs for accessing private S3 objects with a configurable access time ranging from 1 to 6 days. Using a role (on EC2), I was able to generate a URL and access it for a fairly long time before it failed with "Invalid Token" (the URL itself has an expiration that is still valid).
Some checking showed the cause: the role's access key id and secret key are rotated at most every 12 hours. The other option is using an IAM user whose keys do not expire.
I did try the same but with little luck.
session = boto3.session.Session(aws_access_key_id=getkval(KEY_ID), aws_secret_access_key=getkval(A_KEY),region_name='ap-south-1')
s3Client = session.client('s3', config= boto3.session.Config(signature_version='s3v4'))
url=s3Client.generate_presigned_url('get_object', Params = {'Bucket': filebucket, 'Key': key}, ExpiresIn = 86400*int(days))
This generates a presigned URL with the access key associated with the user (I can see it in the link), however it expires sooner than the expiry I set. What could be wrong?
I am using an IAM role to access S3 from my EC2 instance. But in my application, I create a signed URL for downloading the files. However, when the user tries to download the files, it is showing access denied errors.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
//key=file path
var params = {Bucket: bucket, Key: prefix+'/'+key, Expires: 240}
var url = s3.getSignedUrl('getObject', params)
console.log(req.cookies.s, 'got', url) // expires in 240 seconds
res.redirect(url)
The fact that you are providing Signed URLs and they result in an Access Denied error says that the Signed URL is not valid.
Some potential reasons (a quick check for a couple of these is sketched after the list):
The Signed URL was incorrectly constructed (you didn't show us your code, so we can't determine this)
The credentials used to generate the Signed URL do not have permission to access the objects (a Signed URL is a way to authorise temporary, limited usage of credentials, but the underlying credentials must have access to the resource)
The time period for the Signed URL expired (but this would result in a different error message)
The object does not exist (which is quite likely, since Access Denied suggests this, rather than displaying an error related to the Signed URL)
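A quick way to separate the permission and missing-object cases is to call headObject with the same credentials that sign the URL. A rough sketch:
// Sketch: if headObject fails with these credentials, any URL they sign
// will be refused as well. Note that without s3:ListBucket permission,
// S3 returns 403 (not 404) even for a key that does not exist.
const checkBeforeSigning = async (s3, bucket, key) => {
  try {
    await s3.headObject({ Bucket: bucket, Key: key }).promise();
    return 'object exists and is readable with these credentials';
  } catch (err) {
    return `headObject failed: ${err.statusCode} ${err.code}`;
  }
};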
Thanks to #john's answer for pointing out that the main issue behind the Access Denied is that the signed URL is not valid.
I want to add another thing that helps sort this issue out: even if you don't have correct credentials for the bucket, you will still be able to generate the signed URL!
That's where it got me stuck, as I didn't think my credentials were incorrect, since I generated the URL successfully.
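To illustrate the point, here is a deliberately broken sketch: getSignedUrl signs locally and never calls S3, so even the example keys from the AWS docs produce a plausible-looking URL that only fails when someone tries to use it.
// Deliberately bogus credentials (the example keys from the AWS docs):
// getSignedUrl still "succeeds" because signing happens locally and no
// request is sent to S3 at this point.
const AWS = require('aws-sdk');

const badS3 = new AWS.S3({
  region: 'us-east-1',
  signatureVersion: 'v4',
  accessKeyId: 'AKIAIOSFODNN7EXAMPLE',
  secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
});

const url = badS3.getSignedUrl('getObject', {
  Bucket: 'some-bucket',
  Key: 'some-key',
  Expires: 3600
});
console.log(url); // looks valid, but any GET against it will be denied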
My problem was something different, so maybe I can help someone.
What I was lacking was the Permissions for the Lambda to use the S3 Bucket. I added this to the template where the LambdaFunction is defined:
Properties:
  Policies:
    - AWSLambdaVPCAccessExecutionRole
    - AWSLambdaBasicExecutionRole
    - S3CrudPolicy:
        BucketName: !Ref MyS3Bucket
(the last two lines are the important ones! I just put all the policies here for reference...)
I'm configuring an environment with an Amazon S3 Bucket for storage of media files and Amazon CloudFront for restricted distribution purposes.
The access to those media files needs to be private and should be done via a signed URL. So I created the S3 Bucket in the South America (São Paulo) region and uploaded some test files. Then I created a CloudFront Distribution with that previous bucket as Origin, and its Bucket Access is restricted. I created a new OAI (Origin Access Identity) and also selected the option Yes, Update Bucket Policy so that it auto-configures the S3 Bucket Policies.
I'm only using the default Behavior and it's configured with HTTP and HTTPS viewer protocol policy and GET, HEAD allowed methods. Restrict Viewer Access (Use Signed URLs or Signed Cookies) is set and the Trusted Signer is set to Self.
Here are some images to clarify the setup:
S3 Bucket Policy
Distribution's Origin
Distribution's Behavior
I'm getting an HTTP 403 while trying to access the signed URL generated with either awscli or cfsign.pl
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
Is there something missing that I don't know about? It looks like I did everything the docs said to do.
I received the same Access Denied error and spent the last couple of hours trying to figure out what was going on. I finally realized that the Expires parameter was set in the past because I was using my local time instead of UTC. Make sure to set Expires in the future according to UTC.
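For reference, a tiny sketch of computing the expiry as Unix epoch seconds; epoch time is already UTC, so the trap is only in assembling a timestamp from local wall-clock fields by hand:
// Epoch seconds are timezone-independent; derive the expiry from Date.now()
// rather than building a local date manually.
const expires = Math.floor(Date.now() / 1000) + 60 * 60; // one hour from now
console.log(expires); // pass this as the expires value when signing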
In my case the problem was with URL I was passing to URL signing code (I was using AWS SDK for Node.js).
cloudFront.getSignedUrl({
  url: `${distributionUrl}/${encodeURI(key)}`,
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60
})
Note encodeURI. I was not doing that. The resulting signed URL would still have URI components encoded, BUT it would have an invalid signature, thus causing the 403 error.
EDIT: ...And you have to wrap it into url.format() like this:
cloudFront.getSignedUrl({
  url: url.format(`${distributionUrl}/${encodeURI(key)}`),
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60
})
I guess they should be doing that in the SDK.
After recreating both the Amazon S3 Bucket and the Amazon CloudFront Distribution I was still experiencing the issue. After a session with my rubber duck I found out that the Private Key file I was using belonged to a deleted CloudFront key pair.
Now that I'm using the correct key to sign things, everything is working fine. That doesn't explain why the first bucket and distribution weren't working, because in that specific case I was using the same set of configurations and the right Private Key file.
I also encountered the same issue. Probably we have to re-generate the CloudFront key pair.