Access Denied when downloading via Signed URLs

I am using an IAM role to access S3 from my EC2 instance. In my application, I generate a signed URL for downloading files. However, when the user tries to download a file, they get an Access Denied error.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
// key = file path
var params = { Bucket: bucket, Key: prefix + '/' + key, Expires: 240 };
var url = s3.getSignedUrl('getObject', params);
console.log(req.cookies.s, 'got', url); // URL expires in 240 seconds
res.redirect(url);

The fact that you are providing Signed URLs and they result in an Access Denied error says that the Signed URL is not valid.
Some potential reasons:
The Signed URL was incorrectly constructed (compare against the parameters you pass to getSignedUrl)
The credentials used to generate the Signed URL do not have permission to access the objects (a Signed URL is a way to authorise temporary, limited usage of credentials, but the underlying credentials must have access to the resource)
The time period for the Signed URL expired (but this would result in a different error message)
The object does not exist (which is quite likely, since Access Denied suggests this, rather than an error related to the Signed URL itself)
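One quick way to narrow down the last two causes is to probe the object with the same credentials before signing. A minimal sketch, with an illustrative bucket and key:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// headObject fails if the signing credentials cannot read the object
// (403) or the object does not exist (404, given s3:ListBucket),
// separating the permission and existence cases above.
s3.headObject({ Bucket: 'my-bucket', Key: 'path/to/file' }, (err, data) => {
  if (err) {
    console.error('Cannot access object:', err.statusCode, err.code);
    return;
  }
  // The credentials can read the object, so a signed URL should work too.
  const url = s3.getSignedUrl('getObject', {
    Bucket: 'my-bucket',
    Key: 'path/to/file',
    Expires: 240
  });
  console.log(url);
});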

Thanks to John's answer for pointing out that the main issue behind the Access Denied error is that the signed URL is not valid.
I want to add another thing that helps sort this issue out: even if you don't have correct credentials for the bucket, you will still be able to generate the signed URL!
That's where I got stuck, as I assumed my credentials were correct because I had generated the URL successfully.
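The reason is that getSignedUrl in the JavaScript SDK signs the request locally and never contacts S3, so even bogus credentials produce a URL. A small sketch of the pitfall (the credentials below are intentionally fake):
const AWS = require('aws-sdk');

// Deliberately fake credentials
const s3 = new AWS.S3({
  accessKeyId: 'AKIAFAKEFAKEFAKEFAKE',
  secretAccessKey: 'not-a-real-secret'
});

// Signing is a purely local HMAC computation, so this still "succeeds";
// the URL only fails once S3 actually receives a request with it.
const url = s3.getSignedUrl('getObject', {
  Bucket: 'some-bucket',
  Key: 'some-key',
  Expires: 60
});
console.log(url); // a syntactically valid, but unusable, signed URL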

My problem was different, so maybe I can help someone.
What I was lacking were the permissions for the Lambda to use the S3 bucket. I added this to the template where the LambdaFunction is defined:
Properties:
  Policies:
    - AWSLambdaVPCAccessExecutionRole
    - AWSLambdaBasicExecutionRole
    - S3CrudPolicy:
        BucketName: !Ref MyS3Bucket
(The last two lines are the important ones! I just included all the policies for reference...)

Related

AWS S3 signed url - X-Amz-Security-Token expires too early

I am in a situation where I need a pre-signed URL to live for around a month. Since Signature Version 4 can't deliver this, I've decided to use V2 for now.
I have set the expiration to one month, but for some reason it expires after about a day (I don't know the exact time it expires; it could be within the same day):
<Code>ExpiredToken</Code>
<Message>The provided token has expired.</Message>
As I dug further into this, it looked like the issue could be with the X-Amz-Security-Token, which expires too early. But I've no idea how to set a value for this header (I couldn't find anything about it).
Setup:
It's a Lambda function which generates a signed URL to fetch a file from S3. Everything is done through CloudFormation, with the JavaScript SDK:
const s3 = new AWS.S3({
  signatureVersion: 'v2',
  region: 'eu-west-1'
});
const bucketParam = {
  Bucket: 'test-bucket',
  Key: 'testFile-1111-2222',
  Expires: 2592000 // 30 days in seconds
};
Any help would be appreciated
I believe the IAM role used by Lambda is using temporary credentials, which expire before the link does. According to AWS, you need to generate the presigned URL with IAM user credentials and Signature Version 4 for the link to last up to 7 days:
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
See Why is my presigned URL for an Amazon S3 bucket expiring before the expiration time that I specified? for more details.
You should try creating an IAM user to generate those URLs, and actually use its credentials directly in the Lambda function in order to generate the URL (assuming a role via STS would bring back temporary credentials). And don't forget to use signatureVersion: 'v4'.
Hope this helps
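A minimal sketch of the approach both answers describe, assuming the IAM user's keys are injected via hypothetical environment variables (SIGNING_ACCESS_KEY_ID, SIGNING_SECRET_ACCESS_KEY):
const AWS = require('aws-sdk');

// Long-lived IAM user credentials, not the Lambda role's temporary ones,
// so the signature can outlive the function's session token.
const s3 = new AWS.S3({
  signatureVersion: 'v4',
  region: 'eu-west-1',
  accessKeyId: process.env.SIGNING_ACCESS_KEY_ID,        // hypothetical env var
  secretAccessKey: process.env.SIGNING_SECRET_ACCESS_KEY // hypothetical env var
});

const url = s3.getSignedUrl('getObject', {
  Bucket: 'test-bucket',
  Key: 'testFile-1111-2222',
  Expires: 7 * 24 * 60 * 60 // 7 days, the SigV4 maximum
});
console.log(url);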
The policy "expiration" field cannot be more than 7 days beyond the "x-amz-date" field.
I found no way around this. This seems broken or at least poorly documented.
The workaround seems to be to set "x-amz-date" in the future. While not intuitive, this seems to be allowed, which enables you to set the expiration further in the future.

S3 signed URLs expiring before the expiration argument passed

I am trying to generate a signed URL for S3 bucket objects with the maximum expiration of 604800 seconds, or 7 days. However, after testing I discovered that the links expire in under 24 hours. Doing some digging, I came across this article claiming that the 7-day expiration is only available if the aws-sdk is authorized with an IAM user and the s3 library is making use of AWS Signature v4.
I am definitely using v4: exports.getS3 = () => new AWS.S3({region : 'us-east-1', signatureVersion: 'v4'})
Additionally, as far as I can tell, the lambdas deployed via serverless should default to my IAM user credentials when making use of the sdk without any other manipulation: const AWS = require('aws-sdk')
Here is the aforementioned article: https://aws.amazon.com/premiumsupport/knowledge-center/presigned-url-s3-bucket-expiration/
I also defined the IAM role statements to enable access to S3:
iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:*
    Resource: "*"
  - Effect: Allow
    Action:
      - ssm:*
    Resource: "*"
  - Effect: Allow
    Action:
      - s3:*
    Resource: "*"
I've verified that it is not something as asinine as passing the wrong argument:
exports.getSignedURL = (key, bucket, method, expiration) => {
  console.log(`GETTING SIGNED URL WITH EXPIRATION ${expiration}`)
  return new Promise((resolve, reject) => {
    exports.getS3().getSignedUrl(method, {
      Bucket: bucket,
      Key: key,
      Expires: expiration
    }, (err, url) => err ? reject(err) : resolve(url))
  });
}
Has anybody encountered this issue or have any ideas what may be causing my problem? Is there some configuration I am missing?
Lambda functions deployed with Serverless do not default to your IAM user credentials, as far as I know. They use the IAM role/policy that you supply in serverless.yml, plus basic CloudWatch Logs permissions which are auto-generated by Serverless.
The problem is that your Lambda function is using temporary credentials from STS (via an assumed IAM role) to generate the pre-signed URL. The URL will expire when the temporary session token expires (or earlier if you explicitly indicate an earlier timeout).
If you use IAM user credentials, rather than temporary credentials via an IAM role, you can extend the expiration to 7 days (with signature v4) or end of epoch (with the potentially deprecated signature v2). So, you need to supply your Lambda function with IAM user credentials, possibly through environment variables or AWS Parameter Store or AWS Secrets Manager.
For more, see Why is my presigned URL for an Amazon S3 bucket expiring before the expiration time that I specified?
Also, there are a couple of minor coding issues here:
all AWS methods have a .promise() option to return a promise, so no need to use callbacks and no need to manually create Promise objects
while the getSignedUrl method offers an asynchronous option, the operation itself is synchronous, so you can simply run const url = s3.getSignedUrl(...) (see the sketch below)
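A sketch of both points applied to the helper above (same hypothetical getS3 wrapper as in the question):
exports.getSignedURL = (key, bucket, method, expiration) =>
  // getSignedUrl computes the signature locally, so the synchronous
  // form needs no callback and no Promise wrapper
  exports.getS3().getSignedUrl(method, {
    Bucket: bucket,
    Key: key,
    Expires: expiration
  })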

Amazon S3 upload works with one set of credentials but not the other

I have two S3 buckets under two different accounts. The permissions and CORS settings of both buckets are the same.
The regions of the two buckets are as follows (the first one works):
Region: Asia Pacific (Singapore) (ap-southeast-1) works
Region: US East (Ohio) (us-east-2) does not work
I created an upload script with Node.js and supplied the region plus the following:
Key : __XXXX__
secret: __XXXXX____,
bucket: _____XXXX__
'x-amz-acl': 'public-read',
ACL: 'public-read'
The code works fine with the first account, and uploaded files are publicly accessible. But with the second account (region us-east-2), the script runs successfully and returns a URL, yet when I look in the bucket there is no upload, and the URL says permission denied, which means the resource is not available. The strange things are:
Why is a URL returned if the file was not uploaded to the bucket?
Why does the same code not work for the other account?
I also tried the AWS documentation, but it doesn't seem to be written for a human like me. Help will be highly appreciated.
the script runs successfully and returns a URL, yet when I look in the bucket there is no upload
If you really see no objects in the bucket, then either the upload really failed (I've seen too many scripts just ignoring the error response) or the upload went to a different place than you expect. Care to share the script?
the URL says permission denied, which means the resource is not available
Unfortunately, that's something you have to find out yourself. If the object is in the bucket, has public access, and correct CORS settings, then maybe the URL is not correct.
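To rule out a silently failing upload, a minimal sketch that actually inspects the error response (the bucket, key, and body are illustrative):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-2' });

s3.putObject({
  Bucket: 'my-second-bucket', // illustrative bucket name
  Key: 'uploads/example.txt', // illustrative key
  Body: 'hello world',
  ACL: 'public-read'
}, (err, data) => {
  if (err) {
    // Surfaces the real failure instead of silently returning a URL
    console.error('Upload failed:', err.statusCode, err.code, err.message);
    return;
  }
  console.log('Upload succeeded, ETag:', data.ETag);
});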

Access denied on S3 PUT request with pre-signed URL

I'm trying to upload a file directly to an S3 bucket with a pre-signed URL, but I'm getting an AccessDenied (403 Forbidden) error on the PUT request.
The PUT request is allowed in the bucket's CORS configuration.
Do I also need to update the bucket policy to allow the s3:PutObject and s3:PutObjectAcl actions?
P.S. Forgot to add: I already tried adding s3:PutObject and s3:PutObjectAcl with Principal: *, and in that case uploading works just fine. But how do I restrict access for uploading? It should only be available via pre-signed URLs, right?
OK, I figured out how to fix it. Here are the steps:
Replace Principal: * with "Principal": {"AWS":"arn:aws:iam::USER-ID:user/username"}. Instead of USER-ID:user/username, put the desired user's account ID and username, which you can find in the Amazon IAM section. Read more about Principal here: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html. An example policy is sketched after this list.
Be sure that the user you specified in Principal has s3:PutObject and s3:PutObjectAcl permissions for the needed bucket.
Check your Lambda function's permissions. It should also have s3:PutObject and s3:PutObjectAcl for the needed bucket. You can check this on the IAM Roles page (if you created a separate role for the Lambda function) or through the function's Designer page (read-only).
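For the first two steps, a hedged example of what the resulting bucket policy statement could look like (the account ID, username, and bucket name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/upload-signer" },
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}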
In my case (and maybe this will help others) the problem was that, due to a typo in the SAM template, the proper policy was not being applied TO THE LAMBDA that creates the signed URL.
It took me some time because I thought the problem was in the actual uploading, while the real problem was in creating the URL (though S3 didn't say anything about permission problems...).
So, check whether the S3CrudPolicy is applied to the correct bucket; that may fix the issue for you.

AWS CloudFront with Signed URL: 403 Access Denied

I'm configuring an environment with an Amazon S3 bucket for storage of media files and Amazon CloudFront for restricted distribution purposes.
The access to those media files needs to be private and should be done via a signed URL. So I created the S3 bucket in the South America (São Paulo) region and uploaded some test files. Then I created a CloudFront distribution with that bucket as origin, with its bucket access restricted. I created a new OAI (Origin Access Identity) and also selected the option Yes, Update Bucket Policy so that it auto-configures the S3 bucket policies.
I'm only using the default behavior, and it's configured with the HTTP and HTTPS viewer protocol policy and the GET, HEAD allowed methods. Restrict Viewer Access (Use Signed URLs or Signed Cookies) is set, and the Trusted Signer is set to Self.
Here are some screenshots to clarify the setup (not reproduced here): the S3 bucket policy, the distribution's origin, and the distribution's behavior.
I'm getting an HTTP 403 while trying to access the signed URL generated with either awscli or cfsign.pl:
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
Is there something I'm missing? It looks like I did everything the docs said to do.
I received the same Access Denied error and spent the last couple of hours trying to figure out what was going on. I finally realized that the Expires parameter was set in the past, since I was using my local time instead of UTC. Make sure to set Expires in the future according to UTC.
In my case the problem was with the URL I was passing to the URL-signing code (I was using the AWS SDK for Node.js).
cloudFront.getSignedUrl({
  url: `${distributionUrl}/${encodeURI(key)}`,
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60
})
Note the encodeURI. I was not doing that. The resulting signed URL would still have URI components encoded, BUT would have an invalid signature, thus causing the 403 error.
EDIT: ...and you have to wrap it in url.format() like this:
cloudFront.getSignedUrl({
  url: url.format(`${distributionUrl}/${encodeURI(key)}`),
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60
})
I guess they should be doing that in the SDK.
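For reference, a minimal sketch of the whole signing flow with the AWS SDK for Node.js; the key pair ID, private key source, distribution URL, and object key are all placeholders:
const AWS = require('aws-sdk');
const url = require('url');

// Placeholders: a CloudFront key pair ID and the matching private key PEM.
const keyPairId = 'APKAEXAMPLEEXAMPLE';
const privateKey = process.env.CF_PRIVATE_KEY_PEM;
const distributionUrl = 'https://d111111abcdef8.cloudfront.net';
const key = 'media/video.mp4';

const signer = new AWS.CloudFront.Signer(keyPairId, privateKey);

const signedUrl = signer.getSignedUrl({
  // encodeURI plus url.format, per the answer above
  url: url.format(`${distributionUrl}/${encodeURI(key)}`),
  expires: Math.floor(Date.now() / 1000) + 60 * 60 // one hour from now
});
console.log(signedUrl);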
After recreating both the Amazon S3 bucket and the Amazon CloudFront distribution I was still experiencing the issue. After a session with my rubber duck I found out that the private key file I was using belonged to a deleted CloudFront key pair.
Now that I'm using the correct key to encrypt things everything is working fine. That doesn't explain why the first bucket and distribution weren't working because in that specific case I was using the same set of configurations and the right Private Key file.
I also encountered the same issue. Probably we have to re-generate the CloudFront key pair.