There is a requirement to generate pre-signed URLs for accessing private S3 objects, with a configurable access time ranging from 1 to 6 days. Using a role (on EC2), I was able to generate a URL and access it for a fairly long time before it fails with "Invalid Token" (the URL itself, though, carries an expiration that is still valid).
Some checking showed the cause: the role's access key ID and secret key are rotated at most every 12 hours. The other option is to use an IAM user, whose keys do not expire. I tried the same, but with little luck:
```
session = boto3.session.Session(aws_access_key_id=getkval(KEY_ID),
    aws_secret_access_key=getkval(A_KEY), region_name='ap-south-1')
s3Client = session.client('s3', config=boto3.session.Config(signature_version='s3v4'))
url = s3Client.generate_presigned_url('get_object',
    Params={'Bucket': filebucket, 'Key': key}, ExpiresIn=86400 * int(days))
```
This generates a presigned URL with the access key associated with the user (I can see it in the link); however, it still expires sooner than the requested expiry. What could be wrong?
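One quick way to check which kind of credentials a session is actually signing with is to look for a session token; a small sketch against the session above (temporary credentials always carry a token, long-term IAM user keys never do):

```
# Sketch: if the frozen credentials include a token, they are temporary, and the
# presigned URL will die when that token expires, whatever ExpiresIn says.
creds = session.get_credentials().get_frozen_credentials()
if creds.token:
    print("temporary credentials - URL lifetime capped by token expiry")
else:
    print("long-term keys - ExpiresIn honored up to the SigV4 limit of 7 days")
```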
I am sending an S3 signed URL via the SES service in Lambda code and set the token expiration time to 1 day (or 1 week), but it still expires before 1 day is up. I am not sure exactly how long it stays valid, but for the first few hours I am able to download the object.
Any suggestions on what other changes I am supposed to make?
```
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'bucket-name',
        'Key': 'key-name'
    },
    ExpiresIn=604800  # 1 week
)
```
https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html#permissions-executionrole-session
Session duration for temporary security credentials
Lambda assumes the execution role associated with your function to fetch temporary security credentials which are then available as environment variables during a function's invocation. If you use these temporary credentials outside of Lambda, such as to create a presigned Amazon S3 URL, you can't control the session duration. The IAM maximum session duration setting doesn't apply to sessions that are assumed by AWS services such as Lambda. Use the sts:AssumeRole action if you need control over session duration.
If a presigned URL is created using a temporary token, then the URL expires when the token expires, even if the URL was created with a later expiration time.
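In other words, the fix is to sign with an IAM user's long-term keys rather than the function's role credentials. A minimal sketch, where USER_ACCESS_KEY and USER_SECRET_KEY are placeholders for keys made available to the function (for example via Secrets Manager):

```
import boto3
from botocore.config import Config

# Placeholders for an IAM user's long-term keys, not the execution role's creds.
s3 = boto3.client('s3',
                  aws_access_key_id=USER_ACCESS_KEY,
                  aws_secret_access_key=USER_SECRET_KEY,
                  config=Config(signature_version='s3v4'))
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'bucket-name', 'Key': 'key-name'},
    ExpiresIn=604800  # 7 days, the Signature Version 4 maximum
)
```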
So, I'm inside an AWS Lambda function. I need to return temp creds from an API endpoint so that an upload can be performed from the browser directly to S3.
The files can be very large, hundreds of gigabytes, so the creds need to last a long time. What's the easiest way to get that kind of creds inside a Lambda?
The short answer is that you need to assume a role, as I describe in this blog post. A key part of that post is using a session policy to scope the assumed role to a single key on S3.
However, if it takes more than an hour to upload the file, that solution won't work as written, because a Lambda can't assume another role with a requested duration > one hour (see role chaining), a limit that can't be increased.
This means that you need to create a user that can assume the role, and make that user's long-term credentials available to the Lambda (typically via Secrets Manager). Once you've retrieved those credentials, use them to create an STS client (you don't say what language you're using, and I typically use Python, so that's what's shown):
```
sts_client = boto3.client(
    'sts',
    aws_access_key_id=stored_access_key,
    aws_secret_access_key=stored_secret_key)
```
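Where `stored_access_key` and `stored_secret_key` might be retrieved like this (a sketch; the secret name and its field names are hypothetical):

```
import json
import boto3

# Hypothetical secret 'presign-user-keys' holding the user's long-term keys.
secret = json.loads(
    boto3.client('secretsmanager')
         .get_secret_value(SecretId='presign-user-keys')['SecretString'])
stored_access_key = secret['access_key']
stored_secret_key = secret['secret_key']
```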
Then with those credentials you can assume a role that can write the file. Following the blog post, the base role has permissions to write any file on S3, and the session policy limits the assumed role to the specific file:
```
session_policy = json.dumps({
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': 's3:PutObject',
            'Resource': f"arn:aws:s3:::{BUCKET}/{KEY}"
        }
    ]
})

response = sts_client.assume_role(
    RoleArn=ASSUMABLE_ROLE_ARN,
    RoleSessionName="example",
    Policy=session_policy,
    DurationSeconds=12 * 3600)

# these are the credentials that you'd pass to the client application
limited_access_key = response['Credentials']['AccessKeyId']
limited_secret_key = response['Credentials']['SecretAccessKey']
limited_session_token = response['Credentials']['SessionToken']
```
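On the client side, those scoped credentials are used to build an ordinary S3 client; a sketch (the local file name is illustrative):

```
import boto3

# These credentials can only put the single object allowed by the session policy.
s3 = boto3.client('s3',
                  aws_access_key_id=limited_access_key,
                  aws_secret_access_key=limited_secret_key,
                  aws_session_token=limited_session_token)
s3.upload_file('huge-file.bin', BUCKET, KEY)  # multipart upload handled automatically
```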
12 hours is enough time to transfer 500 GB over a 100 Mbps connection. If you need more time than that, you'll have to create an actual user and return its credentials. You can attach an inline policy to this user to limit its access to the single file, serving the same purpose as the session policy in this example (see the sketch below). But since you're limited to 5,000 IAM users in an account, this is not something you want to do on a regular basis.
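A rough sketch of that per-user setup; the user name and policy name are hypothetical:

```
import json
import boto3

iam = boto3.client('iam')
# Hypothetical single-purpose user; the inline policy mirrors the session policy above.
iam.create_user(UserName='upload-user')
iam.put_user_policy(
    UserName='upload-user',
    PolicyName='single-object-put',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': 's3:PutObject',
            'Resource': f"arn:aws:s3:::{BUCKET}/{KEY}"
        }]
    }))
# The long-term key pair to hand to the client (and later delete with the user).
access_key = iam.create_access_key(UserName='upload-user')['AccessKey']
```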
I have the following code to generate a pre-signed URL:
```
params = {'Bucket': bucket_name, 'Key': object_name}
response = s3_client.generate_presigned_url('get_object',
                                            Params=params,
                                            ExpiresIn=expiration)
```
It works fine on the old bucket I have been using for the last year:
https://old-bucket.s3.amazonaws.com/test_image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIxxxxxxxxxxE%2F20210917%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20210917T210448Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=54e173601fec5f140dd901b0eae1dafbcd8d7ee8b8f311fdc1b120ca447cdd0c
I can paste this URL into a browser and download the file. The file is AWS-KMS encrypted.
But the same AWS-KMS encrypted file, uploaded to a newly created bucket, returns the following URL:
https://new-bucket.s3.amazonaws.com/test_image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIxxxxxxxxxxE%2F20210917%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20210917T210500Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=2313e0131d4251f9fba522fc8e9880d960f674f3449e141848bd38ca19e1b528
which returns a SignatureDoesNotMatch error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
There are no changes in the source code - only the bucket name provided to the generate_presigned_url function differs.
The IAM user I am providing to boto3.client has read/write permissions on both buckets.
Comparing the properties and permissions of both buckets, and of the files I am requesting from them, everything looks the same.
GetObject and PutObject work fine for both buckets when dealing with the file directly. The issue only occurs when using a pre-signed URL.
So, are there any settings/permissions/rules/anything else that need to be configured or enabled to make pre-signed URLs work with a particular S3 bucket?
I am in a situation where I need a pre-signed URL to live for around a month. Since Signature Version 4 isn't able to deliver this, I've decided to use Version 2 for now.
I have set the expiration to one month, but for some reason it expires after about a day (I don't know the exact time it expires; it could be within the same day):
```
<Code>ExpiredToken</Code>
<Message>The provided token has expired.</Message>
```
As I dug further into this, it looked like the issue could be the X-Amz-Security-Token, which expires too early. But I have no idea how to set a value for this header (I couldn't find anything about it).
Setup:
It's a Lambda function which generates a signed URL to fetch a file from S3. Everything is done through CloudFormation, using the JavaScript SDK:
```
const s3 = new AWS.S3({
    signatureVersion: 'v2',
    region: 'eu-west-1'
});
const bucketParam = {
    Bucket: 'test-bucket',
    Key: 'testFile-1111-2222',
    Expires: 2592000
};
```
Any help would be appreciated
I believe the IAM role used by Lambda is issuing temporary credentials, which expire before the link does. According to AWS, you need to generate the presigned URL with an IAM user and Signature Version 4 for the link to last up to 7 days:
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
See Why is my presigned URL for an Amazon S3 bucket expiring before the expiration time that I specified? for more details
You should try creating an IAM user to generate those URLs, and use that user's long-term credentials in the Lambda function to sign them (note that assuming a role via STS would hand you temporary credentials again, with the same problem). And don't forget to use signatureVersion='s3v4'.
Hope this helps
The policy "expiration" field cannot be more than 7 days beyond the "x-amz-date" field.
I found no way around this. This seems broken or at least poorly documented.
The workaround seems to be to set "x-amz-date" in the future. While not intuitive, this seems to be allowed, which lets you push the expiration further into the future, as sketched below.
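To make that concrete, here is a rough Python sketch of building and signing a browser-POST upload policy with a future-dated x-amz-date. The bucket, key, region, and the ACCESS_KEY/SECRET_KEY placeholders are all illustrative, and whether S3 honors the future date is exactly the underdocumented behavior described above:

```
import base64
import datetime
import hashlib
import hmac
import json

def _hmac(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

# Date the policy 7 days in the future, so its expiration (capped at
# x-amz-date + 7 days) can sit up to 14 days from now.
signing_date = datetime.datetime.utcnow() + datetime.timedelta(days=7)
amz_date = signing_date.strftime('%Y%m%dT%H%M%SZ')
datestamp = signing_date.strftime('%Y%m%d')
expiration = (signing_date + datetime.timedelta(days=7)).strftime('%Y-%m-%dT%H:%M:%SZ')

policy = base64.b64encode(json.dumps({
    'expiration': expiration,
    'conditions': [
        {'bucket': 'test-bucket'},
        {'key': 'testFile-1111-2222'},
        {'x-amz-algorithm': 'AWS4-HMAC-SHA256'},
        {'x-amz-credential': f'{ACCESS_KEY}/{datestamp}/eu-west-1/s3/aws4_request'},
        {'x-amz-date': amz_date},
    ],
}).encode()).decode()

# Standard SigV4 signing-key derivation, keyed to the future datestamp.
key = _hmac(('AWS4' + SECRET_KEY).encode(), datestamp)
for part in ('eu-west-1', 's3', 'aws4_request'):
    key = _hmac(key, part)
signature = hmac.new(key, policy.encode(), hashlib.sha256).hexdigest()
# 'policy' and 'signature' go into the browser form fields alongside the
# x-amz-* values above.
```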
I am trying to use signed URLs for images, and once I was able to make it work. But then I deleted it, and trying to do it once again, I cannot get it working.
Here is what I have done so far:
1. Created a CloudFront key pair in the IAM Management Console. Downloaded the private and public keys, and noted the Access Key ID = XXXXXXXXXXXXXXXXXXXX.
2. Created an S3 bucket. No custom configuration.
3. Created a CloudFront distribution with the following settings:
   - Origin domain name: my bucket
   - Restrict bucket access: Yes
   - Created a new Origin Access Identity
   - Grant read permissions on bucket: Yes
   - Restrict Viewer Access (Use Signed URLs or Signed Cookies): Yes
   - Trusted Signers: Self
And then inside my Laravel code, I have the following in my controller:
```
$keyPairId = 'XXXXXXXXXXXXXXXXXXXX';
$privateKey = config_path('pk-XXXXXXXXXXXXXXXXXXXX.pem');
$url = "http://xxxxxxxxxxxxxx.cloudfront.net/image.jpg";
$cf = new UrlSigner($keyPairId, $privateKey);
// Note: the second argument is the expiry timestamp; time() alone expires immediately.
$imgSrc = $cf->getSignedUrl($url, time() + 3600);
echo "<img src='{$imgSrc}' />";
```
But every time I am getting the same error:
```
<Error>
    <Code>MissingKey</Code>
    <Message>Missing Key-Pair-Id query parameter or cookie value</Message>
</Error>
```
Need help.
You have to use CloudFront-specific key pairs and pass those to $cf. More information on how to download or upload your own public key:
http://docs.aws.amazon.com/AWSSecurityCredentials/1.0/AboutAWSCredentials.html#KeyPairs
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-signers.html#private-content-creating-cloudfront-key-pairs
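For comparison, a minimal Python sketch of the same signing flow with botocore's CloudFrontSigner; the key pair ID, key file, URL, and one-hour expiry are placeholders taken from the question, and the third-party rsa package is an assumption (cryptography works as well):

```
import datetime

import rsa  # third-party 'rsa' package
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign with the CloudFront key pair's private key; CloudFront expects SHA-1 RSA.
    with open('pk-XXXXXXXXXXXXXXXXXXXX.pem', 'rb') as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, 'SHA-1')

signer = CloudFrontSigner('XXXXXXXXXXXXXXXXXXXX', rsa_signer)  # key pair ID + signer
url = signer.generate_presigned_url(
    'http://xxxxxxxxxxxxxx.cloudfront.net/image.jpg',
    # Expire one hour from now; passing "now" (like time() above) yields a dead URL.
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1))
```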