I upload to and download from S3 with a presigned POST/URL. The presigned URL/POST is generated with boto3 in the Lambda function (it is deployed with Zappa).
When I added my AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID as environment variables, it worked perfectly. Then I removed my credentials and attached an IAM role to the Lambda with full access to the S3 bucket. After that, the Lambda still returns the presigned URL and getObject works well; however, when I want to upload an object through the URL, it returns an InvalidAccessKeyId error. The key ID used starts with ASIA..., which means those are temporary credentials.
It seems that the Lambda does not use the IAM role, or what is the problem?
import boto3

# FILE_BUCKET is defined elsewhere in the module

class S3Api:
    def __init__(self):
        self.s3 = boto3.client(
            's3',
            region_name='eu-central-1'
        )

    def generate_store_url(self, key):
        # Presigned POST for uploading an object
        return self.s3.generate_presigned_post(FILE_BUCKET,
                                               key,
                                               Fields=None,
                                               Conditions=None,
                                               ExpiresIn=604800)

    def generate_get_url(self, key):
        # Presigned URL for downloading an object
        return self.s3.generate_presigned_url('get_object',
                                              Params={'Bucket': FILE_BUCKET,
                                                      'Key': key},
                                              ExpiresIn=604800)
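For context, the client uploads through the returned presigned POST roughly like this (a minimal sketch using the requests library; the key and file path are placeholders):

import requests

# resp is the dict returned by generate_store_url(); it contains 'url' and 'fields'
resp = S3Api().generate_store_url('photos/example.jpg')  # key is a placeholder

with open('example.jpg', 'rb') as f:  # local file path is a placeholder
    # The policy fields returned by S3 must be sent as form fields alongside the file
    upload = requests.post(resp['url'], data=resp['fields'], files={'file': ('example.jpg', f)})

print(upload.status_code)  # 204 on success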
My result for sts:getCallerIdentity:
{
'UserId': '...:dermus-api-dev',
'Account': '....',
'Arn': 'arn:aws:sts::....:assumed-role/dermus-api-dev-ZappaLambdaExecutionRole/dermus-api-dev',
'ResponseMetadata': {
'RequestId': 'a1bd7c31-0199-472e-bff7-b93a4f855450',
'HTTPStatusCode': 200,
'HTTPHeaders': {
'x-amzn-requestid': 'a1bd7c31-0199-472e-bff7-b93a4f855450',
'content-type': 'text/xml',
'content-length': '474',
'date': 'Tue, 09 Mar 2021 08:36:30 GMT'
},
'RetryAttempts': 0
}
}
dermus-api-dev-ZappaLambdaExecutionRole role is attached to dermus-api-dev lambda.
Presigned URLs and Lambda credentials interact in a non-obvious way.
From the docs, emphasis mine:
Anyone with valid security credentials can create a presigned URL. However, in order to successfully access an object, the presigned URL must be created by someone who has permission to perform the operation that the presigned URL is based upon.
The credentials that you can use to create a presigned URL include:
IAM instance profile: Valid up to 6 hours
AWS Security Token Service: Valid up to 36 hours when signed with permanent credentials, such as the credentials of the AWS account root user or an IAM user
IAM user: Valid up to 7 days when using AWS Signature Version 4
To create a presigned URL that's valid for up to 7 days, first designate IAM user credentials (the access key and secret access key) to the SDK that you're using. Then, generate a presigned URL using AWS Signature Version 4.
If you created a presigned URL using a temporary token, then the URL expires when the token expires, even if the URL was created with a later expiration time.
Bottom line: the URL might be expired if you wait too long, because the Lambda function's temporary credentials have already expired.
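If you need longer-lived URLs, one sketch of a workaround is to build the presigning client from long-lived IAM user credentials (for example supplied through dedicated environment variables or a secrets store) instead of the Lambda role's temporary credentials, and force Signature Version 4. The variable names below are assumptions, not part of the original code:

import os
import boto3
from botocore.config import Config

# Hypothetical env vars holding a dedicated IAM user's long-lived keys
presign_client = boto3.client(
    's3',
    region_name='eu-central-1',
    aws_access_key_id=os.environ['PRESIGN_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['PRESIGN_SECRET_ACCESS_KEY'],
    config=Config(signature_version='s3v4'),
)

url = presign_client.generate_presigned_url(
    'get_object',
    Params={'Bucket': FILE_BUCKET, 'Key': 'some-key'},  # key is a placeholder
    ExpiresIn=604800,  # 7 days is only possible with IAM user credentials + SigV4
)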
Related
Currently, I have an issue creating a valid Signature V4 presigned URL for a PUT request.
The URLs are generated on the server side and are then provided to clients.
The clients should use the URLs to upload a file through an API Gateway into an Amazon S3 bucket.
To authenticate the request, API Gateway IAM authentication is used.
For my use case, a direct upload into an S3 bucket via an S3 presigned URL is not possible.
The following code describes the generation of the presigned URL and is written in TypeScript. The generation of the Signature V4 URL is based on the AWS-provided package @aws-sdk/signature-v4.
import { SignatureV4 } from "@aws-sdk/signature-v4";
import { Sha256 } from "@aws-crypto/sha256-js";
import { formatUrl } from "@aws-sdk/util-format-url";
const createSignedUrl = async (credentials: {
accessKeyId: string,
secretAccessKey: string,
sessionToken: string,
}, requestParams: {
method: "GET" | "PUT",
host: string,
protocol: string,
path: string,
}) => {
const sigv4 = new SignatureV4({
service: "execute-api",
region: process.env.AWS_REGION!,
credentials: {
accessKeyId: credentials.accessKeyId,
secretAccessKey: credentials.secretAccessKey,
sessionToken: credentials.sessionToken,
},
sha256: Sha256,
applyChecksum: false
});
const signedUrlRequest = await sigv4.presign({
method: requestParams.method,
hostname: requestParams.host,
path: requestParams.path,
protocol: requestParams.protocol,
headers: {
host: requestParams.host,
},
}, {
expiresIn: EXPIRES_IN,
});
const signedUrl = formatUrl(signedUrlRequest);
return signedUrl
};
I use Postman to test the presigned URLs.
If I generate a presigned URL for a GET request, everything works fine.
If I generate a presigned URL for a PUT request and don't set a body in Postman for the PUT request, everything works fine. But I end up with an empty file in my bucket ;-(.
If I generate a presigned URL for a PUT request and set a body in Postman (via Body -> binary -> [select file]), it fails!
Error message:
The request signature we calculated does not match the signature you provided. ...
The AWS documentation https://docs.aws.amazon.com/general/latest/gr/create-signed-request.html describes that the payload has to be hashed within the canonical request. But I don't have the payload at signing time.
Is there also an UNSIGNED-PAYLOAD option if I want to generate a presigned URL for a PUT request that is sent to an API Gateway, as described in the documentation for the Amazon S3 service?
How do I configure the SignatureV4 object or the presign(...) method call to generate a valid PUT request URL with UNSIGNED-PAYLOAD?
I was able to compare my generated canonical request with the canonical request that Amazon API Gateway expects.
Amazon API Gateway always expects a hash of the payload, no matter whether I add the query parameter X-Amz-Content-Sha256=UNSIGNED-PAYLOAD to the URL or not.
Thus using UNSIGNED-PAYLOAD as the payload hash in the canonical request is not possible with API Gateway IAM authentication, as it would be with the Amazon S3 service.
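For contrast, when uploading directly to S3 (which the question rules out), a presigned PUT URL works with an arbitrary body because S3 presigned URLs are signed with UNSIGNED-PAYLOAD. A minimal boto3 sketch, with bucket and key as placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# S3 presigned URLs use UNSIGNED-PAYLOAD in the canonical request,
# so the body does not need to be known at signing time.
put_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'example-bucket', 'Key': 'uploads/example.png'},  # placeholders
    ExpiresIn=3600,
)
print(put_url)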
I have granted the Cognito user access to the S3 objects matching that user's attribute via a Cognito Identity Pool.
How can I view images in a bucket from a URL in a browser while logged in as the Cognito user?
What I want to do
To split the S3 bucket into tenants and allow only image files from the tenant to which the Cognito user belongs to be viewed.
To display images with an img tag (<img src="">).
What I did
S3 object arn: arn:aws:s3:::sample-tenant-bucket/public/1/photos/test.jpg
Create Cognito User with custom attributes
aws cognito-idp admin-create-user \
--user-pool-id "ap-northeast-1_xxxx" \
--username "USER_NAME" \
--user-attributes Name=custom:tenant_id,Value=1
Cognito Identity Pool
Mapping of user attributes to principal tags
"tenantId": "custom:tenant_id"
Added sts:TagSession permission to the authenticated role
Attach policy to the authenticated role
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "*",
"Resource": "arn:aws:s3:::sample-tenant-bucket/public/${aws:PrincipalTag/tenantId}/*",
"Effect": "Allow"
}
]
}
Hypothesis
Typing the object URL (https://sample-tenant-bucket.s3.ap-northeast-1.amazonaws.com/public/1/1670773819-1.jpg) into the address bar results in a 403, even when logged in to the AWS console as an administrator rather than as a Cognito user. Therefore,
I thought that object URLs could not be used for non-public objects.
A presigned URL seems to work, but I was wondering if there is another way to do it, since a presigned URL does not depend on whether the user is already logged in to the AWS account or not.
Maybe this question is the same as asking how to display an image in a bucket by URL in a browser that is already logged in with the object owner's AWS account.
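For reference, a minimal sketch of the presigned-URL approach mentioned above, assuming temporary credentials have already been obtained for the authenticated Cognito identity (for example via cognito-identity's get_credentials_for_identity); the tenant_credentials variable, bucket, and key are placeholders:

import boto3

# Temporary credentials previously obtained for the Cognito identity; values are placeholders
s3 = boto3.client(
    's3',
    region_name='ap-northeast-1',
    aws_access_key_id=tenant_credentials['AccessKeyId'],
    aws_secret_access_key=tenant_credentials['SecretKey'],
    aws_session_token=tenant_credentials['SessionToken'],
)

# The URL is signed with the tenant-scoped role session, so the
# ${aws:PrincipalTag/tenantId} policy still applies when S3 evaluates the request
img_src = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'sample-tenant-bucket', 'Key': 'public/1/photos/test.jpg'},
    ExpiresIn=900,
)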
Thanks for the great packages!
I have a problem when developing with LocalStack, using the S3 service to create a presigned POST URL.
I ran LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
My settings are AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I made sure the bucket exists:
> awslocal s3api list-buckets
{
"Buckets": [
{
"Name": "my-bucket",
"CreationDate": "2021-11-16T08:43:23+00:00"
}
],
"Owner": {
"DisplayName": "webfile",
"ID": "bcaf1ffd86f41161ca5fb16fd081034f"
}
}
I try to create a presigned URL, running this in the console:
s3_client_sync.create_presigned_post(bucket_name=settings.S3_Bucket, object_name="application/test.png", fields={"Content-Type": "image/png"}, conditions=[["Expires", 3600]])
and get a response like this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I test it using Insomnia,
and I have read the log in LocalStack:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing, so that I cannot create the presigned POST URL?
The problem is with your AWS configuration -
AWS_ACCESS_KEY_ID=test // Should be an Actual access Key for the IAM user
AWS_SECRET_ACCESS_KEY=test // Should be an Actual Secret Key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // Endpoint seems wrong
S3_Bucket=my-bucket // Actual Bucket Name in AWS S3 console
For more information, try reading here and set up your environment with the correct AWS credentials - Setup AWS Credentials
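Following that suggestion, a minimal boto3 sketch that supplies the credentials and region explicitly when building the client used for the presigned POST; the key and credential values are placeholders:

import boto3

s3 = boto3.client(
    's3',
    region_name='us-east-1',
    aws_access_key_id='AKIA...placeholder...',       # placeholder
    aws_secret_access_key='...placeholder...',       # placeholder
)

# boto3's built-in presigned POST helper
presigned = s3.generate_presigned_post(
    Bucket='my-bucket',
    Key='application/test.png',
    Fields={'Content-Type': 'image/png'},
    Conditions=[{'Content-Type': 'image/png'}],  # fields must be matched by a condition
    ExpiresIn=3600,
)
print(presigned['url'], presigned['fields'])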
I'm trying to register a temporary QuickSight user and generate an embed URL to put in my React app. However, when calling the register user API I get a 403 error for the CORS preflight OPTIONS request:
Access to XMLHttpRequest at 'https://quicksight.ap-southeast-2.amazonaws.com/accounts//namespaces/default/users' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource."
I've also tried using us-east-1 as my region, but that also fails.
Users sign into my webapp with Cognito credentials. The identity pool has an associated IAM role, and I've attached a policy to that role giving access to register a new QuickSight user and get the embed URL. My webapp currently uses the aws-sdk library to assume the role through STS, and then makes the subsequent QuickSight calls.
The React app is hosted on Amplify
quicksightRegisterUser(data) {
var params = {
AwsAccountId: 'QQQ',
Email: 'XXX',
IdentityType: 'IAM' ,
Namespace: 'default',
UserRole: "READER",
IamArn: 'arn:aws:iam::YYY:role/ZZZ',
SessionName: 'XXX',
UserName:'XXX'
};
var quicksight = new QuickSight();
quicksight.registerUser(params, function (err, data1) {
if (err) {
console.log("err register user");
console.log(err);
} // an error occurred
else {
console.log("Register User1");
console.log(data1)
}
})
}
As @sideshowbarker mentioned, you can't call the QuickSight API directly from your webapp.
The solution I found was to set up a Lambda to generate the embedding URL, given the user's Cognito username and password.
Full details of the solution, and a step-by-step tutorial, can be found here:
https://github.com/aws-samples/amazon-quicksight-embedding-sample
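As a rough illustration of what such a Lambda can do on the backend (a sketch only; the account ID, role ARN, dashboard ID, and event shape are placeholders), boto3's QuickSight client exposes register_user and get_dashboard_embed_url:

import boto3

quicksight = boto3.client('quicksight', region_name='ap-southeast-2')

ACCOUNT_ID = '111122223333'  # placeholder
READER_ROLE_ARN = 'arn:aws:iam::111122223333:role/QuickSightReaderRole'  # placeholder

def handler(event, context):
    email = event['email']  # shape of the incoming event is an assumption

    # Register the viewer as a QuickSight READER tied to an IAM role session.
    # (If the user already exists, describe_user would be used to look them up instead.)
    registered = quicksight.register_user(
        AwsAccountId=ACCOUNT_ID,
        Namespace='default',
        IdentityType='IAM',
        IamArn=READER_ROLE_ARN,
        SessionName=email,
        Email=email,
        UserRole='READER',
    )

    # Generate an embed URL for that registered user
    embed = quicksight.get_dashboard_embed_url(
        AwsAccountId=ACCOUNT_ID,
        DashboardId='dashboard-id-placeholder',
        IdentityType='QUICKSIGHT',
        UserArn=registered['User']['Arn'],
    )
    return {'embedUrl': embed['EmbedUrl']}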
I have an IAM user called server that uses s3cmd to back up to S3.
s3cmd sync /path/to/file-to-send.bak s3://my-bucket-name/
Which gives:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
The same user can send email via SES so I know that the access_key and secret_key are correct.
I have also attached AmazonS3FullAccess policy to the IAM user and clicked on Simulate policy. I added all of the Amazon S3 actions and then clicked Run simulation. All of the actions were allowed so it seems that S3 thinks I should have access. The policy is:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
The only way I can get access is to use the root account's access_key and secret_key. I cannot get any IAM user to log in.
Using s3cmd --debug gives:
DEBUG: Response: {'status': 403, 'headers': {'x-amz-bucket-region': 'eu-west-1', 'x-amz-id-2': 'XXX', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'XXX', 'date': 'Tue, 30 Aug 2016 09:10:52 GMT', 'content-type': 'application/xml'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXX</AWSAccessKeyId><StringToSign>GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/XXX/</StringToSign><SignatureProvided>XXX</SignatureProvided><StringToSignBytes>XXX</StringToSignBytes><RequestId>490BE76ECEABF4B3</RequestId><HostId>XXX</HostId></Error>'}
DEBUG: ConnMan.put(): connection put back to pool (https://XXX.s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
Where I have replaced anything sensitive looking with XXX.
Have I missed something in the permissions setup?
Explicitly use the correct IAM access key and secret key with s3cmd, i.e.:
s3cmd --access_key=75674745756 --secret_key=F6AFHDGFTFJGHGH sync /path/to/file-to-send.bak s3://my-bucket-name/
The error shown indicates an incorrect access key and/or secret key.
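To confirm the key pair itself is valid independently of s3cmd, a quick boto3 check can verify the identity and bucket access (key values and bucket name are placeholders):

import boto3

session = boto3.Session(
    aws_access_key_id='AKIA...placeholder...',
    aws_secret_access_key='...placeholder...',
)

# Which principal do these keys actually belong to?
print(session.client('sts').get_caller_identity()['Arn'])

# Can that principal reach the bucket? (raises ClientError on 403/404)
session.client('s3').head_bucket(Bucket='my-bucket-name')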