I am trying to understand how the PUT method works for uploading a file to S3 using a presigned URL.
I am generating a presigned URL with the boto3 library for a PUT call. The generated URL looks like this:
https://My_bucket.s3.amazonaws.com/?AWSAccessKeyId=<ACCESS_KEY>&Signature=<Signature>&x-amz-security-token=<SEC_TOKEN>&Expires=<Expires>
In order to generate a v4 signature, I need a KeyId and a SecretAccessKey.
If we look at the URL above, we can see that the KeyId corresponds to AWSAccessKeyId, but there is no SecretAccessKey.
I have generated the presigned URL using an account that has administrative privileges (it also has read/write access to the S3 bucket). From my understanding, any non-privileged user can use the information in the link to upload a file to S3.
There is quite a bit of documentation but frankly, I am extremely confused.
I would appreciate it if someone could explain:
1. How the signature is used.
2. Where is the secret access key? Is it derived from the signature?
3. How do I correctly generate a v4 signature using the URI query parameters from the signed URL?
When I tried to use the signature I generated, I got an error:
The authorization header is malformed; the authorization component
"Credential=SIGNATURE/20191217/ap-south-1/s3/aws4_request" is malformed.
The issue here was that the bucket is located in a different region than the one targeted by the code that generates the pre-signed URL.
The following code works for me.
import boto3
from botocore.client import Config

# Point the client at the bucket's own region and force Signature Version 4.
s3_client = boto3.client(
    's3',
    endpoint_url='http://s3.ap-south-1.amazonaws.com',
    config=Config(signature_version='s3v4'))

response = None
try:
    # Presign a PUT for this bucket/key, valid for 5000 seconds.
    response = s3_client.generate_presigned_url(
        'put_object',
        Params={
            'Bucket': 'bucket-name-to-presign-south1',
            'Key': 'car.jpg'},
        ExpiresIn=5000)
    print(response)
except Exception as e:
    print("In client error exception code")
    print(e)
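Once the URL above is generated, the upload itself is just a plain HTTP PUT against it. A minimal sketch using the requests library (the local file name is illustrative, and 'response' is the presigned URL printed by the snippet above):

import requests

# PUT the file's bytes to the presigned URL returned above.
with open('car.jpg', 'rb') as f:
    upload = requests.put(response, data=f)
# A 200 status means S3 accepted and stored the object.
print(upload.status_code)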
The following reference helped me to look in the right direction:
https://javiermunhoz.com/blog/2016/02/01/on-s3-endpoints-regions-signatures-and-boto-3.html
I am using the CreatePolicy API to create a policy with specific permissions. Initially I passed the JSON policy as the value of the query string parameter "PolicyDocument", but the request failed with 400 Bad Request. While testing through Postman, I found that the policy document has to be URL-encoded. This worked fine in Postman but not in my HTTP client, where I get the error "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details". The code works fine for all other APIs, even for IAM GET requests, but fails when the policy document is sent as a query string parameter or as the body. Possibly there is something wrong in how I calculate the signature for the IAM API with a URL-encoded policy document.
Ref - https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreatePolicy.html
I tried passing the policy document as the request body with the header "Content-Type: application/x-www-form-urlencoded" (the body is the JSON converted to a string).
I also tried passing the policy document as a URL-encoded query parameter (see the sketch after these notes).
Note: both of these methods worked fine when testing them through Postman.
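For what it's worth, this is roughly the encoding step Postman performs for you. A minimal Python sketch of building a URL-encoded CreatePolicy query string (the policy name and document are illustrative; the SigV4 signature then has to be computed over this exact encoded form):

import json
from urllib.parse import urlencode, quote

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"}]
}

params = {
    "Action": "CreatePolicy",
    "Version": "2010-05-08",
    "PolicyName": "example-policy",                 # hypothetical name
    "PolicyDocument": json.dumps(policy_document),  # JSON converted to a string
}

# quote_via=quote percent-encodes spaces as %20 (not +), matching the
# canonical query string form that SigV4 expects.
query_string = urlencode(params, quote_via=quote)
print(query_string)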
I understand that a pre-signed URL is a way to send a file to S3. But when uploading that way, how can the object be validated? For example, I want to submit a JSON file to S3 and make sure the file is in the correct format. I'd like to know whether there is a way to get a response confirming that the file was saved correctly and is valid according to my own validator function.
You could configure an S3 "object created" event that triggers a Lambda function, which could perform the validation checks you desire.
See: https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
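A minimal sketch of such a Lambda handler, assuming the validation is simply "does the object parse as JSON" (replace that check with your own validator):

import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # The S3 "object created" notification lists the uploaded objects in Records.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        try:
            json.loads(body)
        except ValueError:
            # Invalid file: delete it, tag it, or notify the uploader here.
            print(f"{key} is not valid JSON")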
The best way to do this is to generate pre-signed URLs with GET and PUT permissions for the same object. First, you fire the PUT request to upload the file to the S3 bucket. Next, you do a GET call to check that the file has been uploaded.
As long as you are uploading a fresh new file, there is no chance of getting a false positive.
The above concept is based on the fact that pre-signed URLs are restricted by time validity, not by the number of requests. This allows you to perform as many PUT and GET calls on the file as you want while the URL is valid.
Note: S3 is a trustworthy service; as long as you get a 200 status for your PUT request, you can rest assured that your file is there. The above method is just a way to cross-check in case you wish to verify the upload yourself.
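A rough sketch of that idea with boto3 and requests (bucket, key, and file name are placeholders):

import boto3
import requests

s3 = boto3.client('s3')
bucket, key = 'my-bucket', 'data.json'

# Presign both operations for the same object.
put_url = s3.generate_presigned_url('put_object',
                                    Params={'Bucket': bucket, 'Key': key},
                                    ExpiresIn=3600)
get_url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': bucket, 'Key': key},
                                    ExpiresIn=3600)

# Upload the file, then read it back to confirm it landed.
with open('data.json', 'rb') as f:
    requests.put(put_url, data=f)
check = requests.get(get_url)
print(check.status_code, len(check.content))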
I am trying to get a pre-signed URL to upload files to the S3 bucket. Here is my workflow:
1. Invoke Lambda -> 2. Get pre-signed URL -> 3. Hit the URL (PUT) with file
1. Invoke Lambda
I made sure the AWS keys have the correct permissions; in fact, they have full access. Here is the source code:
AWS.config.update({
    accessKeyId: '*****************',
    secretAccessKey: '*****************',
    region: 'us-east-1',
    signatureVersion: 'v4'
});

// Pull the file name and MIME type out of the request body.
let requestObject = JSON.parse(event["body"]);
let fileName = requestObject.fileName;
let fileType = requestObject.fileType;
const myBucket = 'jobobo-resumes';

// Presign a PUT for the given bucket, key and content type.
s3.getSignedUrl('putObject', {
    "Bucket": myBucket,
    "Key": fileName,
    "ContentType": fileType
}, function (err, url) {
    if (err) {
        mainCallback(null, err);
    } else {
        mainCallback(null, url);
    }
});
So, I am getting the file name and file type (MIME) from the request and using them to create the signature.
2. Get pre-signed URL
When I hit the Lambda I get the pre-signed URL. Now, I will use this URL to upload the file to S3.
3. Hit the URL (PUT) with file
Now, I hit the URL with the PUT HTTP method and attach the file (binary); see my Postman request:
You can see that I send the request with the PUT HTTP method. I get a 403 error. Here are the headers of the request, and you can see that the Content-Type is image/jpeg:
When I try the POST method, I get that the signature is invalid. I guess that is because the signature is signed for the PUT method.
Here are the S3 bucket's settings:
Since I get access denied, I completely opened the bucket (Block Public Access: off).
What is wrong with the settings? Maybe it's S3?
You have a service object in the variable s3, but you don't show in your code where that object was constructed... it appears to be before you call AWS.config.update(), which doesn't retroactively reconfigure your s3 object. The order of these operations is the problem.
Updates you make to the global AWS.config object don't apply to previously created service objects.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/global-config-object.html
If you observe your generated URL closely, you can see that the Lambda execution role credentials are actually being used, which is why access is denied.
The giveaways are that the AWSAccessKeyId in the URL is not yours (it's a session key beginning with ASIA instead of AKIA like a normal Access Key ID), and that there is a session token, x-amz-security-token, which wouldn't be there in a URL generated with static credentials. Also, this URL is Signature V2, so when you correct the credential configuration issue, the format of the signed URL will change significantly: Signature V4 URLs have X-Amz-Credential instead of AWSAccessKeyId.
I am creating a signed URL with AWS so I can safely pass this URL to another API for temporary use. The signed URL points to an S3 resource. The problem is that the other API does not accept such long links, so I am trying to shorten the URL. I tried shorteners like goo.gl or bit.ly to no avail, because the URL was too long for them. I even built my own private shortener with AWS (AWS url shortener), but it had the same problem: "The length of website redirect location cannot exceed 2,048 characters.".
I am creating the signed URLs in iOS (Swift) with AWSS3PreSignedURLBuilder.default().getPreSignedURL(preSignedURLRequest) while using AWS Cognito as an unauthorised user.
I have tried the following things, to no avail:
Chose the shortest possible S3 bucket name (3 characters).
Shortened the file name as much as possible. I limited the file name to 10 characters plus the file extension (14 characters in total). Shorter file names are not viable for me because they have to be unique to a certain extent.
But even with all these minor tweaks, the signed URL returned by AWS is sometimes too long. In particular, the token parameter (X-Amz-Security-Token) seems to be really long. With my minor tweaks I sometimes get URLs shorter than 2,048 characters, but sometimes slightly longer. I would like to find a solution that guarantees the URL is not too long and can be shortened.
In my own private AWS URL shortener the following code snippet creates the S3 object which redirects to the actual long URL.
s3.putObject({
    Bucket: s3_bucket,
    Key: key_short,
    Body: "",
    // The redirect target is the long pre-signed URL (limited to 2 KB).
    WebsiteRedirectLocation: url_long,
    ContentType: "text/plain"
},
(err, data) => {
    if (err) {
        console.log(err);
        done("", err.message);
    } else {
        const ret_url = "https://" + cdn_prefix + "/" + id_short;
        console.log("Success, short_url = " + ret_url);
        done(ret_url, "");
    }
});
The method returns with the following error
The length of website redirect location cannot exceed 2,048
characters.
The putObject documentation for the object metadata header "x-amz-website-redirect-location" states the following (see: put object documentation):
The length of the value is limited to 2 KB
How can I make sure that the initial AWS signed URL is not too long for the URL shorteners?
EDIT:
One of the problems I have identified is that I create the signed URL as an unauthenticated user in AWS Cognito, so the signed URL includes this ridiculously long token as a parameter. I did not want to embed my accessKey and secretKey in the iOS app, which is why I switched to AWS Cognito (see aws cognito). But currently there are no authorised users, only unauthorised ones, and I need to create the signed URL as an unauthorised AWS Cognito user. If I create the signed URL with regular credentials using an accessKey and secretKey, I get a much shorter URL. But for that I would have to embed my accessKey and secretKey in the iOS app, which is not recommended.
I solved the problem by creating an AWS Lambda that generates a presigned URL and returns it. The presigned URL allows the caller to access (getObject) the S3 resource. There are two options:
The role assigned to the AWS Lambda has the S3 permission for getObject. The resulting presigned URL will include a much shorter token than a presigned URL created with the temporary credentials issued by AWS Cognito in the iOS app (a sketch of this option follows below).
Embed the access key and secret key of an IAM user with the S3 permission for getObject directly into the AWS Lambda, which gives you an even shorter URL because no token is included in the resulting presigned URL (e.g. sample AWS code).
I call this Lambda from within my iOS app as an unauthorised Cognito user. After receiving the presigned URL from the AWS Lambda, I am able to shorten it, because with this method the presigned URLs are much shorter.
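A minimal sketch of the first option, i.e. a Lambda whose execution role has s3:GetObject and which returns the presigned URL (the bucket name and event shape are assumptions):

import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Presign a GET using the Lambda execution role's credentials.
    key = json.loads(event['body'])['key']
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-short-bucket', 'Key': key},
        ExpiresIn=3600)
    return {'statusCode': 200, 'body': json.dumps({'url': url})}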
There is an older method of generating pre-signed URLs that produces a very short link, e.g.:
https://s3-ap-southeast-2.amazonaws.com/my-bucket/foo.png?AWSAccessKeyId=AKI123V12345RYTP123&Expires=1508620600&Signature=oB1/jca2JFXw5DbN7gBKEXkUQk8%3D
However, this pre-dates SigV4, so it does not work in the newer regions (Frankfurt onwards).
You can find sample code at:
mdwhatcott/s3_presign.py
S3 Generate Pre Signed Url
It can also be used to sign for uploads: Correct S3 Policy For Pre-Signed URLs
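For reference, boto3 can still be coaxed into producing this older, shorter form by forcing the legacy signature version. A sketch (bucket and key are placeholders, and it only works in regions that still accept SigV2):

import boto3
from botocore.client import Config

# signature_version='s3' selects the legacy SigV2 signer, which yields the
# short AWSAccessKeyId/Expires/Signature query parameters shown above.
s3 = boto3.client('s3', config=Config(signature_version='s3'))
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'foo.png'},
    ExpiresIn=3600)
print(url)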
I am curious why I am getting a SignatureDoesNotMatch error on a versioned URL for an object in my S3 bucket (by appending versionId=abcde).
When the URL has the following pattern:
https://s3.amazonaws.com/bucket/object.tar.gz
everything seems to work fine. However, as soon as I add the versionId query string, I get the signature mismatch error:
https://s3.amazonaws.com/bucket/object.tar.gz?versionId=abcde
The method for accessing the object is controlled by CloudFormation::Init's sources property, as described here:
S3 Bucket
The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:
Note: You can use authentication credentials for a source. However, you cannot put an authentication key in the sources block. Instead, include a buckets key in your S3AccessCreds block. For an example, see the example template. For more information on Amazon S3 authentication credentials, see AWS::CloudFormation::Authentication.
"sources" : {
    "/etc/myapp" : "https://s3.amazonaws.com/mybucket/myapp.tar.gz"
}
This is not a permissions issue, because it works fine when there is no versionId query string. I am curious why the signing method fails just by adding that, and how I can fix it.
I read up on AWS's Signature V4, but I don't see anything that explains why this is not working. It seems that query string parameters are only supposed to be used for the auth components?