AWS signed URL too long to shorten

I am creating a signed URL with AWS so I can safely pass it to another API for temporary use. The signed URL points to an S3 resource. The problem is that the other API does not accept such long links, so I am trying to shorten it. I tried shorteners like goo.gl and bit.ly to no avail because the URL was too long for them. I even built my own private shortener with AWS (AWS url shortener), but it had the same problem: "The length of website redirect location cannot exceed 2,048 characters."
I am creating the signed URLs in iOS (Swift) with AWSS3PreSignedURLBuilder.default().getPreSignedURL(preSignedURLRequest) while using AWS Cognito as an unauthenticated user.
I have tried the following things to no avail:
Choose the shortest possible S3 bucket name (3 characters)
Shorten the file name as much as possible. I limited the file name to 10 characters plus the file extension (14 characters in total). Shorter file names are not viable for me because they need to be unique to a certain extent.
But even with all these minor tweaks, the signed URL returned by AWS is sometimes too long. The token parameter (X-Amz-Security-Token) in particular is really long. With my tweaks I sometimes get URLs shorter than 2,048 characters, but sometimes slightly longer. I would like a solution that guarantees the URL is short enough to be shortened.
In my own private AWS URL shortener, the following code snippet creates the S3 object which redirects to the actual long URL:
// Store an empty object whose WebsiteRedirectLocation metadata makes
// S3 (static website hosting) redirect to the long URL.
s3.putObject({
    Bucket: s3_bucket,
    Key: key_short,                    // shortened key derived from id_short
    Body: "",
    WebsiteRedirectLocation: url_long, // limited to 2,048 characters
    ContentType: "text/plain"
},
(err, data) => {
    if (err) {
        console.log(err);
        done("", err.message);
    } else {
        const ret_url = "https://" + cdn_prefix + "/" + id_short;
        console.log("Success, short_url = " + ret_url);
        done(ret_url, "");
    }
});
The method fails with the following error:
The length of website redirect location cannot exceed 2,048
characters.
The documentation of putObject for the header "x-amz-website-redirect-location" in the object metadata states the following (see: put object documentation):
The length of the value is limited to 2 KB
How can I make sure that the initial AWS signed URL is not too long for the URL shorteners?
EDIT:
One of the problems I have identified is that I create the signed URL as an unauthenticated user in AWS Cognito, so the signed URL includes this ridiculously long token as a parameter. I did not want to embed my access key and secret key in the iOS app; that's why I switched to AWS Cognito (see aws cognito). But currently there are no authenticated users, only unauthenticated ones, and I need to create the signed URL as an unauthenticated AWS Cognito user. If I create the signed URL with regular credentials using an access key and secret key, I get a much shorter URL. But for that I would have to embed my access key and secret key in the iOS app, which is not recommended.

I solved the problem by creating an AWS Lambda function that creates a presigned URL and returns it. The presigned URL allows the caller to access (getObject) the S3 resource. There are two options for this:
The role assigned to the AWS Lambda function has the S3 permission for getObject. The resulting presigned URL will include a much shorter token than the presigned URL created with the temporary credentials issued by AWS Cognito in the iOS app.
Embed the access key and secret key of an IAM user with the S3 permission for getObject directly into the AWS Lambda function, which gives you an even shorter URL, because no token is included in the resulting presigned URL. (e.g. sample AWS code)
I call this Lambda from within my iOS app as an unauthenticated Cognito user. After receiving the presigned URL from the Lambda, I am able to shorten it, because with this method the presigned URLs are much shorter.
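A minimal sketch of such a Lambda (Node.js, AWS SDK v2). The bucket name and the shape of the event are my own assumptions, not from the original post:
// Hypothetical Lambda handler. The execution role must allow s3:GetObject
// on the bucket; the bucket name and event.key are placeholder assumptions.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    const url = s3.getSignedUrl('getObject', {
        Bucket: 'my-bucket',   // placeholder bucket name
        Key: event.key,        // object key sent by the iOS app
        Expires: 3600          // URL valid for one hour
    });
    // Signed with the Lambda role's credentials, so the X-Amz-Security-Token
    // is far shorter than the one issued to an unauthenticated Cognito user.
    return { url: url };
};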

There is an older method of generating pre-signed URLs that produces a very short link, e.g.:
https://s3-ap-southeast-2.amazonaws.com/my-bucket/foo.png?AWSAccessKeyId=AKI123V12345RYTP123&Expires=1508620600&Signature=oB1/jca2JFXw5DbN7gBKEXkUQk8%3D
However, this method pre-dates Signature Version 4 (SigV4), so it does not work in the newer regions (Frankfurt onwards).
You can find sample code at:
mdwhatcott/s3_presign.py
S3 Generate Pre Signed Url
It can also be used to sign for uploads: Correct S3 Policy For Pre-Signed URLs
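For illustration, a rough sketch of that legacy (SigV2) query-string signing scheme in Node.js. The function and argument names are my own, and this only works in regions that still accept SigV2:
const crypto = require('crypto');

// Sketch of legacy SigV2 query-string auth for a GET request.
// String to sign: verb, Content-MD5, Content-Type, Expires, resource.
function presignV2(accessKeyId, secretAccessKey, bucket, key, expiresEpoch) {
    const stringToSign = 'GET\n\n\n' + expiresEpoch + '\n/' + bucket + '/' + key;
    const signature = crypto.createHmac('sha1', secretAccessKey)
        .update(stringToSign)
        .digest('base64');
    return 'https://' + bucket + '.s3.amazonaws.com/' + key +
        '?AWSAccessKeyId=' + accessKeyId +
        '&Expires=' + expiresEpoch +
        '&Signature=' + encodeURIComponent(signature);
}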

Related

Is it possible to generate a single presigned URL which can allow MULTIPLE unique objects to be uploaded?

I know based on the AWS docs here
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
that it's possible to generate a URL which can be used to
upload a specific object to your bucket
and that
You can use the presigned URL multiple times, up to the expiration date and time.
Is it also possible to generate a URL (perhaps a base S3 presigned URL) which would allow multiple different unique documents to be uploaded based on a single URL?
For example, let's imagine a client application would like to upload multiple unique/distinct documents to S3 using some type of presigned URL. I don't necessarily want to force them to get a batch of presigned URLs, since that would require much more work on the part of the client (they would have to request a batch of presigned URLs rather than a single URL).
[Diagram from the original post: the flow for a single document upload]
What is the simplest known solution for allowing a client to use some type of presigned url to upload multiple documents?
Is it also possible to generate a URL (perhaps a base S3 presigned URL) which would allow multiple different unique documents to be uploaded based on a single URL?
A presigned URL is limited to a single object key. You can't, for example, presign a key of foo and then use it to upload foo/bar (because that's a different key).
That means that, if you want to provide the client with a single pre-signed URL, the client code will have to combine the files itself. For example, you could require the client to upload a ZIP file, then trigger a Lambda that unpacks the files in that ZIP.
Another approach is to use the AWS SDK from the client and call the STS AssumeRole operation to generate temporary access credentials that are restricted, via an inline session policy, to uploading files with a specified prefix (see the sketch after the third approach below).
A third approach is to hide the URL requests. You don't say what your client application does, but assuming that you let the user select some number of files, you could simply loop over those files and retrieve a URL for each one without ever letting your user know that's happening.
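A rough sketch of the AssumeRole approach (Node.js, AWS SDK v2; the role ARN, bucket, and prefix are placeholders I made up):
const AWS = require('aws-sdk');
const sts = new AWS.STS();

sts.assumeRole({
    RoleArn: 'arn:aws:iam::123456789012:role/uploader',  // placeholder ARN
    RoleSessionName: 'client-upload',
    DurationSeconds: 900,
    // Inline session policy: further restricts the role to one key prefix.
    Policy: JSON.stringify({
        Version: '2012-10-17',
        Statement: [{
            Effect: 'Allow',
            Action: 's3:PutObject',
            Resource: 'arn:aws:s3:::my-bucket/uploads/client-123/*'  // placeholder
        }]
    })
}, (err, data) => {
    if (err) return console.error(err);
    // Hand data.Credentials (access key, secret, session token) to the
    // client, which can then upload any number of objects under the prefix.
    console.log(data.Credentials);
});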
It is possible to upload multiple files with a single signed POST policy that has a properly configured 'starts-with' condition. Please refer to the following AWS documentation: Browser-Based Uploads Using POST
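A sketch of that approach with the JavaScript SDK's createPresignedPost (v2); the bucket name and prefix are placeholders:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// One signed POST policy whose 'starts-with' condition accepts any key
// under the given prefix, so the client can upload many distinct files.
s3.createPresignedPost({
    Bucket: 'my-bucket',                      // placeholder bucket
    Conditions: [
        ['starts-with', '$key', 'uploads/']   // any key under uploads/
    ],
    Expires: 3600
}, (err, data) => {
    if (err) return console.error(err);
    // data.url and data.fields go into each multipart/form-data POST;
    // the client supplies its own 'key' field matching the prefix.
    console.log(data.url, data.fields);
});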

AWS S3 createPresignedPost vs getSignedUrl. Which one should I use for uploading various files from client side?

In the S3 SDK documentation, there are createPresignedPost and getSignedUrl.
On getSignedUrl:
Note: Not all operation parameters are supported when using pre-signed
URLs. Certain parameters, such as SSECustomerKey, ACL, Expires,
ContentLength, or Tagging must be provided as headers when sending a
request. If you are using pre-signed URLs to upload from a browser and
need to use these fields, see createPresignedPost().
Is createPresignedPost simply a more customizable version of getSignedUrl?
Is it doing the same thing underneath?
If you want to restrict users from uploading files beyond a certain size, you should use createPresignedPost and specify a content-length-range condition in the policy.
With getSignedUrl there is no way to restrict the object size, and a user can potentially upload a 5 TB object (the current object size limit) to S3.
Note that if you specify ContentLength in the params when calling getSignedUrl('putObject', params, callback), you will get
Presigning post data encountered an error { UnexpectedParameter: ContentLength is not supported in pre-signed URLs.
There is a GitHub issue on this subject.
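For illustration, a sketch of the size restriction with createPresignedPost (SDK v2; the bucket name and key are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Rejects any upload whose body is larger than 10 MB.
s3.createPresignedPost({
    Bucket: 'my-bucket',                  // placeholder bucket
    Fields: { key: 'uploads/photo.png' }, // placeholder key
    Conditions: [
        ['content-length-range', 0, 10 * 1024 * 1024]
    ],
    Expires: 300
}, (err, data) => {
    if (err) return console.error(err);
    console.log(data.url, data.fields);
});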

Using Amazon AWS S3 for simple file sharing

I'm trying to integrate AWS S3 into my app for sharing images.
I'm currently using Branch.io for sharing content between devices using deep links. But this approach has a problem: I cannot send image data with deep links, as explained in this post.
So, using the same post as a reference, I tried AWS S3 for uploading files and then sharing the links. But I believe this approach requires my app to have a login function for Cognito, which I do not want.
I also tried generating presigned URLs for the images, but my credentials expire after an hour and I get an "Expired token" error. I'm currently using this approach; I just wish it wouldn't expire in an hour. 7 days or so is enough for me, but an hour is too short.
I could also make the images uploaded for sharing public, and therefore access them freely from anywhere, but I don't like that either, for security reasons.
What would be the best way to handle this?
When you generate the URL, you need to pass in the Expires parameter:
var params = {Bucket: 'myBucket', Key: 'myKey', Expires: 604800};
Sample code to generate a signed GET URL:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
// Expires is given in seconds from the time the URL is signed.
var params = {Bucket: 'myBucket', Key: 'myKey', Expires: 604800};
var url = s3.getSignedUrl('getObject', params);
console.log("get URL is", url);
604,800 seconds is one week.
Hope it helps.
You can use S3 pre-signed URLs, which have a maximum expiration of 7 days (with Signature Version 4). You can set it with the query string parameter "X-Amz-Expires".
Amruta from Branch here:
Unfortunately, there is no way to add images to the link using the Branch SDK. As a workaround, as mentioned in the post you have tagged, you can create links on the Branch dashboard and upload images in the Social Media tab. Once the link is saved, the same tab will provide you with the link for the image hosted by Branch.
Unfortunately, this would only work if you have a predefined set of images.
I solved the issue by not using the temporary credentials generated by the AWS SDK. Instead, I created a new user for the app through the AWS console, gave it the necessary permissions to upload the objects, and generated basic AWS credentials in the code with the access key id and secret key of the new user.
Now I can create a presigned URL and it works for up to 1 week. I can even increase the expiry date by using V2 signing.
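A sketch of that setup with the JavaScript SDK (v2). The credentials are placeholders, and the 's3' value for the legacy signer is my assumption (it is what the SDK's S3 API configuration uses; some references call it 'v2'):
var AWS = require('aws-sdk');

// Long-term IAM user credentials (placeholders) instead of temporary ones.
// 's3' selects the legacy signer, so Expires is not capped at SigV4's 7 days.
var s3 = new AWS.S3({
    accessKeyId: 'AKIA...',        // placeholder access key id
    secretAccessKey: 'wJal...',    // placeholder secret key
    signatureVersion: 's3'
});

var url = s3.getSignedUrl('getObject', {
    Bucket: 'myBucket',
    Key: 'myKey',
    Expires: 14 * 24 * 60 * 60     // two weeks
});
console.log(url);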

S3 Signature Mismatch on URL with VersionId

I am curious why I am getting a SignatureDoesNotMatch error on a versioned URL for an object in my S3 bucket (created by appending versionId=abcde).
When the URL has the following pattern:
https://s3.amazonaws.com/bucket/object.tar.gz
everything seems to work fine. However, as soon as I add the versionId query string, I get the signature mismatch error:
https://s3.amazonaws.com/bucket/object.tar.gz?versionId=abcde
The method for accessing the object is controlled by CloudFormation::Init's source property, as described here:
S3 Bucket
The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:
Note: You can use authentication credentials for a source. However, you cannot put an authentication key in the sources block. Instead, include a buckets key in your S3AccessCreds block. For an example, see the example template. For more information on Amazon S3 authentication credentials, see AWS::CloudFormation::Authentication.
"sources" : { "/etc/myapp" : "https://s3.amazonaws.com/mybucket/myapp.tar.gz" }
This is not a permissions issue, because it works fine when there is no versionId query string. I am curious why the signing fails just by adding that parameter, and how I can fix it.
I read up on AWS Signature V4, but I don't see anything that explains why this is not working. Are query strings only supposed to be used for the auth parameters?
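For what it's worth, SigV4 includes every query string parameter in the canonical request, so a parameter appended after signing invalidates the signature; the version id has to be present at signing time. A sketch with the JavaScript SDK (bucket, key, and version id are placeholders):
var AWS = require('aws-sdk');
var s3 = new AWS.S3({signatureVersion: 'v4'});

// VersionId is passed to the signer, so it ends up inside the
// canonical request instead of being bolted on afterwards.
var url = s3.getSignedUrl('getObject', {
    Bucket: 'bucket',          // placeholder bucket
    Key: 'object.tar.gz',      // placeholder key
    VersionId: 'abcde'         // placeholder version id
});
console.log(url);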

Web app unable to access private s3 file even though IAM policy grants access

I am using CarrierWaveDirect to upload high-resolution images to S3. I then use each uploaded image to process multiple versions, which are made public through CloudFront URLs.
The uploaded high-res files need to remain private to anonymous users, but the web application needs to access the private file in order to do the processing for the other versions.
I am currently setting all uploaded files to private in the CarrierWave initializer via
config.fog_public = false
I have an IAM policy for the web application that allows full admin access. I have also set the access key and secret key of that IAM user in the app. Given these two criteria, I would think that the web app could access the private file and continue with processing, but it is denied access to the private file.
When I log into the user account associated with the web app, I am able to access the private file, because a token is added onto the URL.
I can't figure out why the app cannot access the private file given the access key and secret key.
I was having a hard time getting to your problem. I am quite certain your question is not
unable to access private s3 file even though IAM policy grants access
but rather
how to handcraft a presigned URL for GETting a private file on S3
The gist shows you're trying to create the presigned URL for GET yourself. While this is perfectly fine, it's also very error-prone.
Please verify that what you're trying to do is working at all, using the AWS SDK for Ruby (I only post code known to work with version 1 here but if you aren't held back by legacy code, start with version 2):
s3 = AWS::S3.new
bucket = s3.buckets["your-bucket-name"]
obj = bucket.objects["your-object-path"]
obj.url_for(:read, expires: 10*60) # generate a URL that expires in 10 minutes
See the docs for AWS::S3::S3Object#url_for and Aws::S3::Object#presigned_url for details.
You may need to read up on passing args to AWS::S3.new here (for credentials, regions and so).
I'd advise you to take the following steps:
Make it work locally using the access_key_id and secret_access_key
Make it work in your worker
If it works, you can compare the query string the SDK returned with the one you handcrafted yourself. Maybe you can spot an error.
But in any case, I suggest you use higher-level SDKs to do things like that for you.
If this doesn't get you anywhere, please post a comment with your new findings.