Trying to find a best practice for securely downloading content from S3 in a SPA. Presigned URLs seem to be one of the options, but whoever gets hold of such a URL can access the content. Is there a way to protect the presigned URL with an extra layer of security, i.e. to ensure the right person is accessing the files? Just a note: we use a third-party IdP, not Cognito.
Related
I have a use case of allowing users to access a remote file stored in S3. Currently I am sending the pre-signed URL in an email and allowing access. But I have a requirement that is not met by this solution.
Namely, if the email containing the pre-signed URL is forwarded to someone unintended, the forward recipient should not be able to access the file. Is there a way of authenticating an S3 pre-signed URL by means of an ID/password? I am also open to a different solution using other AWS services to meet the use case.
Pre-signed URLs aren't particularly good for emails.
The intention with a pre-signed URL is that a user would authenticate to an application, then request access to some private content. The application would verify that they are permitted access, then provide a pre-signed URL to grant time-limited access to the content. Such access would normally be for up to 5-10 minutes.
As demonstrated by your scenario, there is an issue if somebody forwards a pre-signed URL to somebody else. This is normally not a problem because access time is limited. However, if a pre-signed URL is generated that has access for hours or days, it becomes more of a security issue.
Solution: Provide a link to your application. Users should authenticate, then be provided with a short-duration (eg 5-minute) pre-signed URL. This lowers the chance that other people can use the link.
By default you can't limit who can use a pre-signed URL. The entire purpose of pre-signed S3 URLs is to give anyone who has them access to your object for a limited time:
Anyone who receives the presigned URL can then access the object.
If this does not suit you, you have a few choices:
don't use pre-signed URLs, but instead create an IAM user with just the permissions to download the object. This will require your recipients to log in to AWS.
use password-protected 7-Zip or RAR files that contain your objects. So instead of providing pre-signed URLs to the objects directly, you provide pre-signed URLs to password-protected archives.
use encryption and share encrypted files. Your clients will need to decrypt them.
and many others
But ultimately these will only be as safe as your users' passwords, encryption keys, or whatever other type of protection you implement.
This may be a simple question, but I can't find any tutorials for it.
My website is entirely hosted in S3, but the front-end and back-end are stored in different buckets.
In my front-end website, JS initiates requests using a relative path like /api/***, so the request URL becomes http://front-end.com/api/***.
How can I make all these requests redirect to my back-end bucket, like this:
http://back-end.com/api/***
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/redirect-website-requests.html
This doc doesn't seem to cover my case.
Is there a reason you need to use different domain names to serve your content?
To redirect from http://front-end.com/api/* to http://back-end.com/api/*, there are a couple of ways:
1. Use a Lambda@Edge viewer-request function to redirect with a 301/302 to the new URL.
2. Use an S3 bucket configured for redirection.
In either of the above cases, you need both front-end.com and back-end.com to point to CloudFront so that you can serve them from CloudFront.
An easier way is to access everything through front-end.com and create a cache behavior with the path pattern "/api/*" that forwards to the origin bucket where you want the request to go.
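For option 1, a minimal sketch of a Lambda@Edge viewer-request handler (the back-end host name is the hypothetical domain from the question; adjust to your own):

```python
# Sketch of a Lambda@Edge viewer-request handler that redirects
# /api/* requests to the back-end domain, keeping the path.
BACKEND_HOST = 'back-end.com'  # hypothetical domain

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['uri'].startswith('/api/'):
        # Return a 302 pointing at the back-end host.
        return {
            'status': '302',
            'statusDescription': 'Found',
            'headers': {
                'location': [{
                    'key': 'Location',
                    'value': f"http://{BACKEND_HOST}{request['uri']}",
                }],
            },
        }
    # Everything else passes through to the front-end origin untouched.
    return request
```

Note that a redirect exposes the back-end domain to the browser; the cache-behavior approach (serving /api/* from the other origin under the same domain) avoids that.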
I need to have a way of letting a client upload data to S3 without showing them the full location (path) of the file. Is that something doable with an AWS S3 pre-signed URL?
I'm using boto3 like this:
import boto3

s3_client = boto3.client('s3')
url = s3_client.generate_presigned_url(
    ClientMethod='put_object',
    ExpiresIn=7200,
    Params={'Bucket': BUCKET, 'Key': name},
)
But the outcome will be:
https://s3.amazonaws.com/MY_BUCKET/upload/xxxx-xxxx/file-name.bin?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=XXXX
I need something that won't show the key name in the path (/upload/xxxx-xxxx/file-name.bin).
What other solutions do I have if not the pre-signed URL?
I believe the best way is to distribute files with AWS CloudFront. You can set the origin of the CloudFront distribution to MY_BUCKET.s3.amazonaws.com. It is also possible to use a subfolder like MY_BUCKET.s3.amazonaws.com/upload as the origin.
CloudFront will serve the files from your S3 origin under the generated CDN endpoint domain, or you can set up a custom domain as well:
https://d111111abcdef8.cloudfront.net/upload/xxxx-xxxx/file-name.bin
https://uploads.example.com/upload/xxxx-xxxx/file-name.bin
If you use a subfolder as the origin:
https://uploads.example.com/xxxx-xxxx/file-name.bin
More info on setting S3 Bucket as origin on Cloudfront: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#concept_S3Origin
More info on using directory paths of S3 Bucket as origin: https://aws.amazon.com/about-aws/whats-new/2014/12/16/amazon-cloudfront-now-allows-directory-path-as-origin-name/
More info on Custom URLs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
This isn’t a complete answer, but it’s too long to be a comment.
I think you should be able to use API Gateway as a proxy for S3 to hide the path. You can still use pre-signed URLs, but you might need to create pre-signed API Gateway URLs rather than pre-signed S3 URLs. I've never done it myself, nor will I be able to try it out in the near future, but I'll do my best to lay out how I think it's done, and maybe someone else can try it and write up a more complete answer.
First, we need to set up an API gateway endpoint that will act as a proxy to S3.
AWS has a very thorough write-up on how to make a general proxy for S3, and I think you can make your custom endpoint point to a specific bucket and folder in S3 by modifying the PathOverride of the proxy. If you look at the screenshot of the PathOverrides in this section of the AWS documentation, you can see they have set the path override to {bucket}/{object}, but I think you could set the PathOverride to mySecretBucket/my/secret/folder/{object}, and then update the path mappings appropriately.
Next, you need to be able to use pre-signed URLs with this proxy. There are two ways you might be able to do this.
The first thing that might work is making the URL signature pass through API Gateway to S3. I know it's possible to map query parameters in a similar way to path parameters. You may need to perform some URL encoding on the pre-signed URL's signature param to make it work; I'm not entirely sure.
The other option is to allow API Gateway to always write to S3, and require a signed request for calling your proxy endpoint. This SO question has a pretty detailed answer that looks to me like it should work.
Again, I know this isn’t a complete answer, and I haven’t tried to verify that this works, but hopefully someone can start with this and get to a complete answer for your question.
Can I allow a 3rd party file upload to an S3 bucket without using IAM? I would like to avoid the hassle of sending them credentials for an AWS account, but still take advantage of the S3 UI. I have only found solutions for one or the other.
The pre-signed URL option sounded great but appears to only work with their SDKs, and I'm not about to tell my client to install Python on their computer to upload a file.
The browser-based upload requires me to make my own front-end HTML form and run it on a server just to upload (lol).
Can I not simply create a pre-signed URL which takes the user to the S3 console and allows them to upload before the expiration time? Of course, making the bucket public is not an option either. Why is this so complicated!
Management Console
The Amazon S3 management console will only display S3 buckets that are associated with the AWS account of the user. Also, it is not possible to limit which buckets are displayed (it will display all buckets in the account, even ones the user cannot access).
Thus, you certainly don't want to give them access to your AWS management console.
Pre-Signed URL
Your user does not require the AWS SDK to use a pre-signed URL. Rather, you must run your own system that generates the pre-signed URL and makes it available to the user (eg through a web page or API call).
Web page
You can host a static upload page on Amazon S3, but it will not be able to authenticate the user. Since you only wish to provide access to specific people, you'll need some code running on the back-end to authenticate them.
Generate...
You ask: "Can I not simply create a pre-signed url which navigates the user to the S3 console and allows them to upload before expiration time?"
Yes and no. Yes, you can generate a pre-signed URL. However, it cannot be used with the S3 console (see above).
Why is this so complicated?
Because security is important.
So, what to do?
A few options:
Make a bucket publicly writable, but not publicly readable. Tell your customer how to upload. The downside is that anyone who knows about the bucket could upload to it, so this is only security by obscurity. But it might be a simple solution for you.
Generate a long-lived pre-signed URL. Note that with Signature Version 4 the maximum expiry is 7 days, so you would need to regenerate and re-send the URL periodically. Provide it to them, and they can upload (eg via a static HTML page that you give them).
Generate some IAM User credentials for them, then have them use a utility like the AWS Command-Line Interface (CLI) or Cloudberry. Give them just enough credentials for upload access. This assumes you only have a few customers that need access.
Bottom line: Security is important. Yet you wish to "avoid the hassle of sending them credentials", and you don't wish to run a system to perform authentication checks. You can't have security without doing some work, and the cost of poor security will be much more than the cost of implementing good security.
You could deploy a Lambda function that generates a signed URL, then use that URL to upload the file. Here is an example:
https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
In my application we have to open some PDF files in a new tab when the user clicks an icon, using the direct S3 bucket URL, like this:
http://MyBucket.s3.amazonaws.com/Certificates/1.pdf?AWSAccessKeyId=XXXXXXXXXXXXX&Expires=1522947975&Signature=XXXXXXXXXXXXXXXXX
Somehow I feel this is not secure, as the user can see the bucket name, AWSAccessKeyId, expiration and signature. Is this still considered secure? Or is there a better way to handle this?
Allowing the user to see these parameters is not a problem because:
AWSAccessKeyId can be public (do not confuse it with your SecretAccessKey)
Expires and Signature are signed with your SecretAccessKey, so no one can manipulate them (AWS will validate them against your secret key)
Since you don't have public objects and your bucket itself is not public, it is OK for the user to know your bucket name - you will always need a valid signature to access the objects.
But I have two suggestions for you: 1. use your own domain, so the bucket name is not visible (you can use the free SSL certificate provided by AWS if you use CloudFront); 2. use HTTPS instead of plain HTTP.
And if for any reason you absolutely don't want your users to see the AWS parameters, then I suggest that you proxy the access to S3 via your own API (though I consider it unnecessary).
I see you access over HTTP (no SSL). You can do virtual hosting with S3 for multiple domains:
https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
and create signed URLs based on your domain and you are good to go.
If you are using SSL, you can use CloudFront
and configure the CloudFront origin to point to your S3 bucket.
Hope it helps.