Amazon S3 Pre-signed URL restricted by IP address

The client app requests a presigned URL for S3. Currently we limit the time the URL is valid but would like to also restrict it to the client's IP address.
Is it possible to create an S3 presigned URL that is restricted to a specific IP address?
From what I can tell only CloudFront will let me do this.

Yes!
First, it is worth explaining the concept of a Pre-Signed URL.
Objects in Amazon S3 are private by default. Therefore, if somebody tries to access an object without providing credentials, access will be denied.
For example, this would not work for a private object:
https://my-bucket.s3.amazonaws.com/foo.json
To grant temporary access to the object, a Pre-signed URL can be generated. It looks similar to:
https://my-bucket.s3.amazonaws.com/x.json?AWSAccessKeyId=AKIAIVJQM12345CY3A3Q&Expires=1531965074&Signature=g7Jz%2B%2FYyqc%2FDeL1rzo7WM61RusM%3D
The URL says "I am *this* particular Access Key and I authorize temporary access until *this* time and here is my calculated signature to prove it is me".
When the Pre-Signed URL is used, it temporarily uses the permissions of the signing entity to gain access to the object. This means that I can generate a valid pre-signed URL for an object that I am permitted to access, but the pre-signed URL will not work if I am not normally permitted to access the object.
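To illustrate what the Signature in that example URL represents, here is a rough sketch of how a legacy (Signature Version 2) query-string signature is computed. The credentials are placeholders, and new applications should instead let an SDK such as boto3 produce Signature Version 4 URLs.

import base64
import hashlib
import hmac
import time
from urllib.parse import quote

# Placeholder credentials of the signing IAM entity (not real keys).
ACCESS_KEY = "AKIAIVJQM12345CY3A3Q"
SECRET_KEY = "my-secret-access-key"

def presign_get_url_sigv2(bucket, key, expires_in=3600):
    # The URL is valid until this Unix timestamp.
    expires = int(time.time()) + expires_in
    # String-to-sign for a plain GET with no extra headers (legacy SigV2 format).
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    # The signature is an HMAC-SHA1 of the string-to-sign, keyed with the Secret Key.
    signature = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    ).decode()
    return (
        f"https://{bucket}.s3.amazonaws.com/{key}"
        f"?AWSAccessKeyId={ACCESS_KEY}"
        f"&Expires={expires}"
        f"&Signature={quote(signature, safe='')}"
    )

print(presign_get_url_sigv2("my-bucket", "x.json"))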
Therefore, to "create a S3 presigned URL that is restricted to a specific IP address", you should:
Create an IAM entity (eg an IAM User) that has access to the object (or the whole bucket) with an IP address restriction, and
Use that entity to generate the pre-signed URL
Here is a sample policy for the IAM User:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "54.22.33.44/32"
        }
      }
    }
  ]
}
The result will be a pre-signed URL that successfully grants access to the object when used from the permitted IP address. Requests from any other IP address are rejected by S3 because the IP address restriction is not met, and the user receives an Access Denied message.
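As a sketch of step 2, assuming Python with boto3 and placeholder credentials for the IP-restricted IAM User, the pre-signed URL could be generated like this:

import boto3

# Credentials of the IAM User whose policy carries the aws:SourceIp condition.
# These values are placeholders; load real keys from a secure store.
session = boto3.session.Session(
    aws_access_key_id="AKIA...RESTRICTED_USER",
    aws_secret_access_key="...",
)
s3 = session.client("s3")

# The URL is valid for one hour, but S3 only honours it when the request
# arrives from an IP address allowed by the signing user's policy.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "foo.json"},
    ExpiresIn=3600,
)
print(url)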

Related

S3 bucket policy to access object URL

What S3 bucket policy permissions are needed to allow an IAM user to access an object URL, which is basically an HTTPS URL for an object that I have uploaded to the S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
With the above policy I can download the object to my local machine, but I can't access it with the object URL, which is an HTTPS link. Only if I make the S3 bucket fully public can I access the object URL over HTTPS.
I don't want to provide full public access, so how can I provide access to this with a bucket policy?
You can get an HTTPS URL by generating S3 pre-signed URLs for the objects. This allows temporary access using the generated URLs.
Other than that, a common choice is to share your S3 objects with the outside world without making your bucket public by using CloudFront, as explained in:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
Objects in Amazon S3 are private by default. They are not accessible via an anonymous URL.
If you want a specific IAM User to be able to access the bucket, then you can add permissions to the IAM User themselves. Then, when accessing the bucket, they will need to identify themselves and prove their identity. This is best done by making API calls to Amazon S3, which include authentication.
If you must access the private object via a URL, then you can create an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. This proves your ownership and will let S3 serve the content to you. A pre-signed URL can be generated with a couple of lines of code.
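As a sketch in Python with boto3 (bucket and key names are placeholders), the two approaches look like this:

import boto3

s3 = boto3.client("s3")  # uses the IAM User's configured credentials

# Option 1: authenticated API call -- download the private object directly.
s3.download_file("bucket", "path/to/object.png", "object.png")

# Option 2: pre-signed URL -- a time-limited HTTPS link to the private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "path/to/object.png"},
    ExpiresIn=900,  # 15 minutes
)
print(url)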

S3 Security regarding Restrict S3 object access

I use S3 to store static files for my website. Since my website has a login password, I would like to limit access to the static files on S3.
I successfully set the access permissions as shown below.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite.com/*",
            "http://127.0.0.1:8000/*"
          ]
        }
      }
    }
  ]
}
And then, I tried to access the image directly by entering the URL. I got the result shown in the attached screenshot.
My question:
Do you think it is safe to expose RequestID and HostID from a security perspective?
Attached: a screenshot of the XML error response returned by S3.
The Request ID and Host ID are identifiers within Amazon S3 that can be used for debugging and support purposes. There is no harm in S3 exposing that information, and you cannot prevent that information from appearing.
Also, please note that using aws:referer is a rather insecure method of protecting your content, since it can be easily spoofed (faked) when making a request to S3.
If you wish to protect valuable/confidential information in Amazon S3, then you should:
Keep all content in S3 as private (no bucket policy)
Users authenticate to your back-end app
When a user wants to access some private content from S3, your back-end app checks that they are entitled to access the content. If so, the back-end generates an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object.
This can be provided as a direct link, or included in an HTML page (eg <img src="...">)
When S3 receives the pre-signed URL, it verifies the signature and checks the expiry time. If they are valid, it then returns the private object from the S3 bucket.
This way, you can use S3 to serve static content, but your application has full control over who is permitted to access the content. It cannot be faked like the referer, since each request is signed using the Secret Access Key.
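Here is a minimal sketch of that flow, assuming a Flask back-end and boto3; the route, bucket name and the is_authorized() check are hypothetical stand-ins for your own application logic:

import boto3
from flask import Flask, abort, redirect

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "mybucket"  # placeholder

def is_authorized(user, key):
    # Hypothetical check against your own user/permission store.
    return True

@app.route("/content/<path:key>")
def serve_content(key):
    user = "current-user"  # in practice, taken from the login session
    if not is_authorized(user, key):
        abort(403)
    # Generate a short-lived pre-signed URL and send the browser there.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,  # 5 minutes
    )
    return redirect(url)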

Restrict read-write access from my S3 bucket

I am hosting a website where users can write and read files, which are stored in another S3 bucket. However, I want to restrict access to these files to my website only.
For example, loading a picture.
If the request comes from my website (example.com), I want the read (or write if I upload a picture) request to be allowed by the AWS S3 storing bucket.
If the request comes from the user who directly writes the Object URL in his browser, I want the storing bucket to block it.
Right now, despite everything I have tried, people can access resources via the Object URL.
Here is my Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Id",
  "Statement": [
    {
      "Sid": "Sid",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::storage-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://example.com/*"
        }
      }
    }
  ]
}
Additional information:
All my "Block public access" settings are unchecked. (I think the problem comes from here: when I check the two boxes about ACLs, my main problem is fixed, but I then get a 403 Forbidden error when uploading files to the bucket, which is another problem.)
My website is statically hosted on another S3 bucket.
If you need more information or details, ask me.
Thank you in advance for your answers.
This message has been written by a French-speaking guy. Sorry for the mistakes.
"aws:Referer": "http://example.com/*
The referer is an http header passed by the browser and any client could just freely set the value. It provides no real security
However, I want to restrict the access of these files only to my website
Default way restrict access to S3 resources for a website is using the pre-signed url. Basically your website backend can create an S3 url to download or upload an s3 object and pass the url only to authenticated /allowed client. Then your resource bucket can restrict the public access. Allowing upload without authentication is usually a very bad idea.
Yes, in this case your website is not static anymore and you need some backend logic to do so.
If your website clients are authenticated, you may use the AWS API Gateway and Lambda to create this pre-signed url for the clients.
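A minimal sketch of such a Lambda handler (behind an API Gateway proxy integration) might look like this; the bucket name, the query-string fields, and the assumption that an authorizer has already authenticated the caller are all placeholders:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "storage-bucket"  # placeholder

def lambda_handler(event, context):
    # Assume an authorizer has validated the client; read the requested key
    # and operation from the query string.
    params = event.get("queryStringParameters") or {}
    key = params.get("key")
    action = params.get("action", "download")

    if action == "upload":
        # Pre-signed PUT: the client uploads directly to S3 with this URL.
        url = s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,
        )
    else:
        # Pre-signed GET: the client downloads directly from S3.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,
        )

    return {"statusCode": 200, "body": json.dumps({"url": url})}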

Restrict S3 bucket access by STS token age?

I have an S3 bucket that I want to restrict access to on the basis of how old the credentials used to access it are. For example if the token used to access the bucket is greater than X days old, I want to deny access. How can I achieve this? Something like this policy -
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RejectLongTermCredentials",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::${bucket}",
      "arn:aws:s3:::${bucket}/*"
    ],
    "Condition": {
      aws:TokenIssueTime > 90 days
    }
  }
}
Is there a way to calculate the age of a token? Any help would be appreciated!
What you are describing sounds very similar to Amazon S3 pre-signed URLs.
A pre-signed URL provides time-limited access to a private object.
Imagine a photo-sharing app. It would work like this:
All photos are kept in private Amazon S3 buckets
A user authenticates to the app
When a user wishes to view a private photo (or the app generates an HTML page that links to a photo, using <img> tags), the app will:
Verify that the user is entitled to view that photo
If they are, the app generates a pre-signed URL, which includes an expiry period (eg 5 minutes)
When the user's browser accesses the pre-signed URL, Amazon S3 verifies the URL and checks that it is within the expiry period:
If it is, then the private object is returned
If it is not, then the user receives an Access Denied error
It only takes a couple of lines of code to generate a pre-signed URL and it does not require an API call to S3.
In contrast to your question, the above process does not require the use of Security Token Service (STS) tokens (which need to be linked to IAM Users or IAM Roles). It is designed to be used by applications rather than IAM Users.
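For example, generating the 5-minute links for a page of photos is purely local computation with the SDK (bucket and key names below are placeholders); S3 is only contacted when the browser actually fetches a URL:

import boto3

s3 = boto3.client("s3")

photo_keys = ["photos/1.jpg", "photos/2.jpg"]  # placeholder keys the user may view
img_tags = [
    '<img src="{}">'.format(
        s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "photo-bucket", "Key": key},
            ExpiresIn=300,  # 5 minutes, as in the flow above
        )
    )
    for key in photo_keys
]
print("\n".join(img_tags))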

Bucket policy that respects pre-signed URLs OR IP Address deny?

I would like to be able to restrict access to files in a S3 bucket in multiple ways. This is due to the fact that the files stored can be accessed in different manners. We do this because we have TBs of files, so we don't want to duplicate the bucket.
One access method is through tokenized CDN delivery which uses the S3 bucket as a source. So that the files may be pulled, I've set the permissions for the files to allow download for everybody. Using a bucket policy, I can restrict IP addresses which can get the files in the bucket. So I've restricted them to the CDN IP block and anyone outside those IP addresses can't grab the file.
The other access method is direct downloads using our store system, which generates time-expiring S3 pre-signed URLs.
Since the CDN pull effectively needs the files to be publicly readable, is there a way to:
Check first for a valid pre-signed URL and serve the file if the request is valid
If not valid, fall back to the IP address restriction to prevent further access?
I've got an IP-restriction bucket policy working, but that stomps out the pre-signed access... removing the bucket policy fixes the pre-signed access, but then the files are public.
Objects in Amazon S3 are private by default. Access then can be granted via any of these methods:
Per-object ACLs (mostly for granting public access)
Bucket Policy with rules to define what API calls are permitted in which circumstances (eg only from a given IP address range)
IAM Policy -- similar to Bucket Policy, but can be applied to specific Users or Groups
A Pre-signed URL that grants time-limited access to an object
When attempting to access content in Amazon S3, as long as any of the above permits access, then access is granted. An Allow via one method does not block access granted via another method -- for example, an IP-based Allow in a Bucket Policy cannot cause access granted via a pre-signed URL to be denied. (An explicit Deny, however, overrides all Allows.)
Therefore, the system automatically does what you wish... If the pre-signed URL is valid, then access is granted. If the IP address comes from the desired range, then access is granted. It should work correctly.
It is very strange that you say the IP restriction "stomps out the pre-signed access" -- that should not be possible with Allow rules; only an explicit Deny statement would override the pre-signed access.
Issue solved -- here's what I ended up with. I realized I was using a "Deny" for the IP address section (I saw that code posted somewhere, and it worked on its own), which overrides any Allows, so I needed to flip that.
I also made sure that I didn't have any anonymous permissions on objects in the buckets.
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "Allow our access key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:user/myuser"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.168.0.1/27",
            "186.168.0.1/32",
            "185.168.0.1/26"
          ]
        }
      }
    }
  ]
}