AWS policy to prevent hotlinking

I am trying to set up things on S3 to prevent hotlinking.
I've taken advice from here: How do I prevent hotlinking on Amazon S3 without using signed URLs?
And also this one: https://kinsta.com/blog/hotlinking/
However, I can't get it to work.
First, I block all public access to the bucket, so on the Permissions tab all of the Block public access settings are enabled.
I have set the policy like this:
{
    "Version": "2008-10-17",
    "Id": "HTTP referer policy example",
    "Statement": [
        {
            "Sid": "prevent hotlinking",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://example.co.uk/*",
                        "http://www.example.co.uk/*",
                        "https://example.co.uk/*",
                        "https://www.example.co.uk/*",
                        "http://localhost/01-example/*"
                    ]
                }
            }
        }
    ]
}
However, when I try to access content in the bucket from the referring site, I cannot see the S3 content.
What am I doing wrong?

I prevent all public access to the bucket so the settings on the Permissions tab are like this.
That's why it does not work. Your policy allows public/anonymous access ("Principal": {"AWS": "*"}), but at the same time you explicitly block all public access. You have to allow public access by disabling the Block Public Access settings. From the docs:
Before you use a bucket policy to grant read-only permission to an anonymous user, you must disable block public access settings for your bucket.
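If you manage the bucket with code, a minimal sketch of that step with boto3 (the bucket name is taken from the question's policy; exactly which settings you relax is up to you, this only clears the two that affect bucket policies):

import boto3

s3 = boto3.client("s3")

# Keep the ACL-related blocks on, but stop blocking public bucket policies
# so that the Referer-based policy can take effect.
s3.put_public_access_block(
    Bucket="mybucket",  # bucket name from the question's policy
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,      # allow a public bucket policy to be attached
        "RestrictPublicBuckets": False,  # allow that policy to actually grant access
    },
)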

The Block Public Access options override any other configuration you're using, so your bucket policy will not take effect.
To allow your policy to work you will need to disable this; you might choose to keep several of the options enabled to prevent further changes being made to the bucket policy.
On a related note, the Referer header can easily be faked to access these assets, so it should not be treated as a silver bullet.
A more robust alternative would be to use an S3 signed URL, or to put a CloudFront distribution in front of your S3 bucket and make use of a signed cookie.
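For reference, a minimal sketch of generating a pre-signed URL with boto3 (the bucket name is taken from the question; the object key is a made-up placeholder):

import boto3

s3 = boto3.client("s3")

# Create a time-limited URL for one private object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "images/photo.jpg"},  # placeholder key
    ExpiresIn=3600,  # link expires after one hour
)
print(url)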

Related

how to allow AWS Textract access to a protected S3 bucket

I have a bucket policy which allows access only from a VPC:
{
    "Version": "2012-10-17",
    "Id": "aksdhjfaksdhf",
    "Statement": [
        {
            "Sid": "Access-only-from-a-specific-VPC",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::zzzz",
                "arn:aws:s3:::zzzz/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpc": "vpc-xxxx"
                }
            }
        }
    ]
}
I'd like to allow traffic coming from AWS Textract to this bucket as well. I've tried various methods but because of the absolute precedence of 'explicit deny' (which I require), I cannot make it work.
Is there a different policy formulation or a different method altogether to restrict the access to this S3 Bucket to traffic from the VPC AND from Textract service exclusively?
This will not be possible.
In general, it's a good idea to avoid Deny policies since they override any Allow policy. They can be notoriously hard to configure correctly.
One option would be to remove the Deny and be very careful in who is granted Allow access to the bucket.
However, if this is too hard (eg Admins are given access to all buckets by default), then a common practice is to move sensitive data to an S3 bucket in a different AWS Account and only grant cross-account access to specific users.
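As one illustration of that cross-account pattern, a hedged sketch applied with boto3 (the account ID, user name, and bucket name below are placeholders, not values from the question):

import json
import boto3

s3 = boto3.client("s3")

# Bucket policy, set in the *other* account, granting read access to a single
# IAM user from the original account. All names and IDs are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/textract-app"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::sensitive-data-bucket",
                "arn:aws:s3:::sensitive-data-bucket/*"
            ]
        }
    ]
}

s3.put_bucket_policy(Bucket="sensitive-data-bucket", Policy=json.dumps(policy))

Note that for cross-account access the IAM user also needs a matching identity policy in its own account; the bucket policy on its own is not enough.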

My S3 Bucket Policy only applies to some Objects

I'm having a really hard time setting up my bucket policy, it looks like my bucket policy only applies to some objects in my bucket.
What I want is pretty simple: I store video files in the bucket and I want them to be exclusively downloadable from my website.
My approach is to block everything by default, and then add allow rules:
Give full rights to root and Alice user.
Give public access to files in my bucket from only specific referers (my websites).
Note:
I manually made all the objects 'public' and my settings for Block Public Access are all set to Off.
Can anyone see any obvious errors in my bucket policy?
I don't understand why my policy seems to only work for some files.
Thank you so much
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::MY_BUCKET/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "https://mywebsite1.com/*",
                        "https://mywebsite2.com/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::MY_BUCKET/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://mywebsite1.com/*",
                        "https://mywebsite2.com/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::426873019732:root",
                    "arn:aws:iam::426873019732:user/alice"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::MY_BUCKET",
                "arn:aws:s3:::MY_BUCKET/*"
            ]
        }
    ]
}
Controlling access via aws:Referer is not secure. It can be overcome quite easily. A simple web search will provide many tools that can accomplish this.
The more secure method would be:
Keep all objects in your Amazon S3 bucket private (do not "Make Public")
Do not use a Bucket Policy
Users should authenticate to your application
When a user wishes to access one of the videos, or when your application creates an HTML page that refers/embeds a video, the application should determine whether the user is entitled to access the object.
If the user is entitled to access the object, the application creates an Amazon S3 pre-signed URL, which provides time-limited access to a private object.
When the user's browser requests to retrieve the object via the pre-signed URL, Amazon S3 will verify the contents of the URL. If the URL is valid and the time limit has not expired, Amazon S3 will return the object (eg the video). If the time has expired, the contents will not be provided.
The pre-signed URL can be created in a couple of lines of code and does not require an API call back to Amazon S3.
The benefit of using pre-signed URLs is that your application determines who is entitled to view objects. For example, a user could choose to share a video with another user. Your application would permit the other user to view this shared video. It would not require any changes to IAM or bucket policies.
See: Amazon S3 pre-signed URLs
Also, if you wish to grant access to an Amazon S3 bucket to specific IAM Users (that is, users within your organization, rather than application users), it is better to grant access on the IAM User rather than via an Amazon S3 bucket. If there are many users, you can create an IAM Group that contains multiple IAM Users, and then put the policy on the IAM Group. Bucket Policies should generally be used for granting access to "everyone" rather than specific IAM Users.
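As an illustration of the IAM Group approach, a hedged sketch with boto3 (the group name and policy name are made up; the bucket name is taken from the question):

import json
import boto3

iam = boto3.client("iam")

# Inline policy on an IAM Group granting read access to the bucket.
iam.put_group_policy(
    GroupName="video-admins",           # placeholder group name
    PolicyName="AllowVideoBucketRead",  # placeholder policy name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::MY_BUCKET",
                    "arn:aws:s3:::MY_BUCKET/*"
                ]
            }
        ]
    }),
)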
In general, it is advisable to avoid using Deny policies since they can be difficult to write correctly and might inadvertently deny access to your Admin staff. It is better to limit what is being Allowed, rather than having to combine Allow and Deny.

Should I make my S3 bucket public for static site hosting?

I have an S3 bucket that is used to host a static site that is accessed through CloudFront.
I wish to use the S3 <RoutingRules> to redirect any 404 to the root of the request hostname. To do this I need to set the CloudFront origin to use the S3 "website endpoint".
However, it appears that to allow CloudFront to access the S3 bucket via the "website endpoint" and not the "S3 REST API endpoint", I need to explicitly make the bucket public, namely with a policy rule like:
{
    "Sid": "AllowPublicGetObject",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::dev.ts3.online-test/*"
},
{
    "Sid": "AllowPublicListBucket",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": "s3:ListBucket",
    "Resource": "arn:aws:s3:::dev.ts3.online-test"
}
That's all well and good. It works. However AWS gives me a nice big shiny warning saying:
This bucket has public access. You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.
So I have two questions I suppose:
Surely this warning should be caveated, and is just laziness on AWS' part? Everything in the bucket is static files that can be freely requested. There is no protected or secret content in the bucket. I don't see why giving public read is a security hole at all...
For peace of mind, is there any way to specify a principalId in the bucket policy that will only grant this to cloudfront? (I know if I use the REST endpoint I can set it to the OAI, but I can't use the rest endpoint)
First, about the warning.
The list buckets view shows whether your bucket is publicly accessible. Amazon S3 labels the permissions for a bucket as follows:
Public – Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.
Objects can be public – The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.
Buckets and objects not public – The bucket and objects do not have any public access.
Only authorized users of this account – Access is isolated to IAM users and roles in this account and AWS service principals because there is a policy that grants public access.
So the warning is due to the first case: your bucket is labelled Public. The policy recommended by AWS for an S3 static website is below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
Add a bucket policy to the website bucket that grants everyone access to the objects in the bucket. When you configure a bucket as a website, you must make the objects that you want to serve publicly readable. To do so, you write a bucket policy that grants everyone s3:GetObject permission. The following example bucket policy grants everyone access to the objects in the example-bucket bucket.
BTW, public access should be GET only, nothing else. It's totally fine to allow GET requests for your static website on S3.
See: Amazon S3 static website hosting
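As a side note on the <RoutingRules> part of the question, a hedged sketch of a website configuration that sends 404s back to the root of the site, using boto3 (the bucket name is taken from the question; the host name and index document are assumptions):

import boto3

s3 = boto3.client("s3")

# Website configuration with a routing rule that redirects any 404
# back to the index document at the root of the site.
s3.put_bucket_website(
    Bucket="dev.ts3.online-test",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": [
            {
                "Condition": {"HttpErrorCodeReturnedEquals": "404"},
                "Redirect": {
                    "HostName": "dev.ts3.online-test",  # assumed site host name
                    "ReplaceKeyWith": "index.html",     # land on the root index document
                },
            }
        ],
    },
)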

File in Amazon S3 bucket denied after making bucket public

I have made my Amazon S3 bucket public, by going to its Permissions tab, and setting public access to everyone:
List objects
Write objects
List bucket permissions
Write bucket permissions
There is now an orange "Public" label on the bucket.
But when I go into the bucket, click on one of the images stored there, and click on the Link it provides, I get Access Denied. The link looks like this:
https://s3.eu-central-1.amazonaws.com/[bucket-name]/images/36d03456fcfaa06061f.jpg
Why is it still unavailable despite setting the bucket's permissions to public?
You either need to set object-level permissions (Read object) on each object that you want to be available to the internet (a short sketch of doing this programmatically follows the policy below),
or you can use Bucket Policies to make this more widely permissioned, and not worry about resetting the permissions on each upload:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}
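If you go the per-object route mentioned above instead, a minimal sketch with boto3 (the key is the example from the question, the bucket name matches the policy above; this assumes ACLs are enabled on the bucket):

import boto3

s3 = boto3.client("s3")

# Grant public read on a single object via its ACL.
s3.put_object_acl(
    Bucket="www.example.com",              # bucket name from the policy above
    Key="images/36d03456fcfaa06061f.jpg",  # example key from the question
    ACL="public-read",
)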

Allow S3 Bucket access from either specific VPC or console

I have some app configuration stored in a file in an S3 bucket (api keys). I have the S3 bucket configured to only allow access via a specific VPC endpoint, which ties the keys to specific environments, and prevents e.g. production keys being accidentally used in a staging or test environment.
However occasionally I need to amend these keys, and it's a pain. Currently the bucket policy prevents console access, so I have to remove the bucket policy, update the file, then replace the policy.
How can I allow access from the console, a specific VPC endpoint, and nowhere else?
Current policy, where I've tried and failed already:
{
    "Version": "2012-10-17",
    "Id": "Policy12345",
    "Statement": [
        {
            "Sid": "Principal-Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::account-id:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-keys-staging",
                "arn:aws:s3:::my-keys-staging/*"
            ]
        },
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-keys-staging",
                "arn:aws:s3:::my-keys-staging/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-vpceid"
                }
            }
        }
    ]
}
As mentioned in the comments, an explicit Deny cannot be overridden. By including the Deny tied to a particular VPC, you cannot add any other Allow elements to counteract that Deny statement.
Option 1
One option is to change your "deny if not from VPC abc" statement to "allow if from VPC abc". This would allow you to add additional Allow statements to your policy to allow you to access the bucket from elsewhere.
However, there are 2 very important caveats that go along with doing that:
Any user with "generic" S3 access via IAM policies would have access to the bucket, and
Any role/user from said VPC would be allowed into your bucket.
So by changing Deny to Allow, you will no longer have a VPC-restriction at the bucket level.
This may or may not be within your organization's security requirements.
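A hedged sketch of what Option 1 could look like, keeping the placeholder account ID, VPC endpoint ID, and bucket name from the question and applying the policy with boto3:

import json
import boto3

s3 = boto3.client("s3")

# Option 1: swap the blanket Deny for an Allow scoped to the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Principal-Access",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::account-id:root"},
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-keys-staging",
                "arn:aws:s3:::my-keys-staging/*"
            ]
        },
        {
            "Sid": "Allow-from-specific-VPCE",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-keys-staging",
                "arn:aws:s3:::my-keys-staging/*"
            ],
            "Condition": {
                "StringEquals": {"aws:sourceVpce": "vpce-vpceid"}
            }
        }
    ]
}

s3.put_bucket_policy(Bucket="my-keys-staging", Policy=json.dumps(policy))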
Option 2
Instead, you can amend your existing Deny to add additional conditions which will work in an AND situation:
"Condition": {
"StringNotEquals": {
"aws:sourceVpce": "vpce-vpceid",
"aws:username": "your-username"
}
}
This type of condition will deny the request if:
The request is not coming from your magic VPC, AND
The request is not coming from YOUR username
So you should be able to maintain the restriction of limiting requests to your VPC with the exception that your user sign-in would be allowed access to the bucket from anywhere.
Note the security hole you are opening up by doing this. You should ensure you restrict the username to one that (a) does not have any access keys assigned, and (b) has MFA enabled.