AccessDenied on files uploaded from Django to a private S3 bucket with CloudFront

I'm an AWS noob setting up a hobby site using Django and Wagtail CMS. I followed this guide to connecting an S3 bucket with django-storages. I then added CloudFront to my bucket, and everything works as expected: I'm able to upload images from Wagtail to my S3 bucket and can see that they are served through CloudFront.
However, the guide I followed turned off Block all public access on this bucket, which I've read is bad security practice. For that reason, I would like to set up CloudFront so that my bucket is private and my Django media files are only accessible through CloudFront, not S3. I tried turning Block all public access back on and adding this bucket policy statement:
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-s3-bucket/*"
The problem I'm encountering is that when I have Block all public access turned on, I receive AccessDenied messages on all my files. I can still upload new images and view them as stored in my AWS console. But I get AccessDenied if I try to view them at their CloudFront or S3 URLs.
What policies do I need to fix so that I can upload to my private S3 bucket from Django, but only allow those images to be viewable through CloudFront?
Update 1 for noob confusion: Realized I don't really understand how CDNs work and am perhaps confused about caching. Hopefully my edited question is clearer.
Update 2: Here's a screenshot of my CloudFront distribution and a screenshot of origins.
Update 3 (Possible solution): I seem to have this working after making a change to my bucket policy statements. When I created the OAI, I chose Yes, update the bucket policy, which added the OAI to my-s3-bucket. That policy was appended as a second statement to the original one made following the tutorial I linked above. My entire policy looked like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1620442091089",
  "Statement": [
    {
      "Sid": "Stmt1620442087873",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    }
  ]
}
I removed the original, top statement and left the new OAI CloudFront statement in place. My S3 bucket is now private and I no longer receive AccessDenied on CloudFront URLs.
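In case it helps anyone scripting the same cleanup: below is a minimal boto3 sketch (the bucket name and OAI ID are placeholders) that applies the trimmed, OAI-only policy. Note that django-storages keeps uploading through its own IAM credentials, which a restrictive bucket policy like this doesn't block.
import json
import boto3

# Placeholders -- substitute your own bucket name and OAI ID.
BUCKET = "my-s3-bucket"
OAI_ID = "XXXXXXXXXXXXXX"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))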
Does anyone know if conflicting statements can have this effect? Or is it just a coincidence that the issue resolved after removing the original one?

Related

How can I find what external S3 buckets (AWS-owned) are being accessed?

I'm using WorkSpaces Web (not WorkSpaces!) with an S3 VPC endpoint. I would like to be able to restrict S3 access via the S3 endpoint policy to only the buckets required by WorkSpaces Web. I cannot find any documentation with the answers, and AWS support does not seem to know what these buckets are. How can I find out what buckets the service is talking to? I see the requests in VPC flow logs, but that obviously doesn't show what URL or bucket it is trying to talk to. I have tried the same policy used for WorkSpaces (below), but it was not correct (or possibly not enough). I have confirmed that s3:GetObject is the only action needed.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Access-to-specific-bucket-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::aws-windows-downloads-us-east-1/*",
        "arn:aws:s3:::amazon-ssm-us-east-1/*",
        "arn:aws:s3:::amazon-ssm-packages-us-east-1/*",
        "arn:aws:s3:::us-east-1-birdwatcher-prod/*",
        "arn:aws:s3:::aws-ssm-distributor-file-us-east-1/*",
        "arn:aws:s3:::aws-ssm-document-attachments-us-east-1/*",
        "arn:aws:s3:::patch-baseline-snapshot-us-east-1/*",
        "arn:aws:s3:::amazonlinux.*.amazonaws.com/*",
        "arn:aws:s3:::repo.*.amazonaws.com/*",
        "arn:aws:s3:::packages.*.amazonaws.com/*"
      ]
    }
  ]
}
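One avenue that might help (a sketch, not a confirmed answer): every S3 request starts with a DNS lookup of a bucket-specific hostname, so Route 53 Resolver query logging on the VPC should record names like some-bucket.s3.us-east-1.amazonaws.com. Assuming those logs are delivered to a CloudWatch Logs group (the log group name below is a placeholder), a Logs Insights query can surface the distinct S3 hostnames:
import time
import boto3

logs = boto3.client("logs")

# Placeholder -- wherever your Resolver query logs are delivered.
LOG_GROUP = "/aws/route53resolver/workspaces-web-vpc"

# Ask Logs Insights for distinct S3 hostnames seen in the last hour.
query = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields query_name"
        " | filter query_name like /s3/"
        " | stats count() by query_name"
    ),
)

# Poll until the query finishes, then print each hostname.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({f["field"]: f["value"] for f in row})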

Website hosted on EC2 is not able to access an S3 image link

I have assigned a Fulls3Access role to my EC2 instance. The website on EC2 is able to upload and delete S3 objects, but access to the S3 asset URLs is denied (which means I can't read the images). I have enabled the Block public access settings. I want to make some folders confidential so that only the website can access them. I have tried setting conditions on public read, like aws:SourceIp and the referer URL, in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket but restrict it to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source would not be the EC2 server; instead it would be the user's browser.
If you want to restrict assets, there are a few options that still allow the user to see them in the browser.
The first option would be to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time; the URL has to be generated each time the asset is required, which works well for sensitive information that is not accessed frequently.
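A minimal sketch of that first option with boto3 (the bucket and key names are made up):
import boto3

s3 = boto3.client("s3")

# Generate a GET link that stops working after 15 minutes.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "bucketname", "Key": "private/report.pdf"},
    ExpiresIn=900,
)
print(url)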
The second option would be to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
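A sketch of the second option, assuming the cryptography package and a CloudFront key pair you have already created (the domain and key pair ID below are placeholders). The returned dict would be set as cookies on your domain so the browser sends them with every request to the distribution:
import base64
import json
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def _cf_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant.
    b64 = base64.b64encode(data).decode("ascii")
    return b64.replace("+", "-").replace("=", "_").replace("/", "~")

def signed_cookies(resource, key_pair_id, private_key_pem, ttl=3600):
    # Custom policy: allow this resource until now + ttl seconds.
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,
            "Condition": {"DateLessThan": {"AWS:EpochTime": int(time.time()) + ttl}},
        }]
    }, separators=(",", ":")).encode("utf-8")
    key = serialization.load_pem_private_key(private_key_pem, password=None)
    # CloudFront expects an RSA SHA-1 signature over the policy document.
    signature = key.sign(policy, padding.PKCS1v15(), hashes.SHA1())
    return {
        "CloudFront-Policy": _cf_b64(policy),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": key_pair_id,
    }

# Example: cookies granting an hour of access to everything under /private/.
cookies = signed_cookies(
    "https://dxxxxxxxxxxxx.cloudfront.net/private/*",
    "KXXXXXXXXXXXXX",
    open("private_key.pem", "rb").read(),
)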
If all assets should only be accessed from your web site but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket. This would be configured with a rule to only allow requests where the "Referer" header matches your domain. It can still be bypassed by someone setting that header in the request, but it would lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.

Items in my Amazon S3 bucket are publicly accessible. How do I restrict access so that the Bucket link is only accessible from within my app?

I have an Amazon S3 bucket that contains items. These are accessible by anyone at the moment with a link. The link includes a UUID so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only by clicking the link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    }
  ]
}
The prerequisite for using this setup would be to build an S3 link wrapper service and host it at some site for your app.
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL in the HTML link code (e.g. for an image, you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket private so that other people cannot access the content. Then anyone with a valid pre-signed URL will get the content.
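For instance, in a Django view (a sketch; the bucket, key, and template names are made up), the URL is generated server-side and dropped into the page:
import boto3
from django.shortcuts import render

def gallery(request):
    s3 = boto3.client("s3")
    # The link below expires an hour after the page is rendered.
    image_url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-private-bucket", "Key": "images/photo.jpg"},
        ExpiresIn=3600,
    )
    # In gallery.html: <img src="{{ image_url }}"/>
    return render(request, "gallery.html", {"image_url": image_url})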

How to get a file in an S3 bucket that was uploaded using CloudFront?

I am currently trying to implement CloudFront uploads (POST/PUT methods) on an existing S3 bucket.
My CloudFront distribution seems well configured.
I am using CloudFront signed URLs to upload my files to the S3 bucket. It works fine.
Once the files are uploaded, I can access them using a CloudFront signed URL. That is fine too.
But I observe that I cannot access the uploaded files (via CloudFront) using the AWS credentials (access_key_id & secret_key).
Every time I try this, I receive an AccessDenied error code.
I feel like something is missing in the configuration of the S3 bucket policy.
Here is my current S3 bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXX"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::XXXXX-XXXXXX-XXXX/*"
    }
  ]
}
Did I miss something or is it just impossible?
I did have the same issue once.
Try adding the header x-amz-acl: bucket-owner-full-control to the upload request, and that should do the trick.
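For example, with the requests library (a sketch; the signed URL stands in for whatever your CloudFront signing code produces, and the distribution may need to be configured to forward this header to the origin):
import requests

# Placeholder for a CloudFront signed upload URL.
signed_url = "https://dxxxxxxxxxxxx.cloudfront.net/uploads/file.bin?Expires=1700000000&Signature=XXXX&Key-Pair-Id=KXXXX"

with open("file.bin", "rb") as f:
    resp = requests.put(
        signed_url,
        data=f,
        # Grants the bucket owner full control of the new object,
        # so the owner's own credentials can read it afterwards.
        headers={"x-amz-acl": "bucket-owner-full-control"},
    )
resp.raise_for_status()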

S3 Bucket Policy for anonymous uploads

I've set up an S3 bucket to allow anonymous uploads. The goal is to allow uploading but not downloading. The problem I have found is that not only can I not block downloading of these files, but I also do not own them and cannot delete, copy, or manipulate them in any way. The only way I am able to get rid of them is to delete the bucket.
Here is the policy:
{
  "Version": "2008-10-17",
  "Id": "policy",
  "Statement": [
    {
      "Sid": "allow-public-put",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::legendstemp/*"
    }
  ]
}
This works, but I no longer have access to these files, either through the Amazon console or programmatically.
The bucket policy also does not apply to these files because the file owner is not the same as the bucket owner. I cannot take ownership of them either.
How can I set up a bucket policy to allow anonymous uploads but not downloads?
I know it's been a while since this was asked, but I came across this while trying to get it to work myself, and wrote about it at some length in:
https://gist.github.com/jareware/d7a817a08e9eae51a7ea
The gist of it (heh!) is that you can allow anonymous uploads and disallow other actions, but you won't be able to carry out actions on those objects using authenticated requests. At least as far as I can tell.
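One commonly cited workaround (a boto3 sketch, reusing the bucket name from the question) is to have the anonymous uploader pass the bucket-owner-full-control canned ACL at upload time, so the bucket owner keeps control of the object; whether that is compatible with the "upload but no download" goal depends on the rest of the policy:
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) client -- no credentials attached.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

s3.put_object(
    Bucket="legendstemp",
    Key="uploads/example.txt",
    Body=b"hello",
    # Without this ACL the anonymous uploader owns the object and
    # the bucket owner cannot read, copy, or delete it.
    ACL="bucket-owner-full-control",
)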
Hope this helps.