Unable to access folders and files from Amazon S3 - Django

I have deployed my Django code to Heroku and am trying to serve static and media files from Amazon S3. S3 gives access to folders created through the console, but not to the folders copied up by collectstatic. I have enabled S3 to act as a static website and used the policy below.
{
    "Version": "2008-10-17",
    "Id": "Policy1380877762691",
    "Statement": [
        {
            "Sid": "Stmt1380877761162",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::harpoons1/*"
        }
    ]
}
Non-working link:
http://harpoons1.s3-website-us-east-1.amazonaws.com/elate/
Working link:
http://harpoons1.s3-website-us-east-1.amazonaws.com/
Is there any way I can give public access to every folder and file inside my S3 bucket, so that I can serve them on my website? Any help would be really appreciated.

What is inside the folder "elate/"?
To display a default page you will need to set up a default index document, such as index.html.
This is configured in the "Static website hosting" dialog under "Properties" for the bucket.
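If you prefer to configure this programmatically, a minimal sketch with boto3 might look like the following (the bucket name is taken from the question; the error document is an assumption):

# Sketch: enable static website hosting with an index document via boto3.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="harpoons1",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},  # served for "/" and "folder/" requests
        "ErrorDocument": {"Key": "error.html"},     # hypothetical custom error page
    },
)

With an index document configured, a request for /elate/ is answered from elate/index.html, provided that object actually exists in the bucket.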

Related

AccessDenied on files uploaded from Django to private S3 bucket with Cloudfront

I'm an AWS noob setting up a hobby site using Django and Wagtail CMS. I followed this guide to connecting an S3 bucket with django-storages. I then added CloudFront to my bucket, and everything works as expected: I'm able to upload images from Wagtail to my S3 bucket and can see that they are served through CloudFront.
However, the guide I followed turned off Block all public access on this bucket, which I've read is bad security practice. For that reason, I would like to set up CloudFront so that my bucket is private and my Django media files are only accessible through CloudFront, not S3. I tried turning Block all public access back on, and adding this bucket policy:
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-s3-bucket/*"
The problem I'm encountering is that when I have Block all public access turned on, I receive AccessDenied messages on all my files. I can still upload new images and view them as stored in my AWS console. But I get AccessDenied if I try to view them at their CloudFront or S3 URLs.
What policies do I need to fix so that I can upload to my private S3 bucket from Django, but only allow those images to be viewable through CloudFront?
Update 1 for noob confusion: Realized I don't really understand how CDNs work and am perhaps confused about caching. Hopefully my edited question is clearer.
Update 2: Here's a screenshot of my CloudFront distribution and a screenshot of origins.
Update 3 (Possible solution): I seem to have this working after making a change to my bucket policy statements. When I created the OAI, I chose Yes, update the bucket policy, which added the OAI to my-s3-bucket. That policy was appended as a second statement to the original one made following the tutorial I linked above. My entire policy looked like this:
{
    "Version": "2012-10-17",
    "Id": "Policy1620442091089",
    "Statement": [
        {
            "Sid": "Stmt1620442087873",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        }
    ]
}
I removed the original, top statement and left the new OAI CloudFront statement in place. My S3 bucket is now private and I no longer receive AccessDenied on CloudFront URLs.
Does anyone know if conflicting statements can have this effect? Or is it just a coincidence that the issue resolved after removing the original one?
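For reference, applying that final OAI-only policy programmatically might look like the sketch below (bucket name and OAI ID are the placeholders from the question):

# Sketch: replace the bucket policy with the single OAI statement via boto3.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*",
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="my-s3-bucket", Policy=json.dumps(policy))

Note that Block all public access only rejects policies it considers public; a grant to a specific OAI principal is not public, which is why this statement can coexist with the setting turned on.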

The website hosted on EC2 not able to access S3 image link

I have assigned a FullS3Access role to the EC2 instance. The website on EC2 is able to upload and delete S3 objects, but access to the S3 asset URLs is denied (meaning I can't read the images). I have enabled the Block Public Access settings. I want some folders to stay confidential, with only the website able to access them. I have tried setting conditions for public read, such as SourceIp and the Referer URL, in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket while restricting it to the website only?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/assets/*"
            ]
        },
        {
            "Sid": "AllowIP",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/private/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "ip1/32",
                        "ip2/32"
                    ]
                }
            }
        }
    ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source would not be the EC2 server; it would be the user's browser.
If you want to restrict assets while still letting the user see them in the browser, there are a few options.
The first option would be to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. A URL has to be generated each time the asset is needed, which works well for sensitive information that is not accessed frequently.
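A minimal boto3 sketch of this option (bucket and key names are placeholders):

# Sketch: generate a time-limited presigned GET URL with boto3.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucketname", "Key": "private/report.pdf"},  # hypothetical object
    ExpiresIn=300,  # the link stops working after 5 minutes
)
print(url)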
The second option would be to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
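A sketch of issuing such cookies with a CloudFront "canned" policy (the key-pair ID and private-key path are placeholders; this assumes the rsa package for signing):

# Sketch: build CloudFront signed cookies with a canned policy.
import base64
import json
import time

import rsa  # pip install rsa

CLOUDFRONT_KEY_PAIR_ID = "KXXXXXXXXXXXXX"        # placeholder key-pair ID
PRIVATE_KEY_PATH = "cloudfront_private_key.pem"  # placeholder path


def _cloudfront_b64(data: bytes) -> str:
    # CloudFront uses a URL-safe base64 variant: + -> -, = -> _, / -> ~
    return (
        base64.b64encode(data)
        .decode()
        .replace("+", "-")
        .replace("=", "_")
        .replace("/", "~")
    )


def signed_cookies(resource: str, expires_in: int = 3600) -> dict:
    expires = int(time.time()) + expires_in
    # Canned policy format defined by CloudFront
    policy = json.dumps(
        {
            "Statement": [
                {
                    "Resource": resource,
                    "Condition": {"DateLessThan": {"AWS:EpochTime": expires}},
                }
            ]
        },
        separators=(",", ":"),
    )
    with open(PRIVATE_KEY_PATH, "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    signature = rsa.sign(policy.encode(), key, "SHA-1")  # CloudFront expects RSA-SHA1
    return {
        "CloudFront-Expires": str(expires),
        "CloudFront-Signature": _cloudfront_b64(signature),
        "CloudFront-Key-Pair-Id": CLOUDFRONT_KEY_PAIR_ID,
    }

Your application would set these three cookies on its own domain so the browser sends them with every request to the distribution.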
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket. This would be configured with a rule that only allows requests where the Referer header matches your domain. This can still be bypassed by someone setting that header in the request, but it would lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.
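A rough sketch of such a rule via boto3 (all names and the domain are placeholders; WAFv2 calls with CLOUDFRONT scope must be made in us-east-1):

# Sketch: a WAFv2 web ACL that blocks requests whose Referer header
# does not contain our domain. Names and domain are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")
wafv2.create_web_acl(
    Name="referer-check",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-foreign-referer",
            "Priority": 0,
            "Statement": {
                "NotStatement": {
                    "Statement": {
                        "ByteMatchStatement": {
                            "SearchString": b"example.com",
                            "FieldToMatch": {"SingleHeader": {"Name": "referer"}},
                            "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                            "PositionalConstraint": "CONTAINS",
                        }
                    }
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "blockForeignReferer",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "refererCheck",
    },
)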

Amazon S3 domain level privacy

I have to use Amazon S3 to store interactive video files created with Camtasia. I then have to display these videos on a website in an iframe, on a specific domain, without making the files public. Vimeo has a feature called domain-level privacy, which lets you choose which specific website your video may be embedded on.
How can I achieve this with Amazon S3?
I've already reached the point where I can control access to a file with the bucket policy this way:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "http://myurl.hu/*"
                }
            }
        }
    ]
}
It would be fine if I were loading a single file in the iframe, but I have to load an .html file that links to other .js files in the same bucket directory, and then I get 403 responses.
Problem solved.
The iframe redefined the Referer header, which is why Amazon blocked the resources loaded inside it.

Items in my Amazon S3 bucket are publicly accessible. How do I restrict access so that the Bucket link is only accessible from within my app?

I have an Amazon S3 bucket that contains items. At the moment these are accessible by anyone with a link. The link includes a UUID, so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only when clicking a link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        }
    ]
}
The prerequisite for using this setup would be to build an S3 link wrapper service and host it at some site for your app.
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL in the HTML link code (e.g. for an image you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket as private so that other people cannot access the content. Then, anyone with a valid pre-signed URL will get the content.
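As a sketch of how this might look in a Django view (bucket, key, and template names are hypothetical):

# Sketch: a Django view that embeds a presigned URL in a rendered page.
import boto3
from django.shortcuts import render


def show_image(request):
    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "examplebucket", "Key": "uploads/photo.jpg"},
        ExpiresIn=600,  # the URL stops working after 10 minutes
    )
    # gallery.html would render: <img src="{{ image_url }}"/>
    return render(request, "gallery.html", {"image_url": url})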

How to get a file in a S3 bucket that was uploaded using Cloudfront?

I am currently trying to implement CloudFront uploads (POST/PUT methods) on an existing S3 bucket.
My CloudFront distribution seems well configured.
I am using CloudFront signed URLs to upload my files to the S3 bucket. That works fine.
Once the files are uploaded, I can access them using a CloudFront signed URL. That is fine too.
But I observe that I cannot access the uploaded files (via CloudFront) using my AWS credentials (access_key_id & secret_key).
Every time I try this, I receive an AccessDenied error code.
I feel like something is missing in the configuration of the S3 bucket policy.
Here is my current S3 bucket policy:
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXX",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::XXXXX-XXXXXX-XXXX/*"
        }
    ]
}
Did I miss something or is it just impossible?
I did have the same issue once.
Try adding the header x-amz-acl: bucket-owner-full-control to the upload request; that should do the trick.
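For example, an upload through the CloudFront signed URL from Python might look like this (the URL and file name are placeholders):

# Sketch: PUT a file through a CloudFront signed URL while granting the
# bucket owner full control of the resulting object. URL/file are placeholders.
import requests

signed_url = "https://dxxxxxxxxxxxx.cloudfront.net/uploads/photo.jpg?Expires=...&Signature=...&Key-Pair-Id=..."

with open("photo.jpg", "rb") as f:
    response = requests.put(
        signed_url,
        data=f,
        headers={"x-amz-acl": "bucket-owner-full-control"},
    )
response.raise_for_status()

The likely reason this helps: an object uploaded through the OAI is owned by the OAI's canonical user rather than the bucket owner, so the bucket owner's credentials get AccessDenied when reading it; the ACL above hands full control back to the bucket owner.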