I have assigned the Fulls3Access role to my EC2 instance. The website running on EC2 can upload and delete S3 objects, but requests to the S3 asset URLs are denied (meaning I can't read the images). I have enabled the Block Public Access settings. I want to keep some folders confidential so that only the website can access them. I have tried adding conditions such as aws:SourceIp and the referer URL to the bucket policy for public read, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket while restricting it to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source of the request would not be the EC2 server; it would be the user's browser.
If you want to restrict access to the assets while still allowing the user to see them in the browser, there are a few options.
The first option is to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a set length of time. A URL has to be generated every time the asset is needed, which works well for sensitive content that is not accessed frequently.
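For example, with boto3 the application can mint a short-lived link just before rendering the page (the bucket and key names below are placeholders):

import boto3

s3 = boto3.client("s3")

# Generate a presigned GET URL for a private object; the link stops working
# after ExpiresIn seconds. Bucket and key names are placeholders.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "bucketname", "Key": "private/report.png"},
    ExpiresIn=300,  # five minutes
)

# Embed `url` wherever the page needs it, e.g. <img src="...">.
print(url)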
The second option is to put a CloudFront distribution in front of the S3 bucket and use signed cookies. Your code generates a cookie that is then included in all requests to the CloudFront distribution. This gives the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
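A rough sketch of generating those cookies is below. It assumes you have a CloudFront key pair (or a key group with a public key) set up and its private key on the server; the domain, key file and key-pair ID are placeholders, and the rsa package is used for signing.

import base64
import json
import time

import rsa  # pip install rsa

CLOUDFRONT_DOMAIN = "https://dxxxxxxxxxxxx.cloudfront.net"  # placeholder
KEY_PAIR_ID = "KXXXXXXXXXXXXX"                              # placeholder
PRIVATE_KEY_FILE = "cloudfront_private_key.pem"             # placeholder


def _cf_b64(data: bytes) -> str:
    # CloudFront uses a URL-safe base64 variant for cookie/URL values.
    return (base64.b64encode(data).decode("utf-8")
            .replace("+", "-").replace("=", "_").replace("/", "~"))


def signed_cookies(resource: str, expires_in: int = 3600) -> dict:
    # Custom policy: allow access to `resource` until the expiry time.
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,
            "Condition": {
                "DateLessThan": {"AWS:EpochTime": int(time.time()) + expires_in}
            },
        }]
    }, separators=(",", ":")).encode("utf-8")

    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())

    signature = rsa.sign(policy, key, "SHA-1")  # CloudFront expects RSA/SHA-1

    return {
        "CloudFront-Policy": _cf_b64(policy),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }


# Set these three cookies in the user's session; the browser then sends them
# with every request to the distribution.
cookies = signed_cookies(f"{CLOUDFRONT_DOMAIN}/private/*")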
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket, configured with a rule that only allows requests whose "Referer" header matches your domain. This can still be bypassed by someone setting that header themselves, but it will lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.
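As a sketch of that setup, a WAFv2 web ACL created with boto3 along these lines blocks requests unless the Referer header contains your domain (the ACL name, metric names and domain are placeholders):

import boto3

# WAFv2 resources attached to CloudFront must be created in us-east-1.
waf = boto3.client("wafv2", region_name="us-east-1")

visibility = {
    "SampledRequestsEnabled": True,
    "CloudWatchMetricsEnabled": True,
    "MetricName": "referer-check",  # placeholder
}

waf.create_web_acl(
    Name="referer-check-acl",      # placeholder
    Scope="CLOUDFRONT",
    DefaultAction={"Block": {}},   # block anything the rule below doesn't allow
    Rules=[{
        "Name": "allow-own-referer",
        "Priority": 0,
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"www.example.com",  # placeholder domain
                "FieldToMatch": {"SingleHeader": {"Name": "referer"}},
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                "PositionalConstraint": "CONTAINS",
            }
        },
        "Action": {"Allow": {}},
        "VisibilityConfig": {**visibility, "MetricName": "allow-own-referer"},
    }],
    VisibilityConfig=visibility,
)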
Related
I'm an AWS noob setting up a hobby site using Django and Wagtail CMS. I followed this guide to connect an S3 bucket with django-storages. I then added CloudFront in front of my bucket, and everything works as expected: I'm able to upload images from Wagtail to my S3 bucket and can see that they are served through CloudFront.
However, the guide I followed turned off Block all public access on this bucket, which I've read is bad security practice. For that reason, I would like to set up Cloudfront so that my bucket is private and my Django media files are only accessible through Cloudfront, not S3. I tried turning Block all public access back on, and adding this bucket policy:
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-s3-bucket/*"
The problem I'm encountering is that when I have Block all public access turned on, I receive AccessDenied messages on all my files. I can still upload new images and view them as stored in my AWS console. But I get AccessDenied if I try to view them at their CloudFront or S3 URLs.
What policies do I need to fix so that I can upload to my private S3 bucket from Django, but only allow those images to be viewable through CloudFront?
Update 1 for noob confusion: Realized I don't really understand how CDNs work and am perhaps confused about caching. Hopefully my edited question is clearer.
Update 2: Here's a screenshot of my CloudFront distribution and a screenshot of origins.
Update 3 (Possible solution): I seem to have this working after making a change to my bucket policy statements. When I created the OAI, I chose Yes, update the bucket policy, which added the OAI to my-s3-bucket. That policy was appended as a second statement to the original one made following the tutorial I linked above. My entire policy looked like this:
{
  "Version": "2012-10-17",
  "Id": "Policy1620442091089",
  "Statement": [
    {
      "Sid": "Stmt1620442087873",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    }
  ]
}
I removed the original, top statement and left the new OAI CloudFront statement in place. My S3 bucket is now private and I no longer receive AccessDenied on CloudFront URLs.
Does anyone know if conflicting statements can have this effect? Or is it just a coincidence that the issue resolved after removing the original one?
I am trying to set up my staging server to be served via S3 and CloudFront. My bucket policy is below.
a) If I access the S3 url directly, everything works fine.
b) If I access the CloudFront root domain, www.staging.example.com, everything works fine.
However, once I go to www.staging.example.com/login (or any non-root url), I get a 403 Forbidden AccessDenied error. How do I fix this?
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::staging-server/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "XXX",
            "XXX"
          ]
        }
      }
    }
  ]
}
If your intention is to serve the site via CloudFront only, then you should reconfigure the S3 bucket policy to allow access to the CloudFront Origin Access Identity of your CloudFront distribution, and remove all IP address conditions from the bucket policy.
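For example, the bucket policy can grant s3:GetObject to the OAI alone; a minimal sketch applying it with boto3 (the bucket name and OAI ID are placeholders) looks like this:

import json
import boto3

s3 = boto3.client("s3")

bucket = "staging-server"  # placeholder
oai_id = "EXXXXXXXXXXXXX"  # placeholder: the OAI attached to the CloudFront origin

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# Replaces any existing bucket policy, so no IP address conditions remain.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))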
To restrict access to the distribution to an IP whitelist, configure AWS WAF and an IPSet. Use WAF v2, not the original WAF.
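A minimal sketch of creating the IP set with boto3 (placeholder name and addresses) is below; the returned ARN is then referenced from a web ACL rule.

import boto3

waf = boto3.client("wafv2", region_name="us-east-1")  # CLOUDFRONT scope lives in us-east-1

ip_set = waf.create_ip_set(
    Name="staging-allowlist",  # placeholder
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "203.0.113.11/32"],  # placeholder IPs
)

# In the web ACL, reference it with:
#   Statement={"IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}}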
On the non-root URL question: do you actually have a document named login?
I have my assets (images/videos etc) stored in S3 and everything is working great.
The videos, however, need to be safe from download by the user. I have thought about numerous approaches using Ajax, blobs, hiding context menus, etc., but would prefer a simpler yet stronger technique.
The idea I've thought of is to add protection on the S3 bucket so that the assets can only be accessed from the website itself (via an IAM role that the EC2 instance has).
I'm just unsure how this works. The bucket is set to static website hosting, so everything in it is public; I'm guessing I need to change that and then add some direct permissions. Has anyone done this, or can anyone provide info on whether it is possible?
Thanks
You can serve video content through Amazon CloudFront, which can deliver it using streaming video protocols rather than as plain file downloads. This keeps your content (mostly) safe.
See: On-Demand and Live Streaming Video with CloudFront - Amazon CloudFront
You would then keep the videos private in S3, but use an Origin Access Identity that permits CloudFront to access the content and serve it to users.
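If you manage this with the SDK rather than the console, a minimal sketch for creating the OAI with boto3 (the comment text is a placeholder) is:

import uuid
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": str(uuid.uuid4()),       # any unique string
        "Comment": "OAI for private video bucket",  # placeholder
    }
)

oai_id = response["CloudFrontOriginAccessIdentity"]["Id"]
# Attach oai_id to the distribution's S3 origin and grant it s3:GetObject
# in the bucket policy; the bucket itself can then stay private.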
In addition to the already mentioned CloudFront, you could also use AWS Elastic Transcoder (https://aws.amazon.com/elastictranscoder/) to convert video files to MPEG-DASH or HLS format (https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP). These formats basically consist of short video segments (for example 10 seconds long), allowing adaptive bitrate and making it much harder to download the content as one long video.
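A rough sketch of submitting such a job with boto3 is below; the pipeline ID, object keys and preset ID are placeholders (look up the HLS system preset you want via list_presets() or in the console).

import boto3

transcoder = boto3.client("elastictranscoder")

transcoder.create_job(
    PipelineId="1111111111111-abcde1",     # placeholder pipeline
    Input={"Key": "uploads/lecture.mp4"},  # placeholder source object
    OutputKeyPrefix="hls/lecture/",
    Outputs=[{
        "Key": "video",
        "PresetId": "1351620000001-200010",  # assumed HLS system preset; verify
        "SegmentDuration": "10",             # roughly 10-second segments
    }],
    Playlists=[{
        "Name": "index",
        "Format": "HLSv3",
        "OutputKeys": ["video"],
    }],
)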
For CloudFront to work with S3 static website endpoints, AWS generally recommends having public read permissions on the S3 bucket. There is no native way of securing traffic between CloudFront and an S3 static website endpoint; however, in this case we can use a workaround to satisfy your use case.
By default, all S3 resources are private, so only the AWS account that created them can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject with a condition, using the aws:Referer key, that the GET request must originate from specific web pages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
I have an Amazon S3 bucket that contains items. These are accessible by anyone at the moment with a link. The link includes a UUID so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. Has someone else solved this problem? I'd like the resources to be accessible only when the link is clicked from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
The prerequisite for this setup would be to build an S3 link wrapper service and host it somewhere for your app.
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL into the HTML (e.g. for an image, you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket private so that other people cannot access the content directly. Then, anyone with a valid pre-signed URL will get the content.
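As an illustration with boto3 (the bucket and key names are placeholders), the page-rendering code can drop the generated URL straight into the image tag:

import boto3

s3 = boto3.client("s3")

def image_tag(key: str) -> str:
    # Short-lived URL for a private object, wrapped in an <img> tag.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-private-bucket", "Key": key},  # placeholder bucket
        ExpiresIn=900,  # 15 minutes
    )
    return f'<img src="{url}"/>'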
I've been converting an existing application to an EC2, S3 and RDS model within AWS, so far it's going well but I've got a problem I can't seem to find any info on.
My web application accesses S3 for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, e.g. www.programname.com/ClientCode1/Index.aspx, and it has working security to prevent cross-client access.
Now when www.programname.com/ClientCode1/Index.aspx goes to S3 for images, I need to make sure it can only access the ClientCode1 folder; the goal is to prevent client A from seeing the images/documents of client B if someone technically inclined were to try.
Is there perhaps a way to get the page referrer, or is there a better approach to this issue?
There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not S3).
If all your users are accessing the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security. This is because the application will be using a single set of credentials to access AWS services, so those credentials will need access to all data that the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing. Only your application knows this. Therefore, your application will need to enforce the security.
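As a sketch of what application-enforced security can look like (an illustrative Flask-style handler with placeholder names, not the poster's actual stack): the handler checks the client code stored in the authenticated session before it ever touches S3, so a ClientCode2 user can never fetch ClientCode1 objects.

import boto3
from flask import Flask, abort, redirect, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; required for sessions
s3 = boto3.client("s3")

BUCKET = "clientdata"  # placeholder bucket name


@app.route("/<client_code>/images/<path:key>")
def client_image(client_code, key):
    # The application, not S3, decides who may see what: the client code in
    # the session must match the one in the requested path.
    if session.get("client_code") != client_code:
        abort(403)

    # Hand the browser a short-lived presigned URL for the object.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": f"Data/{client_code}/Images/{key}"},
        ExpiresIn=60,
    )
    return redirect(url)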
I found the solution; it seems to work well.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/Client1/*",
            "http://example.com/Client1/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/Client1/*",
            "http://example.com/Client1/*"
          ]
        }
      }
    }
  ]
}
This allows you to check the referer to see if the URL is from a given path. In my case each client sits in its own path, and the bucket follows the same structure; in the example above, only a user coming from Client1 can access the bucket data for Client1. If I log in to Client2 and try to force an image from the Client1 path, I get access denied.