Protecting S3 assets from direct download - amazon-web-services

I have my assets (images/videos etc) stored in S3 and everything is working great.
The videos, however, need to be safe from download by the user. I have thought about numerous approaches using Ajax, blobs, hiding context menus, etc., but would prefer a simpler yet stronger technique.
The idea I've come up with is to add protection on the S3 bucket so that the assets can only be accessed from the website itself (via an IAM role that the EC2 instance has access to).
I'm just unsure how this works. The bucket is set to static website hosting, so everything in it is public. I'm guessing I need to change that and then add some direct permissions. Has anyone done this, or can anyone confirm whether it's possible?
Thanks

You can serve video content through Amazon CloudFront, which serves content using video protocols rather than as file downloads. This can keep your content (mostly) safe.
See: On-Demand and Live Streaming Video with CloudFront - Amazon CloudFront
You would then keep the videos private in S3, but use an Origin Access Identity that permits CloudFront to access the content and serve it to users.
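For reference, a bucket policy granting read access only to a CloudFront Origin Access Identity might look like the following sketch (the OAI ID and bucket name are placeholders; check the exact principal ARN for your own identity):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
```

With this in place (and Block Public Access enabled), objects are reachable only through the CloudFront distribution, not via direct S3 URLs.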

In addition to the already mentioned CloudFront, you could also use AWS Elastic Transcoder (https://aws.amazon.com/elastictranscoder/) to convert video files to the MPEG-DASH or HLS format (https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP). These formats basically consist of short video segments (for example, 10 seconds long), allowing adaptive bitrate and making the content much harder to download as one long video.
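As a rough sketch of what such a transcoding job could look like (the pipeline ID, preset ID, and key names below are placeholders, and the preset ID should be verified against the system presets in your region), an Elastic Transcoder job request that segments a video into ~10-second HLS parts might be built like this:

```python
def build_hls_job(pipeline_id, input_key, output_prefix):
    """Build the request for an Elastic Transcoder job that splits a video
    into ~10-second HLS segments plus a playlist that references them."""
    return {
        "PipelineId": pipeline_id,
        "Input": {"Key": input_key},
        "OutputKeyPrefix": output_prefix,
        "Outputs": [{
            "Key": "hls_2m",
            "PresetId": "1351620000001-200010",  # assumed system preset "HLS 2M"
            "SegmentDuration": "10",             # seconds per .ts segment
        }],
        "Playlists": [{
            "Name": "index",
            "Format": "HLSv3",
            "OutputKeys": ["hls_2m"],
        }],
    }

# With boto3 (pipeline ID is illustrative):
# boto3.client("elastictranscoder").create_job(**build_hls_job(
#     "1111111111111-abcde1", "raw/lecture.mp4", "hls/lecture/"))
```

The player then fetches the `index` playlist and streams the segments, rather than downloading a single file.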

For CloudFront to work with S3 static website endpoints, AWS generally recommends public read permissions on the S3 bucket. There is no native way of securing traffic between CloudFront and an S3 static website endpoint; however, in this case we can use a workaround to satisfy your use case.
By default, all S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows the s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    }
  ]
}

Related

Restrict read-write access from my S3 bucket

I am hosting a website where users can write and read files, which are stored in another S3 bucket. However, I want to restrict access to these files to my website only.
For example, loading a picture:
If the request comes from my website (example.com), I want the read (or write, if I upload a picture) request to be allowed by the AWS S3 storage bucket.
If the request comes from a user who directly types the Object URL into their browser, I want the storage bucket to block it.
Right now, despite everything I have tried, people can access resources from the Object URL.
Here is my Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Id",
  "Statement": [
    {
      "Sid": "Sid",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectAcl"
      ],
      "Resource": "arn:aws:s3:::storage-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://example.com/*"
        }
      }
    }
  ]
}
Additional information:
All my "Block public access" settings are unchecked. (I think the problem comes from here: when I check the two boxes about ACLs, my main problem is fixed, but then I get a 403 Forbidden error when uploading files to the bucket, which is another problem.)
My website is statically hosted on another S3 bucket.
If you need more information or details, ask me.
Thank you in advance for your answers.
This message was written by a French speaker; sorry for any mistakes.
"aws:Referer": "http://example.com/*"
The Referer is an HTTP header set by the browser, and any client can freely set its value. It provides no real security.
However, I want to restrict the access of these files only to my website
The standard way to restrict access to S3 resources for a website is to use pre-signed URLs. Basically, your website backend creates an S3 URL for downloading or uploading an S3 object and passes that URL only to an authenticated/allowed client. Your resource bucket can then block public access. Allowing uploads without authentication is usually a very bad idea.
Yes, in this case your website is no longer static, and you need some backend logic to do this.
If your website clients are authenticated, you can use AWS API Gateway and Lambda to create this pre-signed URL for the clients.

Website hosted on EC2 is not able to access S3 image link

I have assigned a FullS3Access role to the EC2 instance. The website on EC2 can upload and delete S3 objects, but access to the S3 asset URLs is denied (which means I can't read the images). I have enabled the Block Public Access settings. I want to keep some folders confidential so that only the website can access them. I have tried setting conditions such as SourceIp and Referer URL on public reads in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable, but also restrict, S3 bucket read access to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source would not be the EC2 server; instead, it would be the user's browser.
If you want to restrict assets while still allowing the user to see them in the browser, there are a few options.
The first option is to generate a pre-signed URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. A URL would need to be generated each time the asset is required, which works well for sensitive information that is not accessed frequently.
The second option is to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket. This would be configured with a rule that only allows requests where the "Referer" header matches your domain. This can still be bypassed by someone setting that header in the request, but it would reduce the number of crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.

My S3 Bucket Policy only applies to some Objects

I'm having a really hard time setting up my bucket policy; it looks like it only applies to some objects in my bucket.
What I want is pretty simple: I store video files in the bucket, and I want them to be downloadable exclusively from my website.
My approach is to block everything by default, and then add allow rules:
Give full rights to root and Alice user.
Give public access to files in my bucket from only specific referers (my websites).
Note:
I manually made all the objects 'public' and my settings for Block Public Access are all set to Off.
Can anyone see any obvious errors in my bucket policy?
I don't understand why my policy seems to only work for some files.
Thank you so much
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MY_BUCKET/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite1.com/*",
            "https://mywebsite2.com/*"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::426873019732:root",
          "arn:aws:iam::426873019732:user/alice"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MY_BUCKET",
        "arn:aws:s3:::MY_BUCKET/*"
      ]
    }
  ]
}
Controlling access via aws:Referer is not secure. It can be overcome quite easily. A simple web search will provide many tools that can accomplish this.
The more secure method would be:
Keep all objects in your Amazon S3 bucket private (do not "Make Public")
Do not use a Bucket Policy
Users should authenticate to your application
When a user wishes to access one of the videos, or when your application creates an HTML page that refers/embeds a video, the application should determine whether the user is entitled to access the object.
If the user is entitled to access the object, the application creates an Amazon S3 pre-signed URL, which provides time-limited access to a private object.
When the user's browser requests to retrieve the object via the pre-signed URL, Amazon S3 will verify the contents of the URL. If the URL is valid and the time limit has not expired, Amazon S3 will return the object (eg the video). If the time has expired, the contents will not be provided.
The pre-signed URL can be created in a couple of lines of code and does not require an API call back to Amazon S3.
The benefit of using pre-signed URLs is that your application determines who is entitled to view objects. For example, a user could choose to share a video with another user. Your application would permit the other user to view this shared video. It would not require any changes to IAM or bucket policies.
See: Amazon S3 pre-signed URLs
Also, if you wish to grant access to an Amazon S3 bucket to specific IAM Users (that is, users within your organization, rather than application users), it is better to grant access on the IAM User than via an Amazon S3 bucket policy. If there are many users, you can create an IAM Group that contains multiple IAM Users, and then put the policy on the IAM Group. Bucket Policies should generally be used for granting access to "everyone" rather than to specific IAM Users.
In general, it is advisable to avoid using Deny policies since they can be difficult to write correctly and might inadvertently deny access to your Admin staff. It is better to limit what is being Allowed, rather than having to combine Allow and Deny.

What to write in a bucket policy to secure my videos

I host my videos on Amazon S3 for selling online courses, like Udemy.
Can you guide me on what bucket policy I need to secure my videos so that students can view them but not download them, and so that nobody else can find the URL for a video? What should I write in the bucket policy? And which player do I need on my WordPress website to play these videos? Please help me out.
{
  "Version": "2008-10-17",
  "Id": "Policy1414368633278",
  "Statement": [
    {
      "Sid": "Stmt1414368595009",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOURBUCKETNAME/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": "http://YOURDOMAINNAME.com/*"
        }
      }
    }
  ]
}
A bucket policy alone is not sufficient to secure your content as you describe.
You will require some application logic to determine whether a user is permitted to access the object. If the application then wishes to grant access, it can create a time-limited pre-signed URL. This allows the object to be accessed for a specific time period, after which access is denied.
Companies like Udemy implement their own form of access control. If you were to supply a video to them, they would host it and control access.
Bottom line: You need an application to control access, which then provides a link that tells Amazon S3 to grant access to the object.

Amazon S3 Permissions by Client code

I've been converting an existing application to an EC2, S3 and RDS model within AWS. So far it's going well, but I've hit a problem I can't seem to find any info on.
My web application accesses the S3 bucket for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, e.g. www.programname.com/ClientCode1/Index.aspx; this has working security to prevent cross-client access.
Now, when www.programname.com/ClientCode1/Index.aspx accesses S3 for images, I need to make sure it can only access the ClientCode1 folder in S3. The goal is to prevent client A from seeing client B's images/documents if a technically savvy user tried.
Is there perhaps a way to get the page referrer, or is there a better approach to this issue?
There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not S3).
If all your users are accessing the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security. This is because the application will be using a single set of credentials to access AWS services, so those credentials will need access to all data that the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing. Only your application knows this. Therefore, your application will need to enforce the security.
I found a solution that seems to work well:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
      }
    }
  ]
}
This allows you to check the Referer to see whether the request comes from a given path. In my case, each client sits under their own path, and the bucket follows the same structure. In the example above, only a user coming from Client1 can access the bucket data for Client1; if I log in as Client2 and try to force an image from the Client1 path, I get Access Denied.