I use Amazon S3 to store my website's images. I have a bucket policy that stops other websites from hotlinking my images.
To get this to work, I have the file permission set to "private" in S3 and then the bucket policy opens access up to my website only.
This works fine, but because the file is "private" I cannot view the image directly in a browser, and this is something I want to allow.
Here is the policy:
{
"Version": "2008-10-17",
"Id": "preventHotLinking",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://mydomain.com/*",
"http://www.mydomain.com/*"
]
}
}
}
]
}
So, to summarise:
the file itself is set to private
the above policy allows images to be displayed on pages from the domain "mydomain.com" but not on "someoneelsesdomain.com"
This also blocks direct access though, so pasting http://jbtestyt.s3.amazonaws.com/archie.jpg into a browser does not show the image (because it is private).
But I would like the image to display for direct access.
Possibly the solution is to make the file public and then deny all referrers apart from the ones I list? But I was not sure, and could not find anything describing that approach.
Many thanks in advance.
I am inferring that you are looking for a "native" S3 solution, so here goes. If you really think about it, hotlinking is in essence "directly accessing" the file. Therefore, keeping "the file itself set to private" will always ensure (on AWS at least) that you cannot access it directly, no matter what. This is by design in S3. What you suggest, whitelisting referrers on an otherwise public object, is probably the only straightforward way you're going to accomplish this using S3 alone.
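For what it's worth, here is a rough, untested sketch of that approach with boto3. The bucket and object names are just the ones from your question, and the Null condition is my addition so that requests carrying no Referer at all (a URL pasted straight into the address bar) are not caught by the Deny.

import json
import boto3

s3 = boto3.client("s3")

# Make the object itself publicly readable so direct browser access works.
s3.put_object_acl(Bucket="mybucket", Key="archie.jpg", ACL="public-read")

# Deny GETs only when a Referer header is present AND it is not whitelisted.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyForeignReferers",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::mybucket/*",
        "Condition": {
            "StringNotLike": {"aws:Referer": ["http://mydomain.com/*",
                                              "http://www.mydomain.com/*"]},
            "Null": {"aws:Referer": "false"}  # "false" = the header is present
        }
    }]
}
s3.put_bucket_policy(Bucket="mybucket", Policy=json.dumps(policy))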
Related
I am trying to set up things on S3 to prevent hotlinking.
I've taken advice from here: How do I prevent hotlinking on Amazon S3 without using signed URLs?
And also this one: https://kinsta.com/blog/hotlinking/
However, I can't get it to work.
First, I block all public access to the bucket, so "Block all public access" is turned on under the bucket's Permissions tab.
I have set the policy like this:
{
"Version": "2008-10-17",
"Id": "HTTP referer policy example",
"Statement": [
{
"Sid": "prevent hotlinking",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://example.co.uk/*",
"http://www.example.co.uk/*",
"https://example.co.uk/*",
"https://www.example.co.uk/*",
"http://localhost/01-example/*"
]
}
}
}
]
}
However, when I try to access content from the bucket from the referring site, I cannot see the S3 content.
What am I doing wrong?
I block all public access to the bucket, so "Block all public access" is turned on under the bucket's Permissions tab.
That's why it does not work. Your policy allows public/anonymous access ("Principal": {"AWS": "*"}), but at the same time you explicitly block all public access. You have to enable public access first. From the docs:
Before you use a bucket policy to grant read-only permission to an anonymous user, you must disable block public access settings for your bucket.
The Block Public Access options override any other configuration you're using, so your bucket policy will not take effect while they are all enabled.
To allow your policy to work you will need to disable this; you might choose to keep some of the options enabled to prevent further changes being made to the bucket policy.
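If you prefer doing this from code rather than the console, it looks roughly like the following with boto3 (a sketch only; the bucket name is taken from the policy above).

import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="mybucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,          # keep ACL-based public access blocked
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,       # allow a public bucket policy to be saved
        "RestrictPublicBuckets": False,   # allow that public policy to take effect
    },
)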
On a related note to your policy: the Referer header can easily be faked to access these assets, so it should not be treated as a silver bullet.
Another solution would be to use an S3 pre-signed URL, or to put a CloudFront distribution in front of your S3 bucket and then make use of a signed cookie.
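CloudFront signed cookies and signed URLs share the same signing mechanism; the signed-URL flavour is the easier one to sketch because botocore ships a helper for it. The key-pair ID, key file and distribution domain below are placeholders, and the rsa package is assumed to be installed.

import datetime

import rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign the CloudFront policy with the private key of your CloudFront key pair.
    with open("cloudfront_private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)   # placeholder key-pair ID
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/archie.jpg",      # placeholder distribution/object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(url)  # hand this URL to the browser; it stops working after an hour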
I am creating a video-on-demand platform similar to Netflix. I want the users who have purchased a subscription to be able to watch my videos (not download them). I also do not want users to be able to copy the source of the video URL and access it through a new tab (this is working for now; it says access denied).
So what I have done for now is this: I have copied the official example from Amazon's documentation, which allegedly only allows the content (in my case the video) to be played on the website that I specify. This is the policy:
{
"Version": "2008-10-17",
"Id": "Policy1408118342443",
"Statement": [
{
"Sid": "Stmt1408118336209",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"https://mywebsite/*",
"https://mywebsite/*"
]
}
}
}
]
}
So what happened was: I was not able to play the video on my site, and I was not able to access the video by direct URL either. I have tried selecting the video file and allowing "Read object" for public access, but that only made my video directly accessible by URL, which I don't want.
My "Block public access" permissions are all currently off, because if they are On I cannot edit the bucket policy (it says "Access denied" when I hit save).
My question is: how do I protect my video content from bandwidth theft? I don't want a person to buy my membership and then send the direct video link to his friends so everyone can watch. According to Amazon this should be possible, so what is the problem?
Also, I am planning to use CloudFront after I solve this issue, so hopefully that won't interfere.
I have my assets (images/videos etc) stored in S3 and everything is working great.
The videos, however, need to be safe from download by the user. I have thought about numerous approaches using Ajax, blobs, hidden context menus, etc., but would prefer a simpler yet stronger technique.
The idea I've thought of is to add protection on the S3 bucket so that the assets can only be accessed from the website itself (via an IAM role that the EC2 instance has access to).
I'm just unsure how this works. The bucket is set to static website hosting, so everything in it is public; I'm guessing I need to change that and then add some direct permissions. Has anyone done this, or can anyone provide info on whether this is possible?
Thanks
You can serve video content through Amazon CloudFront, which can deliver it over streaming protocols rather than as plain file downloads. This can keep your content (mostly) safe.
See: On-Demand and Live Streaming Video with CloudFront - Amazon CloudFront
You would then keep the videos private in S3, but use an Origin Access Identity that permits CloudFront to access the content and serve it to users.
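A rough sketch of the S3 side of that, assuming boto3 (the bucket name is a placeholder, and the OAI still has to be attached to the distribution's origin separately):

import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Create the Origin Access Identity that CloudFront will use to fetch objects.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "video-bucket-oai-1",   # any unique string
        "Comment": "OAI for the private video bucket",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Grant read access to the OAI only, so viewers must come through CloudFront.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIRead",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::mybucket/*",      # placeholder bucket
    }],
}
s3.put_bucket_policy(Bucket="mybucket", Policy=json.dumps(policy))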
In addition to the already mentioned CloudFront, you could also use AWS Elastic Transcoder (https://aws.amazon.com/elastictranscoder/) to convert video files to the MPEG-DASH or HLS format (https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP). These formats basically consist of short (for example 10-second) video segments, allowing adaptive bitrate and making it much harder to download the content as one long video.
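If it helps, submitting such a transcode job with boto3 looks roughly like this. The pipeline ID, object keys and preset ID are placeholders; look up the real values in your account with list_pipelines() and list_presets().

import boto3

et = boto3.client("elastictranscoder")

et.create_job(
    PipelineId="1111111111111-abcd12",           # placeholder pipeline ID
    Input={"Key": "uploads/lecture.mp4"},         # source object in the pipeline's input bucket
    OutputKeyPrefix="hls/lecture/",
    Outputs=[{
        "Key": "video",
        "PresetId": "1351620000001-200010",       # an HLS system preset; verify with list_presets()
        "SegmentDuration": "10",                  # roughly 10-second segments
    }],
    Playlists=[{
        "Name": "index",                          # produces index.m3u8
        "Format": "HLSv3",
        "OutputKeys": ["video"],
    }],
)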
For CloudFront to work with S3 static website endpoints, AWS generally recommends having public read permissions on the S3 bucket. There is no native way of securing traffic between CloudFront and an S3 static website endpoint; however, in this case we can use a workaround to satisfy your use case.
By default, all S3 resources are private, so only the AWS account that created the resources can access them. To allow read access to these objects from your website, you can add a bucket policy that allows the s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests referred by www.example.com and example.com.",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
},
{
"Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringNotLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
}
]
}
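To sanity-check the policy you can fire a couple of GETs with different Referer headers, for example with the Python requests library (the object URL is a placeholder). It also illustrates why a spoofed Referer defeats this kind of protection.

import requests

url = "https://examplebucket.s3.amazonaws.com/archie.jpg"  # placeholder object

# Allowed: the Referer matches the whitelist in the Allow statement.
ok = requests.get(url, headers={"Referer": "http://www.example.com/page.html"})
print(ok.status_code)       # expected 200

# Denied: any other referer falls through to the explicit Deny.
blocked = requests.get(url, headers={"Referer": "http://other-site.example/"})
print(blocked.status_code)  # expected 403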
I have an Amazon S3 bucket that contains items. These are currently accessible by anyone with a link. The link includes a UUID, so the chances of someone actually guessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only when I'm clicking on the link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests referred by www.example.com and example.com.",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
},
{
"Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::examplebucket/*",
"Condition": {
"StringNotLike": {"aws:Referer": ["http://www.example.com/*","http://example.com/*"]}
}
}
]
}
The prerequisite for using this setup would be to build an S3 link wrapper service and host it at some site for your app.
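One way to read "link wrapper service" is a small endpoint in your app that fetches the object server-side with the app's own credentials and streams it back, so the raw S3 URL is never handed to the browser at all. A very rough sketch, assuming Flask and boto3, with placeholder names:

import boto3
from flask import Flask, Response, abort

app = Flask(__name__)
s3 = boto3.client("s3")
BUCKET = "examplebucket"  # placeholder bucket name

@app.route("/assets/<path:key>")
def serve_asset(key):
    # Fetch the private object using the app's credentials and stream it back.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=key)
    except s3.exceptions.NoSuchKey:
        abort(404)
    return Response(obj["Body"].read(),
                    content_type=obj.get("ContentType", "application/octet-stream"))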
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL into the HTML link code (e.g. for an image, you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
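For example, with boto3 it is essentially a one-liner (the bucket and key here are placeholders):

import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "examplebucket", "Key": "images/archie.jpg"},
    ExpiresIn=300,  # the link stops working after 5 minutes
)
# Embed the result in the page, e.g. <img src="..."> or <a href="...">.
print(url)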
Keep your Amazon S3 bucket private so that other people cannot access the content directly. Anyone with a valid pre-signed URL, however, will still get the content.
I've set up an S3 bucket to allow anonymous uploads. The goal is to allow uploading but not downloading, but the problem I have found is that not only can I not block downloading of these files, but I do not own them and cannot delete, copy, or manipulate them in any way. The only way I am able to get rid of them is to delete the bucket.
Here is the policy:
{
"Version": "2008-10-17",
"Id": "policy",
"Statement": [
{
"Sid": "allow-public-put",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::legendstemp/*"
}
]
}
This works, but I no longer have access to these files using either the Amazon Console or programmatically.
The bucket policy also does not apply to these files because the file owner is not the same as the bucket owner. I cannot take ownership of them either.
How can I set up a bucket policy to allow anonymous upload but not download?
I know it's been a while since this was asked, but I came across it while trying to get this to work myself, and wrote about it at some length in:
https://gist.github.com/jareware/d7a817a08e9eae51a7ea
The gist of it (heh!) is that you can allow anonymous uploads and disallow the other actions, but you then won't be able to carry out those actions on the uploaded objects even with authenticated requests. At least as far as I can tell.
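For reference, the anonymous upload side is just an unauthenticated PUT; a quick sketch with the Python requests library, using the bucket name from the question (the key is a placeholder):

import requests

url = "https://legendstemp.s3.amazonaws.com/uploads/test.txt"
resp = requests.put(url, data=b"hello from an anonymous client")
print(resp.status_code)  # 200 when the allow-public-put policy accepts it

# A follow-up GET (anonymous or as the bucket owner) is what comes back 403,
# because the uploaded object belongs to the anonymous uploader.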
Hope this helps.