Amazon S3 Permissions by Client code - amazon-web-services

I've been converting an existing application to an EC2, S3 and RDS model within AWS. So far it's going well, but I've hit a problem I can't seem to find any info on.
My web application accesses S3 for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, e.g. www.programname.com/ClientCode1/Index.aspx, and this has working security to prevent cross-client access.
Now, when www.programname.com/ClientCode1/Index.aspx goes to fetch images from S3, I need to make sure it can only access the ClientCode1 folder. The goal is to prevent client A from seeing client B's images/documents if a technically inclined sort went looking.
Is there perhaps a way to use the page referrer, or is there a better approach to this issue?

There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not S3).
If all your users are accessing the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security. This is because the application will be using a single set of credentials to access AWS services, so those credentials will need access to all data that the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing. Only your application knows this. Therefore, your application will need to enforce the security.
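As an illustrative sketch of that application-side enforcement (the function name and prefix layout are assumptions based on the folder structure described in the question, not code from it): before fetching an object on a client's behalf, the application can verify that the requested key resolves inside that client's own prefix, which also blocks path-traversal tricks like `../`.

```python
import posixpath

def authorize_key(client_code: str, requested_key: str) -> bool:
    """Return True only if the requested S3 key resolves inside the
    client's own prefix (e.g. Data/ClientCode1/...)."""
    prefix = f"Data/{client_code}/"
    # Normalize to collapse any "../" segments an attacker might inject.
    normalized = posixpath.normpath(requested_key)
    return normalized.startswith(prefix)
```

The application would call this check (using the client code from its own session state, never from user input) before issuing the S3 request with its single set of credentials.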

I found a solution; it seems to work well:
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com/Client1/*",
                        "http://example.com/Client1/*"
                    ]
                }
            }
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.example.com/Client1/*",
                        "http://example.com/Client1/*"
                    ]
                }
            }
        }
    ]
}
This lets you check the referer to see whether the request came from a given path. In my case each client sits under their own path, and the bucket follows the same rule. In the example above, only a user coming from Client1 can access the bucket data for Client1; if I log in as Client2 and try to force an image from the Client1 path, I get access denied.
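One caveat, which an answer further down also raises: the Referer header is supplied by the client, so this policy deters casual cross-linking rather than a determined user. Any HTTP client can forge the header; a minimal sketch with Python's standard library (the object URL is a placeholder):

```python
import urllib.request

# Build a GET request for a (placeholder) S3 object URL and forge the
# Referer header the bucket policy expects. Sending this request would
# satisfy an aws:Referer condition even though the user never visited
# the page -- the header is entirely under the client's control.
req = urllib.request.Request(
    "https://clientdata.s3.amazonaws.com/Clients/Client1/image.jpg",
    headers={"Referer": "http://www.example.com/Client1/index.html"},
)
print(req.get_header("Referer"))
```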

How to setup Amazon S3 policy for ip address

I am using S3-compatible storage (DigitalOcean Spaces) to host images from my web application.
To prevent hotlinking and minimize direct downloads I applied this policy:
{
    "Id": "ip referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from my server.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "server-ip-address"
                }
            }
        }
    ]
}
The trick seemed to work: I am now unable to access the files directly. However, neither can my web application. Have I done something wrong?
Is there a way to debug the referrer or something?
Content is private by default. Your policy is not granting any access via an Allow statement, so the content is not accessible. The Deny can be used to remove permissions granted by an Allow, but does not itself grant access.
You could change it into an Allow policy, and change NotIpAddress into IpAddress. This would grant access to your server to download content. However, it would be better to use an S3-style API call to download content from your own bucket rather than using an anonymous HTTP request.
If you are putting a link to the object in an HTML page, then the policy will provide the security you expect: the user's browser will attempt to access the object and will be denied access, since the request does not originate from your server's IP address.

The website hosted on EC2 not able to access S3 image link

I have assigned a FullS3Access role to the EC2 instance. The website on EC2 can upload and delete S3 objects, but access to the S3 asset URLs is denied (meaning the images can't be read). I have enabled the block-public-access settings. Some folders should be confidential, and only the website should be able to access them. I have tried setting conditions such as SourceIp and referer URL in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable S3 read access while also restricting it to the website only?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/assets/*"
            ]
        },
        {
            "Sid": "AllowIP",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/private/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "ip1/32",
                        "ip2/32"
                    ]
                }
            }
        }
    ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source of the request would not be the EC2 server; it would be the user's browser.
If you want to restrict assets while still allowing the user to see them in the browser, there are a few options.
The first option would be to generate a pre-signed URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. It requires generation whenever the asset is needed, which works well for sensitive information that is not accessed frequently.
The second option would be to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
If all assets should only be accessed from your web site but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket, configured with a rule that only allows requests where the "Referer" header matches your domain. This can still be bypassed by someone setting that header in the request, but it will reduce the number of crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.
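As a rough sketch of such a rule in AWS WAFv2's JSON form (the name, priority, and domain are illustrative assumptions; check the current WAF documentation for the exact schema), a rule that blocks requests whose Referer does not contain your domain might look like:

```json
{
    "Name": "BlockNonMatchingReferer",
    "Priority": 0,
    "Action": { "Block": {} },
    "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "BlockNonMatchingReferer"
    },
    "Statement": {
        "NotStatement": {
            "Statement": {
                "ByteMatchStatement": {
                    "FieldToMatch": { "SingleHeader": { "Name": "referer" } },
                    "SearchString": "example.com",
                    "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ],
                    "PositionalConstraint": "CONTAINS"
                }
            }
        }
    }
}
```

As noted above, this raises the bar against hotlinking and crawlers but is not real security, since the header is client-controlled.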

Protecting S3 assets from direct download

I have my assets (images/videos etc) stored in S3 and everything is working great.
The videos, however, need to be safe from download by the user. I have thought about numerous ways using Ajax and blobs and hiding context menus, etc., but would prefer a simpler but stronger technique.
The idea I've thought of is to add protection on the S3 bucket so that the assets can only be accessed from the website itself (via an IAM role that the EC2 instance has access to).
I'm just unsure how this works. The bucket is set to static website hosting, so everything in it is public; I'm guessing I need to change that and then add some direct permissions. Has anyone done this, or can anyone provide info on whether it's possible?
Thanks
You can serve video content through Amazon CloudFront, which serves content using video protocols rather than as file downloads. This can keep your content (mostly) safe.
See: On-Demand and Live Streaming Video with CloudFront - Amazon CloudFront
You would then keep the videos private in S3, but use an Origin Access Identity that permits CloudFront to access the content and serve it to users.
In addition to the already mentioned CloudFront, you could also use AWS Elastic Transcoder (https://aws.amazon.com/elastictranscoder/) to convert video files to MPEG-DASH or HLS format (https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP). These formats basically consist of short video segments (for example, 10 seconds long), allowing adaptive bitrate and making the content much harder to download as one long video.
For CloudFront to work with S3 static website endpoints, AWS generally recommends public read permissions on the S3 bucket. There is no native way of securing access between CloudFront and an S3 static website endpoint; however, in this case we can use a workaround to satisfy your use case.
By default, all S3 resources are private, so only the AWS account that created them can access them. To allow read access to these objects from your website, you can add a bucket policy that allows s3:GetObject permission with a condition, using the aws:Referer key, that the GET request must originate from specific webpages. The following policy specifies the StringLike condition with the aws:Referer condition key.
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        }
    ]
}

Items in my Amazon S3 bucket are publicly accessible. How do I restrict access so that the Bucket link is only accessible from within my app?

I have an Amazon S3 bucket that contains items. These are accessible by anyone at the moment with a link. The link includes a UUID so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only when clicking the link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "http://www.example.com/*",
                        "http://example.com/*"
                    ]
                }
            }
        }
    ]
}
The prerequisite for this setup would be to build an S3 link-wrapper service and host it at some site for your app.
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL into the HTML link code (e.g. for an image: <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket as private so that other people cannot access the content. Then, anyone with a valid pre-signed URL will get the content.
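As a sketch of what those SDK helpers do under the hood (stdlib-only Python; the bucket, key, region, and credentials are placeholders, and this assumes a virtual-hosted-style URL for a GET with only the host header signed, using Signature Version 4 query-string signing):

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(bucket, key, access_key, secret_key, region="us-east-1",
                expires=300, now=None):
    """Build a time-limited S3 GET URL via SigV4 query-string signing."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # The canonical query string must be sorted and strictly percent-encoded.
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical = "\n".join([
        "GET",
        "/" + urllib.parse.quote(key),
        query,
        f"host:{host}\n",    # canonical headers end with a newline
        "host",              # signed headers list
        "UNSIGNED-PAYLOAD",  # payload is not signed for presigned GETs
    ])
    to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key: an HMAC chain over date, region and service.
    k = b"AWS4" + secret_key.encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"https://{host}/{urllib.parse.quote(key)}?"
            f"{query}&X-Amz-Signature={signature}")
```

Anyone holding the resulting URL can fetch the object until the expiry passes, after which S3 returns Access Denied; in practice you would use the SDK's own presign helper rather than hand-rolling this.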

What to write in bucket policy for secure my video

I hosted my videos on Amazon S3 to sell online courses, like Udemy.
Can you guide me on what bucket policy I need to secure my videos so that students can view them but can't download them, and so that no one else can find the URL for a video? What should I write in the bucket policy? And which player do I need on my WordPress website to play these videos? Please help me out.
{
    "Version": "2008-10-17",
    "Id": "Policy1414368633278",
    "Statement": [
        {
            "Sid": "Stmt1414368595009",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOURBUCKETNAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "http://YOURDOMAINNAME.com/*"
                }
            }
        }
    ]
}
A bucket policy alone is not sufficient to secure your content as you describe.
You will require some application logic to determine whether a user is permitted to access the object. If the application then wishes to grant access, it can create a time-limited pre-signed URL. This allows the object to be accessed for a specific time period, after which access is denied.
Companies like Udemy implement their own form of access control. If you were to supply a video to them, they would host it and control access.
Bottom line: You need an application to control access, which then provides a link that tells Amazon S3 to grant access to the object.