I've spent over a week on this issue, and Amazon does not provide any resources that can answer this for me. I have built a custom CMS that allows thousands of users to upload their own files. Those files need to be migrated to a CDN, as they are beginning to overwhelm the file system at near 50GB. I have already integrated with Amazon's S3 PHP SDK. My application must be able to do the following:
Create and remove buckets through the API, not the console. This is working.
Perform all CRUD operations on the files uploaded to the buckets, again through the API rather than the console. Creating and removing files is working.
As part of CRUD, these files must then be readable via HTTP/HTTPS as they are required assets in the web application. These are all registering as 'Access Denied', due to the buckets not being public by default.
As I understand it, the point of a CDN is that Content can be Delivered via a Network. I need to understand how to make these files visible in a web application without the use of the console, as these buckets will be dynamically created by users and it's a game-breaker to require administration to update them manually.
I'd appreciate it if someone could help me resolve this.
Objects in Amazon S3 are private by default. If you wish to make the objects public (meaning accessible to everyone in the world), there are two methods:
When uploading the objects, mark them as ACL=public-read. This Access Control List will make the object itself public. OR
Add a bucket policy to the bucket that will make the entire bucket (or, if desired, a portion of the bucket) public.
From Bucket Policy Examples - Amazon Simple Storage Service:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
    ]
}
Such a policy can be added after bucket creation by using putBucketPolicy().
Also, please be aware that Amazon S3 Block Public Access is turned on by default on buckets to prevent exposing private content. The above method will require this block to be turned off. This can be done programmatically with putPublicAccessBlock() or deletePublicAccessBlock().
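As a hedged illustration of that flow (the answer names the PHP SDK's putBucketPolicy()/putPublicAccessBlock(); shown below are the boto3/Python equivalents, with "examplebucket" standing in for a dynamically created bucket name):

```python
import json


def public_read_policy(bucket: str) -> str:
    """Build the public-read bucket policy shown above as a JSON string."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }],
    })


def make_bucket_public(s3_client, bucket: str) -> None:
    """Lift Block Public Access, then attach the public-read policy."""
    # Block Public Access is on by default and would reject the policy,
    # so it has to be relaxed first.
    s3_client.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": False,
            "IgnorePublicAcls": False,
            "BlockPublicPolicy": False,
            "RestrictPublicBuckets": False,
        },
    )
    # With the block lifted, anonymous GETs over HTTP/HTTPS will succeed.
    s3_client.put_bucket_policy(Bucket=bucket, Policy=public_read_policy(bucket))
```

Since the bucket names come from users at runtime, calling this right after bucket creation keeps the whole process console-free.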
Related
I'm new to AWS tools, and although I have tried to search thoroughly for an answer, I wasn't able to settle on a solution.
My usecase is this:
I have a bucket where I need to store images, uploading them via my server; however, I need to display them on my website.
Should my bucket be public?
If not, what should I do to allow everyone to read those images, but without allowing mass uploads to it from origins that are not my server?
If you want the images to be publicly accessible for your website, then the objects need to be public.
This can be done by creating a Bucket Policy that makes the whole bucket, or part of the bucket, publicly accessible.
Alternatively, when uploading the images, you can use ACL='public-read', which makes the individual objects public even if the bucket isn't public. This way, you can have more fine-grained control over what content in the bucket is public.
Both of these options require you to turn off portions of S3 Block Public Access to allow the Bucket Policy or ACLs.
When your server uploads to S3, it should be using Amazon S3 API calls using a set of AWS credentials (Access Key, Secret Key) from an IAM User. Grant the IAM User permission to put objects in the bucket. This way, that software can upload to the bucket totally independently to whether the bucket is public. (Never make a bucket publicly writable/uploadable, otherwise people can store anything in there without your control.)
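A minimal sketch of that upload path, assuming boto3/Python rather than any particular SDK (the bucket and key names here are made up for illustration):

```python
# Hypothetical helper: upload an object as public-read using an S3 client
# created from the IAM user's credentials. Bucket/key names are placeholders.

def upload_public_object(s3_client, bucket: str, key: str, body: bytes,
                         content_type: str = "image/jpeg") -> str:
    """Upload with ACL=public-read so the object is readable even though
    the bucket itself stays non-public."""
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=body,
        ACL="public-read",          # object-level public read
        ContentType=content_type,
    )
    # The object is then reachable at its virtual-hosted-style URL:
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# Usage (client construction assumes the IAM user's credentials):
#   import boto3
#   s3 = boto3.client("s3", aws_access_key_id=..., aws_secret_access_key=...)
#   url = upload_public_object(s3, "examplebucket", "images/cat.jpg", data)
```

Only the IAM user can write; the ACL only affects who can read the resulting object.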
upload them via my server however I need to display them on my website.
In that case, only your server can upload the images. So if you are hosting your web app on EC2 or ECS, you can use an instance role or a task role to provide S3 write access.
Should my bucket be public?
It does not have to be. Often CloudFront is used to serve images or files from S3 using an OAI (Origin Access Identity). This way your bucket remains fully private.
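For reference, the bucket policy that an OAI setup attaches looks roughly like the sketch below; the OAI ID and bucket name are placeholders, not values from this thread. Only CloudFront can read the objects, so the bucket itself stays private:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*"
        }
    ]
}
```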
To start, I'll try and make sure to supply any information that might be needed, and really appreciate any help with this issue. I've been following basic AWS Tutorials for the past couple days to try to build a basic outline for a website idea, but found myself stuck when following this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
The goal with this is to enable my website to CRUD PDF files to an S3 bucket via API Gateway.
So far, I've followed the tutorial steps, set up the S3 Bucket, and attached the role (S3FullAccess) to the different APIs. The result is that, while other requests (GET/POST) seem to be working correctly, DELETE object results in a 405 method not allowed. I've looked around a bunch (been working on this particular issue for the past couple hours) and am at the point of:
Doubting it's the policy, since the JSON shows {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
Doubting it's the S3 Bucket, as anything that looks like it could block access has been disabled
Wondering if Object ACL is the culprit, since the Grantee settings for my objects (S3 Console -> Bucket -> Object -> Permissions) shows that only the "Object owner" has permissions [Object: Read, Object ACL: Read/Write].
So now I'm trying to figure out if sending ACL configuration as part of the Gateway PUT request is the solution (and if so how). Additionally, I might be able to use a lambda function to reconfigure the object's ACL on the event trigger of a PUT request to S3, but that sounds like bad design for what's intended.
Additionally:
I'm not using Versioning, MFA, Encryption, or Object Lock
All "Block Public Access" settings are set to Off
No Bucket Policy is shown (since I'm using IAM)
AWS Regions are properly selected
Let me know if there's anything you need for additional info (such as screenshots of Gateway, IAM, or S3) and I'll update the post with them.
Thanks so much.
I recently created a static website hosted on S3, and I noticed that when users check the source of the website, they can click links that allow them to access items such as images in a separate tab. Is there a way to allow the website to access the images but limit users from accessing the source images directly?
A user simply opening assets in a tab should be fine. If you're trying to prevent the content from being accessible anywhere other than your domain, you can use the Referer header to lock it down to only your website.
This can be done in S3 via a bucket policy similar to the one below.
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": "arn:aws:s3:::awsexamplebucket1/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": ["http://www.example.com/*", "http://example.com/*"]
                }
            }
        }
    ]
}
You can always enhance this by using a CloudFront distribution combined with AWS WAF, which can include a rule that blocks by referrer.
If you're trying to lock this content down (whether someone needs to log in to see it or pay to get it), you have a couple of options.
You can create pre-signed URLs for your S3 objects and expose these in the HTML. These will be valid for a limited time (depending on the parameters passed into the generation).
You can use a CloudFront distribution with either signed cookies or signed URLs.
Sadly, you can't prevent users from downloading your images without proper authentication. If your users can see images in their browsers, they can save them, as the images have already been downloaded.
But you can limit your users from directly going to your S3 bucket. For this you can front your S3 bucket with CloudFront (CF).
Specifically, you could set up an Origin Access Identity in CF and make your website and all images accessible only through CF:
Restricting Access to Amazon S3 Content by Using an Origin Access Identity
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
How do I use CloudFront to serve a static website hosted on Amazon S3?
I just tried the new "Bucket Policy Only" setting in a preexisting test bucket. I want to be able to anonymously download objects by URL, but prevent the public from listing objects in the bucket.
If I add the Storage Object Viewer role to allUsers, then the public can both list the bucket and download objects. If I don't add that role, the public can't download files.
What's the trick? I have this working fine with the old ACL system.
It seems to work the way I want if I use the role Storage Legacy Object Reader.
It does seem odd to use something called "Legacy" for such a basic use case.
I have an HTML file which contains screenshots of automation test run. These screenshots are stored in an s3 bucket. I will be attaching this HTML file to an email. Hence will like these screenshots to be rendered and be visible to anyone who opens the HTML report on their laptop.
The challenge
- I am behind a corporate firewall, hence cannot allow public access to the S3 bucket
I can access the S3 bucket via IAM access from an EC2 instance, and will be uploading the screenshots to S3 using the same.
I am currently exploring the following options
Accessing S3 via a CloudFront URL (not sure regarding the access control policies available via CloudFront). This option will require lots of back and forth with IT, hence would be a last resort
Embed javascript in the HTML file to access a hosted service on EC2. This service then fetches the objects from S3.
You could simply set a view-only public policy (see the bottom of this answer). That will allow anyone with the correct URL to access and view the images.
Accessing S3 via a CloudFront URL (not sure regarding the access control policies available via CloudFront). This option will require lots of back and forth with IT, hence would be a last resort
In my opinion this is not the right solution; it is over-engineering.
Embed javascript in the HTML file to access a hosted service on EC2. This service then fetches the objects from S3.
In my opinion this is also unnecessary overhead.
A simpler solution would be:
Allow public GET-only access, so whoever has the correct full-path URL will be able to access the object; in your case that will be the embedded HTML report with links to S3, something like https://s3.amazonaws.com/bucket-name/somepath/somepath/../someimage.jpg
Use a complicated URL pattern so that no one can guess it easily.
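The "complicated URL pattern" idea can be sketched with the standard library alone; the path segments below are made up for illustration:

```python
import secrets


def unguessable_key(prefix: str, filename: str) -> str:
    """Prepend a 128-bit random token so the object URL cannot be enumerated."""
    token = secrets.token_urlsafe(16)   # ~22 URL-safe characters
    return f"{prefix}/{token}/{filename}"


key = unguessable_key("reports", "screenshot-001.jpg")
# e.g. "reports/<random-token>/screenshot-001.jpg"; the full S3 URL is then
# https://s3.amazonaws.com/bucket-name/ followed by the key.
```

Note this is obscurity, not authentication: anyone who obtains the link can still fetch the object, which is acceptable here since the report is meant to be openable by anyone who receives the email.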
Public access policy will look something like below.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"]
        }
    ]
}