To start, I'll try to supply any information that might be needed, and I really appreciate any help with this issue. I've been following basic AWS tutorials for the past couple of days to build a basic outline for a website idea, but I got stuck following this tutorial: https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
The goal is to enable my website to CRUD PDF files in an S3 bucket via API Gateway.
So far, I've followed the tutorial steps, set up the S3 bucket, and attached the role (with S3FullAccess) to the different API methods. The result is that, while other requests (GET/POST) seem to be working correctly, DELETE Object results in a 405 Method Not Allowed. I've looked around a lot (I've been working on this particular issue for the past couple of hours) and am at the point of:
Doubting it's the policy, since the JSON shows {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
Doubting it's the S3 Bucket, as anything that looks like it could block access has been disabled
Wondering if Object ACL is the culprit, since the Grantee settings for my objects (S3 Console -> Bucket -> Object -> Permissions) shows that only the "Object owner" has permissions [Object: Read, Object ACL: Read/Write].
So now I'm trying to figure out whether sending ACL configuration as part of the Gateway PUT request is the solution (and if so, how); a rough sketch of what I mean is just below. Alternatively, I might be able to use a Lambda function that reconfigures the object's ACL on the S3 PUT event trigger, but that sounds like bad design for what's intended.
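For clarity, here is roughly what I mean by sending ACL configuration with the upload: a minimal sketch using the AWS SDK for JavaScript v3 directly instead of my actual Gateway integration (bucket name, key, region and the chosen canned ACL are placeholders):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// The SDK sends the canned ACL as the x-amz-acl header; in my setup the API Gateway
// integration would have to map an equivalent header onto the S3 PutObject request.
await s3.send(
  new PutObjectCommand({
    Bucket: "my-example-bucket",      // placeholder
    Key: "uploads/report.pdf",        // placeholder
    Body: await readFile("./report.pdf"),
    ContentType: "application/pdf",
    ACL: "bucket-owner-full-control", // the kind of ACL I'm wondering about sending
  })
);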
Additionally:
I'm not using Versioning, MFA, Encryption, or Object Lock
All "Block Public Access" settings are set to Off
No Bucket Policy is shown (since I'm using IAM)
AWS Regions are properly selected
Let me know if there's anything you need for additional info (such as screenshots of Gateway, IAM, or S3) and I'll update the post with them.
Thanks so much.
Related
I have followed this guide on how to deploy my Strapi app on AWS. I have also read other Strapi guides on the same subject, all of which configure the S3 interaction in exactly the same way.
Everything works fine, except the previews/downloads of images from S3. Uploads work as intended.
For the previews, I first had issues with CSP, but after changing my config/middlewares.ts to something similar to this answer, that seems to work. At least I assume so, because the CSP error disappeared; instead, I started getting GET https://<bucket>.amazonaws.com/<file>.jpg?width=736&height=920 403 (Forbidden)...
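For reference, my config/middlewares.ts now looks roughly like the sketch below, which follows the commonly suggested Strapi security-middleware shape (the bucket hostname is a placeholder for mine):

// config/middlewares.ts: sketch, only the strapi::security entry is customised
export default [
  "strapi::errors",
  {
    name: "strapi::security",
    config: {
      contentSecurityPolicy: {
        useDefaults: true,
        directives: {
          // Let the admin panel load image/media previews straight from the S3 bucket.
          "img-src": ["'self'", "data:", "blob:", "my-bucket.s3.amazonaws.com"],
          "media-src": ["'self'", "data:", "blob:", "my-bucket.s3.amazonaws.com"],
        },
      },
    },
  },
  "strapi::cors",
  "strapi::poweredBy",
  "strapi::logger",
  "strapi::query",
  "strapi::body",
  "strapi::session",
  "strapi::favicon",
  "strapi::public",
];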
My guess is that there's something wrong with my S3 permissions settings, but they are exactly as instructed in the guide above (my first link):
Block public access: (shown as a screenshot in the original post; the first two checkboxes are unchecked)
I haven't touched the Bucket policy, Object ownership, ACL and CORS settings, so they look as follows:
Bucket policy: none
Object Ownership: Bucket owner preferred (as instructed by the guide above).
ACL: "Bucket owner (your AWS account)" has List, Write access on Objects, and Read, Write on Bucket ACL. The other roles (Everyone, Authenticated users group, S3 log delivery group) have no access whatsoever.
CORS: None
I have configured the Strapi application with the credentials (access key id + access key secret) of the IAM user which is browsing the above settings (bucket owner).
I could of course fiddle with the S3 settings (like unchecking ALL the boxes under "Block public access" and opening READ access for "Everyone" under "ACL"), but I don't want to be less restrictive than what the available guides specify...
Can anyone see anything that looks off in this setup?
I initially found some additional information about the expected S3-side configuration (beyond what was present in the guides) at the bottom of the upload-aws-s3 provider page, so I added the specified policy actions and CORS config. However, I still got 403 when trying to preview the uploaded images in the deployed admin panel...
I finally got it working, by accident, a day later while testing different bucket settings: I temporarily blocked all public access (checked all four checkboxes), and then unchecked the first two checkboxes again (as specified in the image in my original post).
I guess the policy & CORS settings weren't properly applied when I first changed them, and just needed a shake (re-saving the settings) in order to take effect...
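For anyone who would rather set this through the API than by clicking around in the console, my understanding is that the "first two checkboxes" correspond to the BlockPublicAcls and IgnorePublicAcls flags. A minimal sketch with the AWS SDK for JavaScript v3 (bucket name and region are placeholders):

import { S3Client, PutPublicAccessBlockCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-north-1" }); // placeholder region

await s3.send(
  new PutPublicAccessBlockCommand({
    Bucket: "my-strapi-uploads", // placeholder bucket name
    PublicAccessBlockConfiguration: {
      // "First two checkboxes" unchecked: ACL-based public access is allowed.
      BlockPublicAcls: false,
      IgnorePublicAcls: false,
      // Last two checkboxes left checked: policy-based public access stays blocked.
      BlockPublicPolicy: true,
      RestrictPublicBuckets: true,
    },
  })
);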
I've spent over a week on this issue, and Amazon does not provide any resources that answer it for me. I have built a custom CMS that allows thousands of users to upload their own files. Those files need to be migrated to a CDN, as they are beginning to overwhelm the file system at nearly 50 GB. I have already integrated Amazon's S3 PHP SDK. My application must be able to do the following:
Create and remove buckets through the API, not the console. This is working.
Perform all CRUD operations on the files uploaded to the buckets, again explicitly through the API rather than the console. Creating and removing files is working.
As part of CRUD, these files must then be readable via HTTP/HTTPS, as they are required assets in the web application. These requests all return 'Access Denied', because the buckets are not public by default.
As I understand it, the point of a CDN is that Content can be Delivered via a Network. I need to understand how to make these files visible in a web application without using the console, because the buckets are created dynamically by users and requiring an administrator to update them manually is a game-breaker.
I'd appreciate it if someone could help me resolve this.
Objects in Amazon S3 are private by default. If you wish to make the objects public (meaning accessible to everyone in the world), there are two methods:
When uploading the objects, mark them as ACL=public-read. This Access Control List will make the object itself public (see the sketch after this list). OR
Add a bucket policy to the bucket that will make the entire bucket (or, if desired, a portion of the bucket) public.
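A minimal sketch of the first option. The question uses Amazon's PHP SDK, where the same PutObject call with a canned ACL exists; this version uses the AWS SDK for JavaScript v3 with placeholder names:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" });

// Upload the object with the public-read canned ACL so anyone can GET it over HTTP/HTTPS.
await s3.send(
  new PutObjectCommand({
    Bucket: "examplebucket",      // placeholder
    Key: "uploads/user-file.png", // placeholder
    Body: await readFile("./user-file.png"),
    ACL: "public-read",
  })
);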
From Bucket Policy Examples - Amazon Simple Storage Service:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}
Such a policy can be added after bucket creation by using putBucketPolicy().
Also, please be aware that Amazon S3 Block Public Access is turned on by default on buckets to prevent exposing private content. The above method will require this block to be turned off. This can be done programmatically with putPublicAccessBlock() or deletePublicAccessBlock().
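A sketch of both steps together (the question uses the PHP SDK; this shows the equivalent calls with the AWS SDK for JavaScript v3, with a placeholder bucket name):

import {
  S3Client,
  DeletePublicAccessBlockCommand,
  PutBucketPolicyCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const bucket = "examplebucket"; // placeholder

// 1. Remove the bucket-level Block Public Access configuration so a public policy is accepted.
await s3.send(new DeletePublicAccessBlockCommand({ Bucket: bucket }));

// 2. Attach the PublicRead bucket policy shown above.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "PublicRead",
      Effect: "Allow",
      Principal: "*",
      Action: ["s3:GetObject"],
      Resource: [`arn:aws:s3:::${bucket}/*`],
    },
  ],
};
await s3.send(
  new PutBucketPolicyCommand({ Bucket: bucket, Policy: JSON.stringify(policy) })
);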
At first I hard-coded my AWS "accessKey" and "securityKey" in a client-side JS file, but that was very insecure, so I read about AWS Cognito and implemented new JS in the following manner:
I'm still confused about one thing: can someone hack into my S3 with the "AWS-cognito-identity-poolID" that is hard-coded? Are there any other security steps I should take?
Thanks,
Jaikey
Definition of Hack
I am not sure what hacking means in the context of your question.
I assume that you actually mean "that anyone can do something other than uploading a file", which includes deleting or accessing objects inside your bucket.
Your solution
As Ninad already mentioned, you can use your current approach by enabling "Enable access to unauthenticated identities" [1]. You will then need to create two roles, one of which is for "unauthenticated users". You could grant that role PutObject permission on the S3 bucket. This would allow everyone who visits your page to upload objects to the bucket, which I think is what you intend, and it is fine from a security point of view since the IdentityPoolId is a public value (i.e. not confidential). A sketch of that setup is below.
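This is only a sketch of the client-side piece, not a drop-in implementation, using the AWS SDK for JavaScript v3; the identity pool ID, region and bucket are placeholders:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

const region = "us-east-1"; // placeholder

// The identity pool ID is a public value; unauthenticated visitors only receive
// the permissions of the "unauthenticated" role (here: PutObject on one bucket).
const s3 = new S3Client({
  region,
  credentials: fromCognitoIdentityPool({
    identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder
    clientConfig: { region },
  }),
});

await s3.send(
  new PutObjectCommand({
    Bucket: "my-upload-bucket", // placeholder
    Key: `uploads/${Date.now()}.txt`,
    Body: "uploaded from the browser", // in a real page, a File/Blob from an <input type="file">
  })
);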
Another solution
I suspect you do not need to use Amazon Cognito to achieve what you want. It is probably sufficient to add a bucket policy to S3 that grants PutObject permission to everyone.
Is this secure?
However, I would not recommend enabling direct public write access to your S3 bucket.
If someone abuses your website by spamming your upload form, you will incur S3 charges for PUT operations and data storage.
A better approach would be to send the data through Amazon CloudFront and apply a WAF with rate-based rules [2], or to implement a custom rate-limiting service in front of your S3 upload. This would ensure that you can react appropriately to malicious activity.
References
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
[2] https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Yes, the S3 bucket is secure if you are using it through an AWS Cognito identity pool on the client side. Also enable CORS so that cross-origin actions are allowed only from your specific domain; that way, if someone tries a direct browser upload or bucket listing from elsewhere, they will get "access denied". A sketch of such a CORS configuration is below.
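A sketch of a CORS configuration limited to one origin, applied via the AWS SDK for JavaScript v3 (the bucket name and domain are placeholders; note that CORS is enforced by browsers, so it complements rather than replaces the IAM and bucket permissions):

import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

await s3.send(
  new PutBucketCorsCommand({
    Bucket: "my-upload-bucket", // placeholder
    CORSConfiguration: {
      CORSRules: [
        {
          // Only browser requests originating from this domain may upload.
          AllowedOrigins: ["https://www.example.com"],
          AllowedMethods: ["PUT", "POST"],
          AllowedHeaders: ["*"],
          MaxAgeSeconds: 3000,
        },
      ],
    },
  })
);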
Also make sure that you set the file read/write permissions on the hard-coded credentials so that they can only be read by the local node and nobody else. By the way, the answer is always yes; it is only a matter of how much effort someone is willing to put into "hacking". Follow what people have said here, and you are safe.
I am trying to use the Lambda@Edge functions from the article below on an already existing S3 bucket and its CloudFront distribution:
https://aws.amazon.com/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/
I can reach the images, but whenever I try to resize them I get an "Access Denied" error.
S3 bucket is publicly readable.
In the bucket policy I gave PutObject and GetObject permissions to both the public and the IAM role the Lambda functions are using.
I have attached various Lambda policies to the functions' IAM role, as you can see below:
AWSLambdaFullAccess, CloudFrontFullAccess, AdministratorAccess, AWSLambdaExecute, AWSLambdaBasicExecutionRole, AWSLambdaRole
The distribution's viewer protocol policy allows both HTTP and HTTPS, so the request protocol shouldn't be a problem.
Can anyone help? I am going crazy :(
I followed the same article and had the same problem. For me, the query string was not being forwarded to the origin-response function. The function just returns the original response (403, even though I made the bucket public) when no query string is found. The article forwards the query string via cache settings in the CloudFront configuration, which are now legacy (the article was authored on 20 Feb 2018).
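A stripped-down illustration of that pattern (not the article's actual code; the parameter names are placeholders): the origin-response handler only does its resizing work when the parameters are present in the query string, so if CloudFront doesn't forward the query string, the original 403 response is passed straight through.

// Simplified origin-response handler showing why a missing query string
// means the original (403) response comes back unchanged.
import type { CloudFrontResponseHandler } from "aws-lambda";

export const handler: CloudFrontResponseHandler = async (event) => {
  const { request, response } = event.Records[0].cf;

  // Resize parameters arrive in the query string, but only if CloudFront
  // is configured to forward the query string to the origin.
  const params = new URLSearchParams(request.querystring);
  if (!params.has("width") || !params.has("height")) {
    return response; // nothing to do: the original response is returned as-is
  }

  // ...fetch the source image from S3, resize it, and return the resized body here...
  return response;
};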
You can either configure query string forwarding via the now-legacy cache settings, or, preferably, use the new origin request policy, which lets you explicitly control the parameters that are sent. You can use the Managed-AllViewer policy to forward all headers, cookies and query strings, or create your own policy to suit your needs.
It's been a long time since the question was posted, but I hope this helps someone who hits the issue because of the changed configuration.
I have an HTML file that contains screenshots from an automation test run. The screenshots are stored in an S3 bucket. I will be attaching this HTML file to an email, so I would like the screenshots to render and be visible to anyone who opens the HTML report on their laptop.
The challenge
- I am behind a corporate firewall, hence I cannot allow public access to the S3 bucket
I can access the S3 bucket via IAM access from an EC2 instance, and I will be uploading the screenshots to S3 the same way.
I am currently exploring the following options
Accessing S3 via a CloudFront URL (I'm not sure about the access control policies available via CloudFront). This option would require lots of back and forth with IT, hence it would be a last resort.
Embedding JavaScript in the HTML file to call a hosted service on EC2, which then fetches the objects from S3.
You could simply set a view-only public policy (see the bottom of this answer). That will allow anyone with the correct URL to access and view the images.
Accessing S3 via a CloudFront URL (I'm not sure about the access control policies available via CloudFront). This option would require lots of back and forth with IT, hence it would be a last resort.
In my opinion this is not the right solution; it is over-engineering.
Embedding JavaScript in the HTML file to call a hosted service on EC2, which then fetches the objects from S3.
In my opinion this is also unnecessary overhead that you would be taking on.
The simple solution would be:
Allow public GET-only access, so whoever has the full, correct URL will be able to access the object. In your case the embedded HTML report will contain S3 links, something like https://s3.amazonaws.com/bucket-name/somepath/somepath/../someimage.jpg
Use a complicated URL pattern so that no one could easily guess it, hence no security breach (a small sketch of generating such keys is below).
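A small sketch of what a hard-to-guess key could look like when uploading a screenshot (AWS SDK for JavaScript v3; the bucket name is a placeholder):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { randomUUID } from "node:crypto";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" });

// A random UUID segment makes the object URL effectively unguessable,
// even though anonymous GET access on the bucket is allowed.
const key = `reports/${randomUUID()}/screenshot-001.png`;

await s3.send(
  new PutObjectCommand({
    Bucket: "bucket-name", // placeholder
    Key: key,
    Body: await readFile("./screenshot-001.png"),
    ContentType: "image/png",
  })
);

// The HTML report then embeds the full URL, e.g.:
console.log(`https://s3.amazonaws.com/bucket-name/${key}`);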
The public access policy will look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}