I have two buckets, and someone else set up the permissions for them. One allows uploads and the other doesn't. I checked the permissions on both, and neither has a bucket policy or CORS configuration that I can see. These are the permissions for the one that allows uploads:
I've opened up the permissions even more for the other bucket, but it still doesn't allow uploads.
Besides those places, is there somewhere else you would set permissions that I'm missing? The Amazon docs just talk about these settings and the bucket policy, but as I said, the bucket policy and CORS configuration for the one that is working are blank. I'm not sure what I need to do here.
Do you have anything under Identity & Access Management (IAM)? There might be a policy preventing you from accessing the bucket.
Here is a link explaining how IAM policies work:
http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3
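As a concrete illustration (the bucket name below is a placeholder, not from the question), an IAM policy that permits uploads to one bucket needs a statement roughly like this; if none of the user's attached policies contains such a statement, uploads fail even when the bucket itself looks wide open:

```javascript
// Sketch of an IAM policy that allows uploads to one bucket.
// "my-upload-bucket" is a placeholder name, not from the question.
const uploadPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: ["s3:PutObject"],
      // Object-level actions apply to keys, hence the "/*" suffix.
      Resource: ["arn:aws:s3:::my-upload-bucket/*"]
    }
  ]
};
console.log(JSON.stringify(uploadPolicy, null, 2));
```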
I don't know what the issue was, but the original app was written to use the East Coast data center. When I changed the bucket to that location, it suddenly started working again. My only assumption is that someone must have hard-coded it somewhere.
I have followed this guide on how to deploy my Strapi app on AWS. I have also read other Strapi guides on the same subject, all having the exact same way of configuring the S3 interaction.
Everything works fine, except the previews/downloads of images from S3. Uploads work as intended.
For the previews, I first had issues with CSP, but after having changed my config/middlewares.ts to something similar to this answer, that seems to work. At least I guess so, because the CSP error disappeared, but instead I started getting GET https://<bucket>.amazonaws.com/<file>.jpg?width=736&height=920 403 (Forbidden)...
My guess is that there's something wrong with my S3 permissions settings, but they are exactly as instructed in the guide above (my first link):
Block public access:
I haven't touched the Bucket policy, Object ownership, ACL and CORS settings, so they look as follows:
Bucket policy: none
Object Ownership: Bucket owner preferred (as instructed by the guide above).
ACL: "Bucket owner (your AWS account)" has List, Write access on Objects, and Read, Write on Bucket ACL. The other roles (Everyone, Authenticated users group, S3 log delivery group) have no access whatsoever.
CORS: None
I have configured the Strapi application with the credentials (access key id + access key secret) of the IAM user which is browsing the above settings (bucket owner).
I could of course fool around with the S3 settings (like unchecking ALL the boxes under "Block public access" and opening READ access for "Everyone" under "ACL"), but I don't want to be less restrictive than what the available guides specify...
Can anyone see anything that looks off in this setup?
I eventually found more information about the configuration expected on the S3 side than was present in the guides, at the bottom of the upload-aws-s3 provider page. So I added the specified policy actions and CORS config. However, I still got 403 when trying to preview the uploaded images in the deployed admin panel...
I finally got it working by accident a day later while testing different bucket settings. I temporarily blocked all public access (checked all four checkboxes), and then unchecked the first two checkboxes again (as specified in the image in my original post).
I guess the policy & CORS settings weren't properly applied when I first changed them, and just needed a shake (re-saving the settings) in order to take effect...
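For anyone comparing settings: a CORS rule set along the lines of what the provider page describes might look like the sketch below (the allowed origin is a placeholder for your own frontend URL; previews only need read access):

```javascript
// Sketch of a CORS rule set for previewing uploads from the admin panel.
// The origin is a placeholder; previews only require GET/HEAD.
const corsRules = [
  {
    AllowedOrigins: ["https://my-strapi-app.example.com"], // placeholder
    AllowedMethods: ["GET", "HEAD"],
    AllowedHeaders: ["*"],
    MaxAgeSeconds: 3000
  }
];
console.log(JSON.stringify({ CORSRules: corsRules }));
```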
I have an Amazon S3 bucket that is being used by CloudTrail.
However, the S3 bucket is not visible in S3.
When I click on the bucket in CloudTrail, it links to S3 but I get access denied.
The bucket is currently in use by CloudTrail, and based on the icons that seems to be working fine.
So, it seems this is an existing bucket but I cannot access it!
I also tried to access the S3 bucket with the root account, but the same issue occurs there.
Please advise on how I would regain access.
Just because CloudTrail has access to the bucket doesn't mean your account does too.
You would need to talk to whoever manages your security and request access. Or, if this is your account, make sure you are logged in with credentials that have the proper access.
At first I hard-coded my AWS "accessKey" and "securityKey" in a client-side JS file, but that was very insecure, so I read about AWS Cognito and implemented new JS in the following manner:
I am still confused about one thing: can someone hack into my S3 bucket using the hard-coded "AWS-cognito-identity-poolID"? Are there any other security steps I should take?
Thanks,
Jaikey
Definition of Hack
I am not sure what hacking means in the context of your question.
I assume that you actually mean "can anyone do something other than uploading a file", which includes deleting or accessing objects inside your bucket.
Your solution
As Ninad already mentioned above, you can use your current approach by enabling "Enable access to unauthenticated identities" [1]. You will then need to create two roles of which one is for "unauthenticated users". You could grant that role PutObject permissions to the S3 bucket. This would allow everyone who visits your page to upload objects to the S3 bucket. I think that is what you intend and it is fine from a security point of view since the IdentityPoolId is a public value (i.e. not confidential).
Another solution
I guess, you do not need to use Amazon Cognito to achieve what you want. It is probably sufficient to add a bucket policy to S3 which grants permission for PutObject to everyone.
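Such a bucket policy might look like the sketch below (the bucket name is a placeholder); note the caveats that follow about public write access:

```javascript
// Sketch of a bucket policy granting PutObject to everyone ("*" principal).
// "my-public-upload-bucket" is a placeholder bucket name.
const publicUploadPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "AllowPublicUploads",
      Effect: "Allow",
      Principal: "*", // anyone, including unauthenticated visitors
      Action: ["s3:PutObject"],
      Resource: ["arn:aws:s3:::my-public-upload-bucket/*"]
    }
  ]
};
console.log(JSON.stringify(publicUploadPolicy));
```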
Is this secure?
However, I would not recommend enabling direct public write access to your S3 bucket.
If someone abused your website by spamming your upload form, you would incur S3 charges for PUT operations and data storage.
It would be a better approach to send the data through Amazon CloudFront and apply a WAF with rate-based rules [2] or implement a custom rate limiting service in front of your S3 upload. This would ensure that you can react appropriately upon malicious activity.
References
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
[2] https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Yes, the S3 bucket is secure if you use it through an AWS Cognito identity pool on the client side. Also enable CORS, which restricts browser requests to your specific domain, so that someone trying a cross-origin upload or bucket listing from another site gets "access denied".
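For example, a CORS rule restricted to one domain could look like this sketch (the origin is a placeholder for your own site; keep in mind CORS only governs cross-origin requests made by browsers, not requests from arbitrary clients):

```javascript
// Sketch: a CORS rule that only allows browser uploads from one origin.
// "https://www.example.com" is a placeholder for your own domain.
const corsRule = {
  AllowedOrigins: ["https://www.example.com"],
  AllowedMethods: ["PUT", "POST"],
  AllowedHeaders: ["*"]
};
console.log(JSON.stringify(corsRule));
```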
Also make sure the file holding any hard-coded credentials has its r/w permissions set so that it can only be read by the local node and nobody else. By the way, the answer is always yes; it is only a matter of how much effort someone is willing to put into a "hack". Follow what people have said here, and you are safe.
I would like to get an email notification to a specific mail ID if anyone updates permissions such as list, write, read, or make public on objects inside a specific S3 bucket.
We have a situation where multiple people in our organization are allowed to access the S3 buckets. Each can upload/download their own team-related files. While doing this, some people make mistakes such as making the whole bucket public or enabling the write and list permissions. We are unable to identify the problem when such a permission is enabled and cannot take immediate action to revoke it. To avoid this, we need a mail notification service for when someone changes the permissions on a particular S3 bucket.
Please help with how to handle this situation.
Write better permissions: attach explicit deny policies for low-level users on specific buckets, which will overrule the existing bad policies. Even if you get a notification when someone changes a bucket policy, it might take you some time to act on it. CloudTrail API detection and email notification can be done, but it will still be open to vulnerabilities. I believe efforts should be focused primarily on getting the access permissions right, and then on an event-based email trigger.
@Amit shared the correct article for that: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events | AWS Security Blog
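The detection half of that approach hinges on matching S3 management API calls recorded by CloudTrail. A minimal sketch of such a CloudWatch Events / EventBridge pattern (which you would then point at an SNS topic that emails your security contact) might be:

```javascript
// Sketch: event pattern matching ACL/policy-changing S3 API calls
// recorded by CloudTrail; a rule with this pattern can target an
// SNS topic to send the notification email.
const eventPattern = {
  source: ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  detail: {
    eventSource: ["s3.amazonaws.com"],
    eventName: ["PutBucketAcl", "PutObjectAcl", "PutBucketPolicy"]
  }
};
console.log(JSON.stringify(eventPattern));
```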
I tried and succeeded in uploading a file using the AWS Amplify quick start doc, and I used this example to set up my GraphQL schema, my resolvers, and my data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time because of an "Access Denied" error response while my image was uploading to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the Authorization tab, clicked on "Everyone", and selected "Write Object". With that done, everything works fine.
But I don't really understand why it's working, and Amazon now shows me a big, scary alert in my S3 console saying "We don't recommend at all making an S3 bucket public".
I used an Amazon Cognito user pool with AppSync, and if I understood correctly, it's inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the upload of an image work?
I already tried putting my users in a group with access to the S3 bucket, but it didn't work (I guess because the users don't directly interact with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image that is then displayed in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert in my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that's read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
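As a sketch, a bucket policy granting write access to one role could look like this (the account ID, role name, and bucket name are all placeholders):

```javascript
// Sketch: bucket policy allowing a specific IAM role to write objects.
// Account ID, role name, and bucket name are placeholders.
const roleUploadPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::123456789012:role/my-upload-role" },
      Action: ["s3:PutObject"],
      Resource: ["arn:aws:s3:::my-app-bucket/*"]
    }
  ]
};
console.log(JSON.stringify(roleUploadPolicy));
```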
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, when you go to your IAM role policies and check the role for authorized users of your Cognito pool, you can see the different policies, and the one that allows putting objects in your S3 bucket uses the folders "public", "protected" and "private".
So you have to change those paths, or add one of these folders to the end of the bucket address you use in your front-end app.
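To illustrate (bucket and file names are placeholders): the object key has to start with one of those folder prefixes so that the upload path matches what the Cognito role's policy actually allows:

```javascript
// Sketch: prefixing the object key with "public/" so the upload path
// matches the paths permitted by the Cognito authenticated role's policy.
// Bucket and file names are placeholders.
const bucket = "my-amplify-bucket";
const fileName = "avatar.png";
const key = `public/${fileName}`; // "protected/" or "private/" likewise
const uploadUrl = `https://${bucket}.s3.amazonaws.com/${key}`;
console.log(uploadUrl);
```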
Hope it will help someone!