AWS Cost and Usage Report - Files not created in S3 bucket

I created the S3 bucket first, with the necessary permissions, and enabled notifications to a Lambda function on all object:put events.
I then created a Cost and Usage Report and selected the above S3 bucket as the storage location. Permissions look correct, as CUR was able to create/update the test file named "aws-programmatic-access-test-object".
It's been a few days now, and CUR says it has been generating reports, but I cannot see the files in the S3 bucket.
Interestingly, my Lambda function is being invoked with object:put notifications.
But the files are nowhere to be found.
Can someone help me understand what might be happening, please?
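Since the Lambda function is already receiving object:put notifications, one quick way to see where the report files are actually landing (CUR typically writes them under the report path prefix and report name you configured, in dated subfolders that are easy to miss when browsing the bucket) is to log the exact keys from the event. A minimal sketch of such a handler, assuming the standard S3 event payload; nothing here is specific to the asker's setup:

```python
import json
import urllib.parse

def lambda_handler(event, context):
    """Log the bucket and exact object key for every record in an S3 event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 delivers keys URL-encoded (spaces become '+'), so decode before logging
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(json.dumps({"bucket": bucket, "key": key}))
    return {"statusCode": 200}
```

The logged keys should reveal the prefix under which the report files are actually being written, or show that the notifications are firing for other objects entirely.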

Related

What does PutBucketLogging from AWS S3 API exactly do?

As per the documentation,
Set the logging parameters for a bucket and to specify permissions for who can view and modify the logging parameters.
My understanding is that this API helps capture operations on an S3 bucket into an S3 location. I have the following questions:
What are "logging parameters" here?
What kind of operations are captured?
When I ran this command on a bucket, it took some time for the S3 location to be visible in the UI. What exactly happens in the background? Does AWS already store logs for each S3 bucket somewhere, and does this command, once the API call is made, bring those logs to the specified S3 location?
Thanks.
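For what it's worth, the "logging parameters" in this API are the server access logging configuration: the target bucket that receives the access logs and the key prefix under which they are written. A rough sketch with boto3, with bucket names as placeholders:

```python
import boto3

s3 = boto3.client("s3")

# The "logging parameters" are the server access logging configuration:
# which target bucket receives the access logs, and under which key prefix.
s3.put_bucket_logging(
    Bucket="my-source-bucket",  # bucket whose requests should be logged
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",  # must be in the same region as the source
            "TargetPrefix": "access-logs/my-source-bucket/",
        }
    },
)
```

The logs themselves record request-level operations (GET, PUT, DELETE, list calls, and so on) against the source bucket, and they are delivered on a best-effort basis, which may explain why it takes some time for anything to become visible in the target location.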

Can you trigger an AWS lambda function when a file is uploaded to a specific folder in S3?

I wish to trigger an AWS Lambda function when I upload a file to a specific folder in S3. There are multiple folders in the S3 bucket now. Is this possible, and how do I do so?
Yes, you can Configure Amazon S3 event notifications, filtering on object key prefixes (and/or suffixes).
See Configuring notifications with object key name filtering. A prefix could be dogs/, for example. That way, all uploads with a key beginning with dogs/, e.g. dogs/alsatian.png, would trigger a notification.
Note that you probably don't actually have any folders in your S3 bucket, just objects whose keys contain slashes, unless you created "folders" using the AWS Console (which are just zero-byte placeholder objects). There really aren't any folders in S3.
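For illustration, the same prefix (and suffix) filter can be configured programmatically; the console does the same thing under the hood. A minimal sketch with boto3, in which the bucket name and Lambda ARN are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Notify the Lambda function only for objects created under the dogs/ prefix
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-dog-photos",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "dogs/"},
                            # optionally also filter by suffix, e.g. only .png files
                            {"Name": "suffix", "Value": ".png"},
                        ]
                    }
                },
            }
        ]
    },
)
```

Note that this call replaces the bucket's existing notification configuration, and the Lambda function's resource policy must allow S3 to invoke it (the console normally adds that permission for you).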

Who has deleted files in S3 bucket?

What is the best way to find out who deleted files in an AWS S3 bucket?
I am working with an AWS S3 bucket. I have gone through the AWS docs and haven't found the best way to monitor S3 buckets, so I thought I'd check whether anyone here can help.
For monitoring S3 object operations, such as DeleteObject, you have to enable CloudTrail with S3 data events:
How do I enable object-level logging for an S3 bucket with AWS CloudTrail data events?
Examples: Logging Data Events for Amazon S3 Objects
However, trails don't work retrospectively. Thus, you have to check whether you already have such a trail enabled in the CloudTrail console. If not, you can create one to monitor any future S3 object-level activity for all, or selected, buckets.
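For reference, this is roughly what adding S3 data events to an existing trail looks like programmatically; the linked articles walk through the same thing in the console. Trail and bucket names below are placeholders:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Add S3 object-level (data event) logging for one bucket to an existing trail.
# "WriteOnly" is enough to capture DeleteObject and PutObject calls.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # a bucket ARN with a trailing slash selects all objects in the bucket
                    "Values": ["arn:aws:s3:::my-important-bucket/"],
                }
            ],
        }
    ],
)
```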
To reduce the impact of accidental deletions you can enable object versioning. And to more fully protect important objects against deletion, you can use MFA delete.
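Turning on versioning is a one-liner; a minimal sketch with boto3 (MFA delete is set through the same call, but it additionally requires the root account's MFA device, so it is left out here):

```python
import boto3

s3 = boto3.client("s3")

# With versioning enabled, DeleteObject only adds a delete marker,
# so the previous version of the object can still be restored.
s3.put_bucket_versioning(
    Bucket="my-important-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```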
You can check the S3 server access logs or CloudTrail to find out who deleted files from your S3 bucket. More information here: https://aws.amazon.com/premiumsupport/knowledge-center/s3-audit-deleted-missing-objects/

Understanding how AppSync + S3 work together

I tried (and succeeded) to upload a file by following the AWS Amplify quick start doc, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error response when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the Permissions tab, clicked on "Everyone", and selected "Write objects". With that done, everything works fine.
But I don't really understand why it works, and the S3 console now shows me a big, scary alert saying that making an S3 bucket public is not recommended at all.
I am using an Amazon Cognito user pool with AppSync, and, if I understood correctly, it is inside my resolvers that the image gets uploaded to my S3 bucket.
So what is the right configuration to make the image upload work?
I have already tried putting my users in a group with access to the S3 bucket, but it did not work (I guess because the users don't interact with my S3 bucket directly; my resolvers do).
I would like my users to be able to upload an image and then have it displayed in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert on my S3 console seems to be telling me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to grant that role certain permissions, whether that is read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
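As a rough sketch of what such a bucket policy could look like (the role ARN and bucket name are placeholders, and the exact actions depend on what the resolvers actually need):

```python
import json

import boto3

s3 = boto3.client("s3")

# Example bucket policy allowing one specific IAM role to write objects;
# the principal would be whichever role your AppSync resolvers / data source assume.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadFromAppRole",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-appsync-s3-role"},
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::my-upload-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-upload-bucket", Policy=json.dumps(policy))
```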
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, when you go to your IAM role policies and check the role used by the authenticated users of your Cognito pool, you can see the different policy statements, and the one that allows putting objects into your S3 bucket is scoped to the folders "public", "protected" and "private".
So you have to change those paths, or append one of these folders to the bucket path you use in your front-end app.
Hope it will help someone!
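In other words, the Amplify-generated role policy only allows object keys that start with those prefixes. A small sketch of the idea, shown here with boto3 rather than the Amplify JS Storage client, with a placeholder bucket name:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-amplify-bucket"  # placeholder; the real name comes from aws-exports.js

# Allowed by the Amplify-generated role policy: the key starts with "public/"
s3.put_object(Bucket=bucket, Key="public/avatars/alice.png", Body=b"...")

# Typically denied with those credentials: the key is outside public/, protected/
# and private/, so the Cognito authenticated role has no PutObject permission for it
try:
    s3.put_object(Bucket=bucket, Key="avatars/alice.png", Body=b"...")
except ClientError as err:
    print("Upload outside the allowed prefixes failed:", err.response["Error"]["Code"])
```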

Unable to upload files to my S3 bucket

I recently created an AWS Free Tier account and created an S3 bucket for an experimental project using Rails, deployed on Heroku for production. But I am getting an error telling me that something went wrong.
Through my Heroku logs, I received this description:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AuthorizationHeaderMalformed</Code>
  <Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-east-2'</Message>
  <Region>us-east-2</Region>
  <RequestId>08B714808971C8B8</RequestId>
  <HostId>lLQ+li2yctuI/sTI5KQ74icopSLsLVp8gqGFoP8KZG9wEnX6somkKj22cA8UBmOmDuDJhmljy/o=</HostId>
</Error>
I had set my S3 bucket's location to US East (Ohio) instead of US Standard (I think) while creating the bucket. Is it because of this?
How can I resolve this error? Is there any way to change the properties of my S3 bucket? If not, should I build a fresh bucket and set up a new policy allowing access to it?
Please let me know if there is anything else you need from me regarding this question.
The preferred authentication mechanism for AWS services, known as Signature Version 4, derives different signing keys for each user, for each service, in each region, for each day. When a request is signed, it is signed with a signing key specific to that user, date, region, and service.
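For the curious, the region is baked into the signing key itself, which is why a signature produced for one region cannot validate in another. A minimal sketch of the SigV4 key derivation, with placeholder values:

```python
import hashlib
import hmac

def sign(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key; it depends on the date, region and service."""
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# The same secret yields different signing keys for different regions,
# so a request signed for us-east-1 cannot validate as us-east-2.
key_a = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20150830", "us-east-1", "s3")
key_b = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20150830", "us-east-2", "s3")
print(key_a == key_b)  # False
```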
the region 'us-east-1' is wrong; expecting 'us-east-2'
This error means that a request was sent to us-east-2 using the credentials for us-east-1.
The 'region' that is wrong, here, refers to the region of the credentials.
You should be able to specify the correct region in your code, and resolve the issue. For legacy reasons, S3 is a little different than most AWS services, because if you specify the wrong region in your code (or the default region isn't the same as the region of the bucket) then your request is still automatically routed to the correct region... but the credentials don't match. (Most other services will not route to the correct region automatically, so the request will typically fail in a different way if the region your code is using is incorrect.)
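The asker is using the Ruby SDK via Rails, but the idea is the same in any SDK: tell the client which region the bucket actually lives in. A sketch of the equivalent with boto3, with a placeholder bucket name:

```python
import boto3

# Point the client at the region the bucket actually lives in.
# With the wrong region configured, S3 still routes the request,
# but the signature no longer matches and you get AuthorizationHeaderMalformed.
s3 = boto3.client("s3", region_name="us-east-2")

s3.put_object(
    Bucket="my-experimental-bucket",  # placeholder bucket name
    Key="uploads/example.txt",
    Body=b"hello",
)
```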
Otherwise, you'll need to create a new bucket in us-east-1, because buckets cannot be moved between regions.
You can keep the same bucket name for the new bucket if you delete the old bucket, first, but there is typically a delay of a few minutes between the time you delete a bucket and the time that the service allows you to reuse the same name to create a new bucket, because the bucket directory is a global resource and it takes some time for directory changes (the bucket deletion) to propagate to all regions. Before you can delete a bucket, it needs to be empty.
Yup, you nailed the solution to your problem. Just create a bucket in the correct region and use that. If you want it to be called the same thing as your original bucket, you'll need to delete it in us-east-2 first, then create it in us-east-1, as bucket names are globally unique.