S3 bucket not displaying Metrics - amazon-web-services

I created an S3 bucket and populated it by uploading a few files. However, I cannot validate my use case of checking the bucket size, because the default total-bucket-size metric always returns no data.
Note: I created the bucket in the AWS Console UI with default settings.
I waited for more than a week and still see no data under the Metrics tab; consequently the S3 bucket is not listed in CloudWatch either, so I cannot configure alarms.
Has anyone faced a similar issue and can help with how to resolve it?
Attaching a screenshot of the Metrics tab of my S3 bucket for reference.

The S3 storage metrics (e.g. BucketSizeBytes) are reported daily and only become visible under the bucket's Metrics tab and in CloudWatch some time after the bucket is created; the delay can be several hours or more and the exact time is not specified. Link for reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudwatch-monitoring.html
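Once the metric does start appearing, you can also confirm it from outside the console. A minimal boto3 sketch, where the bucket name and region are placeholders; it queries the daily BucketSizeBytes storage metric for the StandardStorage storage type, which is roughly what the Metrics tab displays:

import boto3
from datetime import datetime, timedelta, timezone

BUCKET = "my-example-bucket"  # placeholder
REGION = "us-east-1"          # placeholder - use the bucket's region

cloudwatch = boto3.client("cloudwatch", region_name=REGION)

# BucketSizeBytes is a daily storage metric, so look back a few days
# and use a one-day (86400 second) period.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": BUCKET},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=3),
    EndTime=datetime.now(timezone.utc),
    Period=86400,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], "bytes")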

Related

finding where my log files are being sent to aws

In my company we store log files in CloudWatch, and after 7 days they get sent to S3; however, I am having trouble finding exactly where the log files are stored in S3.
Since the process of moving logs from CloudWatch to S3 is automated, I followed https://medium.com/tensult/exporting-of-aws-cloudwatch-logs-to-s3-using-automation-2627b1d2ee37 in the hope of finding the path.
We are not using Step Functions, so I checked the Lambda service, but there was no function that moves log files from CloudWatch to S3.
I also tried looking at the CloudWatch rules, hoping to find something like:
{
  "region": "REGION",
  "logGroupFilter": "prod",
  "s3BucketName": "BUCKET_NAME",
  "logFolderName": "backend"
}
so that I can find which bucket the log files are going to and into which folder.
How can I find where my logs are stored? If moving the data is automated, why are there no functions visible?
Additional note: I am new to AWS; if there is a good resource on AWS architecture, please recommend it.
Thanks in advance!
If the rule exists or was created properly, then you should see it in the AWS console, and the same is true for the S3 bucket.
One common problem with the visibility of an asset in the AWS console is selecting the wrong region. Verify in which region the rule and the S3 bucket were created (if they were ever created); selecting the right region in the top-right corner should show the assets in that region.
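If you would rather search programmatically than click through every region, here is a minimal boto3 sketch that simply lists the CloudWatch Events / EventBridge rules in each region; filtering the rule names for something export-related is left to you, and regions that are not enabled for the account are skipped:

import boto3

session = boto3.session.Session()

# Rules are regional, so check every region the events service supports.
for region in session.get_available_regions("events"):
    client = boto3.client("events", region_name=region)
    try:
        rules = client.list_rules().get("Rules", [])
    except Exception:
        continue  # region not enabled for this account, or no access
    for rule in rules:
        print(region, rule["Name"], rule.get("State", ""))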
Hope it helps!
Have you tried using "View all exports to Amazon S3" in the CloudWatch -> Logs console? It is one of the items in the Actions menu.
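The same information is available from the API. A minimal boto3 sketch that lists CloudWatch Logs export tasks and the destination bucket and prefix each one wrote to, assuming the automation uses CreateExportTask as in the linked article:

import boto3

logs = boto3.client("logs")  # use the region the log groups live in

# Each export task records its source log group and the destination
# S3 bucket and prefix it exported to.
kwargs = {}
while True:
    response = logs.describe_export_tasks(**kwargs)
    for task in response.get("exportTasks", []):
        print(
            task["logGroupName"],
            "->",
            task["destination"],
            task.get("destinationPrefix", ""),
            task["status"]["code"],
        )
    if "nextToken" not in response:
        break
    kwargs["nextToken"] = response["nextToken"]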

aws s3, why doesn't it list all the objects in the bucket?

I have an S3 bucket with around 80 objects, which I can confirm from the CloudWatch metric. No prefixes/folders; all objects are in the root path of the bucket.
When I do aws s3 ls bucket, it only shows the current month's objects, not all of them and not the previous months' objects. Even in the AWS S3 console it is the same. I even tried aws s3 ls bucket --recursive.
In the console, I see "viewing 1 of 24", but there are no buttons to navigate to see older objects.
Why is that? How can I see all objects in my bucket?
My s3 bucket storage class is standard.
The CloudWatch metric NumberOfObjects counts current and noncurrent objects plus the total number of parts for all incomplete multipart uploads to the bucket.
You probably have versioning enabled on the bucket, and "s3 ls" lists only current versions; it doesn't return noncurrent versions of an object. You can toggle "Show versions" in the S3 console to see versioned objects, or use list-object-versions to get the total number of objects.
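If you want to see where the extra count comes from, a minimal boto3 sketch that tallies current versions, noncurrent versions, and delete markers (the bucket name is a placeholder):

import boto3

BUCKET = "my-example-bucket"  # placeholder

s3 = boto3.client("s3")
current, noncurrent, delete_markers = 0, 0, 0

# list_object_versions returns every version, not just the latest one.
for page in s3.get_paginator("list_object_versions").paginate(Bucket=BUCKET):
    for version in page.get("Versions", []):
        if version["IsLatest"]:
            current += 1
        else:
            noncurrent += 1
    delete_markers += len(page.get("DeleteMarkers", []))

print(f"current={current} noncurrent={noncurrent} delete_markers={delete_markers}")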
Reference:
Get all versions of an object in an AWS S3 bucket?
https://www.npmjs.com/package/s3-list-all-objects
You say that you believe the number from the CloudWatch metric, NumberOfObjects. Here is its definition from the S3 docs:
The total number of objects stored in a bucket for all storage classes except for the GLACIER storage class. This value is calculated by counting all objects in the bucket (both current and noncurrent objects) and the total number of parts for all incomplete multipart uploads to the bucket.
So the discrepancy between what you are viewing in the console and the metric is probably that you have versioning on, and there are old ("noncurrent") versions being counted.
There are instructions for seeing the old versions here.

My S3 life-cycle policy is not working, how can I debug?

I have a lifecycle policy set up in the AWS Console. My S3 bucket has a folder called "backups". My policy has a prefix of "backups", with both the current and previous versions set to transition to Glacier 1 day after creation. The S3 files are still shown as Standard and nothing is in Glacier.
I have waited a month to see if it was just slow, but nothing happens.
To test this, I did the following:
Created a new Amazon S3 bucket
Created a backups folder through the S3 management console
Uploaded a file to the backups folder
Added a Lifecycle Rule with a filter of backups (displayed in the console as "prefix backups") for the current version to transition to Glacier after 1 day
I then waited a couple of days and it transitioned to Glacier.
Bottom line: It can take a couple of days for the transition to happen.
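For reference, an equivalent rule can also be created programmatically. A minimal boto3 sketch, assuming a placeholder bucket name and that the folder's objects live under the backups/ key prefix:

import boto3

BUCKET = "my-example-bucket"  # placeholder

s3 = boto3.client("s3")

# Caution: this call replaces the bucket's entire lifecycle configuration,
# so include any existing rules you want to keep.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "backups-to-glacier",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Transition current versions 1 day after creation.
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
                # Transition noncurrent (previous) versions as well.
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 1, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)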

Identifying and deleting S3 Objects that are not being accessed?

I have recently joined a company that uses S3 buckets for various projects within AWS. I want to identify, and potentially delete, S3 objects that are not being accessed (read or written), in an effort to reduce the cost of S3 in my AWS account.
I read this, which helped me to some extent.
Is there a way to find out which objects are being accessed and which are not?
There is no native way of doing this at the moment, so all the options are workarounds depending on your use case.
You have a few options:
Tag each S3 object with its last-accessed date (e.g. 2018-10-24). First turn on object-level logging for your S3 bucket and set up CloudWatch Events for CloudTrail. The tag can then be updated by a Lambda function that runs on a CloudWatch Event fired on each Get event (a sketch of such a function appears after this list). Then create a function that runs on a scheduled CloudWatch Event to delete all objects with a date tag prior to today.
Query the CloudTrail logs directly: write a custom function that extracts last access times from the object-level CloudTrail logs. This could be done with Athena, or with a direct query against the log files in S3.
Create a separate index, in something like DynamoDB, which your application updates on read activity.
Use a lifecycle policy on the S3 bucket / key prefix to archive or delete the objects after x days. This is based on upload time rather than last access time, so you could copy an object onto itself to reset the timestamp and start the clock again.
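A minimal sketch of the tagging Lambda from the first option above, assuming the S3 data events reach the function as "AWS API Call via CloudTrail" events through CloudWatch Events / EventBridge; the last-accessed tag key is my own placeholder, and note that put_object_tagging replaces the object's entire tag set:

import datetime

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    # For "AWS API Call via CloudTrail" events, the S3 bucket and key
    # are found in the request parameters of the recorded API call.
    params = event.get("detail", {}).get("requestParameters", {})
    bucket = params.get("bucketName")
    key = params.get("key")
    if not bucket or not key:
        return

    # Stamp the object with today's date. Caution: this replaces any
    # existing tags on the object, so merge first if you rely on others.
    s3.put_object_tagging(
        Bucket=bucket,
        Key=key,
        Tagging={
            "TagSet": [
                {"Key": "last-accessed", "Value": datetime.date.today().isoformat()}
            ]
        },
    )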
No objects in Amazon S3 are required by other AWS services, but you might have configured services to use the files.
For example, you might be serving content through Amazon CloudFront, providing templates for AWS CloudFormation or transcoding videos that are stored in Amazon S3.
If you didn't create the files and you aren't knowingly using them, you can probably delete them. But you would be the only person who would know whether they are necessary.
There is a recent AWS blog post that I found very interesting, describing a cost-optimized approach to this problem.
Here is the description from the AWS blog:
The S3 server access logs capture S3 object requests. These are generated and stored in the target S3 bucket.
An S3 inventory report is generated for the source bucket daily. It is written to the S3 inventory target bucket.
An Amazon EventBridge rule is configured that will initiate an AWS Lambda function once a day, or as desired.
The Lambda function initiates an S3 Batch Operations job to tag the objects in the source bucket that should be expired, using the following logic:
Capture the number of days (x) from the S3 Lifecycle configuration.
Run an Amazon Athena query that will get the list of objects from the S3 inventory report and server access logs. Create a delta list with objects that were created earlier than 'x' days, but not accessed during that time.
Write a manifest file with the list of these objects to an S3 bucket.
Create an S3 Batch operation job that will tag all objects in the manifest file with a tag of "delete=True".
The Lifecycle rule on the source S3 bucket will expire all objects that were created prior to 'x' days. They will have the tag given via the S3 batch operation of "delete=True".
Expiring Amazon S3 Objects Based on Last Accessed Date to Decrease Costs
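The lifecycle rule in the last step, the one that expires objects carrying the delete=True tag, might look roughly like the following boto3 sketch; the bucket name and the 'x' day count are placeholders, and this is my reading of the blog post rather than code taken from it:

import boto3

BUCKET = "my-source-bucket"  # placeholder
X_DAYS = 30                  # the 'x' from the lifecycle configuration, placeholder

s3 = boto3.client("s3")

# Caution: this call replaces the bucket's entire lifecycle configuration,
# so include any existing rules you want to keep.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-unaccessed-objects",
                # Only objects tagged delete=True by the S3 Batch Operations job.
                "Filter": {"Tag": {"Key": "delete", "Value": "True"}},
                "Status": "Enabled",
                # Expire them 'x' days after creation.
                "Expiration": {"Days": X_DAYS},
            }
        ]
    },
)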

How does Lifecycle policy for moving s3 to glacier work?

I have created a lifecycle policy for one of my buckets as below:
Name and scope
Name: MoveToGlacierAndDeleteAfterSixMonths
Scope: Whole bucket
Transitions
For previous versions of objects: Transition to Amazon Glacier after 1 day
Expiration
Permanently delete after 360 days
Clean up incomplete multipart uploads after 7 days
I would like to get answers to the following questions:
When would the data be deleted from S3 as per this policy?
Do I have to do anything on the Glacier end in order to move my S3 bucket to Glacier?
My S3 bucket is 6 years old and all the versions in the bucket are even older, but I am not able to see any data in the Glacier console even though my transition policy is set to move data to Glacier 1 day after its creation. Please explain this behavior.
Does this policy affect only new files added to the bucket after the lifecycle policy was created, or does it affect all the files in the S3 bucket?
Please answer these questions.
When would the data be deleted from S3 as per this policy?
Never, for current versions. A lifecycle policy to transition objects to Glacier doesn't delete the data from S3 -- it migrates it out of S3 primary storage and over into Glacier storage -- but it technically remains an S3 object.
Think of it as S3 having its own Glacier account and storing data in that separate account on your behalf. You will not see these objects in the Glacier console -- they will remain in the S3 console, but if you examine an object that has transitioned, its storage class will have changed from whatever it was (e.g. STANDARD) to GLACIER.
Do I have to do anything on the Glacier end in order to move my S3 bucket to Glacier?
No, you don't. As mentioned above, it isn't "your" Glacier account that will store the objects. On your AWS bill, the charges will appear under S3, but labeled as Glacier, and the price will be the same as the published pricing for Glacier.
My S3 bucket is 6 years old and all the versions in the bucket are even older, but I am not able to see any data in the Glacier console even though my transition policy is set to move data to Glacier 1 day after its creation. Please explain this behavior.
Two parts: first, check the object storage class displayed in the console or with aws s3api list-objects --output=text; you may find that some objects are already in the GLACIER storage class. Second, it's a background process. It won't happen immediately, but you should see things changing within 24 to 48 hours of creating the policy. If you have logging enabled on your bucket, I believe the transition events will also be logged.
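A minimal boto3 sketch of that first check, tallying objects per storage class (the bucket name is a placeholder):

import collections

import boto3

BUCKET = "my-example-bucket"  # placeholder

s3 = boto3.client("s3")
by_storage_class = collections.Counter()

# Each listed object carries its storage class, e.g. STANDARD or GLACIER.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        by_storage_class[obj["StorageClass"]] += 1

for storage_class, count in by_storage_class.items():
    print(storage_class, count)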
Does this policy affect only new files added to the bucket after the lifecycle policy was created, or does it affect all the files in the S3 bucket?
This affects all objects in the bucket.