AWS CloudWatch monitoring for S3

Amazon CloudWatch provides some very useful metrics for monitoring my EC2 instances, load balancers, ElastiCache and RDS databases, etc., and allows me to set alarms for a whole range of criteria; but is there any way to configure it to monitor my S3 buckets as well? Or are there any other monitoring tools (besides simply enabling logging) that will help me monitor the number of POST/GET requests and data volumes for my S3 resources, and provide alarms for thresholds of activity or increased data storage?

Amazon S3 is a managed storage service. The only S3 metrics available in AWS CloudWatch are NumberOfObjects and BucketSizeBytes. To understand your S3 usage better, you need to do some extra work.
I have recently written an AWS Lambda function to do exactly what you ask for and it's available here:
https://github.com/maginetv/s3logs-cloudwatch
It works by parsing S3 server-side log files and aggregating/exporting the resulting metrics to AWS CloudWatch (CloudWatch allows you to publish custom metrics).
Example graphs that you will get in AWS CloudWatch after deploying this function on your AWS account are:
RestGetObject_RequestCount
RestPutObject_RequestCount
RestHeadObject_RequestCount
BatchDeleteObject_RequestCount
RestPostMultiObjectDelete_RequestCount
RestGetObject_HTTP_2XX_RequestCount
RestGetObject_HTTP_4XX_RequestCount
RestGetObject_HTTP_5XX_RequestCount
+ many others
Since metrics are exported to CloudWatch, you can easily set up alarms for them as well.
A CloudFormation template is included in the GitHub repo, so you can deploy this function quickly to gain visibility into your S3 bucket usage.
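For reference, publishing a custom metric of this kind is a single API call. A minimal boto3 sketch, assuming a hypothetical Custom/S3Logs namespace and a placeholder bucket dimension (the metric name is taken from the list above):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical example: publish a request count parsed from S3 access logs
# as a custom CloudWatch metric. Namespace, dimension, and value are illustrative.
cloudwatch.put_metric_data(
    Namespace="Custom/S3Logs",
    MetricData=[
        {
            "MetricName": "RestGetObject_RequestCount",
            "Dimensions": [{"Name": "BucketName", "Value": "my-bucket"}],
            "Value": 42,  # count aggregated from the parsed log lines
            "Unit": "Count",
        }
    ],
)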
EDIT 2016-12-10:
In November 2016, AWS added extra S3 request metrics in CloudWatch that can be enabled when needed. These include metrics like AllRequests, GetRequests, PutRequests, DeleteRequests, HeadRequests, etc. See the Monitoring Metrics with Amazon CloudWatch documentation for more details about this feature.

I was also unable to find any way to do this with CloudWatch. This question from April 2012 was answered by Derek@AWS, saying that CloudWatch did not support S3 at the time. https://forums.aws.amazon.com/message.jspa?messageID=338089
The only thing I could think of would be to import the S3 access logs into a log service (like Splunk), then create a custom CloudWatch metric and post the data you parse from the logs to it. But then you have to filter out the polling of the access logs and…
And while you were at it, you could just create the alarms in Splunk instead of in CloudWatch.
If your use case is simply to alert when you are using it too much, you could set up an account billing alert for your S3 usage.
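If you go the billing-alert route, the alarm can be scoped to S3 charges specifically. A rough boto3 sketch, assuming billing metrics have been enabled in the account's billing preferences; the threshold and SNS topic ARN are placeholders:

import boto3

# Billing metrics live only in us-east-1, regardless of where your buckets are.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="s3-spend-alert",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[
        {"Name": "ServiceName", "Value": "AmazonS3"},
        {"Name": "Currency", "Value": "USD"},
    ],
    Statistic="Maximum",
    Period=21600,  # billing data only updates a few times a day
    EvaluationPeriods=1,
    Threshold=100.0,  # alert past $100 of S3 charges (placeholder)
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder ARN
)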

I think this might depend on where you are looking to track the access from. That is, if you are trying to measure/watch usage of S3 objects from outside HTTP/HTTPS requests, then Anthony's suggestion of enabling S3 logging and then importing the logs into Splunk (or Redshift) for analysis might work. You can also watch the billing status of requests every day.
If you are trying to gauge usage from within your own applications, there are some AWS SDK CloudWatch metrics:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/metrics/package-summary.html
and
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/metrics/S3ServiceMetric.html

S3 is a managed service, meaning that you don't need to take action based on system events in order to keep it up and running (as long as you can afford to pay for the service's usage). The spirit of CloudWatch is to help with monitoring services that require you to take action in order to keep them running.
For example, EC2 instances (which you manage yourself) typically need monitoring to alert when they're overloaded, underused, or crashed; at some point, action needs to be taken to spin up new instances to scale out, spin down unused instances to scale in, or reboot instances that have crashed. CloudWatch is meant to help you do the job of managing these resources more effectively.

To enable request and data transfer metrics on your bucket, you can run the command below. Be aware that these are paid metrics.
aws s3api put-bucket-metrics-configuration \
--bucket YOUR-BUCKET-NAME \
--id EntireBucket \
--metrics-configuration Id=EntireBucket
This tutorial describes how to do the same thing in the AWS Console with a point-and-click interface.

Related

Amazon S3 publicly available tests

I had to write a bit of custom stuff on top of S3, where I later proxy traffic. Now I want to test that, but with so many custom implementations of Amazon S3 and proxies, I was wondering if there are any publicly available test suites that I can just clone and run, pointing them at my URL, to verify that all the basic functionality still works properly.
Do you know of any?
Thanks!
It appears that you want to test whether "Amazon S3 is available".
Amazon S3 is a regional service. This means that it runs across multiple Availability Zones, where each Availability Zone is a separate data center (or sometimes multiple data centers). Amazon S3 is designed for 99.99% availability.
You can certainly monitor Amazon S3 yourself, but an alternative is to use AWS Personal Health Dashboard.
From Getting started with the AWS Personal Health Dashboard - AWS Health:
You can use the AWS Personal Health Dashboard to learn about AWS Health events that can affect your AWS services or account. The Personal Health Dashboard presents information in two ways: a dashboard that shows recent and upcoming events organized by category, and a full event log that shows all events from the past 90 days.
From Monitor for AWS Health events with Amazon CloudWatch Events - AWS Health:
You can use Amazon CloudWatch Events to detect and react to changes for AWS Health events.
For example, you can configure Amazon SNS to send an email notification when an event occurs.
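As a rough illustration of that last point, wiring Health events to an SNS topic is one rule plus one target. A sketch with a placeholder rule name and topic ARN:

import json
import boto3

events = boto3.client("events")

# Match all AWS Health events; the pattern could be narrowed to S3 only,
# e.g. {"source": ["aws.health"], "detail": {"service": ["S3"]}}.
events.put_rule(
    Name="aws-health-to-sns",
    EventPattern=json.dumps({"source": ["aws.health"]}),
    State="ENABLED",
)

# The SNS topic's policy must allow events.amazonaws.com to publish to it.
events.put_targets(
    Rule="aws-health-to-sns",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:health-alerts"}],
)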

AWS Log Aggregator on the Cheap

Our CIO had a heart attack upon seeing our AWS bill.
I need to aggregate Apache and Tomcat logs from multiple EC2 instances (in a scaling group) -- what could be the best way to do this without breaking the bank? The goal is to view events by IP address and account name, and to follow transaction flows (diagnostic/audit logging -- not so much performance metrics).
ELK is out of the equation (political). CloudWatch is allowed, plus anything else.
Depends on volume and access patterns, but pushing the logs to S3 and using Athena to query them is a good shout.
It's cheap because S3 is a really cheap datastore, and Athena is serverless, meaning you only pay for the queries you run.
Make sure you convert the logs to a compressed data format (like Apache Parquet) to save even more dosh.
https://aws.amazon.com/athena
https://docs.aws.amazon.com/athena/latest/ug/querying-apache-logs.html
https://aws.amazon.com/blogs/big-data/analyzing-data-in-s3-using-amazon-athena/
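To give a feel for the query side, an Athena query can also be kicked off programmatically. A boto3 sketch, where the database, table, and results bucket are all placeholder assumptions:

import boto3

athena = boto3.client("athena")

# Kick off a query against a table previously defined over the logs in S3;
# results land in the output location as CSV.
response = athena.start_query_execution(
    QueryString="""
        SELECT client_ip, count(*) AS hits
        FROM apache_access
        GROUP BY client_ip
        ORDER BY hits DESC
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])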
My arguments against S3/Athena would be that S3 may be the cheapest storage mechanism, but how will you get the logs off your box and into S3? I'm not aware of any AWS agents that do this, though there may be commercial or open-source projects for it. Also, there is some setup required to get Athena working for searching, such as defining schemas and/or setting up AWS Glue crawlers to discover the data. You'll often find that Glue crawlers aren't great at identifying log data if it isn't in a format like JSON.
I would highly recommend CloudWatch. AWS provides a CloudWatch agent, available for multiple OSs, that will pull and forward logs from your EC2 instances. CloudWatch also has some free search tools, and now the more powerful CloudWatch Logs Insights, to help you search your data much as other first-class log aggregators do.
CloudWatch pricing is also pretty cheap: $0.50/GB ingested and $0.02/GB for long-term storage (in us-east-1 at least). And there is no charge for the CloudWatch agent itself, which is the biggest advantage: you don't have to invent and test a new way to pull logs off your boxes.
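To give a flavour of the search side, CloudWatch Logs Insights can be driven from a script as well. A minimal boto3 sketch, where the log group name is a placeholder assumption:

import time
import boto3

logs = boto3.client("logs")

# Start an Insights query over the last hour of an (assumed) Apache log group.
now = int(time.time())
query = logs.start_query(
    logGroupName="/ec2/apache/access",
    startTime=now - 3600,
    endTime=now,
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 50",
)

# Poll until the query completes, then print the matching events.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results.get("results", []):
    print({field["field"]: field["value"] for field in row})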

How to set up a CloudWatch SQL monitor?

I have a view on a PostgreSQL RDS instance that lists any ongoing deadlocks. Ideally, there are no deadlocks in the database, causing the view to show nothing, but on rare occasions, there are.
How would I set up an alarm in CloudWatch to query this view and raise an alarm if any records are returned?
I found a handy script on GitHub specifically for this:
A Serverless MySQL RDS Data Collection script to push Custom Metrics to CloudWatch on AWS
Basically, there are two main ways to publish custom metrics to CloudWatch:
Via the API
You can run this on a schedule on an EC2 instance (AWS example) or as a Lambda function (a great manual with code examples).
With the CloudWatch agent
Here is a nice example: Monitor your Microsoft SQL Server using custom metrics with Amazon CloudWatch and AWS Systems Manager.
Finally, you should set up CloudWatch alarms with metric math and relevant thresholds.
It is not possible to configure Amazon CloudWatch to look inside an Amazon RDS database.
You will need some code running somewhere that regularly runs a query on the database and sends a custom metric to Amazon CloudWatch.
For example, you could trigger an AWS Lambda function, or use cron on an Amazon EC2 instance to trigger a script.
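A minimal sketch of such a function, assuming the view is called ongoing_deadlocks and that the psycopg2 driver is bundled with the Lambda deployment package (both are assumptions, as are the connection details):

import boto3
import psycopg2  # must be bundled with the Lambda deployment package

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Connection details and the view name are placeholders for this sketch.
    conn = psycopg2.connect(
        host="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",
        dbname="mydb", user="monitor", password="...",
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM ongoing_deadlocks")
        deadlocks = cur.fetchone()[0]

    # Publish the count; a CloudWatch alarm on this metric with a threshold
    # of >= 1 then fires whenever the view returns any rows.
    cloudwatch.put_metric_data(
        Namespace="Custom/RDS",
        MetricData=[{"MetricName": "OngoingDeadlocks", "Value": deadlocks, "Unit": "Count"}],
    )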

Ideas and guidelines for an end-to-end AWS solution

I want to build an end-to-end automated system consisting of the following steps:
Getting data from source to landing bucket AWS S3 using AWS Lambda
Running some transformation job using AWS Lambda and storing in processed bucket of AWS S3
Running Redshift copy command using AWS Lambda to push the transformed/processed data from AWS S3 to AWS Redshift
Of the points above, I've completed pulling the data, transforming it, and running the COPY command manually against Redshift using a SQL query tool.
Doubts:
I've heard that AWS CloudWatch can be used to schedule/automate things, but I've never worked with it. So, if I want to achieve the steps above in a streamlined fashion, how should I go about it?
Should I use Lambda to trigger copy and insert statements? Or are there better AWS services to do the same?
Any other suggestion on other AWS Services and of the likes are most welcome.
Constraint: Want as many tasks as possible to be serverless (except for semantic layer, Redshift).
CloudWatch:
Your options here are either CloudWatch Alarms or CloudWatch Events.
With alarms, you can respond to any metric of your system (e.g. CPU utilization, disk IOPS, count of Lambda invocations, etc.) crossing some threshold, and when the alarm is triggered, invoke a Lambda function (or send an SNS notification, etc.) to perform a task.
With events, you can use either a cron expression or some AWS service event (e.g. an EC2 instance state change, an SNS notification, etc.) to trigger another service (e.g. Lambda), so you could, for example, run some kind of clean-up operation via Lambda on a regular schedule, or create a snapshot of an EBS volume when its instance is shut down.
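For the scheduling piece, a scheduled rule that invokes a Lambda function takes only a couple of API calls. A boto3 sketch with placeholder names and ARNs:

import boto3

events = boto3.client("events")

# Run every day at 02:00 UTC; rule and function names are placeholders.
events.put_rule(
    Name="nightly-etl-trigger",
    ScheduleExpression="cron(0 2 * * ? *)",
    State="ENABLED",
)

# The Lambda function also needs a resource policy allowing
# events.amazonaws.com to invoke it (lambda add-permission).
events.put_targets(
    Rule="nightly-etl-trigger",
    Targets=[{"Id": "etl", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:run-copy"}],
)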
Lambda itself is a very powerful tool and should allow you to write a decent copy/insert function in a language you are familiar with. AWS has several GitHub repos with lots of examples too; see, for example, the serverless examples and other samples. There may be other services that could work in your specific case, but part of Lambda's power is its flexibility.
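As for the copy step itself, here is a rough sketch of a Lambda handler issuing a Redshift COPY. Every identifier (cluster endpoint, table, bucket, IAM role) is a placeholder, and it assumes psycopg2 is bundled with the deployment package:

import psycopg2  # Redshift speaks the Postgres wire protocol

def handler(event, context):
    # All identifiers below are placeholders for this sketch.
    conn = psycopg2.connect(
        host="mycluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="loader", password="...",
    )
    # "with conn" commits the transaction on success, rolls back on error.
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY analytics.events
            FROM 's3://processed-bucket/events/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS PARQUET
        """)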

AWS CloudWatch Metrics directly to S3

Is there a way to get CloudWatch metrics directly into S3? I don't need logs, just ELB metrics. I would like them logged to S3 on a regular basis (ideally as CSV).
Right now, I'm thinking of writing my own script to do it, but maybe there's an automatic way to put them in S3 (or Redshift)?
CloudWatch itself does not have a native export feature that will send data periodically to S3.
As you suggest, you would need to develop a script that pulls the CloudWatch metrics you wish to store (in this case, ELB metrics) using the AWS CLI and copies them to your S3 bucket on a regular basis.
Using the get-metric-statistics command, the script would fetch the statistics for the specified metric and store the data in your S3 bucket.
See also Elastic Load Balancing Dimensions and Metrics
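The same pull can be scripted with an SDK instead of the CLI. A boto3 sketch that fetches one day of RequestCount data for a placeholder load balancer and writes it to a placeholder bucket as CSV:

import csv
import io
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")

# Pull one day of ELB request counts in 5-minute buckets.
end = datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],
    StartTime=end - timedelta(days=1),
    EndTime=end,
    Period=300,
    Statistics=["Sum"],
)

# Write the datapoints out as CSV and push the file to S3.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Timestamp", "Sum"])
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    writer.writerow([point["Timestamp"].isoformat(), point["Sum"]])

s3.put_object(
    Bucket="my-metrics-archive",
    Key=f"elb/RequestCount/{end:%Y-%m-%d}.csv",
    Body=buf.getvalue(),
)

Run on a schedule (cron or a scheduled Lambda), this gives you the regular CSV export that CloudWatch does not provide natively.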