How to check the number of times a DynamoDB table has been accessed - amazon-web-services

I have a DynamoDB table, let's say sampleTable. I want to find out how many times this table has been accessed from the CLI. How do I check this?
PS. I have checked the metrics but couldn't find any particular metric which gives this information.

There is no CloudWatch metric to monitor API calls to DynamoDB.
However, there is CloudTrail (CT). You can go to CloudTrail's event history and look for API calls to DynamoDB from the last 90 days. You can also export the history to a CSV file and investigate offline.
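For illustration, counting those DynamoDB API calls from the event history with boto3 might look roughly like this (a sketch, assuming default credentials and region; the table name "sampleTable" comes from the question):

import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
paginator = cloudtrail.get_paginator("lookup_events")

# Event history keeps roughly the last 90 days of recorded events.
pages = paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "dynamodb.amazonaws.com"}
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=90),
    EndTime=datetime.now(timezone.utc),
)

counts = Counter()
for page in pages:
    for event in page["Events"]:
        detail = json.loads(event["CloudTrailEvent"])
        params = detail.get("requestParameters") or {}
        if params.get("tableName") == "sampleTable":  # table name from the question
            counts[event["EventName"]] += 1

for name, n in counts.most_common():
    print(name, n)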
For ongoing monitoring of the API calls, you can enable a CloudTrail trail, which will store event log details in S3 for as long as you require:
Logging DynamoDB Operations by Using AWS CloudTrail
If you have the trail created, you can use Amazon Athena to query the log data for the statistics of interest, such as the number of specific API calls to DynamoDB:
Querying AWS CloudTrail Logs
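Such a query can also be kicked off from code. A minimal sketch, assuming a table named cloudtrail_logs (set up per the linked guide) in the default database, and a placeholder S3 output location:

import boto3

athena = boto3.client("athena")

# Count DynamoDB API calls by event name across the trail's logs.
QUERY = """
SELECT eventname, count(*) AS calls
FROM cloudtrail_logs
WHERE eventsource = 'dynamodb.amazonaws.com'
GROUP BY eventname
ORDER BY calls DESC
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://your-athena-results-bucket/"},
)
print("Results will land in S3 under execution id:", execution["QueryExecutionId"])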
Also, you could create a custom metric based on the trail's log data (once you configure CloudWatch Logs for the trail):
Analyzing AWS CloudTrail in Amazon CloudWatch
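Setting up such a custom metric could look roughly like this (a sketch; the log group name is a placeholder for wherever your trail delivers its events):

import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder log group
    filterName="DynamoDBApiCalls",
    # Match any CloudTrail record whose eventSource is DynamoDB.
    filterPattern='{ $.eventSource = "dynamodb.amazonaws.com" }',
    metricTransformations=[
        {
            "metricName": "DynamoDBApiCallCount",
            "metricNamespace": "Custom/DynamoDB",
            "metricValue": "1",
        }
    ],
)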
However, I don't think you can differentiate between API calls made using the CLI, an SDK, or by other means.

Related

Workaround for 2 subscription-filter limit in AWS Cloudwatch Logs

I have several Lambda functions deployed on AWS whose errors I want to monitor directly, so that I can update a PostgreSQL table accordingly.
I have created a Lambda to parse streamed log data and update the DB. I want to set up subscription filters between this Lambda and my other functions' logs.
There are 6 log streams I want to monitor, and the AWS Console limits subscription filters to 2 per log group.
Is there a workaround or a better way to implement this kind of monitoring?
Thanks

Log all requests made to DynamoDB

I would like to debug an issue with DynamoDB.
The provided expression refers to an attribute that does not exist in the item
For that, I'd like to log all requests made to a DynamoDB table from AWS (not from the Lambda code).
I have the RequestId in the error, and I wish to be able to search for it to find the exact request with its parameters.
I have looked into AWS CloudTrail, but it seems to only log management operations, not all the gets and puts done to DynamoDB.
Thanks
You will need to add this level of data plane logging to your application, as currently CloudTrail only supports logging of control plane operations for DynamoDB.
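For illustration, one way to add that in a boto3 application is via botocore's event hooks, so every DynamoDB response is logged with its RequestId and can be matched against the one in the error. A minimal sketch (the table name and item are placeholders):

import logging

import boto3

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("dynamodb-requests")

def log_response(parsed, model, **kwargs):
    # "parsed" is the deserialized response; its metadata carries the RequestId.
    request_id = parsed.get("ResponseMetadata", {}).get("RequestId")
    logger.info("operation=%s request_id=%s", model.name, request_id)

dynamodb = boto3.client("dynamodb")
# "after-call.dynamodb.*" fires once per completed DynamoDB API call.
dynamodb.meta.events.register("after-call.dynamodb.*", log_response)

dynamodb.put_item(
    TableName="sampleTable",           # placeholder table name
    Item={"pk": {"S": "example"}},     # placeholder item
)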

How can I subscribe to cloudwatch metric data?

I am using Elasticsearch to get logs from a CloudWatch log group by subscribing a Lambda to the log group. Whenever there is a log event pushed to the log group, my Lambda is triggered and saves the log to Elasticsearch. Then I can search the logs via a Kibana dashboard.
I'd like to put the metrics data into Elasticsearch as well, but I couldn't find a way to subscribe to metrics data.
You can use the AWS module in Metricbeat from the Elastic Beats family. Note that pulling metrics from CloudWatch will result in chargeable API calls, so you should consider the scraping frequency carefully.
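If you would rather roll your own poller than run Metricbeat, a minimal boto3 sketch of the same idea is below: pull metric data on a schedule and forward it. The EC2 CPUUtilization metric is just an example, the Elasticsearch indexing is left as a stub, and each GetMetricData call is chargeable:

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "cpu",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                },
                "Period": 300,
                "Stat": "Average",
            },
        }
    ],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
)

for result in response["MetricDataResults"]:
    for timestamp, value in zip(result["Timestamps"], result["Values"]):
        # Replace this print with a bulk index call to your Elasticsearch cluster.
        print(result["Id"], timestamp.isoformat(), value)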
Thanks

I need to create alerts based on the results returned by queries in Amazon Athena

I need to create alerts based on the results returned by queries in Amazon Athena. I don't see how I can do that now.
For example -
Schedule a query to be executed once an hour (I am not aware of a way to do this now)
Based on the results of the query (for example, I would be checking the number of transactions in the last hour), I might need to send an alert to someone that something may be wrong (the number of transactions is too low).
I know this is different, but I would do something similar in SQL Server using a SQL Server Agent job.
There is no in-built capability to run Amazon Athena queries on a schedule and send notifications. However, you could configure this using AWS services.
I would recommend:
Create an Amazon SNS topic that will receive notifications
Subscribe recipients to the SNS topic (eg via email, SMS)
Create an Amazon CloudWatch Event that triggers on a cron schedule
Configure the Event to trigger an AWS Lambda function
Write code for the AWS Lambda function (sketched after this list) to:
Run an Amazon Athena query
Compare the result to desired values
If the result is outside desired values, send a message to the Amazon SNS Topic
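A minimal sketch of such a Lambda, assuming placeholder values for the Athena database and table, the S3 results location, the SNS topic ARN, and the threshold:

import time

import boto3

athena = boto3.client("athena")
sns = boto3.client("sns")

QUERY = "SELECT count(*) FROM transactions WHERE ts > now() - interval '1' hour"  # placeholder table/column
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:athena-alerts"  # placeholder
THRESHOLD = 100  # placeholder: minimum expected transactions per hour

def handler(event, context):
    execution = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://your-athena-results-bucket/"},
    )
    query_id = execution["QueryExecutionId"]

    # Wait for the query to finish (a scheduled Lambda can afford to poll).
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    # Row 0 is the header row; row 1 holds the count.
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    count = int(rows[1]["Data"][0]["VarCharValue"])

    if count < THRESHOLD:
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="Transaction count below threshold",
            Message=f"Only {count} transactions in the last hour (threshold {THRESHOLD}).",
        )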

AWS Cloudwatch monitoring for S3

Amazon CloudWatch provides some very useful metrics for monitoring my EC2s, load balancers, ElastiCache and RDS databases, etc., and allows me to set alarms for a whole range of criteria; but is there any way to configure it to monitor my S3s as well? Or are there any other monitoring tools (besides simply enabling logging) that will help me monitor the number of POST/GET requests and data volumes for my S3 resources? And to provide alarms for thresholds of activity or increased data storage?
AWS S3 is a managed storage service. The only metrics available in AWS CloudWatch for S3 are NumberOfObjects and BucketSizeBytes. In order to understand your S3 usage better you need to do some extra work.
I have recently written an AWS Lambda function to do exactly what you ask for and it's available here:
https://github.com/maginetv/s3logs-cloudwatch
It works by parsing S3 server access log files and aggregating/exporting metrics to AWS CloudWatch (CloudWatch allows you to publish custom metrics).
Example graphs that you will get in AWS CloudWatch after deploying this function on your AWS account are:
RestGetObject_RequestCount
RestPutObject_RequestCount
RestHeadObject_RequestCount
BatchDeleteObject_RequestCount
RestPostMultiObjectDelete_RequestCount
RestGetObject_HTTP_2XX_RequestCount
RestGetObject_HTTP_4XX_RequestCount
RestGetObject_HTTP_5XX_RequestCount
+ many others
Since metrics are exported to CloudWatch, you can easily set up alarms for them as well.
CloudFormation template is included in GitHub repo and you can deploy this function very quickly to gain visibility into your S3 bucket usage.
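For illustration, publishing one of these custom metrics with boto3 might look roughly like this (the namespace, dimension, and value are assumptions; in the real function the value would be aggregated from the parsed log lines):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="Custom/S3",  # assumed namespace
    MetricData=[
        {
            "MetricName": "RestGetObject_RequestCount",
            "Dimensions": [{"Name": "BucketName", "Value": "your-bucket"}],
            "Value": 42,  # count aggregated from the parsed log lines
            "Unit": "Count",
        }
    ],
)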
EDIT 2016-12-10:
In November 2016, AWS added extra S3 request metrics in CloudWatch that can be enabled when needed. These include metrics like AllRequests, GetRequests, PutRequests, DeleteRequests, HeadRequests, etc. See the Monitoring Metrics with Amazon CloudWatch documentation for more details about this feature.
I was also unable to find any way to do this with CloudWatch. This question from April 2012 was answered by Derek@AWS as not having S3 support in CloudWatch. https://forums.aws.amazon.com/message.jspa?messageID=338089
The only thing I could think of would be to import the S3 access logs into a log service (like Splunk). Then create a custom CloudWatch metric where you post the data that you parse from the logs. But then you have to filter out the polling of the access logs and…
And while you were at it, you could just create the alarms in Splunk instead.
If your use case is to simply alert when you are using it too much, you could set up an account billing alert for your S3 usage.
I think this might depend on where you are looking to track the access from. I.e., if you are trying to measure/watch usage of S3 objects from outside HTTP/HTTPS requests, then Anthony's suggestion of enabling S3 logging and then importing into Splunk (or Redshift) for analysis might work. You can also watch billing status on requests every day.
If you're trying to gauge usage from within your own applications, there are some AWS SDK CloudWatch metrics:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/metrics/package-summary.html
and
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/metrics/S3ServiceMetric.html
S3 is a managed service, meaning that you don't need to take action based on system events in order to keep it up and running (as long as you can afford to pay for the service's usage). The spirit of CloudWatch is to help with monitoring services that require you to take action in order to keep them running.
For example, EC2 instances (which you manage yourself) typically need monitoring to alert when they're overloaded, underused, or have crashed; at some point action needs to be taken in order to spin up new instances to scale out, spin down unused instances to scale back in, or reboot instances that have crashed. CloudWatch is meant to help you do the job of managing these resources more effectively.
To enable request and data transfer metrics on your bucket, you can run the command below. Be aware that these are paid metrics.
aws s3api put-bucket-metrics-configuration \
--bucket YOUR-BUCKET-NAME \
--id EntireBucket \
--metrics-configuration Id=EntireBucket
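If you prefer doing it from code, the boto3 equivalent is roughly (a sketch; replace the bucket name with your own):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_metrics_configuration(
    Bucket="YOUR-BUCKET-NAME",
    Id="EntireBucket",
    MetricsConfiguration={"Id": "EntireBucket"},
)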
This tutorial describes how to do the same in the AWS Console with a point-and-click interface.