I am looking for a way to monitor any changes that occur to my production environment, such as security group changes, EC2 creates/stops/deletes, database changes, S3 bucket changes, route table changes, subnet changes, etc. I was looking at using CloudTrail for this and monitoring all API calls. However, when testing, my subscribed SNS topic was not receiving any notifications when I made some changes as a test. Curious if anyone else has a workaround for this, or if I am missing something? Maybe Lambda? Just looking for the easiest way to receive email notifications when any changes are made within my prod environment. Thank you.
If you're looking to audit the entire event history of AWS API calls, then you would use CloudTrail, remembering to create a trail and enable the data event options if you want to audit S3 object-level or Lambda API calls.
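For example, a minimal sketch of creating a trail from the CLI; the trail and bucket names here are placeholders, and the bucket needs a policy that allows CloudTrail to write to it:

# Create a trail that records events in all regions.
aws cloudtrail create-trail \
  --name prod-audit-trail \
  --s3-bucket-name my-cloudtrail-logs-bucket \
  --is-multi-region-trail

# Trails don't record anything until logging is started.
aws cloudtrail start-logging --name prod-audit-trail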
By itself CloudTrail only provides auditing, but it can be combined with CloudWatch Events/EventBridge to automate actions based on specific API calls, such as invoking a Lambda function or publishing to an SNS topic.
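A rough sketch of that combination (the rule name, topic ARN, and the deliberately broad event pattern are all assumptions; you'd normally narrow the pattern to specific services or event names):

# Match any API call that CloudTrail delivers to CloudWatch Events/EventBridge...
aws events put-rule \
  --name prod-change-alerts \
  --event-pattern '{"detail-type": ["AWS API Call via CloudTrail"]}'

# ...and publish matching events to an SNS topic (the topic's access policy
# must allow events.amazonaws.com to publish to it).
aws events put-targets \
  --rule prod-change-alerts \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:prod-alerts"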
Regarding your own implementation so far: when using SNS, always ensure the subscription has been confirmed by the subscriber(s) first, otherwise no notifications will be delivered.
In addition, you can use AWS Config with many resources in AWS, which provides two benefits: you can maintain a history of changes to your resources, whilst also being able to configure compliance and remediation rules for them.
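As a hedged example, once a configuration recorder is in place you can add one of AWS's managed Config rules from the CLI (this particular rule flags security groups that allow unrestricted inbound SSH):

# Add a managed rule; AWS evaluates it whenever a security group changes.
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "restricted-ssh",
  "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"}
}'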
Is there a simple way to change or add the recipients of the Instance Retirement notifications?
The owner of the account is getting it, but he's not the one who's supposed to take care of it. We would also like to have multiple recipients (and not just the account's official owner and the operations back-up).
It seems that, aside from adding/removing tags and adding an operations back-up, there's no way to change it, which is odd since almost everything else can be controlled in a simple way.
Thanks in advance!
You can use AWS Health events with Amazon CloudWatch Events to route the events and take whatever actions your use case needs; as described in the doc, you can even automate the workflow for specific actions.
You can use Amazon CloudWatch Events to detect and react to changes for AWS Health events. Then, based on the rules that you create, CloudWatch Events invokes one or more target actions when an event matches the values that you specify in a rule. Depending on the type of event, you can send notifications, capture event information, take corrective action, initiate events, or take other actions. For example, you can use AWS Health to receive email notifications if you have AWS resources in your AWS account that are scheduled for updates, such as Amazon Elastic Compute Cloud (Amazon EC2) instances.
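A minimal sketch of such a rule from the CLI (the rule name and topic ARN are placeholders; aws.health is the source that AWS Health events are published under):

# Route all AWS Health events to an SNS topic for email notification.
aws events put-rule \
  --name health-notifications \
  --event-pattern '{"source": ["aws.health"]}'

aws events put-targets \
  --rule health-notifications \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:123456789012:ops-notifications"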
For example, say I build a workflow that uses 10 Lambda functions that trigger each other and are triggered by a DynamoDB table and an S3 bucket.
Is there any AWS tool that tracks how these triggers are tying together so I can easily visualize the whole workflow I’ve created?
Bang on! A few months ago I too was in a similar situation with my distributed architecture running on AWS.
So far, I have found the following options as possibilities. I'm still figuring out which is more suitable, but I hope this information helps you.
1. AWS-native option :: Engineer your Lambda code to emit CloudWatch custom metrics for any important events from within the code (see the sketch after this list). Later, you can use a CloudWatch dashboard to visualize them.
2. Non-AWS options :: There are several of them, but all of them require you to instrument your code with their respective libraries/packages to transmit the needed information. Some of them support async invocations, so they shouldn't keep your main Lambdas waiting on log tracing.
IOPipe
Epsagon
3. Mix of AWS & Non-AWS :: This is a more traditional approach to our problem. You log events to Cloudwatch Logs (like how Lambda does it out of the box), "ingest" these logs into popular log management and analysis SaaS tooling to make sense between these logs via "pattern-matching" and other proprietary techniques.
Splunk Cloud
Datadog
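For option 1, here is a minimal sketch of emitting a custom metric, shown with the CLI for brevity; inside a Lambda you would make the equivalent PutMetricData call through the SDK, and the namespace, metric, and dimension names below are made up:

# Record that one unit of work passed through a given function.
aws cloudwatch put-metric-data \
  --namespace "MyWorkflow" \
  --metric-name "OrderProcessed" \
  --dimensions FunctionName=step-3-enricher \
  --value 1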
All the best! Keep me posted on how it's going.
cheers,
ram
If you use CloudFormation, you can visualize the resource relations with CloudFormation Designer. However, if you don't have the resources in a CloudFormation stack, you can create one from your existing resources.
We have a team-shared AWS account where things are sometimes hard to debug. In particular, throttling happens regularly for the EMR APIs, so it would be nice to have CloudTrail logs tell us who is not being nice when using EMR. I think our CloudTrail logging is enabled, since I can see these API events with EMR as the event source:
AddJobFlowSteps
RunJobFlow
TerminateJobFlows
I'm pretty sure I'm calling DescribeCluster plenty of times and causing some throttling, but I'm not sure why those calls are not showing up in my CloudTrail logs...
Can someone help me understand:
Is there an additional setting needed for the DescribeCluster EMR API in order to log events to CloudTrail?
And what about other EMR APIs? Can they be configured to log events to CloudTrail, without explicitly writing to CloudTrail from the SDK?
I have read these articles; it feels like much can be done in CloudTrail...
https://docs.aws.amazon.com/emr/latest/ManagementGuide/logging_emr_api_calls.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#logging-management-events-with-the-cloudtrail-console
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html
Appreciate any help!
A quick summary of AWS CloudTrail:
The events recorded by AWS CloudTrail are of two types: management events and data events.
Management events include actions like stopping an instance, deleting a bucket, etc.
Data events are only available for two services (S3 and Lambda) and include actions like: object 'abc.txt' was read from the S3 bucket.
Under management events, we again have 4 types:
Write-only
Read-only
All (both reads and writes)
None
The DescribeCluster event that you are looking for comes under the 'Read-only' management event type.
Please ensure that you have selected the "All" or "ReadOnly" management event type in your CloudTrail trail.
Selecting "WriteOnly" as the management event type in your trail will not record 'DescribeCluster'.
There is no other AWS service-specific setting that you need to enable in CloudTrail.
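For example, a sketch of setting this on an existing trail from the CLI (the trail name is a placeholder):

# Record both read and write management events on the trail.
aws cloudtrail put-event-selectors \
  --trail-name my-trail \
  --event-selectors '[{"ReadWriteType": "All", "IncludeManagementEvents": true}]'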
Also note that the 'Event history' tab in the AWS CloudTrail console records all types of events (including read-only) for a period of 90 days. You can see the DescribeCluster event there too.
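You can also query that 90-day Event history directly from the CLI, for example:

# Look up recent DescribeCluster calls (and who made them) in Event history.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=DescribeCluster \
  --max-results 10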
In our "standard" AWS account, I have a system that does something like this:
CloudWatch Rule (Scheduled Event) -> Lambda function (accesses DynamoDB table, makes computations, writes metrics) -> CloudWatch Alarm (consume metrics, etc.)
However, in our separate CN account, we need to do a similar thing, but in CN, there's no Lambda...
Is there any way we can do something similar to what was done above using the services available in CN? For example, is it possible to create a rule and have it trigger a Lambda function in our "standard/non-CN" AWS account that accesses the other account's DynamoDB table?
I ultimately accomplished this by having the Lambda and the CloudWatch alarm live in the non-CN account, and then having the Lambda access the DynamoDB table across accounts and across regions.
This actually ended up working, though it did involve using IAM user credentials instead of a role, as I would have been able to do had it not been CN.
If anyone is interested in more details on this solution, feel free to comment and I can add more.
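For anyone curious, a rough sketch of the credential setup using the CLI; the profile name, table name, and region are assumptions, and in the Lambda itself you would hand the same user's keys to the SDK client, since (as noted above) a role wasn't an option:

# Store the CN IAM user's keys in a named profile (values elided here).
aws configure set profile.cn-dynamo.aws_access_key_id AKIA...
aws configure set profile.cn-dynamo.aws_secret_access_key ...

# Read from the CN-hosted table across the partition boundary.
aws dynamodb describe-table \
  --table-name my-metrics-table \
  --region cn-north-1 \
  --profile cn-dynamo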
You can mix and match AWS resources between regions. In your code, you need to make sure the regions are correctly configured for those resources.
With respect to the trigger, have the trigger wherever you have your Lambda; that will ease your process.
Hope it helps.
Amazon CloudWatch provides some very useful metrics for monitoring my EC2s, load balancers, ElastiCache and RDS databases, etc., and allows me to set alarms for a whole range of criteria; but is there any way to configure it to monitor my S3 buckets as well? Or are there any other monitoring tools (besides simply enabling logging) that will help me monitor the numbers of POST/GET requests and data volumes for my S3 resources? And provide alarms for thresholds of activity or increased data storage?
AWS S3 is a managed storage service. The only metrics available in AWS CloudWatch for S3 are NumberOfObjects and BucketSizeBytes. To understand your S3 usage better, you need to do some extra work.
I have recently written an AWS Lambda function to do exactly what you ask for and it's available here:
https://github.com/maginetv/s3logs-cloudwatch
It works by parsing S3 server access log files and aggregating/exporting metrics to AWS CloudWatch (CloudWatch allows you to publish custom metrics).
Example graphs that you will get in AWS CloudWatch after deploying this function on your AWS account are:
RestGetObject_RequestCount
RestPutObject_RequestCount
RestHeadObject_RequestCount
BatchDeleteObject_RequestCount
RestPostMultiObjectDelete_RequestCount
RestGetObject_HTTP_2XX_RequestCount
RestGetObject_HTTP_4XX_RequestCount
RestGetObject_HTTP_5XX_RequestCount
+ many others
Since metrics are exported to CloudWatch, you can easily set up alarms for them as well.
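For instance, a sketch of alarming on one of those request counts; the namespace, threshold, and topic ARN are assumptions (use whatever namespace the function publishes under):

# Alarm when GET requests exceed 1000 in a 5-minute window.
aws cloudwatch put-metric-alarm \
  --alarm-name s3-get-spike \
  --namespace "S3-Logs" \
  --metric-name RestGetObject_RequestCount \
  --statistic Sum \
  --period 300 \
  --threshold 1000 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:s3-alerts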
CloudFormation template is included in GitHub repo and you can deploy this function very quickly to gain visibility into your S3 bucket usage.
EDIT 2016-12-10:
In November 2016, AWS added extra S3 request metrics in CloudWatch that can be enabled when needed. These include metrics like AllRequests, GetRequests, PutRequests, DeleteRequests, HeadRequests, etc. See the Monitoring Metrics with Amazon CloudWatch documentation for more details about this feature.
I was also unable to find any way to do this with CloudWatch. This question from April 2012 was answered by Derek#AWS as not having S3 support in CloudWatch. https://forums.aws.amazon.com/message.jspa?messageID=338089
The only thing I could think of would be to import the S3 access logs into a log service (like Splunk). Then create a custom CloudWatch metric where you post the data that you parse from the logs. But then you have to filter out the polling of the access logs and…
And while you were at it, you could just create the alarms in Splunk instead of in CloudWatch.
If your use case is to simply alert when you are using it too much, you could set up an account billing alert for your S3 usage.
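A sketch of such a billing alarm; note that billing metrics live in us-east-1 under the AWS/Billing namespace and require "Receive Billing Alerts" to be enabled in the account's billing preferences, and the threshold and topic ARN here are placeholders:

# Alert when estimated monthly S3 charges pass $50.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name s3-monthly-spend \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=ServiceName,Value=AmazonS3 Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts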
I think this might depend on where you are looking to track the access from. I.e., if you are trying to measure/watch usage of S3 objects from outside HTTP/HTTPS requests, then Anthony's suggestion of enabling S3 logging and then importing into Splunk (or Redshift) for analysis might work. You can also watch billing status on requests every day.
If trying to gauge usage from within your own applications, there are some AWS SDK CloudWatch metrics:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/metrics/package-summary.html
and
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/metrics/S3ServiceMetric.html
S3 is a managed service, meaning that you don't need to take action based on system events in order to keep it up and running (as long as you can afford to pay for the service's usage). The spirit of CloudWatch is to help with monitoring services that require you to take action in order to keep them running.
For example, EC2 instances (which you manage yourself) typically need monitoring to alert when they're overloaded or when they're underused or else when they crash; at some point action needs to be taken in order to spin up new instances to scale out, spin down unused instances to scale back in, or reboot instances that have crashed. CloudWatch is meant to help you do the job of managing these resources more effectively.
To enable request and data transfer metrics on your bucket, you can run the command below. Be aware that these are paid metrics.
aws s3api put-bucket-metrics-configuration \
  --bucket YOUR-BUCKET-NAME \
  --id EntireBucket \
  --metrics-configuration Id=EntireBucket
This tutorial describes how to do it in the AWS Console with a point-and-click interface.