CloudWatch not receiving CloudTrail logs from outside region - amazon-web-services

I am struggling to detect activities performed outside of a given region in CloudWatch. For example, if an InternetGateway is created in the same region as the CloudWatch Events rule (say eu-central-1), the event is detected; if it is created somewhere else (say eu-west-1), the rule won't catch it.
However, CloudTrail does capture the event in the given region (the trail is enabled across regions), as I can see it in the event history of that particular region (eu-west-1 again).
How can I get CloudWatch to act upon what is happening regardless of the region of creation?
Should I create the CloudWatch Events rule in each region, as well as the Lambda function associated with the remediation?
Or is there a way to capture the logs of all regions and deal with them in a singular space?

You should be able to get cross-region CloudTrail logs into a single bucket:
Receiving CloudTrail Log Files from Multiple Regions
You can configure CloudTrail to deliver log files from multiple regions to a single S3 bucket for a single account. For example, you have a trail in the US West (Oregon) Region that is configured to deliver log files to a S3 bucket, and a CloudWatch Logs log group. When you apply the trail to all regions, CloudTrail creates a new trail in all other regions. This trail has the original trail configuration. CloudTrail delivers log files to the same S3 bucket and CloudWatch Logs log group.
from: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
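As a concrete illustration, a multi-region trail that delivers to one bucket and one CloudWatch Logs log group can be created with boto3 along the following lines. This is a minimal sketch: the trail name, bucket, and ARNs are placeholders, and the role must already allow CloudTrail to write to the log group.

import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail, applied to all regions, feeding a single S3 bucket and
# a single CloudWatch Logs log group (all names/ARNs are placeholders).
cloudtrail.create_trail(
    Name="all-regions-trail",
    S3BucketName="my-cloudtrail-bucket",
    IsMultiRegionTrail=True,
    CloudWatchLogsLogGroupArn="arn:aws:logs:eu-central-1:123456789012:log-group:CloudTrail/logs:*",
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrail_CloudWatchLogs_Role",
)
cloudtrail.start_logging(Name="all-regions-trail")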

I had a similar problem with CloudTrail going to CloudWatch Logs. I wanted to receive CloudTrail events for both eu-west-1 and global events from Route 53 (which seem to come from us-east-1) in a CloudWatch Logs stream, so I could add some further monitoring and alerting for our AWS account.
The documentation for this at https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html is quite good and easy to follow, and even mentions:
Note
A trail that applies to all regions sends log files from all regions to the CloudWatch Logs log group that you specify.
However, I could not get this to work. I also tried making the log delivery IAM policy more permissive - the default policy includes the region name in the stream name and I thought this might change for logs from other regions - but this didn't help. Ultimately I could not get anything from outside eu-west-1 to be delivered to CloudWatch Logs, even though events were correctly appearing in the S3 bucket.
I ended up working around this by creating a second, duplicate trail in us-east-1 and delivering logs for that region to CloudWatch Logs in that region as well.
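For reference, that workaround amounts to creating a second, single-region trail pinned to us-east-1. A sketch with placeholder names; the log group and role must be usable from us-east-1:

import boto3

# A second trail in us-east-1 so events surfacing there (including
# global-service events) reach a log group in that region.
ct = boto3.client("cloudtrail", region_name="us-east-1")
ct.create_trail(
    Name="us-east-1-trail",
    S3BucketName="my-cloudtrail-bucket",
    IncludeGlobalServiceEvents=True,
    IsMultiRegionTrail=False,
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:123456789012:log-group:CloudTrail/us-east-1:*",
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrail_CloudWatchLogs_Role",
)
ct.start_logging(Name="us-east-1-trail")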

How to disable AWS Lambda Edge Logs

I'm using CloudFront + Lambda@Edge. Each Lambda invocation creates a CloudWatch Logs entry in the closest AWS region. This results in lots of CloudWatch log streams scattered around the globe in all possible regions.
As the default retention of CloudWatch logs is to never expire, both the data and the number of streams build up quickly.
Locating these logs and setting a reasonable retention is a chore.
Is there a way to disable these logs completely in Lambda@Edge?
If you remove the CloudWatch Logs permissions from your Lambda execution role, the function will stop putting logs there. By default, every Lambda function gets this permission.
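One way to do that without touching the role's existing policies is to attach an explicit Deny for the logging actions, since a Deny overrides any Allow. A sketch with boto3; the role and policy names are placeholders:

import json
import boto3

iam = boto3.client("iam")

# Explicitly deny the three actions Lambda uses to write logs.
deny_logging = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
        ],
        "Resource": "*",
    }],
}

iam.put_role_policy(
    RoleName="my-edge-function-role",      # placeholder execution role
    PolicyName="DenyCloudWatchLogging",
    PolicyDocument=json.dumps(deny_logging),
)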

An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation

I am trying to put CloudWatch Logs into Kinesis Firehose.
I followed the steps here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample
and got this error:
An error occurred (InvalidParameterException) when calling the PutSubscriptionFilter operation: Could not deliver test message to specified Firehose stream. Check if the given Firehose stream is in ACTIVE state.
aws logs put-subscription-filter --log-group-name "xxxx" --filter-name "xxx" --filter-pattern "{$.httpMethod = GET}" --destination-arn "arn:aws:firehose:us-east-1:12345567:deliverystream/xxxxx" --role-arn "arn:aws:iam::12344566:role/xxxxx"
You need to update the trust policy of your IAM role so that it gives the logs.amazonaws.com service principal permission to assume it; otherwise CloudWatch Logs won't be able to assume the role to publish events to your Firehose delivery stream. (Obviously you also need to double-check the permissions on the role to make sure it can read from your log group and write to your delivery stream.)
It would be nice if they added this to the error message to help point people in the right direction...
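For illustration, the trust policy update could look like this with boto3 (the role name is a placeholder; the key part is the logs.amazonaws.com principal):

import json
import boto3

iam = boto3.client("iam")

# Allow the CloudWatch Logs service to assume the delivery role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "logs.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.update_assume_role_policy(
    RoleName="CWLtoFirehoseRole",          # placeholder role name
    PolicyDocument=json.dumps(trust_policy),
)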
The most likely cause of this error is a permissions issue, i.e. something wrong in the definition of the IAM role you passed to --role-arn. You may want to double-check that the role and its permissions were set up properly as described in the docs.
I was getting a similar error when subscribing a CloudWatch log group to publish to a Kinesis stream. CDK was not defining a dependency needed for the SubscriptionFilter to be created after the policy that allows the filtered events to be published to Kinesis. This is reported in this GitHub CDK issue:
https://github.com/aws/aws-cdk/issues/21827
I ended up using the workaround implemented by GitHub user AlexStasko: https://github.com/AlexStasko/aws-subscription-filter-issue/blob/main/lib/app-stack.ts
If your Firehose stream is in ACTIVE status and you can send to the log stream, then the only remaining issue is policy.
I got a similar issue when following the tutorial. The confusing part is the Kinesis piece versus the Firehose piece; it is easy to mix them up. Recheck your ~/PermissionsForCWL.json, particularly this part:
....
"Action": ["firehose:*"],        <-- easy to confuse with kinesis:*, as I did
"Resource": ["arn:aws:firehose:region:123456789012:*"]
....
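Applied programmatically, the corrected permissions policy might look like the following sketch (role/policy names and the stream ARN are placeholders; note the actions are firehose:*, not kinesis:*):

import json
import boto3

iam = boto3.client("iam")

# Grant the delivery role write access to the Firehose delivery stream.
permissions = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
        "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/xxxxx",
    }],
}

iam.put_role_policy(
    RoleName="CWLtoFirehoseRole",          # placeholder role name
    PolicyName="PermissionsForCWL",
    PolicyDocument=json.dumps(permissions),
)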
When I did the tutorial you mentioned, the CLI was defaulting to a different region, so I had to pass --region with my region. It wasn't until I did all the steps in the correct region that it worked.
For me, this issue seemed to occur because of the time it takes for the IAM data plane to settle after new roles are created via regional IAM endpoints in regions that are geographically far from us-east-1.
I have a custom Lambda CloudFormation resource that auto-subscribes all existing and future log groups to a Firehose via a subscription filter. The IAM role for CloudWatch Logs gets deployed, then very quickly the Lambda function tries to subscribe the log groups, and on occasion this error would happen.
I added a time.sleep(30) to my code (this code only runs once at stack creation, so waiting 30 seconds doesn't hurt anything).
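A retry loop with exponential backoff is a slightly more robust alternative to the fixed sleep. A minimal sketch; the filter name and the empty (forward-everything) pattern are assumptions:

import time
import boto3

logs = boto3.client("logs")

def subscribe_with_retry(log_group, firehose_arn, role_arn, attempts=5):
    # IAM role propagation can lag; back off and retry instead of
    # sleeping a fixed 30 seconds.
    for attempt in range(attempts):
        try:
            logs.put_subscription_filter(
                logGroupName=log_group,
                filterName="firehose-subscription",
                filterPattern="",              # forward everything
                destinationArn=firehose_arn,
                roleArn=role_arn,
            )
            return
        except logs.exceptions.InvalidParameterException:
            time.sleep(2 ** attempt)
    raise RuntimeError("subscription filter was never accepted")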

How to enable AWS EMR CloudTrail logging?

We have a team-shared AWS account where things are sometimes hard to debug. In particular, throttling happens regularly on the EMR APIs, so it would be nice to have CloudTrail logs tell people who is not being nice when using EMR. I think our CloudTrail logging is enabled, since I can see these API events with EMR as the event source:
AddJobFlowSteps
RunJobFlow
TerminateJobFlows
I'm pretty sure that I'm calling DescribeCluster plenty of times and have caused some throttling, but I am not sure why those calls are not showing up in my CloudTrail logs.
Can someone help me understand:
Is there an additional setting needed for the DescribeCluster EMR API in order to log events to CloudTrail?
And what about other EMR APIs? Can they be configured to log events to CloudTrail without the SDK explicitly writing to CloudTrail?
I have read these articles; it feels like much can be done in CloudTrail...
https://docs.aws.amazon.com/emr/latest/ManagementGuide/logging_emr_api_calls.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#logging-management-events-with-the-cloudtrail-console
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-services.html
Appreciate any help!
A quick summary of AWS CloudTrail:
The events recorded by AWS CloudTrail are of two types: management events and data events.
Management events include actions like stopping an instance, deleting a bucket, etc.
Data events are only available for two services (S3 and Lambda) and include actions like: object 'abc.txt' was read from the S3 bucket.
Under management events, we again have four types:
Write-only
Read-only
All (both reads and writes)
None
The DescribeCluster event that you are looking for comes under the 'Read-only' management event type (screenshot: DescribeCluster event in the CloudTrail console).
Please ensure that you have selected the "All" or "ReadOnly" management event type in your CloudTrail trail.
Selecting "WriteOnly" as the management event type in your trail will not record DescribeCluster.
There is no other AWS-service-specific setting that you can enable in CloudTrail.
Also note that the 'Event history' tab in the AWS CloudTrail console records all types of events (including read-only ones) for a period of 90 days. You can see the DescribeCluster event there too.
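The same setting can be checked and changed programmatically through the trail's event selectors. A boto3 sketch; the trail name is a placeholder:

import boto3

cloudtrail = boto3.client("cloudtrail")

# Inspect what the trail currently records.
print(cloudtrail.get_event_selectors(TrailName="my-trail")["EventSelectors"])

# Record both reads and writes so read-only calls like DescribeCluster
# are captured.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
    }],
)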

Cloudwatch alert on any instance creation?

I would like to send out alerts and create logs any time an instance is created within an aws account. The instances in the account are mostly static and are rarely changed, so an alert should go off when an unauthorized change is made.
How can I create a CloudWatch alarm that does this?
I can think of 2 options:
Option 1 - You write code
Enable CloudTrail
Have S3 trigger a Lambda function on PutObject (the function gets triggered whenever CloudTrail delivers events)
Write a Lambda function that reads the delivered S3 object, looks for a RunInstances event, and sends a mail including the instance name, instance ID, who launched the instance, etc., using AWS SES (see the sketch after this list)
You pay for CloudTrail+S3 only (SES cost is negligible)
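A minimal sketch of such a Lambda handler, assuming CloudTrail's gzipped JSON log files and an SES-verified sender; the addresses are placeholders:

import gzip
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
ses = boto3.client("ses")

def handler(event, context):
    # Invoked by S3 whenever CloudTrail delivers a new log file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        for r in json.loads(gzip.decompress(body))["Records"]:
            if r.get("eventName") == "RunInstances":
                ses.send_email(
                    Source="alerts@example.com",   # must be SES-verified
                    Destination={"ToAddresses": ["ops@example.com"]},
                    Message={
                        "Subject": {"Data": "EC2 RunInstances detected"},
                        "Body": {"Text": {"Data": json.dumps(r, indent=2)}},
                    },
                )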
Option 2 - Let AWS do everything
Enable CloudTrail
Have CloudTrail logs delivered to CloudWatch
Add an alarm in CloudWatch that sends you an alert via SNS when CloudWatch detects RunInstances (a sketch of the metric filter and alarm follows below)
You pay for CloudTrail+S3+CloudWatch
More info: Sending Events to CloudWatch Logs
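For option 2, the metric filter and alarm could be wired up like this with boto3 (log group name, metric namespace, and SNS topic ARN are placeholders):

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count RunInstances events arriving in the CloudTrail log group.
logs.put_metric_filter(
    logGroupName="CloudTrail/logs",
    filterName="RunInstancesFilter",
    filterPattern='{ $.eventName = "RunInstances" }',
    metricTransformations=[{
        "metricName": "RunInstancesCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alert via SNS whenever the count exceeds zero in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="InstanceLaunched",
    MetricName="RunInstancesCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-central-1:123456789012:ops-alerts"],
)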

AWS Cloudwatch monitoring for S3

Amazon CloudWatch provides some very useful metrics for monitoring my EC2 instances, load balancers, ElastiCache and RDS databases, etc., and allows me to set alarms for a whole range of criteria. But is there any way to configure it to monitor my S3 buckets as well? Or are there any other monitoring tools (besides simply enabling logging) that will help me monitor the number of POST/GET requests and data volumes for my S3 resources, and provide alarms for activity thresholds or increased data storage?
AWS S3 is a managed storage service. The only metrics available in AWS CloudWatch for S3 are NumberOfObjects and BucketSizeBytes. In order to understand your S3 usage better you need to do some extra work.
I have recently written an AWS Lambda function to do exactly what you ask for and it's available here:
https://github.com/maginetv/s3logs-cloudwatch
It works by parsing S3 server access log files and aggregating/exporting the metrics to AWS CloudWatch (CloudWatch allows you to publish custom metrics).
Example graphs that you will get in AWS CloudWatch after deploying this function on your AWS account are:
RestGetObject_RequestCount
RestPutObject_RequestCount
RestHeadObject_RequestCount
BatchDeleteObject_RequestCount
RestPostMultiObjectDelete_RequestCount
RestGetObject_HTTP_2XX_RequestCount
RestGetObject_HTTP_4XX_RequestCount
RestGetObject_HTTP_5XX_RequestCount
+ many others
Since metrics are exported to CloudWatch, you can easily set up alarms for them as well.
CloudFormation template is included in GitHub repo and you can deploy this function very quickly to gain visibility into your S3 bucket usage.
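Once the function is publishing custom metrics, an alarm can be attached to any of them. A sketch; the namespace, threshold, and SNS topic here are assumptions to adapt to your deployment:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on 5XX responses reported by the log-parsing function.
cloudwatch.put_metric_alarm(
    AlarmName="S3-GetObject-5XX",
    Namespace="S3CustomMetrics",           # assumed custom namespace
    MetricName="RestGetObject_HTTP_5XX_RequestCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:s3-alerts"],
)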
EDIT 2016-12-10:
In November 2016, AWS added extra S3 request metrics in CloudWatch that can be enabled when needed. This includes metrics like AllRequests, GetRequests, PutRequests, DeleteRequests, HeadRequests, etc. See the Monitoring Metrics with Amazon CloudWatch documentation for more details about this feature.
I was also unable to find any way to do this with CloudWatch. This question from April 2012 was answered by Derek@AWS, saying S3 was not supported in CloudWatch: https://forums.aws.amazon.com/message.jspa?messageID=338089
The only thing I could think of would be to import the S3 access logs into a log service (like Splunk), then create a custom CloudWatch metric and post the data you parse from the logs to it. But then you have to filter out the polling of the access logs and…
And while you were at it, you could just create the alarms in Splunk instead of in CloudWatch.
If your use case is to simply alert when you are using it too much, you could set up an account billing alert for your S3 usage.
I think this might depend on where you are looking to track the access from. I.e., if you are trying to measure/watch usage of S3 objects from outside HTTP/HTTPS requests, then Anthony's suggestion of enabling S3 logging and then importing the logs into Splunk (or Redshift) for analysis might work. You can also watch the billing status of requests every day.
If you are trying to gauge usage from within your own applications, there are some AWS SDK CloudWatch metrics:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/metrics/package-summary.html
and
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/metrics/S3ServiceMetric.html
S3 is a managed service, meaning that you don't need to take action based on system events in order to keep it up and running (as long as you can afford to pay for the service's usage). The spirit of CloudWatch is to help with monitoring services that require you to take action in order to keep them running.
For example, EC2 instances (which you manage yourself) typically need monitoring to alert when they're overloaded or when they're underused or else when they crash; at some point action needs to be taken in order to spin up new instances to scale out, spin down unused instances to scale back in, or reboot instances that have crashed. CloudWatch is meant to help you do the job of managing these resources more effectively.
To enable request and data-transfer metrics on your bucket, you can run the command below. Be aware that these are paid metrics.
aws s3api put-bucket-metrics-configuration \
    --bucket YOUR-BUCKET-NAME \
    --id EntireBucket \
    --metrics-configuration Id=EntireBucket
This tutorial describes how to do it in the AWS Console with a point-and-click interface.