Here is the documentation about creating a CloudWatch LogGroup via CloudFormation. It says:
RetentionInDays
The number of days log events are kept in CloudWatch Logs. When a log event expires, CloudWatch Logs automatically deletes it. For valid values, see PutRetentionPolicy in the Amazon CloudWatch Logs API Reference.
Required: No
So if I create a LogGroup without the RetentionInDays parameter, will CloudWatch keep those logs forever? Or what RetentionInDays value does it use by default?
By default, log data is stored in CloudWatch Logs indefinitely. However, you can configure how long to store log data in a log group. Any data older than the current retention setting is automatically deleted. You can change the log retention for each log group at any time.
Source: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html
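If it helps, here is a minimal boto3 sketch (the log group prefix is a placeholder) showing what the API returns for a log group created without RetentionInDays: the retentionInDays field is simply absent, which CloudWatch Logs treats as "never expire".

```python
import boto3

logs = boto3.client("logs")

# "/my-app/" is a placeholder prefix; a group created without
# RetentionInDays has no 'retentionInDays' key in the response.
resp = logs.describe_log_groups(logGroupNamePrefix="/my-app/")
for group in resp["logGroups"]:
    print(group["logGroupName"], group.get("retentionInDays", "never expires"))
```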
I have my app writing logs to /var/log/my_app.log. I have logrotate set up to rotate the log daily, so presumably when the rotate condition is met it will copy my_app.log over to my_app<date>.log. I also have the CloudWatch agent on the same EC2 instance sending files over to CloudWatch Logs, where they will stay indefinitely, I assume (or until a retention time set in the AWS console). Is it correct to assume that CloudWatch will always have the first log created and logged, regardless of how I rotate the actual log files on the EC2 instance? That is to say, no matter what happens with the rotated logs, will I always have ALL the logs that have been created, because they've been sent to CloudWatch?
Any logs that are sent to CloudWatch will not be deleted because of log rotation. Check out the FAQ section at the following link; it answers some important questions, including the log rotation naming schemes and the scenarios in which log events can be truncated or skipped.
(Search for CloudWatch Logs Agent FAQs in the following link)
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Your assumption is correct on the log retention. CloudWatch logs are stored indefinitely by default.
Here is the quote from the Amazon documentation:
Log Retention – By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention period between 10 years and one day.
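If you later decide you do want a cap, a small boto3 sketch like the following (the log group name is a placeholder) sets a per-log-group retention; valid values are the fixed set documented for PutRetentionPolicy.

```python
import boto3

logs = boto3.client("logs")

# Placeholder log group name; 30 is one of the values accepted by
# PutRetentionPolicy (e.g. 1, 7, 14, 30, 90, 365, 3653 days).
logs.put_retention_policy(
    logGroupName="/var/log/my_app.log",
    retentionInDays=30,
)
```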
I am struggling with detecting activities performed outside of a given region in CloudWatch. For example, if an InternetGateway is created in the same region as the CloudWatch Events rule (let's say eu-central-1), it is detected by CloudWatch; however, if it is created somewhere else (let's say eu-west-1), the rule won't catch the event.
However, CloudTrail does capture the event in the given region (it is activated across regions), as I can see it in the event history of that particular region (let's say eu-west-1 again).
How can I get CloudWatch to act upon what is happening regardless of the region of creation?
Should I create the CloudWatch Event in each region, as well as the lambda function associated with the remediation?
Or is there a way to capture the logs of all regions and deal with them in a singular space?
You should be able to get cross-region CloudTrail logs into a single bucket:
Receiving CloudTrail Log Files from Multiple Regions
You can configure CloudTrail to deliver log files from multiple regions to a single S3 bucket for a single account. For example, you have a trail in the US West (Oregon) Region that is configured to deliver log files to a S3 bucket, and a CloudWatch Logs log group. When you apply the trail to all regions, CloudTrail creates a new trail in all other regions. This trail has the original trail configuration. CloudTrail delivers log files to the same S3 bucket and CloudWatch Logs log group.
from: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
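As a rough illustration (the trail and bucket names are placeholders, and the bucket must already carry the standard CloudTrail bucket policy), a multi-region trail can be created with boto3 like this:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder names; IsMultiRegionTrail=True applies the trail to all regions,
# so events from every region land in the same S3 bucket.
cloudtrail.create_trail(
    Name="all-regions-trail",
    S3BucketName="my-cloudtrail-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="all-regions-trail")
```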
I have a similar problem with CloudTrail going to CloudWatch Logs. I wanted to receive CloudTrail events for both eu-west-1 and global events for Route 53 (which seem to come from us-east-1) into a CloudWatch Logs stream so I could add some further monitoring and alerting for our AWS account.
The documentation for this at https://docs.aws.amazon.com/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html is quite good and easy to follow, and even mentions:
Note
A trail that applies to all regions sends log files from all regions to the CloudWatch Logs log group that you specify.
However, I could not get this to work. I also tried making the log delivery IAM policy more permissive - the default policy includes the region name in the stream name and I thought this might change for logs from other regions - but this didn't help. Ultimately I could not get anything from outside eu-west-1 to be delivered to CloudWatch Logs, even though events were correctly appearing in the S3 bucket.
I ended up working around this by creating a second, duplicate trail in us-east-1 and delivering logs for that region to CloudWatch Logs, also in that region.
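For reference, that workaround amounts to something like this boto3 sketch (the ARNs and names are placeholders; the log group and delivery role must already exist in us-east-1):

```python
import boto3

# Pin the client to us-east-1 so the trail and its CloudWatch Logs delivery
# both live in that region.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

cloudtrail.create_trail(
    Name="us-east-1-trail",
    S3BucketName="my-cloudtrail-bucket",
    # Placeholder ARNs for a pre-created log group and delivery role.
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:123456789012:log-group:CloudTrail/us-east-1:*",
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/CloudTrail_CWLogs_Delivery",
)
cloudtrail.start_logging(Name="us-east-1-trail")
```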
The AWS documentation indicates that multiple log event records are provided to Lambda when streaming logs from CloudWatch.
logEvents
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.
How does CloudWatch group these logs?
Time? Count? Randomly, from my perspective?
Currently you get one Lambda invocation for every PutLogEvents batch that CloudWatch Logs received against that log group. However, you should probably not rely on that, because AWS could always change the behavior (for example, by batching more events together).
You can observe this behavior by running the CWL -> Lambda example in the AWS docs.
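If you want to look at the batches yourself, a minimal handler along these lines (standard library only) decodes the subscription payload, which arrives gzipped and base64-encoded, and prints the logEvents array it contains:

```python
import base64
import gzip
import json

def handler(event, context):
    # The subscription payload is base64-encoded, gzipped JSON.
    payload = base64.b64decode(event["awslogs"]["data"])
    data = json.loads(gzip.decompress(payload))

    # One invocation typically corresponds to one PutLogEvents batch.
    print(data["logGroup"], data["logStream"], len(data["logEvents"]))
    for log_event in data["logEvents"]:
        print(log_event["id"], log_event["timestamp"], log_event["message"])
```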
Some AWS services allow you to configure the log interval, such as Elastic Load Balancing, which offers a choice between five-minute and sixty-minute log intervals. You may not see a specific increment or parameter in the docs because it is configured per service.
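For the Elastic Load Balancing case, the five/sixty-minute choice is the access-log EmitInterval on a Classic Load Balancer; a sketch with boto3, where the load balancer and bucket names are placeholders:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Placeholder names; EmitInterval accepts 5 or 60 (minutes).
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",
            "EmitInterval": 60,
        }
    },
)
```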
I have an application publishing a custom cloudwatch metric using boto's put_metric_data. The metric shows the number of tasks waiting in a redis queue.
The 1-minute max shows '3', 1-minute min shows '0' and 1-minute average shows '1.5'.
It seems that the application is correctly setting the value to zero, but some other process is overwriting it with 3 at the same time, and I can't find that process in order to stop it.
Is it possible to see logs for PutMetricData to diagnose where this value might be coming from?
Normally, Amazon CloudTrail would be the ideal way to discover information about API calls being made to your AWS account. Unfortunately, PutMetricData is not captured in Amazon CloudTrail.
From Logging Amazon CloudWatch API Calls in AWS CloudTrail:
The CloudWatch GetMetricStatistics, ListMetrics, and PutMetricData API actions are not supported.
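Since CloudTrail won't show the calls, one diagnostic that may help is pulling SampleCount for the metric: a value above 1 in a one-minute period means more than one PutMetricData data point landed in that minute, i.e. a second publisher exists. A sketch, assuming a namespace MyApp and metric QueueDepth (both placeholders):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

resp = cloudwatch.get_metric_statistics(
    Namespace="MyApp",            # placeholder namespace
    MetricName="QueueDepth",      # placeholder metric name
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=["SampleCount", "Minimum", "Maximum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    # SampleCount > 1 means multiple data points were published in that minute.
    print(point["Timestamp"], point["SampleCount"], point["Minimum"], point["Maximum"])
```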
Is there any expiry date for log files generated by an EC2 instance in CloudWatch Logs using the CloudWatch Logs Agent?
By default, log data is stored indefinitely. However, you can configure how long you want to store log data in a log group. Any data older than the current retention setting is automatically deleted. You can change the log retention for each log group at any time.
For more information, see Changing Log Retention.
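As a quick sketch (the log group name is a placeholder), retention can be set or removed again at any time with boto3:

```python
import boto3

logs = boto3.client("logs")

# Placeholder log group name created by the CloudWatch Logs agent.
logs.put_retention_policy(logGroupName="/var/log/messages", retentionInDays=14)

# Removing the policy returns the group to "never expire".
logs.delete_retention_policy(logGroupName="/var/log/messages")
```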