Once logs are sent to CloudWatch using the CloudWatch client, we want to clean up the disk. We have the following two use cases:
1. We don't log in to some of the servers and don't need to keep the logs on disk, so cleanup can happen immediately after the logs are sent to the server.
2. On some servers, we want to keep logs for the last N days, after which they need to be deleted.
The CloudWatch Logs agent is compatible with logrotate; just make sure that you use one of the supported rotation patterns.
See: CloudWatch Logs Agent Reference - Amazon CloudWatch Logs
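For the second use case (keep the last N days on disk, then delete), a plain logrotate rule on the file the agent is tailing is usually all you need; copy-and-truncate is one of the rotation mechanisms the agent FAQ lists as supported. A minimal sketch, assuming a hypothetical log file /var/log/my_app.log and a 7-day window:

    # /etc/logrotate.d/my_app  (hypothetical file name and log path)
    # Rotate daily, keep the last 7 rotated files, and truncate the live file
    # in place (copytruncate) so the agent keeps tailing the same file.
    /var/log/my_app.log {
        daily
        rotate 7
        copytruncate
        compress
        missingok
        notifempty
    }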
I have my app writing logs to /var/log/my_app.log. I have logrotate set up to rotate the log daily, so presumably when the rotation condition is met it will copy my_app.log over to my_app<date>.log. I also have the CloudWatch agent on the same EC2 instance sending files over to CloudWatch Logs, where they will stay indefinitely, I assume (or until a retention period is set in the AWS console). Is it correct to assume that CloudWatch will always have the first log created and logged regardless of how I rotate the actual log files on the EC2 instance? That is to say, no matter what happens with the rotated logs, I will always have ALL the logs that have been created, because they've been sent to CloudWatch?
Any logs that are sent to CloudWatch will not be deleted because of log rotation. Check out the FAQ section at the link below; it answers some important questions, including which log rotation naming schemes are supported and the scenarios in which log events can be truncated or skipped.
(Search for CloudWatch Logs Agent FAQs in the following link)
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Your assumption about log retention is correct: CloudWatch logs are stored indefinitely by default.
Here is the relevant quote from the Amazon documentation:
Log Retention – By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention period between one day and 10 years.
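If you do want old log data to expire, the retention can also be set programmatically, for example with boto3's put_retention_policy. A minimal sketch, assuming a hypothetical log group name:

    import boto3

    logs = boto3.client("logs")

    # Expire events in this (hypothetical) log group after 30 days;
    # CloudWatch Logs deletes older events automatically.
    logs.put_retention_policy(
        logGroupName="my_app_logs",
        retentionInDays=30,
    )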
I have an on-premises app deployed in an application server (e.g. Tomcat) and it generates its own log file. If I decide to migrate this to an AWS EC2 instance, including the application server, is it possible to ship my application logs to CloudWatch as well? Or is CloudWatch only capable of capturing the runtime logs of my application server? Is it a lot of work to do this, or is it even possible?
I'm kind of confused about CloudWatch. It seems it can do multiple things, but is it really right to make it do all of that? It's only supposed to record metrics, right, so it can alert whatever or whoever needs to be alerted?
If you have an already developed application that produces its own log files, you can use the CloudWatch Logs agent to ingest the logs into CloudWatch Logs:
After installation is complete, logs automatically flow from the instance to the log stream you create while installing the agent. The agent confirms that it has started and it stays running until you disable it.
Metrics, such as RAM usage and disk space, can also be collected and pushed to CloudWatch through the agent.
In both cases, logs and metrics, you can set up CloudWatch Alarms to automatically detect anomalies and notify you, or perform other actions, when they occur. For logs, this is done through metric filters:
You can search and filter the log data coming into CloudWatch Logs by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.
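As a rough illustration of how a metric filter is created programmatically (every name below is made up for the example), a boto3 sketch:

    import boto3

    logs = boto3.client("logs")

    # Count occurrences of the term "ERROR" in a (hypothetical) log group and
    # publish the count as a custom CloudWatch metric you can alarm on.
    logs.put_metric_filter(
        logGroupName="my_app_logs",              # hypothetical log group
        filterName="ErrorCount",                 # hypothetical filter name
        filterPattern="ERROR",                   # simple term pattern
        metricTransformations=[{
            "metricName": "MyAppErrorCount",     # hypothetical metric name
            "metricNamespace": "MyApp",          # hypothetical namespace
            "metricValue": "1",                  # add 1 for every matching event
        }],
    )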
Update
You can also have your application write logs directly to CloudWatch Logs using the AWS SDK. For example, in Python you can use put_log_events.
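A minimal sketch of that approach (the group and stream names are hypothetical, and error handling is reduced to the bare minimum):

    import time
    import boto3

    logs = boto3.client("logs")

    group = "my_app_logs"     # hypothetical log group
    stream = "instance-1"     # hypothetical log stream

    # Create the group and stream once; ignore the error if they already exist.
    for call, kwargs in (
        (logs.create_log_group, {"logGroupName": group}),
        (logs.create_log_stream, {"logGroupName": group, "logStreamName": stream}),
    ):
        try:
            call(**kwargs)
        except logs.exceptions.ResourceAlreadyExistsException:
            pass

    # Push a single log event; timestamps are milliseconds since the epoch.
    logs.put_log_events(
        logGroupName=group,
        logStreamName=stream,
        logEvents=[{
            "timestamp": int(time.time() * 1000),
            "message": "application started",
        }],
    )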
I have a serverless architecture with a few AWS Lambdas up and running, sending logs to CloudWatch right now.
Question: Is there any option to avoid sending logs to CloudWatch and redirect them to another tool?
Example: catch all logs from stdout, avoid sending them to CloudWatch (so, of course, I don't have to pay for CloudWatch storage), and send all these logs to an external tool such as New Relic, Splunk, etc.?
Thank you very much for your help!
You can do this trick by removing the CloudWatch Logs permissions (logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents) from your Lambda function's execution role; without them, the Lambda service cannot write your function's output to CloudWatch Logs.
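If you prefer an explicit deny over simply not granting the permissions, one way to sketch it (the role and policy names are hypothetical) is to attach an inline deny policy to the execution role:

    import json
    import boto3

    iam = boto3.client("iam")

    # Explicitly deny the CloudWatch Logs write actions on the function's
    # execution role so the Lambda service cannot create streams or put events.
    deny_logs_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "*",
        }],
    }

    iam.put_role_policy(
        RoleName="my-lambda-execution-role",   # hypothetical role name
        PolicyName="DenyCloudWatchLogs",       # hypothetical policy name
        PolicyDocument=json.dumps(deny_logs_policy),
    )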
I have recently started learning about AWS CloudWatch and I want to understand the concept of creating logs, so I went through a lot of links like
https://aws.amazon.com/answers/logging/centralized-logging/
I understand that we can create log groups and that logs are basically for tracking activity, but is there anything more to it? When do the logs get created?
Any help would be highly appreciated!
You can get more details about log groups and CloudWatch Logs concepts here.
The following is an extract from that page:
Log Events
A log event is a record of some activity recorded by the application or resource being monitored. The log event record that CloudWatch Logs understands contains two properties: the timestamp of when the event occurred, and the raw event message. Event messages must be UTF-8 encoded.
Log Streams
A log stream is a sequence of log events that share the same source. More specifically, a log stream is generally intended to represent the sequence of events coming from the application instance or resource being monitored. For example, a log stream may be associated with an Apache access log on a specific host. When you no longer need a log stream, you can delete it using the aws logs delete-log-stream command. In addition, AWS may delete empty log streams that are over 2 months old.
Log Groups
Log groups define groups of log streams that share the same retention, monitoring, and access control settings. Each log stream has to belong to one log group. For example, if you have a separate log stream for the Apache access logs from each host, you could group those log streams into a single log group called MyWebsite.com/Apache/access_log.
And to answer your question "When do the logs get created?": that depends entirely on your application. However, whenever they are created, they get streamed to CloudWatch log streams (provided you have installed the CloudWatch agent and configured it to stream that particular log).
The advantage of using CloudWatch is that you retain logs even after your EC2 instance is terminated, and you don't need to SSH into the instance to check the logs; you can simply view them in the AWS Console.
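If it helps to see the group/stream hierarchy concretely, here is a small read-only boto3 sketch that lists your log groups and, for each one, a few of its streams:

    import boto3

    logs = boto3.client("logs")

    # Walk the log group -> log stream hierarchy described above.
    for page in logs.get_paginator("describe_log_groups").paginate():
        for group in page["logGroups"]:
            print(group["logGroupName"])
            streams = logs.describe_log_streams(
                logGroupName=group["logGroupName"], limit=5
            )
            for stream in streams["logStreams"]:
                print("   ", stream["logStreamName"])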
Environment – two different EC2 instances, each running Tomcat separately.
Requirement – if there is any error in the logs, we should get an alert.
Implementation –
We implemented AWS custom logging for this, which is successfully sending alerts on the error pattern matching.
It automatically created a log group – "/opt/tomcat/logs/catalina.out".
Under this log group there are two log streams, with the two instances showing separately.
Problem –
Now I want separate alarms for separate instances.
The problem is that when I create an alarm, it does not let me choose the instance. It takes both instances by default, which means one alarm monitors both instances simultaneously and sends alerts without mentioning the instance name, so it is difficult to tell which instance actually triggered the alert.
And the second problem is that we created a few log metrics for testing – for example on the keyword "info" – which we want to delete but are not able to.
It appears that you are using the CloudWatch Logs functionality that permits automated sending of log files from an EC2 instance (or elsewhere) to the CloudWatch service. CloudWatch Logs can then be configured to look for strings in the log files, which will trigger the recording of metrics.
To create separate alarms for separate instances, each EC2 instance should be configured to send its logs to a different CloudWatch log group, since metric filters (and therefore alarms) are defined at the log group level. The CloudWatch Logs agent configuration takes a destination log group name, so give each instance its own.
See: Quick Start: Install and Configure the CloudWatch Logs Agent on an Existing EC2 Instance
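As a rough sketch of what one-alarm-per-instance can look like once each instance writes to its own log group (all of the names below are hypothetical, the per-instance log groups are assumed to already exist, and the SNS topic for notifications is assumed to be set up):

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    instances = ["tomcat-instance-1", "tomcat-instance-2"]          # hypothetical names
    sns_topic_arn = "arn:aws:sns:us-east-1:123456789012:alerts"     # hypothetical topic

    for name in instances:
        group = f"/tomcat/{name}/catalina.out"   # hypothetical per-instance log group

        # Turn "ERROR" lines from this instance's log group into a per-instance metric.
        logs.put_metric_filter(
            logGroupName=group,
            filterName=f"{name}-errors",
            filterPattern="ERROR",
            metricTransformations=[{
                "metricName": f"{name}-ErrorCount",
                "metricNamespace": "Tomcat/Errors",
                "metricValue": "1",
            }],
        )

        # Alarm on that metric, so the alarm (and its notification) names the instance.
        cloudwatch.put_metric_alarm(
            AlarmName=f"{name}-error-alarm",
            Namespace="Tomcat/Errors",
            MetricName=f"{name}-ErrorCount",
            Statistic="Sum",
            Period=300,
            EvaluationPeriods=1,
            Threshold=0,
            ComparisonOperator="GreaterThanThreshold",
            TreatMissingData="notBreaching",
            AlarmActions=[sns_topic_arn],
        )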
As for the metrics that you wish to delete, it is not possible to delete metric data from Amazon CloudWatch; the data points simply expire once the metric retention period has passed. You can, however, delete the metric filter itself so that no new data points are published.