I want to send log data from my EC2 instance (Ubuntu) to AWS EventBridge, where I can then send it on to multiple endpoints. For example, if someone performs a root user operation on the server, this is written to /var/log/auth.log; I would then like this change in the log to be sent to EventBridge, where it can be routed to other locations. How can I achieve this?
cheers
N.B.
I have tried using the CloudWatch agent, but I can't figure out how to get the logs to EventBridge once they're in a log group, so if there is a way to do this, that would also work.
Once the CloudWatch agent writes the relevant logs to CloudWatch Logs, you can set up a subscription filter on your log group.
The filter streams the log entries of interest (e.g. those that contain "ssh") to a Lambda function. How to set it up is shown in:
Example 2: Subscription Filters with AWS Lambda
The Lambda, using the events API (e.g. put_events in boto3), can then process the log stream, filter out messages, construct events, and publish them to EventBridge.
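As a rough sketch of that Lambda (not a definitive implementation), assuming the default event bus and an illustrative source/detail-type of your own choosing:

```python
# Minimal sketch: decode the CloudWatch Logs subscription payload and
# forward matching entries to EventBridge. The source and detail-type
# names below are illustrative, not prescribed.
import base64
import gzip
import json

import boto3

events = boto3.client("events")

def handler(event, context):
    # CloudWatch Logs delivers the payload base64-encoded and gzip-compressed.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    entries = []
    for log_event in payload["logEvents"]:
        message = log_event["message"]
        if "sudo" in message or "root" in message:  # your own filtering logic
            entries.append({
                "Source": "custom.auth-log",      # hypothetical source name
                "DetailType": "AuthLogEntry",     # hypothetical detail-type
                "Detail": json.dumps({
                    "logGroup": payload["logGroup"],
                    "logStream": payload["logStream"],
                    "message": message,
                }),
                "EventBusName": "default",
            })

    # put_events accepts at most 10 entries per call.
    for i in range(0, len(entries), 10):
        events.put_events(Entries=entries[i:i + 10])
```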
Related
I am using Elasticsearch to get logs from a CloudWatch log group by subscribing a Lambda to the log group. So whenever a log event is pushed to the log group, my Lambda is triggered and saves the log to Elasticsearch. Then I can search the logs via a Kibana dashboard.
I'd like to put the metrics data into Elasticsearch as well, but I couldn't find a way to subscribe to metrics data.
You can use the AWS module in Metricbeat, from the Elastic Beats family. Note that pulling metrics from CloudWatch results in chargeable API calls, so you should choose the scraping frequency carefully.
Thanks
I want to create a CloudWatch rule that is triggered upon the creation of a log event. For that reason I selected the CloudWatch Logs service as the event pattern, but when I try to generate some CloudWatch logs the rule is not triggered. I cannot find any example of using aws.logs as a source for an event, hence my question: am I doing something wrong?
This is because the only events available for CloudWatch Logs are AWS API Call via CloudTrail. CloudWatch Logs does not generate CloudWatch Events on receiving new log entries.
For the Logs API call events to work, you need to set up a CloudTrail trail.
However, if you want to trigger your Lambda function based on log entries, I can recommend using subscription filters for Lambda:
You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems.
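For illustration, a minimal boto3 sketch of creating such a subscription filter and granting CloudWatch Logs permission to invoke the function; the log group name, function ARN, account ID, and region are placeholders:

```python
# Sketch of wiring a log group to a Lambda with a subscription filter.
# All names and ARNs below are placeholders.
import boto3

logs = boto3.client("logs")
lam = boto3.client("lambda")

log_group = "/var/log/auth"  # hypothetical log group name
function_arn = "arn:aws:lambda:eu-west-1:123456789012:function:auth-log-handler"

# Allow CloudWatch Logs to invoke the function.
lam.add_permission(
    FunctionName=function_arn,
    StatementId="cloudwatch-logs-invoke",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:eu-west-1:123456789012:log-group:{log_group}:*",
)

# Stream only the entries of interest (e.g. containing "ssh") to the Lambda.
logs.put_subscription_filter(
    logGroupName=log_group,
    filterName="ssh-events",
    filterPattern="ssh",
    destinationArn=function_arn,
)
```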
We are using a CloudWatch metric filter, have set up an alarm, and send notifications through SNS (email).
We are wondering if it is possible to also see the logs that triggered the alarm in the email, or is it possible to do so with the help of a custom Lambda function?
Thanks
You could try the following setup:
CloudWatch Alarm
SNS-log-reader with a Lambda function as a subscriber
SNS-send-mails with e-mail addresses as subscribers
So the logic would be: the alarm triggers SNS-log-reader. The Lambda subscribed to SNS-log-reader can then read the logs from the log group, build the message in the needed format, and send it to the e-mail address(es) via SNS-send-mails (or Amazon Simple Email Service (SES)).
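A rough sketch of what the SNS-log-reader Lambda could look like, assuming placeholder names for the log group, filter pattern, and outgoing topic:

```python
# On an alarm notification, pull the recent matching log events and forward
# them by e-mail via a second SNS topic. Topic ARN, log group, and filter
# pattern are placeholders.
import json
import time

import boto3

logs = boto3.client("logs")
sns = boto3.client("sns")

LOG_GROUP = "/my/app/log-group"                                    # hypothetical
SEND_TOPIC = "arn:aws:sns:eu-west-1:123456789012:SNS-send-mails"   # hypothetical

def handler(event, context):
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])

    # Look at the last 5 minutes of logs matching the same pattern as the metric filter.
    now = int(time.time() * 1000)
    resp = logs.filter_log_events(
        logGroupName=LOG_GROUP,
        filterPattern="ERROR",          # same pattern as the metric filter
        startTime=now - 5 * 60 * 1000,
        endTime=now,
    )

    body = "\n".join(e["message"] for e in resp.get("events", []))
    sns.publish(
        TopicArn=SEND_TOPIC,
        Subject=f"Logs for alarm {alarm.get('AlarmName', 'unknown')}",
        Message=body or "No matching log events found.",
    )
```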
AWS CloudWatch has log groups and log streams. A log group seems reasonable to me: each product (e.g. each Lambda function, each SageMaker endpoint) has its own log group.
But then there are log streams. When does AWS CloudWatch create new log streams? Can I search all log streams of a log group?
From the AWS CloudWatch documentation you can see that a log stream is created each time logs come from a different event source. In the case of Lambda, it's one stream per Lambda container, where each container might process multiple events.
A log stream is a sequence of log events that share the same source. Each separate source of logs into CloudWatch Logs makes up a separate log stream.
Yes, you can search all log streams of a log group using the CloudWatch Logs API. The FilterLogEvents action allows you to search through a log group.
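For example, a hedged boto3 sketch of FilterLogEvents across every stream in a group; the log group name and filter pattern are placeholders:

```python
# Searching every stream in a log group for a pattern via FilterLogEvents.
import boto3

logs = boto3.client("logs")
paginator = logs.get_paginator("filter_log_events")

# Omitting logStreamNames makes CloudWatch Logs search all streams in the group.
for page in paginator.paginate(
    logGroupName="/aws/lambda/my-function",   # hypothetical log group
    filterPattern="Task timed out",           # hypothetical search pattern
):
    for event in page["events"]:
        print(event["logStreamName"], event["timestamp"], event["message"])
```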
I need to receive notifications whenever my instance is terminated. I know it can be done with CloudTrail and then using SNS and SQS to get an email when the termination event is received.
Is there a simpler way to do that?
Any solution is appreciated, but I'd prefer one using boto.
While it is not possible to receive a notification directly from Amazon EC2 when an instance is terminated, there are a couple of ways this could be accomplished:
Auto Scaling can send a notification when an instance managed by Auto Scaling is terminated. See: Configure Your Auto Scaling Group to Send Notifications
AWS Config can also be configured to send a Simple Notification Service (SNS) notification when resources change. This would send many notifications, so you would need to inspect and filter the notifications to find the one(s) indicating an instance termination. See the SNS reference in: Set Up AWS Config Using the Console and Example Amazon SNS Notification and Email from AWS Config.
Amazon Simple Notification Service (SNS) can also push a message to Amazon Simple Queue Service (SQS), which can be easily polled with the boto Python SDK (see the sketch below).
Receiving notifications via CloudTrail and CloudWatch Logs is somewhat messier, so I'd recommend the AWS Config method.
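For the SQS option, here is a minimal polling sketch using boto3 (the current Python SDK); the queue URL is a placeholder and the queue is assumed to be subscribed to the SNS topic:

```python
# Poll an SQS queue that is subscribed to the SNS topic.
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/instance-events"  # hypothetical

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,   # long polling
    )
    for msg in resp.get("Messages", []):
        # SNS wraps the original notification; the payload is in "Message".
        body = json.loads(msg["Body"])
        print("Notification:", body.get("Message"))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```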
AWS has since introduced Rules under Events in Amazon CloudWatch. In your case, you can select EC2 as the event source and SNS or SQS as the target.
https://aws.amazon.com/blogs/aws/new-cloudwatch-events-track-and-respond-to-changes-to-your-aws-resources/
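As an illustration of that approach, a boto3 sketch that creates such a rule for the terminated state and points it at an SNS topic; the rule name and topic ARN are placeholders, and the topic's access policy must additionally allow events.amazonaws.com to publish to it:

```python
# CloudWatch Events rule: fire when an EC2 instance enters "terminated"
# and send the event to an SNS topic. Names and ARNs are placeholders.
import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="ec2-terminated",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["terminated"]},
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="ec2-terminated",
    Targets=[{
        "Id": "notify-sns",
        "Arn": "arn:aws:sns:eu-west-1:123456789012:instance-terminated",  # hypothetical
    }],
)
```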
According to the AWS doc Spot Instance Interruptions, it is possible to poll the instance metadata in order to get an approximation of the termination time. You can build any custom monitoring solution around that.
> curl http://169.254.169.254/latest/meta-data/spot/instance-action
{"action": "stop", "time": "2017-09-18T08:22:00Z"}
If the instance is not scheduled for interruption, an HTTP 404 is returned.
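As a rough illustration of such a custom monitor, the sketch below polls that metadata path with the requests library; it assumes IMDSv1-style access (IMDSv2 would additionally require a session token):

```python
# Poll the spot interruption notice from the instance metadata endpoint.
import time

import requests

URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

while True:
    resp = requests.get(URL, timeout=2)
    if resp.status_code == 200:
        notice = resp.json()  # e.g. {"action": "stop", "time": "2017-09-18T08:22:00Z"}
        print(f"Interruption scheduled: {notice['action']} at {notice['time']}")
        break
    # A 404 means no interruption is currently scheduled.
    time.sleep(5)
```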