I want to send events filtered through my custom CloudWatch metrics namespace to an S3 bucket so I can monitor specific events. I have data coming in from CloudTrail and am filtering the logs I want through metrics in a custom namespace.
I've looked into creating a CloudWatch subscription filter, which would let me filter on specific fields such as eventName. I'm not sure if this is the only option, though.
This is the AWS subscription filter setup. I've done similar filtering in CloudWatch by specifying {$.userIdentity.type = Root} along with other filters, so I imagine that if I need to go the subscription route I would use multiple --filter-pattern "{$.userIdentity.type = Root}" lines:
aws logs put-subscription-filter \
--log-group-name "CloudTrail" \
--filter-name "RecipientStream" \
--filter-pattern "{$.userIdentity.type = Root}" \
--destination-arn "arn:aws:logs:region:999999999999:destination:testDestination"
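As far as I can tell, put-subscription-filter takes a single --filter-pattern per filter, so multiple conditions would be combined with && inside one pattern rather than repeated --filter-pattern flags. A quick sketch (pure string handling, no AWS calls) of composing such a pattern:

```python
def combine_clauses(clauses):
    """Join JSON filter clauses into one CloudWatch Logs filter pattern."""
    joined = " && ".join(f"({c})" for c in clauses)
    return "{ " + joined + " }"

pattern = combine_clauses([
    '$.userIdentity.type = "Root"',
    '$.eventName = "ConsoleLogin"',
])
print(pattern)
# { ($.userIdentity.type = "Root") && ($.eventName = "ConsoleLogin") }
```

The resulting string is what would be passed as the one --filter-pattern value.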
I have a CloudWatch log group that is pulling in my CloudTrail logs and filtering them through metrics. I would like to send those results to an S3 bucket I have in place to store the filtered logs from my metrics namespace. Is this possible, or do I need to use a subscription configuration?
Related
I'm using AWS and I tried to generate a policy using Access Analyzer. The generated policy never contains the items I expect and am most interested in, and I cannot figure out why. Moreover, the events I can see in the CloudTrail event logs do not include data events, even though I've configured data events.
I have executed the following actions:
DynamoDB CreateTable: aws dynamodb create-table --table-name ....
DynamoDB PutItem: aws dynamodb put-item --table-name xxx --item file://contents.json
S3 list: aws s3 ls s3://mygreatbucket
S3 download: aws s3 cp s3://mygreatbucket/theevengreater/file .
The only relevant event being logged in CloudTrail is the create-table event. The data events are missing, and I can't figure out what I'm doing wrong. The CloudTrail config says "Log All Events" in the "data events" section for both S3 and DynamoDB.
I followed the instructions in https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html. I know it says "Data events not available – IAM Access Analyzer does not identify action-level activity for data events, such as Amazon S3 data events, in generated policies." But if so, what is the purpose of adding logging of data events to the CloudTrail configuration?
In my AWS organization we have multiple (5+) accounts and a CloudTrail trail for management events, with a CloudWatch log group covering all the accounts and regions in the org.
For the log group there is a series of generic metric filters set up, each with a CloudWatch alarm. These are all tied to one or more SNS topics and associated subscriptions, all out of the box.
For example:
aws logs put-metric-filter \
--log-group-name CLOUDWATCH-LOG-GROUP \
--filter-name FILTER-NAME \
--metric-transformations metricName=METRIC-NAME,metricNamespace=NAMESPACE,metricValue=1 \
--filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'
This creates a metric filter for any failed console logins in any of the accounts, setting the metric value to 1 when a match for the filter pattern is found in the log.
Paired with
aws cloudwatch put-metric-alarm \
--alarm-name METRIC-NAME-alarm \
--metric-name METRIC-NAME \
--statistic Sum --period 300 --threshold 1 \
--comparison-operator GreaterThanOrEqualToThreshold \
--evaluation-periods 1 \
--namespace NAMESPACE \
--alarm-actions SNS-TOPIC-ARN
This creates a simple alarm that fires an event to the SNS topic.
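The evaluation this alarm performs can be modeled in a few lines (a simplification that ignores missing-data handling and alarm state transitions):

```python
def alarm_fires(period_sums, threshold=1, evaluation_periods=1):
    """Mimic a Sum / GreaterThanOrEqualToThreshold alarm: fire when the
    most recent evaluation_periods period sums all breach the threshold."""
    recent = period_sums[-evaluation_periods:]
    return all(s >= threshold for s in recent)

# Metric sums per 5-minute period, oldest first.
print(alarm_fires([0, 0, 1]))  # True: latest period has 1 failed login
print(alarm_fires([1, 0, 0]))  # False: latest period is below the threshold
```

With --period 300, --threshold 1 and --evaluation-periods 1, a single matching log event in a five-minute window is enough to trigger the SNS notification.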
The major drawback here, however, is that if my org account ID is 1234 and my worker account ID is 4321, and the failed login happens in the worker account, the alarm that fires refers only to the org account ID, and I need to log in and dig around to figure out where it actually happened.
Now, I can add a dimension to the metric transformation to include something to help me differentiate (for example, recipientAccountId). That would look something like:
aws logs put-metric-filter \
--log-group-name CLOUDWATCH-LOG-GROUP \
--filter-name FILTER-NAME \
--metric-transformations metricName=METRIC-NAME,metricNamespace=NAMESPACE,metricValue=1,dimensions={Account=$.recipientAccountId} \
--filter-pattern '{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }'
But in order for the CloudWatch alarm to trigger, it appears the dimension must also be specified there; it isn't possible to have one alarm cover all the possible values that exist. That then seems to require one alarm per recipientAccountId (ACCOUNT-ID below).
Ex:
aws cloudwatch put-metric-alarm \
--alarm-name METRIC-NAME-alarm \
--metric-name METRIC-NAME \
--statistic Sum --period 300 --threshold 1 \
--comparison-operator GreaterThanOrEqualToThreshold \
--evaluation-periods 1 \
--dimensions Name=Account,Value=ACCOUNT-ID \
--namespace NAMESPACE \
--alarm-actions SNS-TOPIC-ARN
Is this correct?
All of this is really just to include some metadata with the alarm to tell me which account triggered it. Is there a better solution?
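If it does come to one alarm per account, I could at least script the repetition. A sketch that only builds the command strings (the account IDs are made up; nothing is executed):

```python
ACCOUNT_IDS = ["111111111111", "222222222222"]  # hypothetical worker accounts

def alarm_command(account_id, metric="METRIC-NAME", namespace="NAMESPACE",
                  sns_arn="SNS-TOPIC-ARN"):
    """Build the put-metric-alarm CLI line for one recipientAccountId."""
    return (
        f"aws cloudwatch put-metric-alarm "
        f"--alarm-name {metric}-{account_id}-alarm "
        f"--metric-name {metric} --statistic Sum --period 300 "
        f"--threshold 1 --comparison-operator GreaterThanOrEqualToThreshold "
        f"--evaluation-periods 1 "
        f"--dimensions Name=Account,Value={account_id} "
        f"--namespace {namespace} --alarm-actions {sns_arn}"
    )

for account in ACCOUNT_IDS:
    print(alarm_command(account))
```

The alarm name carries the account ID, so the SNS notification at least identifies which account fired.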
I have created a Lambda function which will be triggered via a subscription to a CloudWatch Logs filter pattern, and the function will in turn pass the logs to a webhook (refer to https://gist.github.com/tomfa/f4e090cbaff0189eba17c0fc301c63db).
Now, I need this Lambda function to EXECUTE only if the function is called "x" times in "y" minutes.
Is it possible to disable/enable a Lambda through SNS? Another idea is to:
1. Create a CloudWatch Events rule on alarm state change
2. Subscribe it to an SNS topic which:
   - enables the Lambda if the state goes from OK to ALARM
   - disables the Lambda if the state goes back to OK
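The "x times in y minutes" guard itself could also live inside the handler as a sliding-window check. A minimal sketch (state is kept in memory purely for illustration; in a real Lambda it would need to live somewhere external, like DynamoDB, to survive across invocations):

```python
from collections import deque
import time

class InvocationGate:
    """Allow the real work only once at least x calls land within y seconds."""

    def __init__(self, x, y_seconds):
        self.x = x
        self.y = y_seconds
        self.calls = deque()  # timestamps of recent invocations

    def should_execute(self, now=None):
        now = time.time() if now is None else now
        self.calls.append(now)
        # Drop invocations that have fallen out of the window.
        while self.calls and now - self.calls[0] > self.y:
            self.calls.popleft()
        return len(self.calls) >= self.x

gate = InvocationGate(x=3, y_seconds=300)
print(gate.should_execute(now=0))   # False: 1 call in the window
print(gate.should_execute(now=10))  # False: 2 calls
print(gate.should_execute(now=20))  # True: 3 calls within 300 s
```

The handler would call should_execute() first and return early when it is False, which avoids enabling/disabling the function itself.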
You can use CloudWatch Events to send a message to an Amazon SNS topic on a schedule. Make sure you are in the correct region, as CloudWatch Events is not available in every region.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
How to configure CloudWatch:
AWS Lambda Scheduled Tasks
run scheduled task in AWS without cron
Use CloudWatch to get metrics about the Lambda invocations and errors; from those you can find successful calls, errors, and threshold counts. Then you can use the AWS SDK:
https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-data.html
exports.handler = function(event, context, callback) {
  apiCall().then(resp => callback(null, resp)).catch(err => callback(err));
};
You could create a custom CloudWatch metric based on your search filter of the CloudWatch Logs.
Examples of this can be found in the Amazon CloudWatch Logs User Guide
Count Log Events
aws logs put-metric-filter \
--log-group-name MyApp/access.log \
--filter-name EventCount \
--filter-pattern "" \
--metric-transformations \
metricName=MyAppEventCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
Count Occurrences
aws logs put-metric-filter \
--log-group-name MyApp/message.log \
--filter-name MyAppErrorCount \
--filter-pattern 'Error' \
--metric-transformations \
metricName=ErrorCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
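The effect of those two filters can be modeled in a few lines (a toy model of plain term matching, not the full filter-pattern syntax):

```python
def metric_values(log_lines, term=None):
    """Per-line metric values a simple filter would emit: 1 for each match.
    An empty pattern (term=None) matches every event; defaultValue=0
    is what fills in the periods with no matches at all."""
    if term is None:
        return [1 for _ in log_lines]
    return [1 for line in log_lines if term in line]

lines = ["GET /index.html 200", "Error: timeout", "Error: refused"]
print(sum(metric_values(lines)))           # 3: every event counted
print(sum(metric_values(lines, "Error")))  # 2: only the error occurrences
```

The Sum statistic over a period is then what the alarm below compares against its threshold.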
Then you can create a CloudWatch alarm that fires when x of these events are logged within a y time span. The CloudWatch alarm can send a message to an SNS topic that triggers your Lambda function.
I'm creating a logs aggregator Lambda to send CloudWatch logs to a private log analysis service. Given the number of resources used by my employer, it was decided to create a subscription Lambda that handles log-group subscription to the aggregator.
The solution works fine, but it requires manually searching for a resource's log group via the Amazon console and then invoking the subscription Lambda with it.
My question:
Is there a way, given a resource ARN, to find which log group is mapped to it? Since I'm using CloudFormation to create resources, it is easy to export a resource's ARN.
UPDATE
To present an example:
Let's say I have the following arn:
arn:aws:appsync:<REGION>:<ACCOUNTID>:apis/z3pihpr4gfbzhflthkyjjh6yvu
which is an Appsync GraphQL API.
What I want is a method (using the API or some automated solution) to get the CloudWatch log group of that resource.
You can try the describe-log-groups command. It is available in the CLI, and must also be available in the API.
To get the names of the log groups you can go with:
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --log-group-name-prefix '/aws/appsync/[name-of-the-resource]'
Output will look like this:
[
"/aws/appsync/[name-of-your-resource]"
]
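For AppSync specifically, the log group name can usually be derived straight from the API ID in the ARN, assuming the conventional /aws/appsync/apis/&lt;api-id&gt; naming (worth verifying against your account before relying on it):

```python
def appsync_log_group(arn):
    """Derive the conventional CloudWatch log group for an AppSync API ARN.
    ARN shape assumed: arn:aws:appsync:<region>:<account-id>:apis/<api-id>"""
    resource = arn.split(":")[-1]    # e.g. "apis/z3pihpr4gfbzhflthkyjjh6yvu"
    api_id = resource.split("/")[-1]
    return f"/aws/appsync/apis/{api_id}"

print(appsync_log_group(
    "arn:aws:appsync:us-east-1:123456789012:apis/z3pihpr4gfbzhflthkyjjh6yvu"
))
# /aws/appsync/apis/z3pihpr4gfbzhflthkyjjh6yvu
```

The derived name can then be passed to describe-log-groups as the --log-group-name-prefix to confirm the group exists before subscribing it.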
Is there a way to stream an AWS Log Group to multiple Elasticsearch Services or Lambda functions?
AWS only seems to allow one ES or Lambda subscription, and I've tried everything at this point. I've even removed the ES subscription for the Log Group, created individual Lambda functions, and created the CloudWatch Logs trigger, but I can only apply the same CloudWatch Logs trigger to one Lambda function.
Here is what I'm trying to accomplish:
CloudWatch Log Group ABC -> No Filter -> Elasticsearch Service #1
CloudWatch Log Group ABC -> Filter: "XYZ" -> Elasticsearch Service #2
Basically, I need one ES cluster to store all logs, and another to only have a subset of filtered logs.
Is this possible?
I've run into this limitation as well. I have two Lambdas (doing different things) that need to subscribe to the same CloudWatch Log Group.
What I ended up doing is creating one Lambda that subscribes to the Log Group and then proxies the events into an SNS topic.
Those two Lambdas are now subscribed to the SNS topic instead of the Log Group.
For filtering events, you could implement the filtering inside the Lambdas themselves.
It's not a perfect solution but it's a functioning workaround until AWS allows multiple Lambdas to subscribe to the same CloudWatch Log Group.
I was able to resolve the issue using a bit of a workaround through the Lambda function and also using the response provided by Kannaiyan.
I created the subscription to ES via the console, and then unsubscribed, and modified the Lambda function default code.
I declared two Elasticsearch endpoints:
var endpoint1 = '<ELASTICSEARCH ENDPOINT 1>';
var endpoint2 = '<ELASTICSEARCH ENDPOINT 2>';
Then, declared an array named "endpoint" with the contents of endpoint1 and endpoint2:
var endpoint = [endpoint1, endpoint2];
I modified the "post" function which calls the "buildRequest" function that then references "endpoint"...
function post(body, callback) {
for (var index = 0; index < endpoint.length; ++index) {
var requestParams = buildRequest(endpoint[index], body);
...
So every time the "post" function is called it cycles through the array of endpoints.
Then, I modified the buildRequest function that is in charge of building the request. This function by default calls the endpoint variable, but since the "post" function cycles through the array, I renamed "endpoint" to "endpoint_xy" to make sure its not calling the global variable and instead takes the variable being inputted into the function:
function buildRequest(endpoint_xy, body) {
var endpointParts = endpoint_xy.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
...
Finally, I used the response provided by Kannaiyan on using the AWS CLI to implement the subscription to the logs, but corrected a few variables:
aws logs put-subscription-filter \
--log-group-name <LOG GROUP NAME> \
--filter-name <FILTER NAME> \
--filter-pattern <FILTER PATTERN> \
--destination-arn <LAMBDA FUNCTION ARN>
I kept the filters completely open for now, but will code the filtering directly into the Lambda function like dashmug suggested. At least I can now split one log group to two ES clusters.
Thank you everyone!
Seems like an AWS console limitation; you can do it via the command line:
aws logs put-subscription-filter \
--log-group-name /aws/lambda/testfunc \
--filter-name filter1 \
--filter-pattern "Error" \
--destination-arn arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:SendToKinesis
You also need to add permissions.
Full detailed instructions:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Hope it helps.
As of September 2020, CloudWatch allows two subscription filters on a single CloudWatch Log Group, as well as multiple metric filters for a single Log Group.
Update: AWS posted October 2, 2020, on their "What's New" blog that "Amazon CloudWatch Logs now supports two subscription filters per log group".