I'm creating a log aggregator Lambda to send CloudWatch logs to a private log analysis service. Given the number of resources used by my employer, it was decided to create a subscription Lambda that handles log group subscription to the aggregator.
The solution works fine, but it requires manually searching for a resource's log group in the Amazon console and then invoking the subscription Lambda with it.
My question:
Is there a way, given a resource ARN, to find which log group is mapped to it? Since I'm using CloudFormation to create resources, it is easy to export a resource's ARN.
UPDATE
To present an example:
Let's say I have the following arn:
arn:aws:appsync:<REGION>:<ACCOUNTID>:apis/z3pihpr4gfbzhflthkyjjh6yvu
which is an Appsync GraphQL API.
What I want is a method (using the API or some automated solution) to get the CloudWatch log group of that resource.
You can try the describe-log-groups command. It is available in the CLI and in the API as well.
To get the names of the log groups you can go with:
aws logs describe-log-groups --query 'logGroups[*].logGroupName' --log-group-name-prefix '/aws/appsync/[name-of-the-resource]'
Output will look like this:
[
    "/aws/appsync/[name-of-your-resource]"
]
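For the AppSync example in the question, here is a minimal sketch of automating the lookup, assuming AppSync's /aws/appsync/apis/<api-id> log group naming convention:

#!/bin/bash
# Sketch: derive the AppSync log group prefix from the resource ARN.
# Assumes the AppSync convention of /aws/appsync/apis/<api-id> log groups.
ARN="arn:aws:appsync:<REGION>:<ACCOUNTID>:apis/z3pihpr4gfbzhflthkyjjh6yvu"
API_ID="${ARN##*/}"  # everything after the last '/' is the API id

aws logs describe-log-groups \
  --log-group-name-prefix "/aws/appsync/apis/${API_ID}" \
  --query 'logGroups[*].logGroupName'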
Related
I'm attempting to set up a CloudWatch Event Rule to notify on AWS IAM actions like DeleteUser or CreateUser. But when I tried to create an event pattern, I couldn't find IAM in the service name list, even though I can't find any mention in the AWS documentation of IAM not being supported by CloudWatch Event Rules. So I tried to create a custom event, but I didn't receive any email from SNS (my target), and yes, I made sure CloudWatch has permission to invoke SNS, as we already have other working events. Any idea why this is not working?
{
  "source": [
    "aws.iam"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "iam.amazonaws.com"
    ],
    "eventName": [
      "CreateUser",
      "DeleteUser"
    ]
  }
}
I figured it out: IAM emits CloudTrail events only in us-east-1, and I was using a different region. It worked when I created the CloudWatch event rule in N. Virginia.
The source parameter needs to be "aws.cloudtrail" not "aws.iam".
IAM is a global service, so it only reports events in us-east-1 (N. Virginia).
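To apply that fix from the CLI, a minimal sketch (the rule and topic names are placeholders, and the SNS target must also live in us-east-1) would be:

aws events put-rule \
  --region us-east-1 \
  --name iam-user-changes \
  --event-pattern file://pattern.json  # the event pattern shown above

aws events put-targets \
  --region us-east-1 \
  --rule iam-user-changes \
  --targets "Id"="1","Arn"="arn:aws:sns:us-east-1:<ACCOUNT_ID>:<TOPIC_NAME>"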
I have the same exact config and the region is the same as well, but creating a new user still doesn't trigger the rule, even though the event shows up in CloudTrail as well as in the monitoring of the event rule I created. The documentation says CloudTrail has to be enabled, but when I create a rule for security group modifications (EC2 events) it works fine, just not the IAM one. Is there any permission I am missing for AWS events to send logs to CloudTrail, and if so, how did you resolve it?
I am not able to delete the subscriptions attached to my CloudWatch Log Groups.
These subscriptions were created by a CloudFormation stack via the Serverless Framework. However, when I finished testing and deployed the template, there was a permission error during the cleanup, so the subscriptions became dangling and I am not able to locate them.
I tried with the CLI, but there seems to be no relevant info about the subscriptions:
$ aws logs describe-log-groups --log-group-name-prefix yyy
{
    "logGroups": [
        {
            "logGroupName": "yyy",
            "creationTime": 1555604143719,
            "retentionInDays": 1,
            "metricFilterCount": 0,
            "arn": "arn:aws:logs:us-east-1:xxx:log-group:yyy:*",
            "storedBytes": 167385869
        }
    ]
}
Select the Log Group using the radio button on the left of the Log Group name. Then click Actions, Remove Subscription Filter.
The CLI equivalent is the delete-subscription-filter command, documented in the AWS CLI reference.
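A minimal sketch using the log group from the question (the filter name comes from the describe output):

# List the subscription filters attached to the log group
aws logs describe-subscription-filters --log-group-name yyy

# Delete a filter by name, taken from the output above
aws logs delete-subscription-filter \
  --log-group-name yyy \
  --filter-name <FILTER_NAME_FROM_OUTPUT>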
As you created the subscription with a CloudFormation stack via Serverless, manually removing the subscription filter as jarmod suggested is not a best practice.
What you should do is remove the cloudwatchLog event from the Lambda functions and deploy; that should remove the subscriptions.
I have a Lambda, that copies data from Redshift to S3.
I am trying to find the logs in CloudWatch for when I manually trigger the Lambda. I click Logs, search under "Log groups", and cannot see them.
I have enabled logs on Redshift and S3, and assume any Lambda I create generates logs.
The end goal is to set up "log groups" per service so that I can subscribe through Kinesis and send the data to Redshift.
If I try to "create a log group" under Actions, I can create "/aws-s3/test" for example, but I don't know what a log stream is, or how to send all the S3 logs from a particular folder to that log group.
Where are the logs?
The logs from the AWS Lambda function will be automatically created in Amazon CloudWatch Logs.
However, you must ensure that the Lambda function has permission to use CloudWatch Logs.
This is normally done by assigning the AWSLambdaBasicExecutionRole managed policy to the IAM Role used by the Lambda function. It contains the permissions:
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents
They will allow the Lambda function to create the log entries.
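For example, attaching the managed policy from the CLI could look like this (the role name is a placeholder):

# Attach the AWS-managed basic execution policy to the function's role.
# "my-lambda-role" is a hypothetical role name.
aws iam attach-role-policy \
  --role-name my-lambda-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole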
See: AWS Lambda Execution Role - AWS Lambda
I am working on a serverless project and I only have access to the AWS CLI, so I want to get the trigger information for a function, such as the event source. Since I am using an SNS topic to trigger the function, I want to get the topic information and ARN. I tried different options, such as:
list-event-source-mappings, which returns an empty array
get-function, which doesn't include that information
Do I have means to get the trigger information of a function with aws cli?
In this case, I believe the only way to get that information would be from the get-policy API call, as it returns the resource-based policy (a.k.a. trigger) that allows the other service to invoke the Lambda.
The list-event-source-mappings API only returns the stream-based event sources in the region, such as:
Kinesis
Dynamo
SQS
So, for example, if I have a Lambda function configured to be invoked from SNS, then extracting the source topic from the policy would look like:
aws lambda get-policy --function-name arn:aws:lambda:us-east-1:111122223333:function:YOUR_LAMBDA_NAME_HERE --query Policy --output text | jq '.Statement[0].Condition.ArnLike["AWS:SourceArn"]'
OUTPUT:
"arn:aws:sns:REGION:111122223333:TOPIC_NAME"
Note that this assumes the policy in the Lambda function has only that one statement. If you know the specific statement id, you should be able to select it in jq using a filter instead.
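For instance, assuming a hypothetical statement id of sns-invoke, the filter could look like:

aws lambda get-policy \
  --function-name YOUR_LAMBDA_NAME_HERE \
  --query Policy --output text \
  | jq '.Statement[] | select(.Sid == "sns-invoke") | .Condition.ArnLike["AWS:SourceArn"]'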
Is there a way to stream an AWS Log Group to multiple Elasticsearch Services or Lambda functions?
AWS only seems to allow one ES or Lambda destination, and I've tried everything at this point. I've even removed the ES subscription for the Log Group, created individual Lambda functions, and created the CloudWatch Log trigger, but I can only apply the same CloudWatch Log trigger to one Lambda function.
Here is what I'm trying to accomplish:
CloudWatch Log Group ABC -> No Filter -> Elasticsearch Service #1
CloudWatch Log Group ABC -> Filter: "XYZ" -> Elasticsearch Service #2
Basically, I need one ES cluster to store all logs, and another to only have a subset of filtered logs.
Is this possible?
I've run into this limitation as well. I have two Lambdas (doing different things) that need to subscribe to the same CloudWatch Log Group.
What I ended up doing was creating one Lambda that subscribes to the Log Group and then proxies the events into an SNS topic.
Those two Lambdas are now subscribed to the SNS topic instead of the Log Group.
For filtering events, you could implement them inside the Lambda.
It's not a perfect solution but it's a functioning workaround until AWS allows multiple Lambdas to subscribe to the same CloudWatch Log Group.
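A rough sketch of that wiring from the CLI (topic and function names are hypothetical; the proxy Lambda itself just calls sns:Publish with the decoded log events):

# Create the fan-out topic
aws sns create-topic --name log-fanout

# Subscribe a downstream Lambda to the topic (repeat per consumer)
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:<ACCOUNT_ID>:log-fanout \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:<ACCOUNT_ID>:function:consumer-1

# Allow SNS to invoke the subscribed function
aws lambda add-permission \
  --function-name consumer-1 \
  --statement-id sns-invoke \
  --principal sns.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn arn:aws:sns:us-east-1:<ACCOUNT_ID>:log-fanout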
I was able to resolve the issue using a bit of a workaround through the Lambda function and also using the response provided by Kannaiyan.
I created the subscription to ES via the console, then unsubscribed, and modified the Lambda function's default code.
I declared two Elasticsearch endpoints:
var endpoint1 = '<ELASTICSEARCH ENDPOINT 1>';
var endpoint2 = '<ELASTICSEARCH ENDPOINT 2>';
Then, declared an array named "endpoint" with the contents of endpoint1 and endpoint2:
var endpoint = [endpoint1, endpoint2];
I modified the "post" function which calls the "buildRequest" function that then references "endpoint"...
function post(body, callback) {
    // Loop over every configured endpoint so each one receives the logs
    for (var index = 0; index < endpoint.length; ++index) {
        var requestParams = buildRequest(endpoint[index], body);
        ...
So every time the "post" function is called it cycles through the array of endpoints.
Then I modified the buildRequest function, which is in charge of building the request. By default this function references the endpoint variable, but since the "post" function now cycles through the array, I renamed the parameter to "endpoint_xy" to make sure it's not referencing the global variable and instead uses the value passed into the function:
function buildRequest(endpoint_xy, body) {
    var endpointParts = endpoint_xy.match(/^([^\.]+)\.?([^\.]*)\.?([^\.]*)\.amazonaws\.com$/);
    ...
Finally, I used the response provided by Kannaiyan on using the AWS CLI to implement the subscription to the logs, but with a few variables corrected:
aws logs put-subscription-filter \
  --log-group-name <LOG GROUP NAME> \
  --filter-name <FILTER NAME> \
  --filter-pattern <FILTER PATTERN> \
  --destination-arn <LAMBDA FUNCTION ARN>
I kept the filters completely open for now, but will code the filtering directly into the Lambda function, as dashmug suggested. At least I can now split one log to two ES clusters.
Thank you everyone!
This seems to be an AWS console limitation.
You can do it via the command line:
aws logs put-subscription-filter \
--log-group-name /aws/lambda/testfunc \
--filter-name filter1 \
--filter-pattern "Error" \
--destination-arn arn:aws:lambda:us-east-1:<ACCOUNT_NUMBER>:function:SendToKinesis
You also need to add permissions so that CloudWatch Logs can invoke the destination Lambda.
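A sketch of that grant, assuming the example function above (the statement id is arbitrary; account and region are placeholders):

# Allow CloudWatch Logs to invoke the destination Lambda
aws lambda add-permission \
  --function-name SendToKinesis \
  --statement-id cwlogs-invoke \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn "arn:aws:logs:us-east-1:<ACCOUNT_NUMBER>:log-group:/aws/lambda/testfunc:*"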
Full detailed instructions:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html
Hope it helps.
As of September 2020, CloudWatch allows two subscription filters per log group, as well as multiple metric filters per log group.
Update: AWS posted on October 2, 2020, on their "What's New" blog that "Amazon CloudWatch Logs now supports two subscription filters per log group".
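With that limit raised, the original goal can be sketched as two put-subscription-filter calls with distinct filter names (log group and destination ARNs are placeholders):

# Unfiltered stream to the first destination
aws logs put-subscription-filter \
  --log-group-name ABC \
  --filter-name all-logs \
  --filter-pattern "" \
  --destination-arn <DESTINATION_ARN_1>

# Only events matching "XYZ" to the second destination
aws logs put-subscription-filter \
  --log-group-name ABC \
  --filter-name xyz-only \
  --filter-pattern "XYZ" \
  --destination-arn <DESTINATION_ARN_2>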