I am using SAM and CloudFormation to deploy multiple Lambda functions and other resources. The function names are generated by CloudFormation and have the following format:
stack-name-function-name-8H2609XXXXX, with the suffix automatically generated by CloudFormation.
The CloudWatch log groups automatically created for the individual Lambda functions therefore have the following name format:
/aws/lambda/stack-name-function-name-8H2609XXXXX
I am trying to find a way to trigger a CloudWatch alarm for the Lambda functions in a stack. When creating a MetricFilter in CloudFormation I have to specify the CloudWatch LogGroupName. With hundreds of Lambdas whose names are generated by CloudFormation, following the basic pattern would mean creating hundreds of filters that would somehow have to know the names of all the functions (outputs and imports, maybe, but that would be really inflexible when function names change, and it would require constantly redeploying the "alarm" stack that I was hoping to keep the filters in).
Is there a way to trigger an alarm for a group of log groups? I imagined it would be easy to use a wildcard for the log group name in a filter, like /aws/lambda/stack-name-*. Or is there any other nicer approach than having to manage all the filters?
I'm working with a CloudFormation template that defines a lot of parameters for static values outside the scope of the template.
For example, the template creates some EC2 instances, and it has a parameter for each VPC subnet. If this were Terraform, I would just remove all of these parameters and use data sources to fetch the information.
Is it possible to do that with CloudFormation?
Notice that I'm not talking about referencing another resource created within the same template, but about a resource that already exists in the account and could have been created by different means (manually, Terraform, CloudFormation, whatever...).
No, CloudFormation does not have any native ability to look up existing resources. You can, however, achieve this using a CloudFormation macro.
A CloudFormation macro leverages a Lambda function, which you can implement with whatever logic you need (e.g. using boto3) so that it returns the value you're after. You can even pass parameters to it.
Once the macro has been created, you can then consume it in your existing template.
You can find a full example on how to implement a macro, and on how to consume it, here: https://stackoverflow.com/a/70475459/3390419
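To make the idea concrete, here is a minimal sketch of what such a macro's Lambda handler could look like. It assumes the macro is invoked via Fn::Transform with a hypothetical SubnetName parameter, and looks up an existing subnet by its Name tag with boto3; the parameter name, filter, and tag key are all assumptions for illustration, not part of any official template.

```python
def handler(event, context, ec2=None):
    """Hypothetical CloudFormation macro handler: resolves an existing
    subnet ID from a Name tag so the template needs no parameter for it."""
    if ec2 is None:
        import boto3  # only needed when actually running in Lambda
        ec2 = boto3.client("ec2")

    # Fn::Transform passes macro parameters in event["params"].
    subnet_name = event["params"]["SubnetName"]
    response = ec2.describe_subnets(
        Filters=[{"Name": "tag:Name", "Values": [subnet_name]}]
    )
    subnets = response["Subnets"]
    if not subnets:
        # Returning "failure" makes the stack operation fail fast.
        return {"requestId": event["requestId"], "status": "failure"}

    # The fragment replaces the Fn::Transform site in the template.
    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": subnets[0]["SubnetId"],
    }
```

The key points are the event/response shape (requestId, status, fragment); the lookup logic in the middle can be any boto3 call you need.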
I have followed Datadog's documentation (here) to manually configure an AWS account with Datadog. The set-up includes a Lambda function provided by Datadog (here) which is triggered by a CloudWatch log group; the Lambda pushes logs to Datadog.
The problem is that when logs are pushed, Datadog's log forwarder Lambda changes the name of the function, the tags, and the rest of the metadata to lowercase. Now, when I use '!Ref' while creating the query for a Datadog monitor using CloudFormation, the query contains the autogenerated name of the Lambda, which is a mixture of lower- and uppercase letters. But the query does not work, because Datadog lowercases the function name while pushing logs.
Can someone suggest a workaround here?
You could use a CloudFormation macro to transform strings in your template, in this case applying a Lower operation to make your !Refs lowercase. Keep in mind that you need to define and deploy macros as Lambda functions.
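A string-transform macro of this kind can be very small. The sketch below assumes the macro receives an InputString and an Operation parameter via Fn::Transform; both parameter names are illustrative, not a standard, and your template would need to pass the resolved function name into InputString.

```python
def handler(event, context):
    """Hypothetical string-transform macro: lowercases the input when
    Operation is "Lower", otherwise returns it unchanged."""
    params = event.get("params", {})
    value = params.get("InputString", "")
    operation = params.get("Operation", "Lower")

    transformed = value.lower() if operation == "Lower" else value

    return {
        "requestId": event["requestId"],
        "status": "success",
        "fragment": transformed,
    }
```

The fragment returned by the macro replaces the Fn::Transform call site, so the monitor query ends up containing the lowercased name that Datadog actually indexes.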
I have a Logger Lambda function that listens on a specific log group and processes specific log details.
I would like to attach the newly created log groups of other Lambdas to that specific log group so it processes them as well, but since these other Lambdas are created automatically, I need to do this automatically. Can I do it? How?
So there's no way to directly replicate the logs stored in one CloudWatch log group to another log group. You could approximate this by creating a subscription filter with a Lambda function to push the logs from each log group to the common one, but this would increase your CloudWatch costs.
What I would suggest is either of the following:
Create a subscription filter for each of the log groups used by your Lambda functions to the common Lambda function so that it is triggered when logs are pushed to any of the log groups. This event can be set up after creating each function. Note, you would have to update the function policy of the common Lambda to allow it to be invoked from each log group (or just set up a wildcard).
Push all the logs for all of the functions to a single log group. This would take the least effort, but you would have to figure out how to effectively separate the logs per function (if that is required for your use case).
I'm new to this and I'd like to get some ideas for code that can dynamically tag AWS resources. I'm confused as to what will trigger the execution of the code that tags them. Can someone please point me to the right resources and sample code?
You need to monitor CloudTrail events for the creation of the resources you would like to tag, and invoke a Lambda function for the matching events, which tags the resources accordingly.
A CloudWatch Event rule is set up to monitor :create* API calls via CloudTrail.
This rule triggers the Lambda function whenever a matching event is found.
The Lambda function fetches the resource identifier and principal information from the event and tags the resources accordingly.
I've devised a solution to tag EC2 resources for governance. It is developed in CDK Python and uses Boto3 to attach tags.
You can further extend this code to cover other resource types, or maintain a DynamoDB table to store additional tags per principal, such as Project, Team, or Cost Center. You can then simply fetch the tags of a principal and apply them all at once.
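The flow described above can be sketched as a single handler. This assumes an EventBridge rule matching CloudTrail RunInstances events; the detail fields follow the CloudTrail event structure, while the CreatedBy tag key is an arbitrary choice for illustration.

```python
def handler(event, context, ec2=None):
    """Hypothetical tagging Lambda: tags newly launched EC2 instances
    with the ARN of the principal that created them."""
    if ec2 is None:
        import boto3  # only needed when actually running in Lambda
        ec2 = boto3.client("ec2")

    detail = event["detail"]
    # Who made the API call (IAM user/role ARN from CloudTrail).
    principal = detail["userIdentity"]["arn"]
    # Which instances the RunInstances call created.
    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]

    ec2.create_tags(
        Resources=instance_ids,
        Tags=[{"Key": "CreatedBy", "Value": principal}],
    )
    return instance_ids
```

Extending it to other resource types mostly means matching more event names in the rule and mapping each event's responseElements to the right resource identifier.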
You can write Lambda functions and use CloudWatch Events to trigger those functions, which will assign tags to your resources.
You can use the AWS Node.js SDK or boto3 for Python.
I'm creating a bunch of application resources with AWS CloudFormation, and when the resources are created, CloudFormation adds a hash at the end of the name to make it unique.
i.e. if you wanted to create a Kinesis stream named MyStream, the actual name would be something like my-stack-MyStream-1F8ISNCLP0W4O.
I want to be able to programmatically access the resources without having to know the hash, without having to query AWS for my resources to match the names myself, and without manual steps. Does anybody know a convenient way to use AWS resources in your application programmatically and predictably?
Here are the less ideal options I can think of:
Set a tag on the resource (i.e. name -> MyStream) and query AWS to get the actual resource name.
Query AWS for a list of resources names and look for a partial match on the expected name.
After you create your resources, manually copy the actual names into your config file (probably the sanest of these options)
You can use the CloudFormation API to get a list of resources in your stack. This will give you a list of logical ids (i.e. the name in your CloudFormation template without the hash) and matching physical ids (with the stack name and hash). Using the AWS CLI, this will show a mapping between the two ids:
aws cloudformation describe-stack-resources \
    --query 'StackResources[].[LogicalResourceId,PhysicalResourceId]' \
    --stack-name <my-stack>
CloudFormation APIs to do the same query are provided in all the various language SDKs provided by Amazon.
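For example, the same query in Python with boto3 comes down to one call; the helper name here is just a sketch.

```python
def physical_ids(stack_name, cfn=None):
    """Map each logical resource ID in a stack to its physical ID
    (the generated name with the stack name and hash)."""
    if cfn is None:
        import boto3  # only needed when actually running against AWS
        cfn = boto3.client("cloudformation")

    response = cfn.describe_stack_resources(StackName=stack_name)
    return {
        r["LogicalResourceId"]: r["PhysicalResourceId"]
        for r in response["StackResources"]
    }
```

At runtime your application would then look up, say, physical_ids("my-stack")["MyStream"] to get the generated Kinesis stream name.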
You can use this as an alternative to #1, by querying CloudFormation at runtime, or to #3, by querying CloudFormation at build time and embedding the results in a config file. I don't see any advantage to using your own tags over simply querying the CloudFormation API. Option #2 will cause problems if you want two or more stacks from the same template to coexist.
I've used both the runtime and build-time approaches. The build-time approach lets you remove the dependency on, and knowledge of, CloudFormation, but it needs stack-specific information in your config file. I like the runtime approach because it allows the same build to be deployed to multiple stacks, and all it needs is the stack name to find all the related resources.