For an automatic deployment workflow, I want to start a cloudformation deployment and trigger a lambda function when done.
I know I can add a CloudWatch Events rule that triggers whenever a CloudFormation event occurs in my account. But I do not want to trigger the Lambda on every CloudFormation template being deployed, only on templates where I decide at deployment time that the Lambda should be triggered.
I could add code to the Lambda function, making it decide for itself whether it was supposed to be triggered. That would probably work, but I wonder if there is a better, more direct solution?
Ideas?
Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update, or delete stacks.
Ex: Custom Lambda resource to enable RDS logs after the RDS DB is created.
Resources:
  EnableLogs:
    Type: Custom::EnableLogs
    Version: '1.0'
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:acc:function:rds-EnableRDSLogs-1O6XLL6LWNR5Z
      DBInstanceIdentifier: mydb
See my python gist here
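For illustration, the kind of call such a function ends up making might look roughly like this (this is my assumption about what the gist does, not its actual code; the log types assume a MySQL-family instance):

import boto3

rds = boto3.client("rds")

# Enable log exports for the instance named in the custom resource properties
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",  # would come from event['ResourceProperties']
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]
    },
)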
I'm creating a rule that should fire every time there is a change in status in a SageMaker batch transform job.
I'm using Serverless Framework but to simplify it even further, here's what I did:
The rule, exported from AWS console:
AWSTemplateFormatVersion: '2010-09-09'
Description: >-
  CloudFormation template for EventBridge rule
  'sagemaker-transform-status-to-CWL'
Resources:
  EventRule0:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: default
      EventPattern:
        source:
          - aws.sagemaker
        detail-type:
          - SageMaker Training Job State Change
      Name: sagemaker-transform-status-to-CWL
      State: ENABLED
      Targets:
        - Id: XXX
          Arn: >-
            arn:aws:logs:us-east-1:XXX:log-group:/aws/events/sagemaker-notifications
Eventually I want this to trigger a Step Function or a Lambda function, but for now I am configuring the target to be CloudWatch Logs with the log group 'sagemaker-notifications'.
I expect that every time I run a batch transform job in SageMaker, this rule will fire and the log will show up in CloudWatch.
But I'm not getting any logs, so when I tried to PutEvents manually to test it, I was getting this:
Error. NotAuthorizedForSourceException. Not authorized for the source.
It's probably an issue with roles, but I'm not sure which kind of role to configure, where, and who should assume it.
I've tried going through the AWS tutorials, adding permissions to the default event bus, and using the Serverless Framework, without success.
See some sample event patterns here - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
Your source should be a custom source and cannot use the aws. prefix, which is reserved for AWS services (reference: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html).
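If you want to test the rule by publishing events yourself, a minimal PutEvents sketch with a custom source looks roughly like this (bus name, source, and detail values are illustrative):

import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "default",
            "Source": "my.app.test",  # custom source; must not use the reserved aws. prefix
            "DetailType": "test-event",  # must match the rule's detail-type filter to be delivered
            "Detail": json.dumps({"status": "Completed"}),
        }
    ]
)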
I have set up 2 Lambda functions, deployed with AWS SAM. The first one uses the JS AWS SDK to run putRule and putTargets to trigger the second Lambda with a cron job. When I run the first Lambda, I see both the rule and the target correctly set up in EventBridge.
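Roughly, those two SDK calls correspond to the following (shown as a boto3 sketch rather than the JS SDK; rule name, schedule, and target values are illustrative):

import boto3

events = boto3.client("events")

# Create (or update) the scheduled rule
events.put_rule(
    Name="my-cron-rule",
    ScheduleExpression="cron(0 12 * * ? *)",
    State="ENABLED",
)

# Point the rule at the second Lambda
events.put_targets(
    Rule="my-cron-rule",
    Targets=[{"Id": "target-1", "Arn": "arn:aws:lambda:<region>:<account>:function:MySecondLambda"}],
)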
I also create the following permission for the second Lambda in my AWS SAM template
InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyLambda
    Action: lambda:InvokeFunction
    Principal: 'events.amazonaws.com'
and I can see the resulting policy in the console.
The only result I see of this cron event (at the timestamp I've chosen for the rule) is a failed invocation of the second Lambda, and CloudWatch doesn't provide any useful information.
Any idea of why this is failing or how to retrieve any error? Might "events.amazonaws.com" be the wrong Principal for that?
I am looking into EventSourceMapping, but I can't see my case anywhere in the docs.
I'm creating an API Gateway stage using CloudFormation.
ApiDeployment:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
Here is the problem: whenever I create a new API, I still need to deploy the stage using the AWS console. Is there any way to automate the deployment process so that no further console action is required?
When you define a Deployment resource like this, CloudFormation will create the deployment only on the first run. On the second run it will observe that the resource already exists and the CloudFormation definition did not change, so it won't create another deployment. To work around that, you can add something like a UUID/timestamp placeholder to the logical resource ID and replace it every time before doing the CloudFormation update:
ApiDeployment#TIMESTAMP#:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
This way you are still able to see your deployment history in the API Gateway console.
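One minimal way to do the placeholder substitution before each update (a sketch; the file names and timestamp format are arbitrary):

import time

# Stamp the logical resource ID so CloudFormation sees a new Deployment resource
with open("template.yml") as f:
    template = f.read()

template = template.replace("#TIMESTAMP#", time.strftime("%Y%m%d%H%M%S"))

with open("template.deploy.yml", "w") as f:
    f.write(template)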
If you don't want to manipulate your template like this, you can also add a Lambda-backed custom resource to your CloudFormation stack. Using an AWS SDK, you can have the Lambda function automatically create new deployments for you when the API is updated.
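A minimal sketch of what such a Lambda-backed custom resource could do, assuming the API id and stage name are passed in as resource properties (the property names are illustrative):

import boto3

apigw = boto3.client("apigateway")

def handler(event, context):
    props = event["ResourceProperties"]  # values passed from the template
    if event["RequestType"] in ("Create", "Update"):
        # Create a new deployment and point the existing stage at it
        apigw.create_deployment(
            restApiId=props["RestApiId"],
            stageName=props["StageName"],
        )
    # A real handler must also report SUCCESS/FAILED back to CloudFormation
    # via the pre-signed event["ResponseURL"] (omitted here for brevity).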
I've found berenbums response to be mostly correct, but there are a few things I don't like.
The proposed method of creating a resource like ApiDeployment#TIMESTAMP# doesn't keep the deployment history. This makes sense, since the old ApiDeployment#TIMESTAMP# element is being deleted and a new one is being created every time.
Using ApiDeployment#TIMESTAMP# creates a new deployment every time the template is deployed, which might be undesirable if the template is being deployed to create/update other resources.
Also, using ApiDeployment#TIMESTAMP# didn't work well when adding the StageDescription property. A potential solution is to add a static APIGwDeployment resource for the initial deployment (with StageDescription) and ApiDeployment#TIMESTAMP# for the updates.
The fundamental issue though, is that creating a new api gw deployment is not well suited for cloudformation (beyond the initial deployment). I think after the initial deployment, it's better to do an AWS API invocation to update the deployment (see https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html).
In my particular case I created a small Ansible module to invoke aws apigateway create-deployment which updates an existing stage in one operation.
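For reference, the underlying call is a single CLI invocation (the id below is a placeholder):

aws apigateway create-deployment --rest-api-id <rest-api-id> --stage-name dev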
I have this use case where I need to trigger a Lambda every time my CloudFormation stack updates or deletes. CloudFormation does not emit any CloudWatch metrics. Is there a way to get CloudFormation events to trigger a Lambda? Are there any existing examples I can refer to?
What you can do is reference your Lambda function within the CloudFormation template as a custom resource. You can then have the custom resource run (which executes your Lambda) on every update of the stack.
Basic syntax is:
MyCustomResource:
  Type: "Custom::TestLambdaCrossStackRef"
  Properties:
    ServiceToken:
      !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${LambdaFunctionName}
    StackName:
      Ref: "NetworkStackName"
More information here:
AWS Documentation
Configure an SNS topic as a notification option for the stack (see https://docs.aws.amazon.com/en_pv/AWSCloudFormation/latest/UserGuide/cfn-console-add-tags.html) and have your Lambda subscribe to that topic.
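Since notification ARNs are chosen per stack operation, only the stacks you opt in will publish to the topic (and therefore invoke the subscribed Lambda). A boto3 sketch, with the stack name and topic ARN as placeholders:

import boto3

cfn = boto3.client("cloudformation")

with open("template.yml") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="my-stack",
    TemplateBody=template_body,
    # Stack events (create/update/delete progress) are published to this topic
    NotificationARNs=["arn:aws:sns:us-east-1:123456789012:cfn-stack-events"],
)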
I am describing an existing AWS Lambda function in a CloudFormation template and I'm facing the following issue. In our Lambda we configured a few test events that help us verify some use cases (I mean the functionality from the screenshot below).
But I don't see any way to add these test events to the CloudFormation template, and the AWS documentation doesn't help me with that. Is that possible at all, or are there any workarounds to export and import Lambda function test events?
Lambda test events are available only in the console UI. However, you can use a CloudFormation custom resource to invoke a function from a CloudFormation template. Resource properties allow AWS CloudFormation to create a custom payload to send to the Lambda function.
Sample code:
Resources:
  EnableLogs:
    Type: Custom::EnableLogs
    Version: '1.0'
    Properties:
      ServiceToken: arn:aws:lambda:us-east-1:acc:function:rds-EnableRDSLogs-1O6XLL6LWNR5Z
      DBInstanceIdentifier: mydb
The event parameter provides the resource properties, e.g. event['ResourceProperties']['DBInstanceIdentifier'].
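A minimal sketch of the receiving side, reading the properties and reporting back to CloudFormation (the actual work is stubbed out; a custom resource must always answer on event['ResponseURL'] or the stack hangs until it times out):

import json
import urllib.request

def handler(event, context):
    # Properties from the template show up under ResourceProperties
    db_instance = event["ResourceProperties"]["DBInstanceIdentifier"]

    if event["RequestType"] in ("Create", "Update"):
        pass  # do the real work here, e.g. enable the RDS logs for db_instance

    # Report the result back to CloudFormation via the pre-signed S3 URL
    body = json.dumps({
        "Status": "SUCCESS",
        "PhysicalResourceId": db_instance,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }).encode("utf-8")
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)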