I'm validating a CloudFormation template that was generated by a Troposphere script. The resource that seems to be causing the error is a MetricTransformation, defined as follows in the Troposphere script:
t.add_resource(logs.MetricTransformation(
    "planReconciliationFiduciaryStepMetricTransformation",
    MetricNamespace=Ref("metricNamespace"),
    MetricName=Join("", [Ref("springProfile"), "-", "plan-reconciliation-step-known-to-fiduciary"]),
    MetricValue="1"
))
The parameters it depends on are defined beforehand in the script as follows:
t.add_parameter(Parameter(
    "metricNamespace",
    Type="String",
    Default="BATCH-ERRORS",
    Description="Metric namespace for CloudWatch filters"
))

t.add_parameter(Parameter(
    "springProfile",
    Type="String",
    Default=" ",
    Description="SPRING PROFILE"
))
The exact command that I'm running is
aws cloudformation validate-template --template-body file://hor-ubshobackgroundtaskdefinition.template --profile saml
with the resulting output
An error occurred (ValidationError) when calling the ValidateTemplate operation:
Invalid template resource property 'MetricName'
My properties for the MetricTransformation appear to be well-defined according to the AWS documentation. For visibility, this is what the metric transformation resource looks like in the template being validated:
"planReconciliationFiduciaryStepMetricTransformation": {
"MetricName": {
"Fn::Join": [
"",
[
{
"Ref": "springProfile"
},
"-",
"plan-reconciliation-step-known-to-fiduciary"
]
]
},
"MetricNamespace": {
"Ref": "metricNamespace"
},
"MetricValue": "1"
}
Any ideas?
UPDATE:
As requested, adding the metric filter resource:
"PlanReconciliationFiduciaryStepMetricFilter": {
"Properties": {
"FilterPattern": "INFO generatePlanReconciliationStepKnownToFiduciary",
"LogGroupName": {
"Ref": "logGroupName"
},
"MetricTransformations": [
{
"Ref": "planReconciliationFiduciaryStepMetricTransformation"
}
]
},
"Type": "AWS::Logs::MetricFilter"
}
As it turns out, the MetricTransformation needs to be defined inline within the MetricFilter itself in order to produce a correct CloudFormation template from the Troposphere script. MetricTransformation is a property type, not a standalone resource type, which is why the template above fails validation (note that the emitted "resource" has no Type key). If you declare it as a separate resource and try to reference it within the MetricFilter resource, the resulting template will be incorrectly formatted.
The correct format in the Troposphere script looks like the following:
t.add_resource(logs.MetricFilter(
    "PlanReconciliationFiduciaryStepMetricFilter",
    FilterPattern="INFO generatePlanReconciliationStepKnownToFiduciary",
    LogGroupName=Ref("logGroupName"),
    MetricTransformations=[logs.MetricTransformation(
        "planReconciliationFiduciaryStepMetricTransformation",
        MetricNamespace=Ref("metricNamespace"),
        MetricName=Join("", [Ref("springProfile"), "-", "plan-reconciliation-fiduciary-step"]),
        MetricValue="1"
    )]
))
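For reference, the regenerated template should then inline the transformation under the filter's MetricTransformations property instead of emitting it as a standalone resource; roughly (sketched from the snippets above, not copied from an actual regenerated template):
"PlanReconciliationFiduciaryStepMetricFilter": {
    "Type": "AWS::Logs::MetricFilter",
    "Properties": {
        "FilterPattern": "INFO generatePlanReconciliationStepKnownToFiduciary",
        "LogGroupName": {
            "Ref": "logGroupName"
        },
        "MetricTransformations": [
            {
                "MetricName": {
                    "Fn::Join": ["", [{"Ref": "springProfile"}, "-", "plan-reconciliation-fiduciary-step"]]
                },
                "MetricNamespace": {
                    "Ref": "metricNamespace"
                },
                "MetricValue": "1"
            }
        ]
    }
}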
I was able to set up AutoScaling events as rules in EventBridge to trigger SSM Run Commands, but I've noticed that with my chosen Target Value the event is passed to all of my active EC2 instances. My Target key is a tag shared by those instances, so my mistake makes sense now.
I'm pretty new to EventBridge, so I was wondering if there's a way to actually target the instance that triggered the AutoScaling event (as in extracting the "InstanceId" present in the event data and using that as my new Target Value). I saw the Input Transformer, but I think that just transforms the event data that gets passed to the target.
Thanks!
EDIT - help with JS code for Lambda + SSM Run Command
I realize I can achieve this by setting EventBridge to invoke a Lambda function instead of the SSM Run Command directly. Can anyone help with the JavaScript code to call a shell command on the EC2 instance specified in the event data (event.detail.EC2InstanceId)? I can't seem to find a relevant and up-to-date base template online, and I'm not familiar enough with JS or Lambda. Any help is greatly appreciated! Thanks
Sample of the event data, as per the AWS docs:
{
    "version": "0",
    "id": "12345678-1234-1234-1234-123456789012",
    "detail-type": "EC2 Instance Launch Successful",
    "source": "aws.autoscaling",
    "account": "123456789012",
    "time": "yyyy-mm-ddThh:mm:ssZ",
    "region": "us-west-2",
    "resources": [
        "auto-scaling-group-arn",
        "instance-arn"
    ],
    "detail": {
        "StatusCode": "InProgress",
        "Description": "Launching a new EC2 instance: i-12345678",
        "AutoScalingGroupName": "my-auto-scaling-group",
        "ActivityId": "87654321-4321-4321-4321-210987654321",
        "Details": {
            "Availability Zone": "us-west-2b",
            "Subnet ID": "subnet-12345678"
        },
        "RequestId": "12345678-1234-1234-1234-123456789012",
        "StatusMessage": "",
        "EndTime": "yyyy-mm-ddThh:mm:ssZ",
        "EC2InstanceId": "i-1234567890abcdef0",
        "StartTime": "yyyy-mm-ddThh:mm:ssZ",
        "Cause": "description-text"
    }
}
Edit 2 - my Lambda code so far
'use strict'

// Uses the AWS SDK for JavaScript v2, which is only bundled in Lambda runtimes up to nodejs16.x
const ssm = new (require('aws-sdk/clients/ssm'))()

exports.handler = async (event) => {
    // The instance that triggered the AutoScaling event
    const instanceId = event.detail.EC2InstanceId

    const params = {
        DocumentName: "AWS-RunShellScript",
        InstanceIds: [ instanceId ],
        TimeoutSeconds: 30,
        Parameters: {
            commands: ["/path/to/my/ec2/script.sh"],
            workingDirectory: [],
            executionTimeout: ["15"]
        }
    }

    // Send the Run Command and wait for the API call (not the script itself) to complete
    await ssm.sendCommand(params).promise()

    return {
        statusCode: 200,
        body: "Run Command success"
    }
}
Yes, but through Lambda
EventBridge -> Lambda (using the SSM API) -> EC2
Thank you @Sándor Bakos for helping me out!! My JavaScript ended up not working for some reason, so I ended up just using part of the Python code linked in the comments.
1. Add ssm:SendCommand permission:
After I let Lambda create a basic role during function creation, I added an inline policy to allow Systems Manager's SendCommand. This needs access to your documents/*, instances/* and managed-instances/*.
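A minimal sketch of such an inline policy (the wildcards are placeholders; scope them to your region, account and specific documents where possible):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:SendCommand",
            "Resource": [
                "arn:aws:ssm:*:*:document/AWS-RunShellScript",
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ssm:*:*:managed-instance/*"
            ]
        }
    ]
}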
2. Code - Python 3.9
import boto3
import botocore

def lambda_handler(event=None, context=None):
    try:
        client = boto3.client('ssm')
        # The instance that triggered the AutoScaling event
        instance_id = event['detail']['EC2InstanceId']
        command = '/path/to/my/script.sh'

        client.send_command(
            InstanceIds=[instance_id],
            DocumentName='AWS-RunShellScript',
            Parameters={
                'commands': [command],
                'executionTimeout': ['60']
            }
        )
    except botocore.exceptions.ClientError as error:
        raise error
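To sanity-check the handler from the Lambda console, a stripped-down test event with just the field the code reads should be enough (the instance ID is a placeholder):
{
    "detail": {
        "EC2InstanceId": "i-1234567890abcdef0"
    }
}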
You can do this without using Lambda, as I just did, by using EventBridge's input transformers.
I specified a new automation document that called the document I was trying to use (AWS-ApplyAnsiblePlaybooks).
My document takes the InstanceId as a parameter, which the EventBridge input transformer passes to it. I had to pass the event into Lambda just to see how to parse the JSON event object for the desired instance ID - this ended up being
$.detail.EC2InstanceId
(it was coming from an Auto Scaling group).
I then passed it into a template that was used for the runbook
{"InstanceId":[<instance>]}
This template was read in my runbook as a parameter.
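Putting those two pieces together, the rule target's input transformer would look something like this (the "instance" key in the path map is an arbitrary name I chose):
{
    "InputPathsMap": {
        "instance": "$.detail.EC2InstanceId"
    },
    "InputTemplate": "{\"InstanceId\":[<instance>]}"
}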
These are the SSM playbook inputs I used to run the AWS-ApplyAnsiblePlaybooks document; I just mapped each parameter to the corresponding parameter in the nested playbook:
"inputs": {
"InstanceIds": ["{{ InstanceId }}"],
"DocumentName": "AWS-ApplyAnsiblePlaybooks",
"Parameters": {
"SourceType": "S3",
"SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
"InstallDependencies": "True",
"PlaybookFile": "ansible-test.yml",
"ExtraVariables": "SSM=True",
"Check": "False",
"Verbose": "-v",
"TimeoutSeconds": "3600"
}
See the tutorial below for reference. It uses a document that was already set up to receive the variable:
https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-eventbridge-input-transformers.html
This is the full automation playbook I used; most of the parameters are defaults from the nested playbook:
{
    "description": "Runs Ansible Playbook on Launch Success Instances",
    "schemaVersion": "0.3",
    "assumeRole": "<Place your automation role ARN here>",
    "parameters": {
        "InstanceId": {
            "type": "String",
            "description": "(Required) The ID of the Amazon EC2 instance."
        }
    },
    "mainSteps": [
        {
            "name": "RunAnsiblePlaybook",
            "action": "aws:runCommand",
            "inputs": {
                "InstanceIds": ["{{ InstanceId }}"],
                "DocumentName": "AWS-ApplyAnsiblePlaybooks",
                "Parameters": {
                    "SourceType": "S3",
                    "SourceInfo": {"path": "https://testansiblebucketab.s3.amazonaws.com/"},
                    "InstallDependencies": "True",
                    "PlaybookFile": "ansible-test.yml",
                    "ExtraVariables": "SSM=True",
                    "Check": "False",
                    "Verbose": "-v",
                    "TimeoutSeconds": "3600"
                }
            }
        }
    ]
}
I'm trying to use an existing role (already present in the AWS account) in a CloudFormation template to set up a Lambda function, and I plan to use this across multiple AWS accounts.
In the CF template, I'm using Parameters to set the name of the role, and then using Ref in the Role property of the Lambda function. This is what my template looks like:
"Parameters" : {
"ExistingRoleName" : {
"Type" : "String",
"Default" : "MyCustomRole"
}
"Resources" : {
"CustomLambdaFunction" : {
"Type" : "AWS::Lambda::Function",
"Properties" : {
"MemorySize" : "128",
"Role" : { "Ref" : "ExistingRoleName" },
}
},
...
However, the CF template fails with the following error:
Properties validation failed for resource CustomLambdaFunction with message: #/Role: failed validation constraint for keyword [pattern]
Is this because the Lambda resource in CloudFormation needs the role ARN instead of the role name, as seen in this doc: aws-resource-lambda-function?
Based on that, I updated the CF template like so:
"Resources" : {
"CustomLambdaFunction" : {
"Type" : "AWS::Lambda::Function",
"Properties" : {
"MemorySize" : "128",
"Role" : "arn:aws:iam::AccountID:role/MyCustomRole",
}
},
However, I still see the same error:
Properties validation failed for resource CustomLambdaFunction with message: #/Role: failed validation constraint for keyword [pattern]
I was wondering if I'm missing something here?
The Ref of an IAM Role “returns the resource name”, not its ARN. But you can use GetAtt on the Arn attribute of the role instead.
In JSON:
{"Fn::GetAtt": ["MyRole", "Arn"]}
In YAML:
!GetAtt MyRole.Arn
The format to reference the IAM role ARN:
"Role" : { "Fn::Sub" : "arn:aws:iam::${AWS::AccountId}:role/MyCustomRole" }
In YAML, if you are pointing to an already existing role, the syntax is:
function:
  ...
  role: !Sub arn:aws:iam::${AWS::AccountId}:role/MyRoleName
Somehow I had forgotten the !Sub at the beginning.
This is what worked for me:
"Role": { "Fn::Join" : [ "", [ "arn:aws:iam::", { "Ref" : "AWS::AccountId" }, ":role/MyCustomRole" ] ] }
I was getting the same problem with the syntax below:
"Resources" : {
"CustomLambdaFunction" : {
"Type" : "AWS::Lambda::Function",
"Properties" : {
"Role" : "arn:aws:iam::<account-id>:role/MyCustomRole",
}
},
I solved it like this:
The issue was that when inserting my AWS account ID in place of "account-id", I was keeping it in the same format as it appears on the AWS console, i.e. xxxx-xxxx-xxxx. However, the "account-id" field expects the "\d{12}" format, i.e. 12 digits only. Removing the '-' between digits solved the problem for me.
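In other words, the working value looks like this (with a made-up 12-digit account ID):
"Role" : "arn:aws:iam::123456789012:role/MyCustomRole"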
I'm trying to create this API Gateway (gist) with an Authorizer and an ANY method.
I run into this error:
The following resource(s) failed to create: [BaseLambdaExecutionPolicy, ApiGatewayDeployment]
I've checked the parameters passed into this template from my other stacks and they're correct. I've checked this template and it's valid.
My template is modified from this template with "Runtime": "nodejs8.10".
This is the same stack (gist) that is created successfully using Swagger 2. I just want to replace the Swagger 2 definition with AWS::ApiGateway::Method.
Update 6 Jun 2019:
I tried to create the whole nested stack using the working version of the API Gateway stack, then create another API Gateway with the template that doesn't work, passing in the parameters I get from the nested stack, and then I get this:
The REST API doesn't contain any methods (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException; Request ID: ID)
But I did specify the method in my template, following the AWS docs:
"GatewayMethod": {
"Type" : "AWS::ApiGateway::Method",
"DependsOn": ["LambdaRole", "ApiGateway"],
"Properties" : {
"ApiKeyRequired" : false,
"AuthorizationType" : "Cognito",
"HttpMethod" : "ANY",
"Integration" : {
"IntegrationHttpMethod" : "ANY",
"Type" : "AWS",
"Uri" : {
"Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${LambdaFunction.Arn}/invocations"
}
},
"MethodResponses" : [{
"ResponseModels": {
"application/json": "Empty"
},
"StatusCode": 200
}],
"RequestModels" : {"application/json": "Empty"},
"ResourceId" : {
"Fn::GetAtt": ["ApiGateway", "RootResourceId"]
},
"RestApiId" : {
"Ref": "ApiGateway"
}
}
},
Thanks to @John's suggestion, I tried to create the nested stack with the version that worked and passed in the parameters from the version that doesn't work.
The reason for that error is:
CloudFormation might try to create Deployment before it creates Method
from balaji's answer here.
So this is what I did:
"methodANY": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"AuthorizationType": "COGNITO_USER_POOLS",
...},
"ApiGatewayDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"DependsOn": "methodANY",
...
I also found this article on cloudonaut.io by Michael Wittig helpful.
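Put together, a minimal sketch of the ordering fix, with the authorizer, integration, and other required properties elided:
"methodANY": {
    "Type": "AWS::ApiGateway::Method",
    "Properties": {
        "AuthorizationType": "COGNITO_USER_POOLS",
        "HttpMethod": "ANY",
        "ResourceId": { "Fn::GetAtt": ["ApiGateway", "RootResourceId"] },
        "RestApiId": { "Ref": "ApiGateway" }
    }
},
"ApiGatewayDeployment": {
    "Type": "AWS::ApiGateway::Deployment",
    "DependsOn": "methodANY",
    "Properties": {
        "RestApiId": { "Ref": "ApiGateway" }
    }
}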
I'm trying to create a CloudFormation template to configure WAF with a geo location condition. I couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAF rule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
Now reference that GeoMatchSet id in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
There is no documentation for it, but it is possible to create the GeoMatchSet in Serverless/CloudFormation.
I used the following in Serverless:
Resources:
  Geos:
    Type: "AWS::WAFRegional::GeoMatchSet"
    Properties:
      Name: geo
      GeoMatchConstraints:
        - Type: "Country"
          Value: "IE"
Which translated to the following in CloudFormation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(serverless) :
Resources:
  MyRule:
    Type: "AWS::WAFRegional::Rule"
    Properties:
      Name: waf
      Predicates:
        - DataId:
            Ref: "Geos"
          Negated: false
          Type: "GeoMatch"
(cloudformation) :
"MyRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
"Name": "waf",
"Predicates": [
{
"DataId": {
"Ref": "Geos"
},
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
I'm afraid your question is too vague to solicit a helpful response. The CloudFormation User Guide (pdf) defines many different WAF / CloudFront / Route 53 resources that provide various forms of geo matching / geo blocking. The link you provide seems to cover a subset of Web Access Control Lists (Web ACLs) - see AWS::WAF::WebACL on page 2540.
I suggest you have a look and if you are still stuck, actually describe what it is you are trying to achieve.
Note that the term you used: "geo location condition" doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest CloudFormation User Guide doesn't seem to have been updated to reflect this yet.
If I have two CloudFormation stacks, how do I reference a resource in one stack from the other stack?
In the example below I have a stack that creates an EBS volume and want to reference that via the Ref: key for my EC2 instance in the second stack, but I keep getting a rollback since it can't see that resource from the first stack:
"Template format error: Unresolved resource dependencies"
I already tried the DependsOn clause but it didn't work. Do I need to pass information via Parameters?
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "CubesNetworking": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/mybucket/cf_network.json"
            }
        },
        "CubesInstances": {
            "DependsOn": ["CubesNetworking"],
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/mybucket/cf_instances.json"
            }
        }
    }
}
In each of your nested stacks, you should have an Outputs section. Then you can get those values in your calling stack (the one you have listed above) with syntax like:
{ "Fn::GetAtt" : [ "CubesNetworking", "Outputs.VolumeID" ] }
You then pass the values into your other nested stacks via Parameters:
"Parameters" : {
"VolumeId" : { "Fn::GetAtt" : [ "CubesNetworking", "Outputs.VolumeID" ] }
You still want the DependsOn since you need the volume created before the instance.
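In context, the calling stack's instance sub-stack would then look something like this (assuming cf_instances.json declares a VolumeId parameter and the network template outputs VolumeID):
"CubesInstances": {
    "DependsOn": ["CubesNetworking"],
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
        "TemplateURL": "https://s3.amazonaws.com/mybucket/cf_instances.json",
        "Parameters": {
            "VolumeId": { "Fn::GetAtt": ["CubesNetworking", "Outputs.VolumeID"] }
        }
    }
}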
Edit, Mid-2017:
CloudFormation has introduced the ability to export values from one stack, and reference them in other stacks that do not have to be nested.
So your output can specify an export:
Outputs:
  Desc:
    Value: !GetAtt CubesNetworking.Outputs.VolumeID
    Export:
      Name: some-unique-name
Then in another stack:
Fn::ImportValue: some-unique-name
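And in a JSON template like the ones above, the import side would be:
"VolumeId": { "Fn::ImportValue": "some-unique-name" }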