I'm trying to use the Serverless Framework to deploy a Kinesis Firehose that outputs to an Elasticsearch domain.
Since the Firehose needs the ES domain to already exist before it can be created, I am running into this error:
An error occurred: MyFirehoseStream - Domain
arn:aws:es:us-east-1:1234567890:domain/my-elastic-search is still
being created.
Is there a way to make the Firehose creation wait until after the ES domain creation is complete?
Just in case it's helpful, here are the relevant parts of my serverless.yml file.
FYI, I'm using the serverless-pseudo-parameters plugin to get #{AWS::Region} and #{AWS::AccountId}:
resources:
  Resources:
    MyFirehoseStream:
      Type: "AWS::KinesisFirehose::DeliveryStream"
      Properties:
        DeliveryStreamName: "MyFirehoseStream"
        DeliveryStreamType: "DirectPut"
        ElasticsearchDestinationConfiguration:
          BufferingHints:
            IntervalInSeconds: 300
            SizeInMBs: 5
          DomainARN: "arn:aws:es:#{AWS::Region}:#{AWS::AccountId}:domain/my-elastic-search"
          IndexName: "myindex"
          IndexRotationPeriod: "NoRotation"
          RetryOptions:
            DurationInSeconds: 300
          RoleARN: { "Fn::GetAtt": ["FirehoseBackupBucketRole", "Arn"] }
          S3BackupMode: "FailedDocumentsOnly"
          S3Configuration:
            BucketARN: { "Fn::GetAtt": ["FirehoseBackupBucket", "Arn"] }
            BufferingHints:
              IntervalInSeconds: 300
              SizeInMBs: 5
            CompressionFormat: "GZIP"
            RoleARN: { "Fn::GetAtt": ["FirehoseBackupBucketRole", "Arn"] }
          TypeName: "mytype"
    MyElasticSearch:
      Type: "AWS::Elasticsearch::Domain"
      Properties:
        AccessPolicies: ${file(./iam_policies/elastic-search.json)}
        DomainName: "my-elastic-search"
        ElasticsearchVersion: 6.2
        ElasticsearchClusterConfig:
          InstanceCount: "1"
          InstanceType: "t2.small.elasticsearch"
        EBSOptions:
          EBSEnabled: true
          Iops: 0
          VolumeSize: 10
          VolumeType: "gp2"
UPDATE:
I have this fixed now, so in case the specifics are helpful for anyone:
I changed the DomainARN property to { "Fn::GetAtt": ["MyElasticSearch", "DomainArn" ] }.
The reason I was originally generating the ARN dynamically is that my first attempt with "Fn::GetAtt" used Arn instead of DomainArn, which didn't work. Coincidentally, DomainArn has since been deprecated, so if you are using the latest resource version, Arn actually would be correct.
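In the template, that means the property is now just:

DomainARN: { "Fn::GetAtt": ["MyElasticSearch", "DomainArn"] }

Referencing MyElasticSearch directly also gives CloudFormation the implicit dependency it needs, so the stream creation waits for the domain.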
CloudFormation resources support the DependsOn attribute.
resources:
  Resources:
    MyFirehoseStream:
      Type: "AWS::KinesisFirehose::DeliveryStream"
      DependsOn: MyElasticSearch
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html
I have an organization account with several managed accounts underneath it. Each managed account has multiple VPCs in it. One of the VPCs in each managed account will have a tag "ServiceName":"True", while the others in that account will have a "ServiceName":"False" tag instead.
I'm trying to create a StackSet with a stack dedicated to creating a security group with ingress rules attached to it, and I need to dynamically assign the "VpcId" property of that security group to be the "VpcId" of the VPC with the "ServiceName":"True" tag in that account.
Obviously, if I don't specify a VPC ID in the VpcId field, it creates the security group but attaches it to the default VPC of that account. I can't manually specify a VPC either, since it's going to be run in multiple accounts. That leaves me with the only available option of searching for and assigning the VPC by running some sort of function to extract the "VpcId".
The stack itself works fine, as I ran it in a test environment while specifying a VPC ID. So it's just a matter of getting that "VpcId" dynamically.
In the end, I'm looking to do something that would resemble this:
{
  "Parameters": {
    "MyValidVPCID": {
      "Description": "My Valid VPC ID where ServiceName tag equals true. Do some Lambda Kung Fu to get the VPC ID using something that would let me parse the equivalent of aws ec2 describe-vpcs command.",
      "Type": "String"
    }
  },
  "Resources": {
    "SG": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Security Group Desc.",
        "Tags": [
          {
            "Key": "Key1",
            "Value": "ABC"
          },
          {
            "Key": "Key2",
            "Value": "DEF"
          }
        ],
        "VpcId": { "Ref": "MyValidVPCID" }
      }
    },
    "SGIngressRule01": {
      "Type": "AWS::EC2::SecurityGroupIngress",
      "DependsOn": "SG",
      "Properties": {
        "GroupId": { "Fn::GetAtt": [ "SG", "GroupId" ] },
        "Description": "Rule 1 description",
        "IpProtocol": "tcp",
        "FromPort": 123,
        "ToPort": 456,
        "CidrIp": "0.0.0.0/0"
      }
    }
  }
}
I really don't know if it's a feasible approach, or what extra steps would be needed to retrieve that VpcId based on the tag. That's why some input from people used to working with CloudFormation would help me a lot.
getting that "VpcId" dynamically.
You have to use a custom resource for that. You would create it as a Lambda function which takes any input arguments you want and, using the AWS SDK, queries or modifies the VPC/security groups in your stack.
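A minimal sketch of the wiring, with placeholder names (the Lambda looks up the tagged VPC with the SDK and returns it via cfnresponse; the security group reads the returned attribute with Fn::GetAtt):

VpcLookup:
  Type: Custom::VpcLookup
  Properties:
    ServiceToken: !GetAtt VpcLookupFunction.Arn   # Lambda that filters VPCs on tag:ServiceName

SG:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Security Group Desc.
    VpcId: !GetAtt VpcLookup.VpcId                # attribute name chosen by the Lambda's response data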
Thanks Marcin for pointing me in the right direction with the custom resources. For those who are wondering what the basic code to make it work looks like, it looks something like this:
Resources:
  FunctionNameLambdaFunctionRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: FunctionNameLambdaFunctionRole
      Path: "/"
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
  FunctionNameLambdaFunctionRolePolicy:
    Type: "AWS::IAM::Policy"
    Properties:
      PolicyName: admin3cx
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Action: "*"
            Resource: "*"
      Roles:
        - Ref: FunctionNameLambdaFunctionRole
  FunctionNameLambdaFunctionCode:
    Type: "AWS::Lambda::Function"
    DeletionPolicy: Delete
    DependsOn:
      - FunctionNameLambdaFunctionRole
    Properties:
      FunctionName: FunctionNameLambdaFunctionCode
      Role: !GetAtt FunctionNameLambdaFunctionRole.Arn
      Runtime: python3.7
      Handler: index.handler
      MemorySize: 128
      Timeout: 30
      Code:
        ZipFile: |
          import boto3
          import cfnresponse
          ec2 = boto3.resource('ec2')
          client = boto3.client('ec2')
          def handler(event, context):
              responseData = {}
              filters = [{'Name': 'tag:ServiceName', 'Values': ['True']}]
              vpcs = list(ec2.vpcs.filter(Filters=filters))
              for vpc in vpcs:
                  responseVPC = vpc.id
              responseData['ServiceName'] = responseVPC
              cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "CustomResourcePhysicalID")
  FunctionNameLambdaFunctionInvocationCode:
    Type: "Custom::FunctionNameLambdaFunctionInvocationCode"
    Properties:
      ServiceToken: !GetAtt FunctionNameLambdaFunctionCode.Arn
  SGFunctionName:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Description
      VpcId: !GetAtt FunctionNameLambdaFunctionInvocationCode.ServiceName
...
Some stuff has been redacted and I made the switch to YAML. The code will be refined obviously. The point was just to make sure I was able to get a return value based on a filter in a Lambda function inside a CloudFormation stack.
I'm trying to create some resources using CloudFormation with the Serverless Framework, in which I need to substitute a resource name from another resource. I tried to use !Sub, but I still couldn't get the Arn of the other resource I created.
I tried all the approaches in this Stack Overflow question, How to use Sub and GetAtt functions at the same time in CloudFormation template?, to no avail.
I appreciate any help.
Resources:
  BasicParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /data/config-name
      Type: String
      Value:
        Fn::Base64:
          !Sub |
            {
              "filter_configs": [{
                "stream_name": !GetAtt tpRecordDeliveryStream.Arn,
                "events": [
                  {
                    "name": "event_name1",
                    "stream": "streamname1"
                  },
                  {
                    "name": "event_name2"
                  }
                ]
              }]
            }
      Description: Configuration for stream filters
      Tags:
        project: projectname
        team: data
        owner: owner_name
This was resolved by using the serverless-pseudo-parameters plugin. The Serverless Framework also uses the ${} placeholder syntax, which conflicts with CloudFormation's placeholders. serverless-pseudo-parameters solves that by letting us write those placeholders as #{}, which are replaced during sls deploy of the CloudFormation templates.
Resources:
  streamConfig:
    Type: AWS::SSM::Parameter
    Properties:
      Name: config_name
      Type: String
      Value:
        Fn::Base64: |
          {
            "filter_configs": [{
              "firehose_stream_arn": "#{tpRecordDeliveryStream.Arn}",
              "events": [
                {
                  "name": "config0",
                  "filter1": "value1"
                },
                {
                  "name": "config1"
                }
              ]
            }]
          }
      Description: Configuration for stream filters
Since you have !Sub |, instead of
"stream_name": !GetAtt tpRecordDeliveryStream.Arn,
the following should be enough
"stream_name": "${tpRecordDeliveryStream.Arn}"
The alternative using !Sub in array notation:
Value:
  Fn::Base64:
    !Sub
    - |
      {
        "filter_configs": [{
          "stream_name": "${tpRecordDeliveryStreamArn}",
          "events": [
            {
              "name": "event_name1",
              "stream": "streamname1"
            },
            {
              "name": "event_name2"
            }
          ]
        }]
      }
    - tpRecordDeliveryStreamArn: !GetAtt tpRecordDeliveryStream.Arn
I want to create an EC2 instance of type t3.medium on all environments and m5.large on production.
I'm using .ebextensions (YAML) like so:
Option 1:
Mappings:
  EnvironmentMap:
    "production":
      TheType: "m5.large"
      SecurityGroup: "foo"
      ...
    "staging":
      TheType: "t3.medium"
      SecurityGroup: "bar"
      ...

option_settings:
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: "aws-elasticbeanstalk-ec2-role"
    InstanceType: !FindInMap
      - EnvironmentMap
      - !Ref 'AWSEBEnvironmentName'
      - TheType
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Option 2:
InstanceType: {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Option 3:
InstanceType:
- {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "EC2InstanceType"]}
Results
Option 1 fails with Invalid Yaml (but I took this from this AWS example).
Options 2 and 3 fail with the same problem.
The FindInMap function is not "called":
Invalid option value: '{"Fn::FindInMap":["EnvironmentMap","EC2InstanceType"]},{"Ref":"AWSEBEnvironmentName"}' (Namespace: 'aws:autoscaling:launchconfiguration', OptionName: 'InstanceType'): Value is not one of the allowed values: [c1.medium, c1.xlarge, c3.2xlarge, ....
It tries to interpret the whole function/thing as a string.
For the SecurityGroups property it works, for InstanceType it does not.
I can't do it dynamically, and I can't find how to achieve this in the AWS docs, on SO, or anywhere else. I would assume this is simple stuff. What am I missing?
EDIT:
Option 4: using conditionals
Conditions:
  IsProduction: !Equals [ !Ref AWSEBEnvironmentName, production ]

option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: !If [ IsProduction, m5.large, t3.medium ]
    SecurityGroups:
      - {"Fn::FindInMap": ["EnvironmentMap", {"Ref": "AWSEBEnvironmentName"}, "SecurityGroup"]}
Error: YAML exception: Invalid Yaml: could not determine a constructor for the tag !Equals in...
But this comes from documentation on conditions and if.
EDIT 2:
I eventually found out that the option InstanceType is obsolete and we should use:
aws:ec2:instances:
  InstanceTypes: "t3.medium"
But alas, this does not solve the problem either, because I cannot use the replacement functions (Fn::FindInMap) here as well.
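For completeness, the full placement in an .ebextensions config file looks like this (a sketch with the same example value; InstanceTypes also accepts a comma-separated list of types):

option_settings:
  aws:ec2:instances:
    InstanceTypes: "t3.medium"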
The reason why FindInMap does not work in option_settings is the fact that only four intrinsic functions are allowed there (from docs):
Ref
Fn::GetAtt
Fn::Join
Fn::GetOptionSetting
I'm not convinced that SecurityGroups worked. I think your script failed before the FindInMap in SecurityGroups got a chance to be evaluated.
However, I tried to find a way using Resources. The closest I got was with the following config file:
Mappings:
  EnvironmentMap:
    production:
      TheType: "t3.medium"
    staging:
      TheType: "t2.small"

Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      InstanceType:
        ? "Fn::FindInMap"
        :
          - EnvironmentMap
          -
            Ref: "AWSEBEnvironmentName"
          - TheType
Although this is a step closer, it ultimately fails as well. The reason is that when EB is joining our Resources config file with its own template, it produces the following:
"InstanceType": {
"Ref": "InstanceType", # <--- this should NOT be here :-(
"Fn::FindInMap": [
"EnvironmentMap",
{
"Ref": "AWSEBEnvironmentName"
},
"TheType"
]
},
instead of
"InstanceType": {
"Fn::FindInMap": [
"EnvironmentMap",
{
"Ref": "AWSEBEnvironmentName"
},
"TheType"
]
},
And this happens because the original InstanceType (before the join operation) is:
"InstanceType":{"Ref":"InstanceType"},
Therefore, instead of replacing InstanceType with the custom InstanceType provided in our config file, EB just merges them.
I'm fighting with a weird case.
I need to push CloudFormation stacks dynamically parameterized with Terraform.
My resource looks like this.
resource "aws_cloudformation_stack" "eks-single-az" {
count = length(var.single_az_node_groups)
name = "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
template_body = <<EOF
Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
Resources:
ASG:
Type: AWS::AutoScaling::AutoScalingGroup
Properties:
AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
VPCZoneIdentifier: ["${var.private_subnet_ids[count.index]}"]
MinSize: "${lookup(var.single_az_node_groups[count.index], "asg_min", "0")}"
MaxSize: "${lookup(var.single_az_node_groups[count.index], "asg_max", "10")}"
HealthCheckType: EC2
TargetGroupARNs: [] < - here is error.
MixedInstancesPolicy:
InstancesDistribution:
OnDemandBaseCapacity: "0"
OnDemandPercentageAboveBaseCapacity: "${lookup(var.single_az_node_groups[count.index], "on_demand_percentage", "0")}"
LaunchTemplate:
LaunchTemplateSpecification:
LaunchTemplateId: "${aws_launch_template.eks-single-az[count.index].id}"
Version: "${aws_launch_template.eks-single-az[count.index].latest_version}"
Overrides:
-
InstanceType: m5.large
Tags:
- Key: "Name"
Value: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}"
PropagateAtLaunch: true
- Key: "kubernetes.io/cluster/${var.cluster_name}"
Value: "owned"
PropagateAtLaunch: true
- Key: "k8s.io/cluster-autoscaler/enabled"
Value: "true"
PropagateAtLaunch: true
- Key: "k8s.io/cluster-autoscaler/${var.cluster_name}"
Value: "true"
PropagateAtLaunch: true
UpdatePolicy:
AutoScalingRollingUpdate:
MinSuccessfulInstancesPercent: 80
MinInstancesInService: "${lookup(data.external.desired_capacity.result, "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}", "0")}"
PauseTime: PT4M
SuspendProcesses:
- HealthCheck
- ReplaceUnhealthy
- AZRebalance
- AlarmNotification
- ScheduledActions
WaitOnResourceSignals: true
EOF
depends_on = [
aws_launch_template.eks-single-az
]
}
I need to put the target group ARNs from a list containing JSON objects:
single_az_node_groups = [
  {
    "name" : "workload-az1",
    "instance_type" : "t2.micro",
    "asg_min" : "1",
    "asg_max" : "7",
    "target_group_arns" : "arnA, arnB, arnC"
  },
  ...
]
I tried everything. The problem is that with every Terraform function I tried, Terraform either adds double quotes that CloudFormation does not accept, or refuses to process the template_body because of missing quotes.
Do you maybe know some sneaky trick to achieve that?
When building strings that represent serialized data structures, it's much easier to use Terraform's built-in serialization functions to construct the result, rather than trying to produce a valid string using string templates.
In this case, we can use jsonencode to construct a JSON string representing the template_body from a Terraform object value, which then allows using all of the Terraform language expression features to build it:
template_body = jsonencode({
  Description: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
  Resources: {
    ASG: {
      Type: "AWS::AutoScaling::AutoScalingGroup",
      Properties: {
        AutoScalingGroupName: "eks-${var.cluster_name}-${var.single_az_node_groups[count.index].name}",
        VPCZoneIdentifier: [var.private_subnet_ids[count.index]],
        MinSize: lookup(var.single_az_node_groups[count.index], "asg_min", "0"),
        MaxSize: lookup(var.single_az_node_groups[count.index], "asg_max", "10"),
        HealthCheckType: "EC2",
        TargetGroupARNs: flatten([
          for g in local.single_az_node_groups : [
            split(", ", g.target_group_arns)
          ]
        ]),
        # etc, etc
      },
    },
  },
})
As you can see above, by using jsonencode for the entire data structure we can then use Terraform expression operators to build the values. For TargetGroupARNs in the above example I used the flatten function along with a for expression to transform the nested local.single_az_node_groups data structure into a flat list of target group ARN strings.
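For instance, with the sample values from the question (the hypothetical ARNs arnA, arnB, arnC), split(", ", ...) turns each comma-separated string into a list, and the rendered template_body ends up containing a fragment like the following, with the ARNs of every node group merged into one list:

"TargetGroupARNs": ["arnA", "arnB", "arnC"]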
CloudFormation supports both JSON and YAML, and Terraform also has a yamlencode function that you could potentially use instead of jsonencode here. I chose jsonencode both because yamlencode is currently marked as experimental (the exact YAML formatting it produces may change in a later release) and because Terraform has special support for JSON formatting in the plan output where it can show a structural diff of the data structure inside, rather than a string-based diff.
I have the following CF template:
{
  "Conditions": {
    "CreatedProdStage" : {...}
  },
  ...
  "Resources": {
    "GetMethod": {
      ...
    },
    "ApiDeployement": {
      ...
    },
    "ProdStage": {
      "Type": "AWS::ApiGateway::Stage",
      "Condition": "CreatedProdStage",
      "Properties": {
        "DeploymentId": "...",
        "RestApiId": "...",
        "MethodSettings": [{
          "CachingEnabled": true,
          "HttpMethod": {"Ref": "GetMethod"},
          "ResourcePath": "/"
        }]
      }
    }
  }
}
And I am getting this error:
Invalid method setting path:
/~1/st-GetMetho-xxxAUMMRWxxx/caching/enabled. Must be one of:
[/deploymentId, /description,
/cacheClusterEnabled/cacheClusterSize/clientCertificateId/{resourcePath}/{httpMethod}/metrics/enabled,
/{resourcePath}/{httpMethod}/logging/dataTrace,
/{resourcePath}/{httpMethod}/logging/loglevel,
/{resourcePath}/{httpMethod}/throttling/burstLimit/{resourcePath}/{httpMethod}/throttling/rateLimit/{resourcePath}/{httpMethod}/caching/ttlInSeconds,
/{resourcePath}/{httpMethod}/caching/enabled,
/{resourcePath}/{httpMethod}/caching/dataEncrypted,
/{resourcePath}/{httpMethod}/caching/requireAuthorizationForCacheControl,
/{resourcePath}/{httpMethod}/caching/unauthorizedCacheControlHeaderStrategy,
/*/*/metrics/enabled, /*/*/logging/dataTrace, /*/*/logging/loglevel,
/*/*/throttling/burstLimit /*/*/throttling/rateLimit
/*/*/caching/ttlInSeconds, /*/*/caching/enabled,
/*/*/caching/dataEncrypted,
/*/*/caching/requireAuthorizationForCacheControl,
/*/*/caching/unauthorizedCacheControlHeaderStrategy, /va
Am I missing something? I thought ResourcePath and HttpMethod are the only required attributes
You first need to enable caching on the stage with the CacheClusterEnabled property. This will allow you to set up caching for methods as you have done in your MethodSettings:
...
"ProdStage":{
"Type":"AWS::ApiGateway::Stage",
"Condition":"CreatedProdStage",
"Properties": {
"DeploymentId":"...",
"RestApiId":"...",
"CacheClusterEnabled": true
"MethodSettings":[{
"CachingEnabled":true,
"HttpMethod":{"Ref":"GetMethod"},
"ResourcePath":"/"
}]
}
}
Then you will need to fix the given error: your ResourcePath has to match one of the patterns listed in the error output. Those are not listed in the documentation, so it's a bit confusing what you need to use. What you currently have is set up for the root path only; if you want all paths, use "/*".
ApiGateway Stage MethodSetting (see ResourcePath) doc:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-apigateway-stage-methodsetting.html
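With the wildcard path, the MethodSettings entry would end up looking something like this (a sketch; HttpMethod "*" combined with ResourcePath "/*" applies the caching settings to every method on every path):

"MethodSettings": [{
  "CachingEnabled": true,
  "HttpMethod": "*",
  "ResourcePath": "/*"
}]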
If anyone is still arriving at this, but is NOT using cache, I have provided an example for setting throttling and logging on the whole API. I could not figure it out until I started playing around with the ResourcePath and HttpMethod, and noticed the error changing.
Please note that I used * for both path and method and USED QUOTATIONS. It will fail without quotations.
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: Prod
    RestApiId: !Ref StunningDisco
    DeploymentId: !Ref StunningDiscoDeployment
    MethodSettings:
      - ResourcePath: '/*'
        HttpMethod: '*'
        LoggingLevel: INFO
        DataTraceEnabled: True
        ThrottlingBurstLimit: '10'
        ThrottlingRateLimit: '10.0'

StunningDiscoDomainMapping:
  Type: 'AWS::ApiGateway::BasePathMapping'
  DependsOn: ProdStage
  Properties:
    DomainName: !Ref StunningDiscoDomain
    RestApiId: !Ref StunningDisco
    Stage: !Ref ProdStage

StunningDiscoDeployment:
  Type: AWS::ApiGateway::Deployment
  DependsOn: [StunningDiscoRootEndpoint, LightsInvokeEndpoint]
  Properties:
    RestApiId: !Ref StunningDisco
Try setting the HttpMethod to a string instead of a reference:
"MethodSettings":[{
"CachingEnabled":true,
"HttpMethod": "GET",
"ResourcePath":"/"
}]
}