I have a requirement to enable all of the AWS Config managed rules when deploying resources to a newly created account through CloudFormation. However, I don't know how to select all the AWS managed rules through CloudFormation the way the console allows. Any help would be appreciated.
AWSTemplateFormatVersion: 2010-09-09
Description: Enable AWS Config
Parameters:
AllSupported:
Type: String
Default: True
Description: Indicates whether to record all supported resource types.
AllowedValues:
- True
- False
IncludeGlobalResourceTypes:
Type: String
Default: True
Description: Indicates whether AWS Config records all supported global resource types.
AllowedValues:
- True
- False
ResourceTypes:
Type: List<String>
Description: A list of valid AWS resource types to include in this recording group, such as AWS::EC2::Instance or AWS::CloudTrail::Trail.
Default: <All>
DeliveryChannelName:
Type: String
Default: <Generated>
Description: The name of the delivery channel.
Frequency:
Type: String
Default: 24hours
Description: The frequency with which AWS Config delivers configuration snapshots.
AllowedValues:
- 1hour
- 3hours
- 6hours
- 12hours
- 24hours
Conditions:
IsAllSupported: !Equals
- !Ref AllSupported
- True
IsGeneratedDeliveryChannelName: !Equals
- !Ref DeliveryChannelName
- <Generated>
Mappings:
Settings:
FrequencyMap:
1hour : One_Hour
3hours : Three_Hours
6hours : Six_Hours
12hours : Twelve_Hours
24hours : TwentyFour_Hours
Resources:
ConfigBucket:
DeletionPolicy: Retain
UpdateReplacePolicy: Retain
Type: AWS::S3::Bucket
Properties:
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
ConfigBucketPolicy:
Type: AWS::S3::BucketPolicy
Properties:
Bucket: !Ref ConfigBucket
PolicyDocument:
Version: 2012-10-17
Statement:
- Sid: AWSConfigBucketPermissionsCheck
Effect: Allow
Principal:
Service:
- config.amazonaws.com
Action: s3:GetBucketAcl
Resource:
- !Sub "arn:${AWS::Partition}:s3:::${ConfigBucket}"
- Sid: AWSConfigBucketDelivery
Effect: Allow
Principal:
Service:
- config.amazonaws.com
Action: s3:PutObject
Resource:
- !Sub "arn:${AWS::Partition}:s3:::${ConfigBucket}/AWSLogs/${AWS::AccountId}/*"
- Sid: AWSConfigBucketSecureTransport
Action:
- s3:*
Effect: Deny
Resource:
- !Sub "arn:${AWS::Partition}:s3:::${ConfigBucket}"
- !Sub "arn:${AWS::Partition}:s3:::${ConfigBucket}/*"
Principal: "*"
Condition:
Bool:
aws:SecureTransport:
false
ConfigRecorderRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal:
Service:
- config.amazonaws.com
Action:
- sts:AssumeRole
Path: /
ManagedPolicyArns:
- !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AWS_ConfigRole"
ConfigRecorder:
Type: AWS::Config::ConfigurationRecorder
DependsOn:
- ConfigBucketPolicy
Properties:
RoleARN: !GetAtt ConfigRecorderRole.Arn
RecordingGroup:
AllSupported: !Ref AllSupported
IncludeGlobalResourceTypes: !Ref IncludeGlobalResourceTypes
ResourceTypes: !If
- IsAllSupported
- !Ref AWS::NoValue
- !Ref ResourceTypes
ConfigDeliveryChannel:
Type: AWS::Config::DeliveryChannel
DependsOn:
- ConfigBucketPolicy
Properties:
Name: !If
- IsGeneratedDeliveryChannelName
- !Ref AWS::NoValue
- !Ref DeliveryChannelName
ConfigSnapshotDeliveryProperties:
DeliveryFrequency: !FindInMap
- Settings
- FrequencyMap
- !Ref Frequency
S3BucketName: !Ref ConfigBucket
ConfigRuleForVolumeTags:
DependsOn: ConfigRecorder
Type: AWS::Config::ConfigRule
Properties:
InputParameters:
tag1Key: CostCenter
Scope:
ComplianceResourceTypes:
- "AWS::EC2::Volume"
Source:
Owner: AWS
SourceIdentifier: "REQUIRED_TAGS"
# Like this I need all the AWS Managed rules
You can't do this directly. There are no loops in CloudFormation, but you could create a macro if you want that kind of functionality.
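For illustration, a minimal sketch of the macro approach, under the assumption that you register a Lambda-backed AWS::CloudFormation::Macro (called ExpandManagedRules here, a made-up name) and maintain the list of managed rule identifiers yourself, since CloudFormation cannot discover "all" managed rules at deploy time. The macro's Lambda would emit one AWS::Config::ConfigRule per identifier, each shaped like the REQUIRED_TAGS rule above:

Transform: ExpandManagedRules   # hypothetical macro; its Lambda returns the expanded Resources section
Resources:
  # Two examples of what the macro would generate (ROOT_ACCOUNT_MFA_ENABLED and
  # S3_BUCKET_SSL_REQUESTS_ONLY are AWS managed rule identifiers):
  ConfigRuleRootMfa:
    Type: AWS::Config::ConfigRule
    DependsOn: ConfigRecorder
    Properties:
      Source:
        Owner: AWS
        SourceIdentifier: ROOT_ACCOUNT_MFA_ENABLED
  ConfigRuleS3Ssl:
    Type: AWS::Config::ConfigRule
    DependsOn: ConfigRecorder
    Properties:
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY

Even with a macro, the rule identifiers themselves have to be listed somewhere (hard-coded in the Lambda or passed in as a parameter); there is no template-level construct that expands to every managed rule.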
I am trying to set up a WebSocket API Gateway in front of a Lambda function in AWS. When I do this manually, I can successfully deploy the WebSocket API and try it out using wscat. However, I would like to build the architecture using CloudFormation.
The structure of my CloudFormation YAML file looks like this:
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
lambdaRole:
Type: String
Default: ...
backendRole:
Type: String
Default: ...
lambdaImage:
Type: String
Default: ...
Resources:
MyLambdaFunction:
Type: AWS::Lambda::Function
Properties:
Code:
ImageUri: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${backendRole}:${lambdaImage}
Description: lambda connect function
FunctionName: myLambdaFunction
MemorySize: 128
Role: !Sub arn:aws:iam::${AWS::AccountId}:role/${lambdaRole}
Timeout: 3
PackageType: Image
MyWebSocket:
Type: AWS::ApiGatewayV2::Api
Properties:
Name: MyWebSocket
ProtocolType: WEBSOCKET
RouteSelectionExpression: $request.body.action
MyIntegration:
Type: AWS::ApiGatewayV2::Integration
Properties:
ApiId: !Ref MyWebSocket
Description: Lambda Integration
IntegrationType: AWS_PROXY
IntegrationUri: !Join
- ''
- - 'arn:'
- !Ref 'AWS::Partition'
- ':apigateway:'
- !Ref 'AWS::Region'
- ':lambda:path/2015-03-31/functions/'
- !GetAtt MyLambdaFunction.Arn
- /invocations
IntegrationMethod: POST
MyConnectRoute:
Type: AWS::ApiGatewayV2::Route
Properties:
ApiId: !Ref MyWebSocket
RouteKey: $connect
Target: !Join
- /
- - integrations
- !Ref MyIntegration
MyDefaultRoute:
Type: AWS::ApiGatewayV2::Route
Properties:
ApiId: !Ref MyWebSocket
RouteKey: $default
Target: !Join
- /
- - integrations
- !Ref MyIntegration
MyResponseRoute:
Type: AWS::ApiGatewayV2::Route
Properties:
ApiId: !Ref MyWebSocket
RouteKey: add
RouteResponseSelectionExpression: $default
Target: !Join
- /
- - integrations
- !Ref MyIntegration
MyRouteResponse:
Type: AWS::ApiGatewayV2::RouteResponse
Properties:
RouteId: !Ref MyResponseRoute
ApiId: !Ref MyWebSocket
RouteResponseKey: $default
MyIntegrationResponse:
Type: AWS::ApiGatewayV2::IntegrationResponse
Properties:
IntegrationId: !Ref MyIntegration
IntegrationResponseKey: /201/
ApiId: !Ref MyWebSocket
testStage:
Type: AWS::ApiGatewayV2::Stage
DependsOn:
- MyConnectRoute
- MyDefaultRoute
- MyResponseRoute
Properties:
ApiId: !Ref MyWebSocket
StageName: testStage
MyDeployment:
Type: AWS::ApiGatewayV2::Deployment
Properties:
ApiId: !Ref MyWebSocket
Description: My Deployment
StageName: !Ref testStage
The stack builds without any errors and (almost) everything looks as intended. However, unlike the manually built version, the integration of the Lambda function into the WebSocket API does not add the required trigger to the Lambda function. When I manually add a Lambda function to an API Gateway, the trigger is added automatically.
What do I need to change in my CloudFormation YAML file so that the trigger is also added to the Lambda function?
Trigger automatically added to the Lambda function when it is manually added to API Gateway (screenshot omitted).
No trigger added when the Lambda function is added to API Gateway using CloudFormation (screenshot omitted).
It looks like you may just need to include the Lambda permission for API Gateway.
LambdaFunctionPermission:
  Type: "AWS::Lambda::Permission"
  Properties:
    Action: "lambda:InvokeFunction"
    Principal: apigateway.amazonaws.com
    FunctionName: !Ref MyLambdaFunction
  DependsOn: MyWebSocket
If you are having issues with CloudFormation creation, it would probably also help to create an API Gateway account config for logging and to chain the DependsOn order of the gateway resources, like this:
ApiGwAccountConfig:
  Type: "AWS::ApiGateway::Account"
  Properties:
    CloudWatchRoleArn: !GetAtt "rRoleForCloudtrail.Arn"
rRoleForCloudtrail:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: "apigateway-cloudwatch"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: "Allow"
              Action:
                - logs:CreateLogGroup
                - logs:CreateLogStream
                - logs:DescribeLogGroups
                - logs:DescribeLogStreams
                - logs:PutLogEvents
                - logs:GetLogEvents
                - logs:FilterLogEvents
              Resource: "*"
testStage:
  Type: AWS::ApiGatewayV2::Stage
  Properties:
    ApiId: !Ref MyWebSocket
    StageName: testStage
    DefaultRouteSettings:
      DetailedMetricsEnabled: true
      LoggingLevel: INFO
      DataTraceEnabled: true
  DependsOn: ApiGwAccountConfig
MyDeployment:
  Type: AWS::ApiGatewayV2::Deployment
  DependsOn:
    - MyConnectRoute
    - MyDefaultRoute
    - MyResponseRoute
  Properties:
    ApiId: !Ref MyWebSocket
    Description: My Deployment
    StageName: !Ref testStage
I guess you need an AWS::Lambda::Permission resource, e.g.:
LambdaPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt MyLambdaFunction.Arn
    Principal: apigateway.amazonaws.com
    SourceArn:
      Fn::Join:
        - ''
        - - 'arn:aws:execute-api:'
          - Ref: AWS::Region
          - ":"
          - Ref: AWS::AccountId
          - ":"
          - Ref: MyWebSocket
          - "/*"
It may need some tweaking - it works with AWS::ApiGateway::RestApi; I am not sure if there is a difference for AWS::ApiGatewayV2::Api.
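If you want the SourceArn scoped to the WebSocket API specifically, the execute-api ARN for an AWS::ApiGatewayV2::Api should follow the api-id/stage/route-key pattern; a sketch using the resource names from the question (the trailing * covers all stages and routes, including $connect and $default):

LambdaPermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !GetAtt MyLambdaFunction.Arn
    Principal: apigateway.amazonaws.com
    # arn:partition:execute-api:region:account:api-id/stage/route-key
    SourceArn: !Sub arn:${AWS::Partition}:execute-api:${AWS::Region}:${AWS::AccountId}:${MyWebSocket}/*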
I am adding Route 53 records to my DBCluster and keep running into the error: Attribute: ReadEndpoint.Address was not found for resource: <DBCluster-name>
The entire stack is created via CloudFormation.
Also, it should be noted that this is for Aurora Serverless, in case that matters.
Here is my code:
AWSTemplateFormatVersion: 2010-09-09
Description: RDS Aurora serverless template
Parameters:
CustomFunctionArn:
Default: arn:aws:lambda:us-west-2:123456789:function:vault-secrets-read-lambda-prod
Description: The ARN of the lambda function to retrieve password from Vault
Type: String
DBName:
AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
Description: Name of the database
Type: String
DBMasterUsername:
AllowedPattern: '[a-zA-Z][a-zA-Z0-9]*'
Description: The master user name for the DB instance
Type: String
DBScalingAutoPauseEnabled:
AllowedValues:
- 'true'
- 'false'
Default: 'true'
Description: Pause all DB instances after some inactivity
Type: String
DBScalingMaxCapacity:
AllowedValues:
- 2
- 4
- 8
- 16
- 32
- 64
- 192
- 384
Default: 8
Description: The maximum capacity for an Aurora DB cluster in serverless DB engine mode
Type: Number
DBScalingMinCapacity:
AllowedValues:
- 2
- 4
- 8
- 16
- 32
- 64
- 192
- 384
Default: 2
Description: The minimum capacity for an Aurora DB cluster in serverless DB engine mode
Type: Number
DBScalingSecondsUntilAutoPause:
Default: 300
Description: Auto pause after consecutive seconds of inactivity
MinValue: 300
MaxValue: 86400
Type: Number
Env:
AllowedValues:
- prod
- qa
- dev
Type: String
Description: Environment
VaultPath:
Default: secret/dev/dbPassword
Type: String
SnapshotId:
Description: snapshot ID to restore DB cluster from
Type: String
Conditions:
EnableAutoPause:
!Equals [!Ref DBScalingAutoPauseEnabled, 'true']
DoNotUseSnapshot: !Equals
- !Ref SnapshotId
- ''
Mappings:
Configuration:
prod:
HostedZoneEnv: mydomain.com
HostedZoneId: 'XXX'
SecurityGroup: sg-123321
SubnetGroups:
- subnet-123
- subnet-456
- subnet-789
VPCId: vpc-555
Tags:
- Key: Name
Value: my-db
- Key: environment
Value: prod
- Key: component
Value: rds-aurora
- Key: classification
Value: internal
qa:
HostedZoneEnv: mydomain-qa.com
HostedZoneId: 'XXX'
SecurityGroup: sg-321123
SubnetGroups:
- subnet-098
- subnet-765
- subnet-432
VPCId: vpc-345543
Tags:
- Key: Name
Value: my-db
- Key: environment
Value: qa
- Key: component
Value: rds-aurora
- Key: classification
Value: internal
dev:
HostedZoneEnv: mydomain-dev.com
HostedZoneId: 'XXX'
SecurityGroup: sg-f3453f
SubnetGroups:
- subnet-dsf24327
- subnet-82542gsda
- subnet-casaf2344
VPCId: vpc-23dfsf
Tags:
- Key: Name
Value: my-db
- Key: environment
Value: dev
- Key: component
Value: rds-aurora
- Key: classification
Value: internal
Resources:
AuroraSG:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allows access to RDS
GroupName: !Sub '${AWS::StackName}-aurora-rds-${Env}'
SecurityGroupIngress:
- IpProtocol: -1
CidrIp: 0.0.0.0/0
FromPort: 5432
ToPort: 5432
Tags: !FindInMap [Configuration, !Ref Env, Tags]
VpcId: !FindInMap [Configuration, !Ref Env, VPCId]
GetValuefromVault:
Type: Custom::CustomResource
Properties:
ServiceToken: !Ref CustomFunctionArn
VaultKeyPath: !Ref VaultPath
DBCluster:
Type: 'AWS::RDS::DBCluster'
DeletionPolicy: Snapshot
UpdateReplacePolicy: Snapshot
Properties:
BackupRetentionPeriod: 7
DBClusterParameterGroupName: default.aurora-postgresql10
DBSubnetGroupName: !Ref DBSubnetGroup
DatabaseName: !Ref DBName
DeletionProtection: false
# EnableHttpEndpoint: true
Engine: aurora-postgresql
EngineMode: serverless
EngineVersion: '10.7'
KmsKeyId: !If [DoNotUseSnapshot, !Ref KMSkey, !Ref 'AWS::NoValue']
MasterUserPassword: !If [DoNotUseSnapshot, !GetAtt 'GetValuefromVault.ValueFromVault', !Ref 'AWS::NoValue']
MasterUsername: !If [DoNotUseSnapshot, !Ref DBMasterUsername, !Ref 'AWS::NoValue']
Port: 5432
ScalingConfiguration:
AutoPause: !If [EnableAutoPause, true, false]
MaxCapacity: !Ref DBScalingMaxCapacity
MinCapacity: !Ref DBScalingMinCapacity
SecondsUntilAutoPause: !Ref DBScalingSecondsUntilAutoPause
SnapshotIdentifier: !If [DoNotUseSnapshot, !Ref 'AWS::NoValue', !Ref SnapshotId]
StorageEncrypted: true
Tags: !FindInMap [Configuration, !Ref Env, Tags]
VpcSecurityGroupIds:
- !GetAtt [AuroraSG, GroupId]
- !FindInMap [Configuration, !Ref Env, SecurityGroup]
DBSubnetGroup:
Type: 'AWS::RDS::DBSubnetGroup'
Properties:
DBSubnetGroupDescription: !Sub '${AWS::StackName}-${Env}'
SubnetIds: !FindInMap [Configuration, !Ref Env, SubnetGroups]
Tags: !FindInMap [Configuration, !Ref Env, Tags]
KmsAlias:
Type: AWS::KMS::Alias
Properties:
AliasName: !Sub 'alias/${AWS::StackName}-${Env}-aurora-rds'
TargetKeyId: !Ref KMSkey
KMSkey:
Type: AWS::KMS::Key
Properties:
KeyPolicy:
Id: key-consolepolicy-3
Version: 2012-10-17
Statement:
- Sid: Enable IAM User Permissions
Effect: Allow
Principal:
AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
Action: 'kms:*'
Resource: '*'
RecordSet:
Type: AWS::Route53::RecordSet
DependsOn: DBCluster
Properties:
HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
Name: !Join ['', [!Ref DBName, -writer-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
ResourceRecords:
- !GetAtt DBCluster.Endpoint.Address
TTL: '60'
Type: CNAME
ReadRecordSet:
Type: 'AWS::Route53::RecordSet'
DependsOn:
- DBCluster
Properties:
HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
Name: !Join ['', [!Ref DBName, -reader-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
ResourceRecords:
- !GetAtt DBCluster.ReadEndpoint.Address
TTL: '60'
Type: CNAME
Outputs:
AuroraHost:
Value: !GetAtt [DBCluster, Endpoint.Address]
Export:
Name: !Join [":", [ !Ref "AWS::StackName", 'Host' ]]
AuroraSG:
Value: !GetAtt AuroraSG.GroupId
Export:
Name: !Join [":", [ !Ref "AWS::StackName", AuroraSG ]]
KMS:
Value: !GetAtt [KMSkey, Arn]
Export:
Name: !Join [":", [ !Ref "AWS::StackName", 'KMS' ]]
DNSName:
Description: 'The connection endpoint for the DB cluster.'
Value: !GetAtt 'DBCluster.Endpoint.Address'
Export:
Name: !Sub '${AWS::StackName}-DNSName'
ReadDNSName:
Description: 'The reader endpoint for the DB cluster.'
Value: !GetAtt 'DBCluster.ReadEndpoint.Address'
Export:
Name: !Sub '${AWS::StackName}-ReadDNSName'
Some things I have tried:
Create new stack: FAIL
Create new stack without ReadRecordSet: FAIL
Create new stack without RecordSet (old name for read recordset): FAIL
Create new stack without RecordSet (new name for read recordset): FAIL
Add a DependsOn to ReadRecordSet (for first RecordSet): FAIL
Enabling HTTP endpoint on Cluster: FAIL
Update TTL to 60: FAIL
Update TTL to 0: FAIL
The RecordSet seems to create okay (I tested that by adding a DependsOn: RecordSet to the ReadRecordSet so that RecordSet is created first), so it's the ReadRecordSet that is failing because it can't find ReadEndpoint.Address.
Not sure what I am missing here; I've been googling like mad and don't see much about this error. Any help is appreciated!
It turns out that Aurora Serverless doesn't have a reader endpoint, so the entire ReadRecordSet section is only applicable to a provisioned DB cluster, and ReadEndpoint indeed doesn't exist. Unfortunately, the AWS documentation doesn't mention that explicitly.
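If a reader DNS name is still wanted for the serverless cluster, one sketch (using the resource names from the template above) is to point it at the single cluster endpoint, since serverless clusters only expose Endpoint.Address; otherwise, delete ReadRecordSet and the ReadDNSName output entirely.

ReadRecordSet:
  Type: AWS::Route53::RecordSet
  DependsOn: DBCluster
  Properties:
    HostedZoneId: !FindInMap [Configuration, !Ref Env, HostedZoneId]
    Name: !Join ['', [!Ref DBName, -reader-db, ., !FindInMap [Configuration, !Ref Env, HostedZoneEnv], .]]
    ResourceRecords:
      # Aurora Serverless has no ReadEndpoint attribute; reuse the writer endpoint
      - !GetAtt DBCluster.Endpoint.Address
    TTL: '60'
    Type: CNAME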
I'm trying my hand at CloudFormation nested stacks. The idea is that I create a VPC, S3 bucket, CodeBuild project, and CodePipeline pipeline using CloudFormation.
My problem: CloudFormation is saying that the following parameters (output by the child stacks) require values:
Vpc
PrivateSubnet1
PrivateSubnet2
PrivateSubnet3
BucketName
These parameters should have values, as the values exist when I look at a completed child stack in the console.
I'll just show the templates for the parent, S3, and CodePipeline stacks. With regard to these three templates, the problem is that I am unable to use the BucketName output from S3Stack in my CodePipelineStack.
My Code:
cfn-main.yaml
AWSTemplateFormatVersion: 2010-09-09
Description: root template for codepipeline poc
Parameters:
BucketName:
Type: String
VpcName:
Description: name of the vpc
Type: String
Default: sandbox
DockerUsername:
Type: String
Description: username for hub.docker
Default: seanturner026
DockerPassword:
Type: String
Description: password for hub.docker
Default: /codebuild/docker/password
Environment:
Type: String
Description: environment
AllowedValues:
- dev
- prod
Default: dev
Vpc:
Type: AWS::EC2::VPC::Id
PrivateSubnet1:
Type: AWS::EC2::Subnet::Id
PrivateSubnet2:
Type: AWS::EC2::Subnet::Id
PrivateSubnet3:
Type: AWS::EC2::Subnet::Id
GithubRepository:
Type: String
Description: github repository
Default: aws-codepipeline-poc
GithubBranch:
Type: String
Description: github branch
Default: master
GithubOwner:
Type: String
Description: github owner
Default: SeanTurner026
GithubToken:
Type: String
Description: github token for codepipeline
NoEcho: true
Resources:
VpcStack:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
VpcName: !Ref VpcName
TemplateURL: resources/vpc.yaml
S3Stack:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: resources/s3.yaml
CodeBuildStack:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
Environment: !Ref Environment
DockerUsername: !Ref DockerUsername
DockerPassword: !Ref DockerPassword
Vpc: !GetAtt VpcStack.Outputs.VpcId
PrivateSubnet1: !GetAtt VpcStack.Outputs.PrivateSubnetId1
PrivateSubnet2: !GetAtt VpcStack.Outputs.PrivateSubnetId2
PrivateSubnet3: !GetAtt VpcStack.Outputs.PrivateSubnetId3
TemplateURL: resources/codebuild.yaml
CodePipelineStack:
Type: AWS::CloudFormation::Stack
Properties:
Parameters:
Environment: !Ref Environment
GithubRepository: !Ref GithubRepository
GithubBranch: !Ref GithubBranch
GithubOwner: !Ref GithubOwner
GithubToken: !Ref GithubToken
S3: !GetAtt S3Stack.Outputs.BucketName
TemplateURL: resources/codepipeline.yaml
s3.yaml
AWSTemplateFormatVersion: 2010-09-09
Description: s3 bucket for aws codepipeline poc
Resources:
S3:
Type: "AWS::S3::Bucket"
Properties:
BucketName: "aws-sean-codepipeline-poc"
Outputs:
BucketName:
Description: S3 bucket name
Value: !Ref S3
codepipeline.yaml -- please see ArtifactStore. This is where CloudFormation sees my parameter BucketName as value-less.
AWSTemplateFormatVersion: 2010-09-09
Description: codepipeline for aws codepipeline poc
Parameters:
BucketName:
Type: String
Environment:
Type: String
Description: environment
AllowedValues:
- dev
- prod
Default: dev
GithubRepository:
Type: String
Description: github repository
Default: aws-codepipeline-poc
GithubBranch:
Type: String
Description: github branch
Default: master
GithubOwner:
Type: String
Description: github owner
Default: SeanTurner026
GithubToken:
Type: String
Description: github token for codepipeline
NoEcho: true
Resources:
CodePipelineRole:
Type: "AWS::IAM::Role"
Properties:
RoleName: !Join
- ""
- - !Ref AWS::StackName
- "-code-pipeline-role-"
- !Ref Environment
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
Effect: "Allow"
Principal:
Service: "codepipeline.amazonaws.com"
Action: "sts:AssumeRole"
CodePipelinePolicy:
Type: "AWS::IAM::Policy"
Properties:
PolicyName: !Join
- ""
- - !Ref AWS::StackName
- "-code-pipeline-policy-"
- !Ref Environment
PolicyDocument:
Version: "2012-10-17"
Statement:
Effect: Allow
Action:
- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents
- s3:putObject
- s3:getObject
- codebuild:*
Resource:
- "*"
Roles:
- !Ref CodePipelineRole
Pipeline:
Type: "AWS::CodePipeline::Pipeline"
Properties:
Name: !Join
- ""
- - "code-pipeline-poc-"
- !Ref AWS::StackName
ArtifactStore:
Location: !Ref BucketName
Type: S3
RestartExecutionOnUpdate: true
RoleArn: !Join
- ""
- - "arn:aws:iam::"
- !Ref AWS::AccountId
- ":role/"
- !Ref CodePipelineRole
Stages:
- Name: checkout-source-code
Actions:
- Name: SourceAction
RunOrder: 1
ActionTypeId:
Category: Source
Owner: ThirdParty
Provider: GitHub
Version: 1
Configuration:
Owner: !Ref GithubOwner
Repo: !Ref GithubRepository
Branch: !Ref GithubBranch
PollForSourceChanges: true
OAuthToken: !Ref GithubToken
OutputArtifacts:
- Name: source-code
- Name: docker-build-push
Actions:
- Name: build-push-job
RunOrder: 1
InputArtifacts:
- Name: source-code
ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: 1
Configuration:
ProjectName: !Ref BuildPushJob
OutputArtifacts:
- Name: build-push-job
Sorry if this is too verbose. In case it was missed above, the problem is that the ArtifactStore in codepipeline.yaml sees my parameter BucketName as value-less, despite the value being output by S3Stack.
You pass the parameter as S3, but the template is expecting it as BucketName.
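A sketch of the corrected CodePipelineStack resource in cfn-main.yaml (same values as before; only the parameter key is renamed so it matches the BucketName parameter declared in codepipeline.yaml). As a side note, the Vpc, PrivateSubnet1-3, and BucketName entries in the parent's own Parameters section have no defaults and come from child-stack outputs anyway, which is likely why CloudFormation prompts for their values; they can probably be removed from cfn-main.yaml.

CodePipelineStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    Parameters:
      Environment: !Ref Environment
      GithubRepository: !Ref GithubRepository
      GithubBranch: !Ref GithubBranch
      GithubOwner: !Ref GithubOwner
      GithubToken: !Ref GithubToken
      # Key must match the parameter name expected by codepipeline.yaml
      BucketName: !GetAtt S3Stack.Outputs.BucketName
    TemplateURL: resources/codepipeline.yaml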
I am trying to deploy a CloudFormation template for two Lambdas and an Elastic IP.
I am not sure how to fix this error in my template:
Fn::GetAtt object requires two non-empty parameters
Would you be able to give me some insight, please?
What are some best practices for debugging a CloudFormation template?
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description:
Parameters:
PrivateSubnets:
Type: AWS::SSM::Parameter::Value<List<AWS::EC2::Subnet::Id>>
Default: /vpc/subnets/private-ids
VpcId:
Type: AWS::SSM::Parameter::Value<AWS::EC2::VPC::Id>
Default: /vpc/id
LogLevel:
Type: String
AllowedValues:
- debug
- warn
- info
- error
- critical
Default: debug
Xray:
Type: String
AllowedValues:
- Active
- PassThrough
Default: Active
Globals:
Function:
AutoPublishAlias: live
Runtime: nodejs10.x
Tracing:
Ref: Xray
Environment:
Variables:
NODE_ENV: production
LOG_LEVEL:
Ref: LogLevel
ALB_ENDPOINT:
Fn::GetAtt:
- LoadBalancer
- DNSName
SECRETS_ID:
Ref: ServiceSecrets
Resources:
ServiceSecrets:
Type: AWS::SecretsManager::Secret
Properties:
KmsKeyId:
Fn::ImportValue: kms-default-key-id
Description:
Ref: AWS::StackName
SecretString: '{ "refresh_token": "abc", "id_token": "abc" }'
LambdaPolicyCommon:
Type: AWS::IAM::ManagedPolicy
Properties:
Path: /
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- secretsmanager:GetSecretValue
Resource:
Ref: ServiceSecrets
- Effect: Allow
Action:
- kms:GenerateDataKey
- kms:Decrypt
Resource:
Fn::ImportValue: kms-default-key-arn
LambdaFunctionCronJob:
Type: AWS::Serverless::Function
Properties:
ReservedConcurrentExecutions: 1
CodeUri: s3://s3bucket-881
Handler: index.handler
Timeout: 60
Policies:
- Ref: LambdaPolicyCommon
- Fn::ImportValue: iam-policy-lambda-common-arn
VpcConfig:
SubnetIds:
Ref: PrivateSubnets
SecurityGroupIds:
- Ref: LoadBalancerSecurityGroup
Events:
Cron:
Type: Schedule
Properties:
Schedule: rate(1 minute)
LambdaFunctionProxy:
Type: AWS::Serverless::Function
Properties:
ReservedConcurrentExecutions: 1
CodeUri: s3://cicd-buildartifactsbucket-3d345d2c
Handler: index.handler
Timeout: 60
Policies:
- Ref: LambdaPolicyCommon
- Fn::ImportValue: iam-policy-lambda-common-arn
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Scheme: internal
Subnets:
Ref: PrivateSubnets
SecurityGroups:
- Ref: LoadBalancerSecurityGroup
TargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
DependsOn: AlbLambdaFunctionInvokePermission
Properties:
TargetType: lambda
Targets:
- Id:
Fn::GetAtt:
- LambdaFunctionCronJob
- Alias
HttpListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- TargetGroupArn:
Ref: TargetGroup
Type: forward
LoadBalancerArn:
Ref: LoadBalancer
Port: 80
Protocol: HTTP
LoadBalancerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: group description
VpcId:
Ref: VpcId
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
AlbLambdaFunctionInvokePermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName:
Fn::GetAtt:
- LambdaFunctionProxy
- Alias
Action: lambda:InvokeFunction
Principal: elasticloadbalancing.amazonaws.com
Try to change:
Fn::GetAtt:
- LambdaFunctionProxy
- Alias
To:
Fn::GetAtt:
- "LambdaFunctionProxy"
- "Alias"
You should either use the full form of the function:
Fn::GetAtt: [ LoadBalancer, DNSName ]
or the short form:
!GetAtt LoadBalancer.DNSName
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getatt.html
From AWS Support:
For a serverless function, the alias can be referenced as "Ref": "MyLambdaFunction.Alias".
Since, for a serverless function, CloudFormation does not accept the (logical name, attribute name) format, it finds the "Alias" attribute empty and does not resolve it.
It will need to be referenced like this:
Properties:
  FunctionName: !Ref LambdaFunctionProxyToHc.Alias
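Applied to the template in the question, a sketch of the two resources that reference the alias (assuming the resource names above; with SAM's AutoPublishAlias, the alias ARN is obtained with !Ref <Function>.Alias rather than Fn::GetAtt):

TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  DependsOn: AlbLambdaFunctionInvokePermission
  Properties:
    TargetType: lambda
    Targets:
      # Alias ARN published by AutoPublishAlias: live
      - Id: !Ref LambdaFunctionCronJob.Alias
AlbLambdaFunctionInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref LambdaFunctionProxy.Alias
    Action: lambda:InvokeFunction
    Principal: elasticloadbalancing.amazonaws.com

Note that the target group above points at LambdaFunctionCronJob while the invoke permission is granted on LambdaFunctionProxy, as in the original template; the ALB needs the permission on whichever function it actually targets.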