Make my DBCluster DependsOn the serverless-vpc-plugin resources - amazon-web-services

I want to create a VPC with a DBCluster in it using the serverless-vpc-plugin. If I do it in two steps, first the VPC, and then the cluster, everything works. But if I do both simultaneously, the serverless deploy fails, complaining that the DBSubnetGroup has not been created yet.
I tried making the DBCluster DependsOn: VPC, but that did not help. Here are the relevant parts:
service: vpn
frameworkVersion: '2'
custom:
  stage: ${opt:stage, self:provider.stage}
  region: ${opt:region, self:provider.region}
  vpcConfig:
    createNatGateway: 1
    createNetworkAcl: true
    subnetGroups:
      - rds
provider:
  name: aws
  runtime: nodejs14.x
  lambdaHashingVersion: 20201221
resources:
  Resources:
    ClusterSecret:
      Type: AWS::SecretsManager::Secret
      Properties:
        [...]
    AuroraDBCluster:
      Type: AWS::RDS::DBCluster
      DependsOn: VPC
      Properties:
        DatabaseName: [...]
        DBClusterIdentifier: [...]
        DBSubnetGroupName: ${self:service}-${self:custom.stage}
        Engine: aurora-postgresql
        EngineMode: serverless
        EngineVersion: "10.14"
        MasterUsername: [...]
        MasterUserPassword: [...]
plugins:
  - serverless-vpc-plugin
  - serverless-offline

Using DependsOn: RDSSubnetGroup instead of DependsOn: VPC did the job.
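For reference, a minimal sketch of that fix against the cluster above, assuming RDSSubnetGroup is the logical ID the serverless-vpc-plugin gives the generated DB subnet group (the exact logical ID may vary by plugin version):

AuroraDBCluster:
  Type: AWS::RDS::DBCluster
  DependsOn: RDSSubnetGroup  # wait for the subnet group created by serverless-vpc-plugin
  Properties:
    DBSubnetGroupName: ${self:service}-${self:custom.stage}
    Engine: aurora-postgresql
    EngineMode: serverless
    EngineVersion: "10.14"

This works because CloudFormation only orders creation when a resource has an explicit DependsOn or an implicit reference; the cluster refers to the subnet group by name rather than by Ref, so it needs the explicit dependency.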

Related

Cannot create PostgreSQL with CloudFormation but works with web interface

I'm trying to create a Serverless V2 Aurora PostgreSQL cluster and an instance with CloudFormation.
It works fine when using the AWS web interface, but when using CloudFormation (through Serverless) I get:
Error: CREATE_FAILED: auroraCluster (AWS::RDS::DBCluster) Resource handler returned message: "The engine mode serverless you requested is currently unavailable."
CF Template:
# Database
auroraCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    AutoMinorVersionUpgrade: 'true'
    AvailabilityZones:
      - eu-north-1a
      - eu-north-1b
      - eu-north-1c
    DatabaseName:
      publisher
    DeletionProtection: !If [isProd, 'true', 'false']
    Engine: aurora-postgresql
    EngineMode: serverless
    EngineVersion: '14.6'
auroraInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    AllowMajorVersionUpgrade: !If [isProd, 'true', 'false']
    AutoMinorVersionUpgrade: 'true'
    AvailabilityZone: !Sub ${AWS::Region}a
    DBClusterIdentifier: !Ref auroraCluster
    DBInstanceIdentifier: ${self:service}-rds-${sls:stage}
    DeleteAutomatedBackups: !If [isProd, 'false', 'true']
    DeletionProtection: !If [isProd, 'true', 'false']
    Engine: aurora-postgresql
    ManageMasterUserPassword: 'true'
    MasterUsername: postgres
    MasterUserSecret:
      SecretArn: !Ref secretRds
Serverless Aurora for PostgreSQL 14.6 is only supported as Aurora Serverless v2. This requires a different setup than you have: you have to provide ServerlessV2ScalingConfiguration, remove EngineMode, and set DBInstanceClass to db.serverless on the instance. For example:
Resources:
  auroraCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      #AutoMinorVersionUpgrade: 'true'
      DatabaseName:
        publisher
      Engine: aurora-postgresql
      EngineVersion: '14.6'
      MasterUsername: "trdyd"
      MasterUserPassword: "gfsdg344231"
      ServerlessV2ScalingConfiguration:
        MinCapacity: 1
        MaxCapacity: 4
  auroraInstance:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      Engine: aurora-postgresql
      DBInstanceClass: db.serverless
      DBClusterIdentifier: !Ref auroraCluster
Obviously you have to adjust the above example, which works, to exactly what you need, taking Serverless v2 capabilities into account.

AWS CloudFormation service was unable to place a task because no container instance met all of its requirements

I am trying to create an ECS service using a CI/CD pipeline. I have defined a TaskDefinition and an ECSService along with a VPC.
CloudFormation created the cluster but got stuck on the ECSService creation.
I went to the ECS service events and found the error: 'service my-service-name was unable to place a task because no container instance met all of its requirements. Reason: No Container Instances were found in your cluster. For more information, see the Troubleshooting section.'
Am I missing something in my pipeline?
Here are my TaskDefinition and ECSService:
AWSTemplateFormatVersion: 2010-09-09
Description: The CloudFormation template for the Fargate ECS Cluster.
Parameters:
  Stage:
    Type: String
  ContainerPort:
    Type: Number
  ImageURI:
    Type: String
Resources:
  # Create an ECS Cluster
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'Cluster']]
  # Create a VPC
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsHostnames: True
      EnableDnsSupport: True
  # Create a Subnet
  SubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: 10.0.0.0/24
      VpcId: !Ref VPC
      AvailabilityZone: !Join ['', [!Ref "AWS::Region", 'a']]
  # Create a Subnet
  SubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      CidrBlock: 10.0.1.0/24
      VpcId: !Ref VPC
      AvailabilityZone: !Join ['', [!Ref "AWS::Region", 'b']]
  # Create Access Role for ECS-Tasks
  ExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'ExecutionRole']]
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
  # Create a TaskDefinition with container details
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - 'EC2'
      TaskRoleArn: !Ref ExecutionRole
      ExecutionRoleArn: !Ref ExecutionRole
      ContainerDefinitions:
        - Name: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'Container']]
          Image: !Ref ImageURI
          Cpu: 1024
          Memory: 1024
          PortMappings:
            - ContainerPort: !Ref ContainerPort
              HostPort: !Ref ContainerPort
  # Create an ECS Service and add created Cluster, TaskDefintion, Subnets, TargetGroup and SecurityGroup
  ECSService:
    Type: AWS::ECS::Service
    Properties:
      ServiceName: !Join ['-', [!Ref Stage, !Ref 'AWS::AccountId', 'ECSService']]
      Cluster: !Ref Cluster
      TaskDefinition: !Ref TaskDefinition
      DesiredCount: 1
      LaunchType: EC2
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - !Ref SubnetA
            - !Ref SubnetB
I have tried the answers to already posted questions. In most cases people hit this error through the AWS web interface; for me, ECS works when set up via the web interface, but I am not able to get it working from my pipeline.
You have to explicitly provision EC2 container instances for your ECS tasks. Your current CloudFormation template does not create any EC2 instances to be used by your ECS cluster and tasks.
'No Container Instances were found' is the error: you have created an empty cluster with no instances registered in it.
You can do it manually as per this doc:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html
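To keep it in the template instead, a sketch along these lines could register container instances with the cluster. This is not from the original answers; the instance type, role names, and AMI lookup are assumptions, and the subnets would also need a route to the internet (or ECS VPC endpoints) for the ECS agent to register:

Parameters:
  EcsAmiId:
    # ECS-optimized Amazon Linux 2 AMI resolved from the public SSM parameter
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id
Resources:
  EcsInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role'
  EcsInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref EcsInstanceRole
  EcsLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref EcsAmiId
        InstanceType: t3.small  # assumption: anything that fits the task's 1024 CPU / 1024 MiB
        IamInstanceProfile:
          Arn: !GetAtt EcsInstanceProfile.Arn
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            # join this instance to the cluster created in the template above
            echo "ECS_CLUSTER=${Cluster}" >> /etc/ecs/ecs.config
  EcsAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '1'
      MaxSize: '2'
      DesiredCapacity: '1'
      VPCZoneIdentifier:
        - !Ref SubnetA
        - !Ref SubnetB
      LaunchTemplate:
        LaunchTemplateId: !Ref EcsLaunchTemplate
        Version: !GetAtt EcsLaunchTemplate.LatestVersionNumber

Alternatively, since the template's Description already says Fargate, switching RequiresCompatibilities and LaunchType to FARGATE (plus a security group and public IP or NAT route in the network configuration) avoids managing container instances altogether.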

How can I provide a variable for `AWS::RDS::DBInstance` in cloudformation?

I am using CloudFormation to provision RDS Aurora on AWS, with AWS::RDS::DBCluster and AWS::RDS::DBInstance resources in the template. I have different environments, e.g. dev, uat and prod, and each environment has a different number of DB instances under the cluster. How can I set the number of DB instances as a variable in the CloudFormation template?
Below is my template for AWS::RDS::DBInstance. As you can see, there are three instances in the template, which is only right for production, not dev. How can I use a parameter to indicate the number of instances? For example, deploy 1 instance in dev and 3 in prod.
AuroraDBFirstInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBInstanceClass: ${self:provider.postgresqlInstanceClass}
    Engine: aurora-postgresql
    EngineVersion: ${self:provider.postgresqlEngineVersion}
    DBClusterIdentifier: !Ref AuroraDBCluster
    PubliclyAccessible: ${self:provider.publiclyAccessible}
AuroraDBSecondInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBInstanceClass: ${self:provider.postgresqlInstanceClass}
    Engine: aurora-postgresql
    EngineVersion: ${self:provider.postgresqlEngineVersion}
    DBClusterIdentifier: !Ref AuroraDBCluster
    PubliclyAccessible: ${self:provider.publiclyAccessible}
AuroraDBThirdInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    DBInstanceClass: ${self:provider.postgresqlInstanceClass}
    Engine: aurora-postgresql
    EngineVersion: ${self:provider.postgresqlEngineVersion}
    DBClusterIdentifier: !Ref AuroraDBCluster
    PubliclyAccessible: ${self:provider.publiclyAccessible}
You can add it as a parameter and pass it when you run the stack; you can even create a mapping like this:
Parameters:
  Environment:
    Type: String
    AllowedValues:
      - dev
      - uat
      - prod
Mappings:
  EnvironmentToDb:
    dev:
      Cluster: 1
    uat:
      Cluster: 2
    prod:
      Cluster: 3
Then you can reference it using:
DBClusterIdentifier: !FindInMap [EnvironmentToDb, !Ref 'Environment', Cluster]
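If the goal is specifically to create fewer DBInstance resources in dev, a common companion pattern (a sketch, not part of the original answer) is to gate the extra instances with a Condition on the same Environment parameter:

Conditions:
  IsProd: !Equals [!Ref Environment, prod]
Resources:
  AuroraDBSecondInstance:
    Type: AWS::RDS::DBInstance
    Condition: IsProd  # only created when Environment is prod
    Properties:
      DBInstanceClass: ${self:provider.postgresqlInstanceClass}
      Engine: aurora-postgresql
      EngineVersion: ${self:provider.postgresqlEngineVersion}
      DBClusterIdentifier: !Ref AuroraDBCluster
      PubliclyAccessible: ${self:provider.publiclyAccessible}

Since the template is deployed through Serverless, the same effect can also be had with stage-keyed custom variables or a plugin such as serverless-plugin-ifelse, as used in the VPC question below.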

How do I configure my PUBLIC AWS custom domain to resolve to a lambda that is configured for my VPC?

What is working:
Using the serverless framework:
- I have configured an AWS VPC
- I have an Amazon Aurora database configured for my VPC
- I have an AWS API Gateway lambda that is configured for my VPC
- When I deploy my lambda, I am able to access it publicly via the AWS-generated URL: XXX.execute-api.us-east-1.amazonaws.com/prod/outages
- In my Lambda, I run a very simple query that proves I can connect to my database.
This all works fine.
What is NOT working:
- I have registered a domain with AWS/Route 53 and added a cert (e.g. *.foo.com)
- I use the serverless-domain-manager plugin to make my lambda available via my domain (e.g. api.foo.com/outages resolves to XXX.execute-api.us-east-1.amazonaws.com/prod/outages)
- This works fine if my lambda is NOT configured for my VPC
- But when my lambda IS configured for my VPC, the custom domain api.foo.com/outages does NOT resolve to XXX.execute-api.us-east-1.amazonaws.com/prod/outages
In other words: I can NOT access api.foo.com/outages publicly.
What I need is:
1 - XXX.execute-api.us-east-1.amazonaws.com/prod/outages is available publicly (this works)
2 - My custom domain, api.foo.com/outages points to the SAME lambda as XXX.execute-api.us-east-1.amazonaws.com/prod/outages (in my VPC) and is available publicly (not working. I get: {"message":"Forbidden"})
virtual-private-cloud.yml
service: virtual-private-cloud
provider:
  name: aws
  region: us-east-1
  stage: ${opt:stage, dev}
custom:
  appVersion: 0.0.0
  VPC_CIDR: 10
resources:
  Resources:
    ServerlessVPC:
      Type: AWS::EC2::VPC
      Properties:
        CidrBlock: ${self:custom.VPC_CIDR}.0.0.0/16
        EnableDnsSupport: true
        EnableDnsHostnames: true
        InstanceTenancy: default
    ServerlessSubnetA:
      DependsOn: ServerlessVPC
      Type: AWS::EC2::Subnet
      Properties:
        VpcId:
          Ref: ServerlessVPC
        AvailabilityZone: ${self:provider.region}a
        CidrBlock: ${self:custom.VPC_CIDR}.0.0.0/24
    ServerlessSubnetB:
      DependsOn: ServerlessVPC
      Type: AWS::EC2::Subnet
      Properties:
        VpcId:
          Ref: ServerlessVPC
        AvailabilityZone: ${self:provider.region}b
        CidrBlock: ${self:custom.VPC_CIDR}.0.1.0/24
    ServerlessSubnetC:
      DependsOn: ServerlessVPC
      Type: AWS::EC2::Subnet
      Properties:
        VpcId:
          Ref: ServerlessVPC
        AvailabilityZone: ${self:provider.region}c
        CidrBlock: ${self:custom.VPC_CIDR}.0.2.0/24
  Outputs:
    VPCDefaultSecurityGroup:
      Value:
        Fn::GetAtt:
          - ServerlessVPC
          - DefaultSecurityGroup
      Export:
        Name: VPCDefaultSecurityGroup-${self:provider.stage}
    SubnetA:
      Description: 'Subnet A.'
      Value: !Ref ServerlessSubnetA
      Export:
        Name: vpc-subnet-A-${self:provider.stage}
    SubnetB:
      Description: 'Subnet B.'
      Value: !Ref ServerlessSubnetB
      Export:
        Name: vpc-subnet-B-${self:provider.stage}
    SubnetC:
      Description: 'Subnet C.'
      Value: !Ref ServerlessSubnetC
      Export:
        Name: vpc-subnet-C-${self:provider.stage}
database-service.yml
service: database-service
provider:
  name: aws
  region: us-east-1
  stage: ${opt:stage, dev}
  environment:
    stage: ${opt:stage, dev}
plugins:
  - serverless-plugin-ifelse
custom:
  appVersion: 0.0.1
  AURORA:
    DB_NAME: database${self:provider.stage}
    USERNAME: ${ssm:/my-db-username~true}
    PASSWORD: ${ssm:/my-db-password~true}
    HOST:
      Fn::GetAtt: [AuroraRDSCluster, Endpoint.Address]
    PORT:
      Fn::GetAtt: [AuroraRDSCluster, Endpoint.Port]
  serverlessIfElse:
    - If: '"${opt:stage}" == "prod"'
      Set:
        resources.Resources.AuroraRDSCluster.Properties.EngineMode: provisioned
      ElseSet:
        resources.Resources.AuroraRDSCluster.Properties.EngineMode: serverless
        resources.Resources.AuroraRDSCluster.Properties.ScalingConfiguration.MinCapacity: 1
        resources.Resources.AuroraRDSCluster.Properties.ScalingConfiguration.MaxCapacity: 4
      ElseExclude:
        - resources.Resources.AuroraRDSInstanceParameter
        - resources.Resources.AuroraRDSInstance
resources:
  Resources:
    AuroraSubnetGroup:
      Type: AWS::RDS::DBSubnetGroup
      Properties:
        DBSubnetGroupDescription: "Aurora Subnet Group"
        SubnetIds:
          - 'Fn::ImportValue': vpc-subnet-A-${self:provider.stage}
          - 'Fn::ImportValue': vpc-subnet-B-${self:provider.stage}
          - 'Fn::ImportValue': vpc-subnet-C-${self:provider.stage}
    AuroraRDSClusterParameter:
      Type: AWS::RDS::DBClusterParameterGroup
      Properties:
        Description: Parameter group for the Serverless Aurora RDS DB.
        Family: aurora5.6
        Parameters:
          character_set_database: "utf32"
    AuroraRDSCluster:
      Type: "AWS::RDS::DBCluster"
      Properties:
        MasterUsername: ${self:custom.AURORA.USERNAME}
        MasterUserPassword: ${self:custom.AURORA.PASSWORD}
        DBSubnetGroupName:
          Ref: AuroraSubnetGroup
        Engine: aurora
        EngineVersion: "5.6.10a"
        DatabaseName: ${self:custom.AURORA.DB_NAME}
        BackupRetentionPeriod: 3
        DBClusterParameterGroupName:
          Ref: AuroraRDSClusterParameter
        VpcSecurityGroupIds:
          - 'Fn::ImportValue': VPCDefaultSecurityGroup-${self:provider.stage}
    # this is needed for non-serverless mode
    AuroraRDSInstanceParameter:
      Type: AWS::RDS::DBParameterGroup
      Properties:
        Description: Parameter group for the Serverless Aurora RDS DB.
        Family: aurora5.6
        Parameters:
          sql_mode: IGNORE_SPACE
          max_connections: 100
          wait_timeout: 900
          interactive_timeout: 900
    # this is needed for non-serverless mode
    AuroraRDSInstance:
      Type: "AWS::RDS::DBInstance"
      Properties:
        DBInstanceClass: db.t2.small
        DBSubnetGroupName:
          Ref: AuroraSubnetGroup
        Engine: aurora
        EngineVersion: "5.6.10a"
        PubliclyAccessible: false
        DBParameterGroupName:
          Ref: AuroraRDSInstanceParameter
        DBClusterIdentifier:
          Ref: AuroraRDSCluster
  Outputs:
    DatabaseName:
      Description: 'Database name.'
      Value: ${self:custom.AURORA.DB_NAME}
      Export:
        Name: DatabaseName-${self:provider.stage}
    DatabaseHost:
      Description: 'Database host.'
      Value: ${self:custom.AURORA.HOST}
      Export:
        Name: DatabaseHost-${self:provider.stage}
    DatabasePort:
      Description: 'Database port.'
      Value: ${self:custom.AURORA.PORT}
      Export:
        Name: DatabasePort-${self:provider.stage}
outage-service.yml
service: outage-service
package:
  individually: true
plugins:
  - serverless-bundle
  - serverless-plugin-ifelse
  - serverless-domain-manager
custom:
  appVersion: 0.0.12
  stage: ${opt:stage}
  domains:
    prod: api.foo.com
    test: test-api.foo.com
    dev: dev-api.foo.com
  customDomain:
    domainName: ${self:custom.domains.${opt:stage}}
    stage: ${opt:stage}
    basePath: outages
    custom.customDomain.certificateName: "*.foo.com"
    custom.customDomain.certificateArn: 'arn:aws:acm:us-east-1:395671985612:certificate/XXXX'
    createRoute53Record: true
  serverlessIfElse:
    - If: '"${opt:stage}" == "prod"'
      Set:
        custom.customDomain.enabled: true
      ElseSet:
        custom.customDomain.enabled: false
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage}
  region: us-east-1
  environment:
    databaseName: !ImportValue DatabaseName-${self:provider.stage}
    databaseUsername: ${ssm:/my-db-username~true}
    databasePassword: ${ssm:/my-db-password~true}
    databaseHost: !ImportValue DatabaseHost-${self:provider.stage}
    databasePort: !ImportValue DatabasePort-${self:provider.stage}
functions:
  hello:
    memorySize: 2048
    timeout: 30
    handler: functions/handler.hello
    vpc:
      securityGroupIds:
        - 'Fn::ImportValue': VPCDefaultSecurityGroup-${self:provider.stage}
      subnetIds:
        - 'Fn::ImportValue': vpc-subnet-A-${self:provider.stage}
        - 'Fn::ImportValue': vpc-subnet-B-${self:provider.stage}
        - 'Fn::ImportValue': vpc-subnet-C-${self:provider.stage}
    environment:
      functionName: getTowns
    events:
      - http:
          path: outage
          method: get
          cors:
            origin: '*'
            headers:
              - Content-Type
              - authorization
resources:
  - Outputs:
      ApiGatewayRestApiId:
        Value:
          Ref: ApiGatewayRestApi
        Export:
          Name: ${self:custom.stage}-ApiGatewayRestApiId
      ApiGatewayRestApiRootResourceId:
        Value:
          Fn::GetAtt:
            - ApiGatewayRestApi
            - RootResourceId
        Export:
          Name: ${self:custom.stage}-ApiGatewayRestApiRootResourceId
I believe you might have missed adding VPC endpoints for API Gateway.
Ensure you create a VPC interface endpoint for API Gateway and use it in the API you create. This will allow API requests to reach the Lambda running within the VPC.
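As a rough illustration of what this answer suggests (an assumption, not part of the original config), an interface endpoint for execute-api could be declared next to the VPC resources:

ExecuteApiEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    ServiceName: com.amazonaws.${self:provider.region}.execute-api
    VpcEndpointType: Interface
    VpcId:
      Ref: ServerlessVPC
    PrivateDnsEnabled: true
    SubnetIds:
      - Ref: ServerlessSubnetA
      - Ref: ServerlessSubnetB
      - Ref: ServerlessSubnetC
    SecurityGroupIds:
      - Fn::GetAtt: [ServerlessVPC, DefaultSecurityGroup]

Note that such an endpoint is mainly needed for private API Gateway endpoints; as the SOLUTION below shows, the problem here turned out to be the certificate rather than the network path.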
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html#apigateway-private-api-create-interface-vpc-endpoint
https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/
Hope this helps !!
SOLUTION: We found a solution. In our case, the issue is that you can't use wildcard certificates: if you make a certificate for the exact API URL, it does work. Obviously this is not a nice solution for everyone and everything, but when you do have this problem and must deploy, you can make it work this way.
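For illustration, a sketch of the customDomain block above with the exact-match certificate as the only intended change (domain names are the question's placeholders):

custom:
  customDomain:
    domainName: api.foo.com
    stage: ${opt:stage}
    basePath: outages
    certificateName: 'api.foo.com'  # exact-match certificate instead of '*.foo.com'
    createRoute53Record: true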
Also, maybe good to mention: we did not have any additional problems with the case mentioned below.
I also wanted to add a post I found that might also have led to this setup not working even if we do get the network to play nice. But again, if you or anyone has a working setup like the one mentioned above, let me know how you did it.
TL;DR
If anyone wants to understand what was going on with API Gateway, take a look at this thread. It basically says that API Gateway processes regular URLs (like aaaaaaaaaaaa.execute-api.us-east-1.amazonaws.com) differently than how it processes Custom Domain Name URLs (like api.myservice.com). So when API Gateway forwards your API request to your Lambda Function, your Lambda Function will receive different path values, depending on which type of URL you used to invoke your API.
source: Understanding AWS API Gateway Custom Domain Names

How to deploy AWS elasticsearch using serverless.yml

I need to use the AWS Elasticsearch service but also want to automate it. We have a serverless configuration. How can we create an AWS Elasticsearch service using serverless.yml?
You can add CloudFormation resources to the "resources" section. For Elasticsearch this would look something like this:
service: aws-nodejs
provider:
  name: aws
  runtime: nodejs6.10
functions:
  hello:
    handler: handler.hello
    environment:
      elasticURL:
        Fn::GetAtt: [ElasticSearchInstance, DomainEndpoint]
resources:
  Resources:
    ElasticSearchInstance:
      Type: AWS::Elasticsearch::Domain
      Properties:
        EBSOptions:
          EBSEnabled: true
          VolumeType: gp2
          VolumeSize: 10
        ElasticsearchClusterConfig:
          InstanceType: t2.small.elasticsearch
          InstanceCount: 1
          DedicatedMasterEnabled: false
          ZoneAwarenessEnabled: false
        ElasticsearchVersion: 5.3
To add to Jens' answer, you might want the outputs. You can add this to your serverless.yml config:
Outputs:
  DomainArn:
    Value: !GetAtt ElasticsearchDomain.DomainArn
  DomainEndpoint:
    Value: !GetAtt ElasticsearchDomain.DomainEndpoint
  SecurityGroupId:
    Value: !Ref mySecurityGroup
  SubnetId:
    Value: !Ref subnet