I am trying to use CDK to define a serverless Postgres Aurora cluster but keep running into issues with the VPC subnets either being "invalid" or "not existing", depending on which DB cluster construct I use. In my setup I have two stacks: one for the VPC and one for RDS.
This is the contents of my Vpc Stack:
const vpc = new Vpc(this, 'Vpc');
const privateSubnetIds = vpc.selectSubnets({
subnetType: SubnetType.PRIVATE
}).subnetIds;
const rdsSecurityGroup = new SecurityGroup(this, 'RdsSecurityGroup', {
securityGroupName: 'rds-security-group',
allowAllOutbound: true,
description: `RDS cluster security group`,
vpc: vpc
});
...
// The rest of the file defines exports.
Case 1:
Initially, I tried using CfnDBCluster, as the DatabaseCluster construct does not let you directly define engineMode: 'serverless' and enableHttpEndpoint: true. Below are the contents of the RDS Stack using the CfnDBCluster construct:
// The beginning of the file imports all the VPC exports from the VPC Stack:
// subnetIds (for the private subnet), securityGroupId
...
const databaseSecret = new DatabaseSecret(this, 'secret', {
username: 'admin'
});
const secretArn = databaseSecret.secretArn;
const dbSubnetGroup = new CfnDBSubnetGroup(this, "DbSubnetGroup", {
dbSubnetGroupDescription: `Database cluster subnet group`,
subnetIds: subnetIds
});
const dbCluster = new CfnDBCluster(this, 'DbCluster', {
dbClusterIdentifier: 'aurora-cluster',
engine: 'aurora-postgresql',
engineMode: 'serverless',
databaseName: DB_NAME,
masterUsername: databaseSecret.secretValueFromJson('username').toString(),
masterUserPassword: databaseSecret.secretValueFromJson('password').toString(),
enableHttpEndpoint: true,
scalingConfiguration: {
autoPause: true,
minCapacity: 1,
maxCapacity: 16,
secondsUntilAutoPause: 300
},
vpcSecurityGroupIds: [securityGroupId],
dbSubnetGroupName: dbSubnetGroup.ref // CfnDBSubnetGroup exposes its generated name via .ref
});
Using the CfnDBCluster construct, I get the following error:
Some input subnets in :[subnet-044631b3e615d752c,subnet-05c2881d9b13195ef,subnet-03c63ec89ae49a748] are invalid. (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterValue; Request ID: 5c4e6237-6527-46a6-9ed4-1bc46c38dce0)
I am able to verify that those subnets exist before the RDS stack is run.
Case 2:
After failing to get the CfnDBCluster example above working, I tried using the DatabaseCluster construct with raw overrides. Below are the contents of the RDS Stack using the DatabaseCluster construct:
// The beginning of the file imports all the VPC exports from the VPC Stack:
// subnetIds (for the private subnet), securityGroupId, vpcId, AZs, vpc (using Vpc.fromAttributes)
...
const dbCluster = new DatabaseCluster(this, 'DbCluster', {
engine: DatabaseClusterEngine.auroraPostgres({
version: AuroraPostgresEngineVersion.VER_10_7
}),
masterUser: {
username: databaseSecret.secretValueFromJson('username').toString(),
password: databaseSecret.secretValueFromJson('password')
},
instanceProps: {
vpc: vpc,
vpcSubnets: {
subnetType: SubnetType.PRIVATE
}
},
});
const cfnDbCluster = dbCluster.node.defaultChild as CfnDBCluster;
cfnDbCluster.addPropertyOverride('DBClusterIdentifier', 'rds-cluster');
cfnDbCluster.addPropertyOverride('EngineMode', 'serverless');
cfnDbCluster.addPropertyOverride('DatabaseName', DB_NAME);
cfnDbCluster.addPropertyOverride('EnableHttpEndpoint', true);
cfnDbCluster.addPropertyOverride('ScalingConfiguration.AutoPause', true);
cfnDbCluster.addPropertyOverride('ScalingConfiguration.MinCapacity', 1);
cfnDbCluster.addPropertyOverride('ScalingConfiguration.MaxCapacity', 16);
cfnDbCluster.addPropertyOverride('ScalingConfiguration.SecondsUntilAutoPause', 300);
cfnDbCluster.addPropertyOverride('VpcSecurityGroupIds', [securityGroupId]);
Using the DatabaseCluster construct, I get the following error:
There are no 'Private' subnet groups in this VPC. Available types:
I am able to verify that the VPC does have a private subnet. I also verified that it was properly imported and that the subnets all carry the expected tag, i.e. key 'aws-cdk:subnet-type' with value 'Private'.
This has me blocked and confused. I cannot figure out why either of these errors is occurring and would appreciate any guidance on resolving the issue.
References:
DatabaseCluster Construct
CfnDBCluster Construct
Database Cluster CloudFormation Properties
Escape Hatches
Notes:
I am using CDK version 1.56.0 with TypeScript
In case you are visiting this page after getting
Some input subnets in :[subnet-XXXX,subnet-YYYY,subnet-ZZZZ] are invalid.
you have probably checked and confirmed that those subnets do not exist, and are banging your head trying to figure out where on earth they are coming from.
The reason CDK still points to these subnets is that cdk.context.json still contains values from previous deployments.
From the docs:
Context values are key-value pairs that can be associated with a stack
or construct. The AWS CDK uses context to cache information from your
AWS account, such as the Availability Zones in your account or the
Amazon Machine Image (AMI) IDs used to start your instances.
Replace the file's entire contents with an empty JSON object ({}) and re-deploy the stack.
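If you prefer not to edit the file by hand, the CDK CLI can clear the cached values for you (note this removes all cached context, so every lookup runs again on the next synth):
cdk context --clear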
Related
AWS CDK: getting an error when trying to initialize a new VPC with a private isolated subnet used in a Fargate cluster (@aws-cdk 1.174.0).
this.vpc = new ec2.Vpc(this, `horizonCloudVpc`, {
cidr: '10.0.0.0/16',
vpcName: `horizonCloudVpc-${envName}`,
enableDnsHostnames: true,
enableDnsSupport: true,
maxAzs: 2,
subnetConfiguration: [
{
name: 'public-subnet',
subnetType: ec2.SubnetType.PUBLIC,
cidrMask: 24,
},
{
name: 'isolated-subnet',
subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
cidrMask: 24,
},
]
});
const clusterAdmin = new Role(this, 'eksClusterMasterRole', {
roleName: `clusterMasterRole-${envName}`,
assumedBy: new AccountRootPrincipal(),
});
const cluster = new eks.FargateCluster(this, 'horizonCloudEks', {
version: eks.KubernetesVersion.V1_21,
vpc: this.vpc,
clusterName: `horizonCloudEks-${envName}`,
endpointAccess: eks.EndpointAccess.PUBLIC,
mastersRole: clusterAdmin,
});
Error from deploy:
/home/runner/work/horizon/horizon/cdk/node_modules/@aws-cdk/aws-ec2/lib/vpc.ts:606
throw new Error(`There are no '${subnetType}' subnet groups in this VPC. Available types: ${availableTypes}`);
^
Error: There are no 'Private' subnet groups in this VPC. Available types: Isolated,Deprecated_Isolated,Public
I suspect it might also require a PRIVATE_WITH_NAT subnet.
Thanks!
This is happening because EKS is trying to make the cluster use Private and Public subnets in the VPC, and there are no Private subnets in it.
From the eks.FargateCluster docs:
vpcSubnets?
Type: SubnetSelection[] (optional, default: All public and private subnets)
To change this default behavior, change the vpcSubnets constructor prop to the appropriate value. For example, to make the cluster only use Public subnets:
vpcSubnets: [{ subnetType: ec2.SubnetType.PUBLIC }]
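Applied to the cluster from the question, that would look like the sketch below (assuming public-only placement of the control-plane ENIs is acceptable for your setup):
const cluster = new eks.FargateCluster(this, 'horizonCloudEks', {
  version: eks.KubernetesVersion.V1_21,
  vpc: this.vpc,
  clusterName: `horizonCloudEks-${envName}`,
  endpointAccess: eks.EndpointAccess.PUBLIC,
  mastersRole: clusterAdmin,
  // explicitly select Public subnets so CDK does not go looking for 'Private' ones
  vpcSubnets: [{ subnetType: ec2.SubnetType.PUBLIC }],
});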
I have a CDK project that creates a CodePipeline which deploys an application on ECS. I had it all previously working, but the VPC was using a NAT gateway, which ended up being too expensive. So now I am trying to recreate the project without requiring a NAT gateway. I am almost there, but I have now run into issues when the ECS service is trying to start tasks. All tasks fail to start with the following error:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 5 time(s): failed to fetch secret
At this point I've kind of lost track of the different things I have tried, but I will post the relevant bits here as well as some of my attempts.
const repository = ECR.Repository.fromRepositoryAttributes(
this,
"ecr-repository",
{
repositoryArn: props.repository.arn,
repositoryName: props.repository.name,
}
);
// vpc
const vpc = new EC2.Vpc(this, this.resourceName(props, "vpc"), {
maxAzs: 2,
natGateways: 0,
enableDnsSupport: true,
});
const vpcSecurityGroup = new SecurityGroup(this, "vpc-security-group", {
vpc: vpc,
allowAllOutbound: true,
});
// tried this to allow the task to access Secrets Manager
// (note: InterfaceVpcEndpointAwsService.SSM is the Systems Manager endpoint, not Secrets Manager)
const vpcEndpoint = new EC2.InterfaceVpcEndpoint(this, "secrets-manager-task-vpc-endpoint", {
vpc: vpc,
service: EC2.InterfaceVpcEndpointAwsService.SSM,
});
const secrets = SecretsManager.Secret.fromSecretCompleteArn(
this,
"secrets",
props.secrets.arn
);
const cluster = new ECS.Cluster(this, this.resourceName(props, "cluster"), {
vpc: vpc,
clusterName: `api-cluster`,
});
const ecsService = new EcsPatterns.ApplicationLoadBalancedFargateService(
this,
"ecs-service",
{
taskSubnets: {
subnetType: SubnetType.PUBLIC,
},
securityGroups: [vpcSecurityGroup],
serviceName: "api-service",
cluster: cluster,
cpu: 256,
desiredCount: props.scaling.desiredCount,
taskImageOptions: {
image: ECS.ContainerImage.fromEcrRepository(
repository,
this.ecrTagNameParameter.stringValue
),
secrets: getApplicationSecrets(secrets), // returns
logDriver: LogDriver.awsLogs({
streamPrefix: "api",
// awsLogs does not accept both logGroup and logRetention;
// set the retention on the LogGroup itself instead
logGroup: new LogGroup(this, "ecs-task-log-group", {
logGroupName: `${props.environment}-api`,
retention: RetentionDays.TWO_MONTHS,
}),
}),
},
memoryLimitMiB: 512,
publicLoadBalancer: true,
domainZone: this.hostedZone,
certificate: this.certificate,
redirectHTTP: true,
}
);
const scalableTarget = ecsService.service.autoScaleTaskCount({
minCapacity: props.scaling.desiredCount,
maxCapacity: props.scaling.maxCount,
});
scalableTarget.scaleOnCpuUtilization("cpu-scaling", {
targetUtilizationPercent: props.scaling.cpuPercentage,
});
scalableTarget.scaleOnMemoryUtilization("memory-scaling", {
targetUtilizationPercent: props.scaling.memoryPercentage,
});
secrets.grantRead(ecsService.taskDefinition.taskRole);
repository.grantPull(ecsService.taskDefinition.taskRole);
I read somewhere that it probably has something to do with Fargate version 1.4.0 vs 1.3.0, but I'm not sure what I need to change to allow the tasks to access what they need to run.
You need to create interface endpoints for Secrets Manager, ECR (two types of endpoints), and CloudWatch Logs, as well as a gateway endpoint for S3. The Fargate platform version you mention is indeed relevant: from 1.4.0 onward, tasks use their own ENI to pull images, fetch secrets, and ship logs, so that traffic needs a route to those services (via endpoints or a NAT).
Refer to the documentation on the topic.
Here's an example in Python; it'd work the same in TS:
vpc.add_interface_endpoint(
"secretsmanager_endpoint",
service=ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
)
vpc.add_interface_endpoint(
"ecr_docker_endpoint",
service=ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
)
vpc.add_interface_endpoint(
"ecr_endpoint",
service=ec2.InterfaceVpcEndpointAwsService.ECR,
)
vpc.add_interface_endpoint(
"cloudwatch_logs_endpoint",
service=ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
)
vpc.add_gateway_endpoint(
"s3_endpoint",
service=ec2.GatewayVpcEndpointAwsService.S3
)
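Since the question is in TypeScript, here is the equivalent sketch (reusing the EC2 namespace import and the vpc variable from the question):
// interface endpoints for secrets, image pulls, and logs
vpc.addInterfaceEndpoint("secrets-manager-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});
vpc.addInterfaceEndpoint("ecr-docker-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
});
vpc.addInterfaceEndpoint("ecr-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.ECR,
});
vpc.addInterfaceEndpoint("cloudwatch-logs-endpoint", {
  service: EC2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
});
// gateway endpoint for S3 (ECR stores image layers in S3)
vpc.addGatewayEndpoint("s3-endpoint", {
  service: EC2.GatewayVpcEndpointAwsService.S3,
});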
Keep in mind that interface endpoints cost money as well, and may not be cheaper than a NAT.
Is it possible to create a serverless RDS cluster via CDK without NAT Gateway? The NAT Gateway base charge is pretty expensive for a development environment. I'm also not interested in setting up a NAT instance. I'm attaching a Lambda in the VPC with the RDS instance like this.
// VPC
const vpc = new ec2.Vpc(this, 'MyVPC');
// RDS
const dbCluster = new rds.ServerlessCluster(this, 'MyAuroraCluster', {
engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
defaultDatabaseName: 'DbName',
vpc,
});
Yes, you can. You may have to add some VPC endpoints (e.g. Secrets Manager's, so password rotation can work), but it is possible. You will also need to create a VPC whose subnets have no NAT gateway:
// VPC
const vpc = new ec2.Vpc(this, 'MyVPC', {
natGateways: 0,
subnetConfiguration: [
{
cidrMask: 24,
name: 'public',
subnetType: ec2.SubnetType.PUBLIC,
},
{
cidrMask: 28,
name: 'rds',
subnetType: ec2.SubnetType.ISOLATED,
}
]
});
// RDS
const dbCluster = new rds.ServerlessCluster(this, 'MyAuroraCluster', {
engine: rds.DatabaseClusterEngine.AURORA_MYSQL,
defaultDatabaseName: 'DbName',
vpcSubnets: {
subnetType: ec2.SubnetType.ISOLATED,
},
vpc,
});
If you want a Secrets Manager-controlled password, use:
vpc.addInterfaceEndpoint('SecretsManagerEndpoint', {
service: ec2.InterfaceVpcEndpointAwsService.SECRETS_MANAGER,
});
dbCluster.addRotationSingleUser();
Serverless clusters cannot be placed into a public subnet.
This is a hard and documented limitation of RDS Serverless.
This is my current CDK stack:
const vpc = new ec2.Vpc(this, "vpc-staging", {
maxAzs: 1,
enableDnsHostnames: true,
enableDnsSupport: true,
cidr: '10.10.0.0/16',
subnetConfiguration: []
});
const publicSubnet = new ec2.Subnet(this, 'public-subnet', {
cidrBlock: '10.10.10.0/24',
vpcId: vpc.vpcId,
mapPublicIpOnLaunch: true
})
To the above I am trying to add an ECS cluster like so:
const cluster = new ecs.Cluster(this, 'EcsCluster', { vpc });
cluster.addCapacity('DefaultAutoScalingGroup', {
instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO)
})
When running cdk diff this is the error that I get:
(node:48942) ExperimentalWarning: The fs.promises API is experimental
/Users/me/src/wow/aws/node_modules/@aws-cdk/aws-ec2/lib/vpc.js:201
throw new Error(`There are no '${subnetType}' subnet groups in this VPC. Available types: ${availableTypes}`);
^
Error: There are no 'Public' subnet groups in this VPC. Available types:
What is it that I am missing from my config?
mapPublicIpOnLaunch: true is not sufficient for a subnet to be considered Public.
You also need an Internet Gateway attached to your VPC. In addition, the subnet's route table must be set up to route internet traffic (0.0.0.0/0) to that gateway.
General information about VPC, public and private subnets is here.
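For the stack in the question, the missing pieces would look roughly like this (a sketch building on the vpc and publicSubnet from the question; Subnet.addDefaultInternetRoute wires up the 0.0.0.0/0 route):
// create an Internet Gateway and attach it to the VPC
const igw = new ec2.CfnInternetGateway(this, 'internet-gateway');
const igwAttachment = new ec2.CfnVPCGatewayAttachment(this, 'igw-attachment', {
  vpcId: vpc.vpcId,
  internetGatewayId: igw.ref,
});
// route internet-bound traffic from the subnet through the gateway
publicSubnet.addDefaultInternetRoute(igw.ref, igwAttachment);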
Hope this helps.
For public subnets created by the Vpc construct, mapPublicIpOnLaunch already defaults to true, so you could skip the manual subnet entirely: remove the subnetConfiguration line from the Vpc and delete the Subnet object to fall back to the default subnet creation.
If you really want to set them, add subnetType: ec2.SubnetType.PUBLIC to your subnet.
Also, I think it's better to keep the subnet config on the Vpc construct so that everything is wired up at VPC creation:
const vpc = new ec2.Vpc(this, "vpc-staging", {
maxAzs: 1,
enableDnsHostnames: true,
enableDnsSupport: true,
cidr: '10.10.0.0/16',
subnetConfiguration: [
{
cidrMask: 24, // this is optional as it divides equally if not set
name: 'public-subnet',
subnetType: ec2.SubnetType.PUBLIC,
},
{
cidrMask: 24,
name: 'private-subnet',
subnetType: ec2.SubnetType.PRIVATE,
},
...
]
});
Also, I'm not 100% sure, but this code might fail because ECS needs at least two Availability Zones to work.
Hi, I am working with AWS CDK and trying to get an existing non-default VPC. I tried the options below.
vpc = ec2.Vpc.from_lookup(self, id = "VPC", vpc_id='vpcid', vpc_name='vpc-dev')
This results in below error
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried is
vpc = ec2.Vpc.from_vpc_attributes(self, 'VPC', vpc_id='vpc-839227e7', availability_zones=['ap-southeast-2a','ap-southeast-2b','ap-southeast-2c'])
This results in
[Error at /LocationCdkStack-cdkstack] Request has expired.
[Warning at /LocationCdkStack-cdkstack/TaskDef/mw-service] Proper policies need to be attached before pulling from ECR repository, or use 'fromEcrRepository'.
Found errors
Another method I tried is
vpc = ec2.Vpc.from_lookup(self, id = "VPC", is_default=True) // This will get default vpc and this will work
Can someone help me get a non-default VPC in AWS CDK? Any help would be appreciated. Thanks!
Take a look at aws_cdk.aws_ec2 documentation and at CDK Runtime Context.
If your VPC is created outside your CDK app, you can use
Vpc.fromLookup(). The CDK CLI will search for the specified VPC in
the stack's region and account, and import the subnet configuration.
Looking up can be done by VPC ID, but more flexibly by searching for a
specific tag on the VPC.
Usage:
# Example automatically generated. See https://github.com/aws/jsii/issues/826
from aws_cdk.core import App, Stack, Environment
from aws_cdk import aws_ec2 as ec2
# Information from environment is used to get context information
# so it has to be defined for the stack
stack = MyStack(
app, "MyStack", env=Environment(account="account_id", region="region")
)
# Retrieve VPC information
vpc = ec2.Vpc.from_lookup(stack, "VPC",
# This imports the default VPC but you can also
# specify a 'vpcName' or 'tags'.
is_default=True
)
Update with a relevant example:
vpc = ec2.Vpc.from_lookup(stack, "VPC",
vpc_id = VPC_ID
)
Update with a TypeScript example:
import ec2 = require('@aws-cdk/aws-ec2');
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC',{isDefault: true});
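Lookup by tag is also supported; a minimal sketch (the tag key and value here are placeholders):
const vpcByTag = ec2.Vpc.fromLookup(this, 'ImportVPCByTag', {
  tags: { Environment: 'dev' },
});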
More info here.
For AWS CDK v2 or v1 (latest), you can use:
// You can use either vpcId OR vpcName to fetch the desired VPC
const getExistingVpc = ec2.Vpc.fromLookup(this, 'ImportVPC',{
vpcId: "VPC_ID",
vpcName: "VPC_NAME"
});
Here is a simple example:
// get VPC info from the AWS account; FYI, we are not rebuilding, we are referencing
const DefaultVpc = Vpc.fromVpcAttributes(this, 'vpcdev', {
vpcId:'vpc-d0e0000b0',
availabilityZones: core.Fn.getAzs(),
privateSubnetIds: ['subnet-00a0de00'],
publicSubnetIds: ['subnet-00a0de00']
});
const yourService = new lambda.Function(this, 'SomeName', {
code: lambda.Code.fromAsset("lambda"),
handler: 'handlers.your_handler',
role: lambdaExecutionRole,
securityGroup: lambdaSecurityGroup,
vpc: DefaultVpc,
runtime: lambda.Runtime.PYTHON_3_7,
timeout: Duration.minutes(2),
});
We can do it easily using ec2.Vpc.fromLookup.
https://kuchbhilearning.blogspot.com/2022/10/httpskuchbhilearning.blogspot.comimport-existing-vpc-in-aws-cdk.html
The post above walks through the method in more detail.
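For quick reference, usage looks like this (a minimal sketch; the construct ID and VPC ID are placeholders):
const importedVpc = ec2.Vpc.fromLookup(this, 'ImportedVpc', { vpcId: 'vpc-0123456789abcdef0' });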