My goal is to have an AWS code commit repo, that on push to main branch will run a code pipeline CI/CD process to deploy a single node app to AWS.
I went through the setup to get this working with Fargate via CDK using ApplicationLoadBalancedFargateService, but ultimately ran into issues because the ALB requires two availability zones, and I don't want to run two instances of my app (I'm not concerned with high availability, and in this case it's a chat bot that I don't want "logged on" twice).
Does anyone have any recommendations here? Perhaps Elastic Beanstalk is the service I want? (I've gone down that path pre-container, but maybe I should revisit?)
I also read about the CodeDeploy agent and EC2, but that seems like more of a manual process, whereas I'm hoping to be able to automate the creation of all resources with CDK.
Resolution: I believe this is a case of me not understanding Fargate well enough; shoutout to Victor Smirnov for helping break everything down for me.
There is in fact only a single task registered when my CDK stack builds.
I think the issue I ran into was that I'd used the CDK CodePipeline ECS deploy action, which starts deploying a second task before deregistering the first (which I think is just a Fargate "feature" to avoid downtime, i.e. a blue/green deploy). I mistakenly expected only a single container to be running at a given time, but that's just not how Services work.
I think Victor had a good point about the health checks as well. It took me a few tries to get all the ports lined up, and when they were misaligned and health checks were failing I'd see the "old failed task" getting "deregistered" alongside the "new task that hadn't failed yet" which made me think I had two concurrent tasks running.
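If you really do need at most one copy of the bot running at any moment, even mid-deployment, one option is to tighten the service's deployment configuration so ECS stops the old task before starting its replacement. A minimal sketch (assuming a stack and cluster like the ones in the answer below; the exact values are my own choice, not from this thread):

import { Stack } from 'aws-cdk-lib'
import { Cluster, ContainerImage } from 'aws-cdk-lib/aws-ecs'
import { ApplicationLoadBalancedFargateService } from 'aws-cdk-lib/aws-ecs-patterns'

// Sketch: `stack` and `cluster` are assumed to be created as in the answer below.
declare const stack: Stack
declare const cluster: Cluster

new ApplicationLoadBalancedFargateService(stack, 'Service', {
  cluster,
  desiredCount: 1,
  // minHealthyPercent 0 / maxHealthyPercent 100 tells ECS it may stop the old
  // task before starting the new one, so two copies never run concurrently
  // (at the cost of a brief downtime during each deployment).
  minHealthyPercent: 0,
  maxHealthyPercent: 100,
  taskImageOptions: { image: ContainerImage.fromRegistry('nginx') }
})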
Below is an example of the ApplicationLoadBalancedFargateService pattern used to create a Fargate service with one running task. I deployed the stack while writing this answer to verify it.
The Application Load Balancer has three availability zones because my VPC has three public subnets. This means that the load balancer itself has IP addresses in three different zones.
The load balancer has only one target. There is no requirement that the load balancer should have a target in each zone.
I put everything in public subnets because I do not have a NAT gateway. You might want to place your Fargate tasks in private subnets for better security.
I added the health check path with its default value because, most likely, you will want to define a custom path for your service; the default definition can be omitted.
import { App, RemovalPolicy, Stack } from 'aws-cdk-lib'
import { Certificate, CertificateValidation } from 'aws-cdk-lib/aws-certificatemanager'
import { Vpc } from 'aws-cdk-lib/aws-ec2'
import { Cluster, ContainerImage, LogDriver } from 'aws-cdk-lib/aws-ecs'
import { ApplicationLoadBalancedFargateService } from 'aws-cdk-lib/aws-ecs-patterns'
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2'
import { LogGroup } from 'aws-cdk-lib/aws-logs'
import { HostedZone } from 'aws-cdk-lib/aws-route53'
import { env } from 'process'

function createStack (scope, id, props) {
  const stack = new Stack(scope, id, props)

  const logGroup = new LogGroup(stack, 'LogGroup', { logGroupName: 'so-service', removalPolicy: RemovalPolicy.DESTROY })

  const vpc = Vpc.fromLookup(stack, 'Vpc', { vpcName: 'BlogVpc' })

  const domainZone = HostedZone.fromLookup(stack, 'ZonePublic', { domainName: 'victorsmirnov.blog' })
  const domainName = 'service.victorsmirnov.blog'
  const certificate = new Certificate(stack, 'SslCertificate', {
    domainName,
    validation: CertificateValidation.fromDns(domainZone)
  })

  const cluster = new Cluster(stack, 'Cluster', {
    clusterName: 'so-cluster',
    containerInsights: true,
    enableFargateCapacityProviders: true,
    vpc
  })

  const service = new ApplicationLoadBalancedFargateService(stack, id, {
    assignPublicIp: true,
    certificate,
    cluster,
    cpu: 256,
    desiredCount: 1,
    domainName,
    domainZone,
    memoryLimitMiB: 512,
    openListener: true,
    protocol: ApplicationProtocol.HTTPS,
    publicLoadBalancer: true,
    redirectHTTP: true,
    targetProtocol: ApplicationProtocol.HTTP,
    taskImageOptions: {
      containerName: 'nginx',
      containerPort: 80,
      enableLogging: true,
      family: 'so-service',
      image: ContainerImage.fromRegistry('nginx'),
      logDriver: LogDriver.awsLogs({ streamPrefix: 'nginx', logGroup })
    }
  })

  service.targetGroup.configureHealthCheck({
    path: '/'
  })

  return stack
}

const app = new App()
createStack(app, 'SingleInstanceAlbService', {
  env: { account: env.CDK_DEFAULT_ACCOUNT, region: env.CDK_DEFAULT_REGION }
})
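If you prefer to run the tasks in private subnets, as mentioned in the notes above, a minimal variation (my own sketch, not part of the original answer, assuming the VPC has private subnets with egress through a NAT gateway or VPC endpoints) would be:

import { Stack } from 'aws-cdk-lib'
import { SubnetType } from 'aws-cdk-lib/aws-ec2'
import { Cluster, ContainerImage } from 'aws-cdk-lib/aws-ecs'
import { ApplicationLoadBalancedFargateService } from 'aws-cdk-lib/aws-ecs-patterns'

// Sketch: `stack` and `cluster` are assumed to be created as in the example above.
declare const stack: Stack
declare const cluster: Cluster

new ApplicationLoadBalancedFargateService(stack, 'PrivateTasksService', {
  cluster,
  desiredCount: 1,
  assignPublicIp: false,
  // The tasks live in private subnets; the public ALB still reaches them.
  // The subnets need egress so the tasks can pull the image and ship logs.
  taskSubnets: { subnetType: SubnetType.PRIVATE_WITH_EGRESS },
  taskImageOptions: { image: ContainerImage.fromRegistry('nginx') }
})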
Here is the content of cdk.json and package.json for completeness.
{
  "app": "node question.js",
  "context": {
    "@aws-cdk/core:newStyleStackSynthesis": true
  }
}
{
  "name": "alb-single-instance",
  "version": "0.1.0",
  "dependencies": {
    "aws-cdk-lib": "^2.37.1",
    "cdk": "^2.37.1",
    "constructs": "^10.1.76"
  },
  "devDependencies": {
    "rimraf": "^3.0.2",
    "snazzy": "^9.0.0",
    "standard": "^17.0.0"
  },
  "scripts": {
    "cdk": "cdk",
    "clean": "rimraf cdk.out dist",
    "format": "standard --fix --verbose | snazzy",
    "test": "standard --verbose | snazzy"
  },
  "type": "module"
}
This should be enough to have a fully functional setup where everything is configured automatically using the CDK.
Maybe you do not need the load balancer at all, because there is no traffic to balance for a single task. You can set up service discovery for your service and use the DNS name of your task without a load balancer, which should save some money; see the sketch below.
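As a rough sketch of that idea (not part of the original answer), you could drop the ecs-patterns construct and wire a plain FargateService into a Cloud Map namespace. The namespace and service names here are illustrative:

import { Stack } from 'aws-cdk-lib'
import { Cluster, ContainerImage, FargateService, FargateTaskDefinition } from 'aws-cdk-lib/aws-ecs'

// Sketch: `stack` and `cluster` are assumed to be created as in the example above.
declare const stack: Stack
declare const cluster: Cluster

// Cloud Map namespace; tasks register themselves under <service>.<namespace>.
cluster.addDefaultCloudMapNamespace({ name: 'internal.example' })

const taskDefinition = new FargateTaskDefinition(stack, 'TaskDef', {
  cpu: 256,
  memoryLimitMiB: 512
})
taskDefinition.addContainer('app', {
  image: ContainerImage.fromRegistry('nginx'),
  portMappings: [{ containerPort: 80 }]
})

new FargateService(stack, 'DiscoveredService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  // Registers the task in Cloud Map, reachable as app.internal.example
  cloudMapOptions: { name: 'app' }
})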
Your application can still run in a single AZ. The requirement for two AZs applies only to the ALB itself, so you do not have to create an extra instance of your application in another AZ if you don't want to, though it could be a good idea for high availability.
Related
I love the idea behind the AWS CDK, but I'm struggling to create a Cloud9 Environment using it.
Every time the code below runs, an "Error while creating Cloud9" error message pops up in the AWS console, followed by "CREATE_FAILED" in the local terminal. My first instinct is that the way to implement Cloud9 is to establish a connection between EC2 and Cloud9, but I don't have any idea how to do that. Has anybody successfully used the CDK to create a Cloud9 environment? Any advice would be greatly appreciated.
AWS Console error message
VSCode terminal error message
Scroll down a little further in the terminal and this message is at the end:
Stack Deployments Failed: Error: The stack named MjwFirstCdkStack failed creation,
it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE: Cannot
create the AWS Cloud9 environment. There was a problem connecting to the environment.
The code used
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as cloud9 from "aws-cdk-lib/aws-cloud9";
import { Construct } from "constructs";

export class MjwFirstCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // create a vpc
    const vpc = new ec2.Vpc(this, "my-vpc-id", {
      natGateways: 1,
      maxAzs: 2,
      ipAddresses: ec2.IpAddresses.cidr("10.0.0.0/16"),
      subnetConfiguration: [
        {
          name: "private-subnet-1",
          subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
          cidrMask: 24,
        },
        {
          name: "public-subnet-1",
          subnetType: ec2.SubnetType.PUBLIC,
          cidrMask: 24,
        },
      ],
    });

    // create a cloud9 env
    const myCloud9Environment = new cloud9.CfnEnvironmentEC2(
      this,
      "MyCloud9Environment",
      {
        name: "MyCloud9EnvironmentName",
        instanceType: "t2.micro",
        automaticStopTimeMinutes: 60,
        subnetId: vpc.privateSubnets[0].subnetId,
      }
    );
  }
}
Have you tried deploying to a public subnet? Refer to the VPC requirements for Cloud9 here.
Note that you need to delete the previously failed deployment before you run "cdk deploy" again. This can be done from the CloudFormation console.
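For what it's worth, the change would look something like this, a sketch against the code from the question (same imports and `vpc` as in the stack above), not a verified fix:

// Place the Cloud9 environment in a public subnet instead of a private one,
// so the Cloud9 service can reach the instance.
const myCloud9Environment = new cloud9.CfnEnvironmentEC2(this, "MyCloud9Environment", {
  name: "MyCloud9EnvironmentName",
  instanceType: "t2.micro",
  automaticStopTimeMinutes: 60,
  subnetId: vpc.publicSubnets[0].subnetId, // public subnet from the VPC above
});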
I'm trying to create a cdk stack containing an ApplicationLoadBalancedFargateService (docs). I want it placed in my VPC which exclusively contains private subnets.
When I try to deploy my stack I get an error message saying:
Error: There are no 'Public' subnet groups in this VPC. Available types: Isolated
Which well... in theory is correct, but why does it break my deployment?
Here is an extract of my code:
// Get main VPC and subnet to use
const mainVpc = ec2.Vpc.fromLookup(this, 'MainVpc', {
  vpcName: this.VPC_NAME
});

// Fargate configuration
const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'CdkDocsFargateService', {
  serviceName: 'docs-fargate-service',
  memoryLimitMiB: 512,
  desiredCount: 1,
  cpu: 256,
  vpc: mainVpc,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry(this.IMAGE_NAME),
    containerPort: 80,
  },
});
I was able to achieve the desired outcome manually from the management console. What am I doing wrong when using CDK?
I solved the problem by setting publicLoadBalancer: false in the properties of the ApplicationLoadBalancedFargateService.
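Applied to the snippet from the question (same surrounding names as above), that looks like this:

const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'CdkDocsFargateService', {
  serviceName: 'docs-fargate-service',
  memoryLimitMiB: 512,
  desiredCount: 1,
  cpu: 256,
  vpc: mainVpc,
  // With only isolated/private subnets in the VPC, the load balancer must be
  // internal; an internet-facing ALB is what requires public subnets.
  publicLoadBalancer: false,
  taskImageOptions: {
    image: ecs.ContainerImage.fromRegistry(this.IMAGE_NAME),
    containerPort: 80,
  },
});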
I am using AWS CDK to create a CloudFormation Stack with an RDS Aurora Database Cluster, VPC, Subnet, RouteTable and Security Group resources, and another Stack with a couple of Lambdas, API Gateway, IAM Roles and Policies and many other resources.
The CDK deployment works fine and I can see both stacks created in CloudFormation with all the resources. But I had issues trying to connect to the RDS database, so I added a CfnOutput to check the connection string and realised that the RDS port was not resolved from its original number-encoded token, while the hostname was resolved properly. I'm wondering why this is happening...
This is how I'm setting the CfnOutput:
new CfnOutput(this, "mysql-messaging-connstring", {
  value: connectionString,
  description: "Mysql connection string",
  exportName: `${prefix}-mysqlconnstring`
});
The RDS Aurora Database Cluster is created in a method called createDatabaseCluster:
const cluster = new rds.DatabaseCluster(scope, 'Database', {
  engine: rds.DatabaseClusterEngine.auroraMysql({ version: rds.AuroraMysqlEngineVersion.VER_5_7_12 }),
  credentials: dbCredsSecret,
  instanceProps: {
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
    vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
    vpc: vpc,
    publiclyAccessible: true,
    securityGroups: [ clusterSG ]
  },
  instances: 1,
  instanceIdentifierBase: dbInstanceName,
});
This createDatabaseCluster method returns the connection string:
return `server=${cluster.instanceEndpoints[0].hostname};user=${username};password=${password};port=${cluster.instanceEndpoints[0].port};database=${database};`;
In this connection string, the DB credentials are retrieved from a secret in AWS Secrets Manager and stored in username and password variables to be used in the return statement.
The actual observed value of the CfnOutput is as follows:
As a workaround, I can just specify the port to be used, but I want to understand the reason why this number-encoded token is not being resolved.
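The usual explanation (not stated in the thread itself) is that `port` is a number-encoded token, and interpolating a number token into a template string does not turn it into a resolvable string token; wrapping it with Token.asString(), or using the endpoint's socketAddress, which is already a string, is the common way around it. A sketch, assuming the same variables as in createDatabaseCluster above:

import { Token } from 'aws-cdk-lib';
import * as rds from 'aws-cdk-lib/aws-rds';

// Sketch: these are assumed to exist as in createDatabaseCluster above.
declare const cluster: rds.DatabaseCluster;
declare const username: string;
declare const password: string;
declare const database: string;

const endpoint = cluster.instanceEndpoints[0];

// Explicitly stringify the number-encoded port token so it resolves in the output.
const connectionString =
  `server=${endpoint.hostname};user=${username};password=${password};` +
  `port=${Token.asString(endpoint.port)};database=${database};`;

// Alternatively, endpoint.socketAddress already resolves to "hostname:port" as a string.
const hostAndPort = endpoint.socketAddress;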
I'm currently migrating an AWS stack defined in CloudFormation (CFT) to CDK. The goal is not to trigger a replacement of vital resources, but I'm stuck with my Application Load Balancer.
In the old CFT stack the ALB is defined as:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
without the "Type" property set, which allows the following values: application | gateway | network.
Anyway, the resulting resource in the AWS Console has the Type set to "application".
In CDK I create the ALB like:
new ApplicationLoadBalancer(this, 'alb', {
  vpc,
  internetFacing: true,
  vpcSubnets: {
    subnets: vpc.publicSubnets,
  },
  securityGroup: this.securityGroup,
});
Unfortunately this triggers a replacement because "Type": "application" is now set explicitly.
Is there any way around this? My next guess would be to try a Cfn construct...
The most convenient solution I found was to just delete the property that is set implicitly in the L2 Construct.
const alb = new ApplicationLoadBalancer(this, 'alb', {
  vpc,
  internetFacing: true,
  vpcSubnets: {
    subnets: vpc.publicSubnets,
  },
  securityGroup: mySg
});

// First cast the construct to its underlying Cfn construct,
// then delete the property.
(alb.node.defaultChild as CfnLoadBalancer).addDeletionOverride('Properties.Type');
More information can be found here: AWS Documentation: Escape hatches
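For completeness (illustrative only, not from the original answer), the same escape hatch can also force a raw property value instead of deleting one:

import { ApplicationLoadBalancer, CfnLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2';

// Sketch: `alb` is the ApplicationLoadBalancer construct from the snippet above.
declare const alb: ApplicationLoadBalancer;

// addPropertyOverride sets (or overwrites) a raw property on the underlying
// CloudFormation resource, the counterpart of addDeletionOverride above.
(alb.node.defaultChild as CfnLoadBalancer).addPropertyOverride('Type', 'application');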
Say I have a docker-compose file like the following:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
I want to be able to deploy it to AWS Fargate ideally (although I'm frustrated enough that I'd take ECS or anything else that works). Right now I don't care about volumes, scaling or anything else that might add complexity; I'm just after the minimum so I can begin to understand what's going on. The only caveat is that it needs to be in code: an automated deployment I can spin up from a CI server.
Is CloudFormation the right tool? I can only seem to find examples that are literally a thousand lines of yaml or more, none of them work and they're impossible to debug.
You could use the AWS CDK to write your infrastructure as code. It's basically a meta-framework for creating CloudFormation templates. Below is a minimal example that deploys nginx to a load-balanced ECS Fargate service with autoscaling; you could just remove the last two expressions if you don't need the scaling. The code gets more complicated quickly when you need more control over what to start.
import cdk = require('@aws-cdk/cdk');
import ec2 = require('@aws-cdk/aws-ec2');
import ecs = require('@aws-cdk/aws-ecs');
import ecr = require('@aws-cdk/aws-ecr');

export class NginxStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.VpcNetwork(this, 'MyApiVpc', {
      maxAZs: 1
    });

    const cluster = new ecs.Cluster(this, 'MyApiEcsCluster', {
      vpc: vpc
    });

    const lbfs = new ecs.LoadBalancedFargateService(this, 'MyApiLoadBalancedFargateService', {
      cluster: cluster,
      cpu: '256',
      desiredCount: 1,
      // The tag for the docker image is set dynamically by our CI / CD pipeline
      image: ecs.ContainerImage.fromDockerHub("nginx"),
      memoryMiB: '512',
      publicLoadBalancer: true,
      containerPort: 80
    });

    const scaling = lbfs.service.autoScaleTaskCount({
      maxCapacity: 5,
      minCapacity: 1
    });

    scaling.scaleOnCpuUtilization('MyApiCpuScaling', {
      targetUtilizationPercent: 10
    });
  }
}
I added the link to a specific cdk version, because the most recent build for the docs is a little bit broken.
ECS uses "Task Definitions" instead of docker-compose. In a Task Definition, you define which image and ports to use. You can use docker-compose as well via the AWS CLI, but I haven't tried it yet.
So you can create an ECS Fargate-based cluster first and then create a Task or Service using the task definition. This will bring up the containers in Fargate; see the sketch below.
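As an illustration (a sketch in the modern aws-cdk-lib v2 API, not from the original comment), the docker-compose "nginx" service above maps roughly to a task definition plus a service:

import { Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Sketch: assumes an existing stack and Fargate-capable cluster.
declare const stack: Stack;
declare const cluster: ecs.Cluster;

// Equivalent of the docker-compose service: image plus port mapping.
const taskDefinition = new ecs.FargateTaskDefinition(stack, 'NginxTaskDef', {
  cpu: 256,
  memoryLimitMiB: 512,
});
taskDefinition.addContainer('nginx', {
  image: ecs.ContainerImage.fromRegistry('nginx:latest'),
  portMappings: [{ containerPort: 80 }],
});

// Running the task definition as a service brings the container up on Fargate.
new ecs.FargateService(stack, 'NginxService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
});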