Say I have a docker-compose file like the following:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
I want to be able to deploy it to AWS Fargate ideally (although I'm frustrated enough that I'd take ECS or anything else that works) - right now I don't care about volumes, scaling or anything else that might have complexity, I'm just after the minimum so I can begin to understand what's going on. Only caveat is that it needs to be in code - an automated deployment I can spin up from a CI server.
Is CloudFormation the right tool? I can only seem to find examples that are literally a thousand lines of yaml or more, none of them work and they're impossible to debug.
You could use the AWS CDK to write your infrastructure as code. It is essentially a framework for generating CloudFormation templates. Below is a minimal example that deploys nginx as a load-balanced ECS Fargate service with autoscaling; if you don't need autoscaling, just remove the last two expressions. The code gets more complicated quickly when you need more control over what you start.
import cdk = require('@aws-cdk/cdk');
import ec2 = require('@aws-cdk/aws-ec2');
import ecs = require('@aws-cdk/aws-ecs');

export class NginxStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.VpcNetwork(this, 'MyApiVpc', {
      maxAZs: 1
    });

    const cluster = new ecs.Cluster(this, 'MyApiEcsCluster', {
      vpc: vpc
    });

    const lbfs = new ecs.LoadBalancedFargateService(this, 'MyApiLoadBalancedFargateService', {
      cluster: cluster,
      cpu: '256',
      desiredCount: 1,
      // The tag for the docker image is set dynamically by our CI / CD pipeline
      image: ecs.ContainerImage.fromDockerHub("nginx"),
      memoryMiB: '512',
      publicLoadBalancer: true,
      containerPort: 80
    });

    const scaling = lbfs.service.autoScaleTaskCount({
      maxCapacity: 5,
      minCapacity: 1
    });

    scaling.scaleOnCpuUtilization('MyApiCpuScaling', {
      targetUtilizationPercent: 10
    });
  }
}
I added the link to a specific cdk version, because the most recent build for the docs is a little bit broken.
ECS uses "Task Definitions" instead of docker-compose. In Task Definitions, you define which image and ports to use. We can use docker-compose as well, if we use AWS CLI. But I haven't tried it yet.
So you can create an ECS Fargate based cluster first and then create a Task or Service using the task definition. This will bring up the containers in Fargate.
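For orientation, here is a minimal sketch of that idea in CDK (aws-cdk-lib v2); the construct IDs and the assignPublicIp setting are illustrative assumptions, not part of the answer above:
import { Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const stack: Stack;
declare const cluster: ecs.Cluster;

// The task definition plays the role of the docker-compose service block:
// it declares the image, CPU/memory and port mappings.
const taskDefinition = new ecs.FargateTaskDefinition(stack, 'NginxTaskDef', {
  cpu: 256,
  memoryLimitMiB: 512
});

taskDefinition.addContainer('nginx', {
  image: ecs.ContainerImage.fromRegistry('nginx:latest'),
  portMappings: [{ containerPort: 80 }]
});

// The service keeps the desired number of tasks running on the cluster.
new ecs.FargateService(stack, 'NginxService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  assignPublicIp: true // needed when the task runs in a public subnet without NAT
});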
Related
My goal is to have an AWS CodeCommit repo that, on push to the main branch, runs a CodePipeline CI/CD process to deploy a single node app to AWS.
I went through the setup to get this working with Fargate via CDK using ApplicationLoadBalancedFargateService, but ultimately ran into issues because the ALB requires two availability zones, and I don't want to run two instances of my app (I'm not concerned with high availability, and in this case it's a chat bot that I don't want "logged on" twice).
Does anyone have any recommendations here? Perhaps Elastic Beanstalk is the service I want? (I've gone down that path pre-container, but maybe I should revisit?)
I also read about the CodeDeploy agent and EC2, but that seems like more of a manual process, whereas I'm hoping to automate the creation of all resources with CDK.
Resolution: I believe this is a case of me not understanding Fargate well enough. Shoutout to @Victor Smirnov for helping break everything down for me.
There is in fact only a single task registered when my CDK stack builds.
I think the issue I ran into was that I'd used the CDK CodePipeline ECS deploy action, which starts a second task before deregistering the first (which I think is just a Fargate "feature" to avoid downtime, i.e. a blue/green-style deploy). I mistakenly expected only a single container to be running at a given time, but that's just not how Services work.
I think Victor had a good point about the health checks as well. It took me a few tries to get all the ports lined up, and when they were misaligned and health checks were failing, I'd see the "old failed task" getting "deregistered" alongside the "new task that hadn't failed yet", which made me think I had two concurrent tasks running.
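For anyone who genuinely needs at most one copy running, even during a deployment, the rolling-update limits can be tightened so ECS stops the old task before starting the new one. A hedged sketch using the plain FargateService construct (construct IDs are illustrative, and this trades away zero-downtime deploys):
import { Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const stack: Stack;
declare const cluster: ecs.Cluster;
declare const taskDefinition: ecs.FargateTaskDefinition;

new ecs.FargateService(stack, 'BotService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  // Allow ECS to drop to zero running tasks during a deployment,
  // so the old task is stopped before the replacement starts.
  minHealthyPercent: 0,
  maxHealthyPercent: 100
});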
Below is an example of the ApplicationLoadBalancedFargateService pattern used to create the Fargate service with one running task. I deployed the stack when I wrote the answer to the question.
The Application Load Balancer has three availability zones because my VPC has three public subnets. It means that the load balancer itself has IP addresses in three different zones.
The load balancer has only one target. There is no requirement that the load balancer have a target in each zone.
I put everything in the public subnets because I do not have a NAT gateway. You might want to place your Fargate tasks in private subnets for better security.
I added the health check path with default values because, most likely, you will want to define a custom URL for your service. You can omit the definition if the default is fine.
import { App, RemovalPolicy, Stack } from 'aws-cdk-lib'
import { Certificate, CertificateValidation } from 'aws-cdk-lib/aws-certificatemanager'
import { Vpc } from 'aws-cdk-lib/aws-ec2'
import { Cluster, ContainerImage, LogDriver } from 'aws-cdk-lib/aws-ecs'
import { ApplicationLoadBalancedFargateService } from 'aws-cdk-lib/aws-ecs-patterns'
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2'
import { LogGroup } from 'aws-cdk-lib/aws-logs'
import { HostedZone } from 'aws-cdk-lib/aws-route53'
import { env } from 'process'

function createStack (scope, id, props) {
  const stack = new Stack(scope, id, props)

  const logGroup = new LogGroup(stack, 'LogGroup', {
    logGroupName: 'so-service',
    removalPolicy: RemovalPolicy.DESTROY
  })

  const vpc = Vpc.fromLookup(stack, 'Vpc', { vpcName: 'BlogVpc' })

  const domainZone = HostedZone.fromLookup(stack, 'ZonePublic', { domainName: 'victorsmirnov.blog' })
  const domainName = 'service.victorsmirnov.blog'
  const certificate = new Certificate(stack, 'SslCertificate', {
    domainName,
    validation: CertificateValidation.fromDns(domainZone)
  })

  const cluster = new Cluster(stack, 'Cluster', {
    clusterName: 'so-cluster',
    containerInsights: true,
    enableFargateCapacityProviders: true,
    vpc
  })

  const service = new ApplicationLoadBalancedFargateService(stack, id, {
    assignPublicIp: true,
    certificate,
    cluster,
    cpu: 256,
    desiredCount: 1,
    domainName,
    domainZone,
    memoryLimitMiB: 512,
    openListener: true,
    protocol: ApplicationProtocol.HTTPS,
    publicLoadBalancer: true,
    redirectHTTP: true,
    targetProtocol: ApplicationProtocol.HTTP,
    taskImageOptions: {
      containerName: 'nginx',
      containerPort: 80,
      enableLogging: true,
      family: 'so-service',
      image: ContainerImage.fromRegistry('nginx'),
      logDriver: LogDriver.awsLogs({ streamPrefix: 'nginx', logGroup })
    }
  })

  service.targetGroup.configureHealthCheck({
    path: '/'
  })

  return stack
}

const app = new App()

createStack(app, 'SingleInstanceAlbService', {
  env: { account: env.CDK_DEFAULT_ACCOUNT, region: env.CDK_DEFAULT_REGION }
})
Here are the cdk.json and package.json for completeness.
{
  "app": "node question.js",
  "context": {
    "@aws-cdk/core:newStyleStackSynthesis": true
  }
}
{
  "name": "alb-single-instance",
  "version": "0.1.0",
  "dependencies": {
    "aws-cdk-lib": "^2.37.1",
    "cdk": "^2.37.1",
    "constructs": "^10.1.76"
  },
  "devDependencies": {
    "rimraf": "^3.0.2",
    "snazzy": "^9.0.0",
    "standard": "^17.0.0"
  },
  "scripts": {
    "cdk": "cdk",
    "clean": "rimraf cdk.out dist",
    "format": "standard --fix --verbose | snazzy",
    "test": "standard --verbose | snazzy"
  },
  "type": "module"
}
This should be enough to have a fully functional setup where everything is configured automatically using the CDK.
Maybe you do not need the load balancer at all, because there is no need to balance traffic for a single task. You can set up Service Discovery for your service and use the DNS name of your task without a load balancer. This should save money if you want.
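A rough sketch of that service-discovery alternative (the Cloud Map namespace name and construct IDs below are made-up placeholders):
import { Stack } from 'aws-cdk-lib';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { DnsRecordType } from 'aws-cdk-lib/aws-servicediscovery';

declare const stack: Stack;
declare const cluster: ecs.Cluster;
declare const taskDefinition: ecs.FargateTaskDefinition;

// Private Cloud Map namespace attached to the cluster.
cluster.addDefaultCloudMapNamespace({ name: 'internal.example' });

// The task becomes reachable as nginx.internal.example, with no load balancer involved.
new ecs.FargateService(stack, 'DiscoveredService', {
  cluster,
  taskDefinition,
  desiredCount: 1,
  cloudMapOptions: {
    name: 'nginx',
    dnsRecordType: DnsRecordType.A
  }
});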
Your application can still be in one AZ. The fact that an ALB requires two AZs relates only to the ALB itself, so you do not have to create an extra instance of your application in another AZ if you don't want to, though it could be a good idea for high availability.
Having multiple different applications, I would like to use an ECR lifecycle policy to clear out old images. However, since all images end up in the same place, I can't just wipe out images based on count / date.
I'm aware that CDK now pushes all images into one ECR repository (this answer). I don't want to overcomplicate my CDK deployment with an additional step that creates and pushes the Docker image separately.
Is there any way to (either):
create a custom ECR repository and push the image to it (without a separate docker push), or
tag images in a way that is usable for an ECR lifecycle policy
... while simply using ApplicationLoadBalancedFargateService?
This is code for setting up one of my services:
const fargateService =
  new ecsPatterns.ApplicationLoadBalancedFargateService(
    this,
    "FargateService",
    {
      serviceName: `LeApp-${envId}`,
      cluster: cluster,
      // ...
      taskImageOptions: {
        image: ecs.ContainerImage.fromAsset("../"),
        containerName: "leapp-container",
        family: "leapp",
        // ...
      },
      propagateTags: ecs.PropagatedTagSource.SERVICE,
    }
  );
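One possible approach, not verified against this exact setup: keep building the image as a CDK asset, but copy it into an ECR repository you own, so you can attach a lifecycle rule to it. The sketch below assumes the third-party cdk-ecr-deployment construct library; the repository name, construct IDs and the latest tag are arbitrary choices:
import * as ecr from 'aws-cdk-lib/aws-ecr';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import { DockerImageAsset } from 'aws-cdk-lib/aws-ecr-assets';
import * as ecrdeploy from 'cdk-ecr-deployment'; // third-party construct library

// A repository we control, so we can attach a lifecycle rule to it.
const repo = new ecr.Repository(this, 'LeAppRepo', { repositoryName: 'leapp' });
repo.addLifecycleRule({ maxImageCount: 10 }); // keep only the 10 most recent images

// Build the image as a normal CDK asset...
const asset = new DockerImageAsset(this, 'LeAppImage', { directory: '../' });

// ...and copy it from the CDK asset repository into ours at deploy time.
new ecrdeploy.ECRDeployment(this, 'DeployLeAppImage', {
  src: new ecrdeploy.DockerImageName(asset.imageUri),
  dest: new ecrdeploy.DockerImageName(repo.repositoryUriForTag('latest'))
});

// Point taskImageOptions.image at the copied image instead of fromAsset:
const image = ecs.ContainerImage.fromEcrRepository(repo, 'latest');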
I'm trying to create a cdk stack containing an ApplicationLoadBalancedFargateService (docs). I want it placed in my VPC which exclusively contains private subnets.
When I try to deploy my stack I get an error message saying:
Error: There are no 'Public' subnet groups in this VPC. Available types: Isolated
Which well... in theory is correct, but why does it break my deployment?
Here is an extract of my code:
// Get main VPC and subnet to use
const mainVpc = ec2.Vpc.fromLookup(this, 'MainVpc', {
  vpcName: this.VPC_NAME
});

// Fargate configuration
const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this,
  'CdkDocsFargateService', {
    serviceName: 'docs-fargate-service',
    memoryLimitMiB: 512,
    desiredCount: 1,
    cpu: 256,
    vpc: mainVpc,
    taskImageOptions: {
      image: ecs.ContainerImage.fromRegistry(this.IMAGE_NAME),
      containerPort: 80,
    },
  });
I was able to achieve the desired outcome manually from the management console. What am I doing wrong when using CDK?
I solved the problem by setting publicLoadBalancer: false in the properties of the ApplicationLoadBalancedFargateService.
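In CDK terms that is a single extra property on the pattern from the question (a sketch, reusing the names from the snippet above):
const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this,
  'CdkDocsFargateService', {
    serviceName: 'docs-fargate-service',
    memoryLimitMiB: 512,
    desiredCount: 1,
    cpu: 256,
    vpc: mainVpc,
    // An internal load balancer does not require public subnets,
    // so the isolated-subnet VPC no longer breaks the deployment.
    publicLoadBalancer: false,
    taskImageOptions: {
      image: ecs.ContainerImage.fromRegistry(this.IMAGE_NAME),
      containerPort: 80,
    },
  });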
I'm using a slightly customized Terraform configuration to generate my Kubernetes cluster on AWS. The configuration includes an EFS instance attached to the cluster nodes and master. In order for Kubernetes to use this EFS instance for volumes, my Kubernetes YAML needs the id and endpoint/domain of the EFS instance generated by Terraform.
Currently, my Terraform outputs the EFS id and DNS name, and I need to manually edit my Kubernetes YAML with these values after terraform apply and before I kubectl apply the YAML.
How can I automate passing these Terraform output values to Kubernetes?
I don't know what you mean by a YAML to set up a Kubernetes cluster in AWS; I've always set up my AWS clusters using kops. Additionally, I don't understand why you would want to mount the EFS on the master and/or nodes instead of in the containers.
But in direct answer to your question: you could write a script to output your Terraform outputs to a Helm values file and use that to generate the k8s config.
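A rough sketch of such a glue script in Node/TypeScript (the output names efs_id and efs_dns are assumptions and must match the outputs your Terraform configuration defines):
// Reads `terraform output -json` and writes a Helm values file.
import { execSync } from 'child_process';
import { writeFileSync } from 'fs';

// terraform output -json prints { "<name>": { "value": ..., ... }, ... }
const outputs = JSON.parse(execSync('terraform output -json').toString());

const values = [
  'efs:',
  `  id: ${outputs.efs_id.value}`,
  `  dnsName: ${outputs.efs_dns.value}`,
  ''
].join('\n');

writeFileSync('values-efs.yaml', values);
// Then: helm install my-release ./chart -f values-efs.yaml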
I stumbled upon this question when searching for a way to get Terraform outputs into environment variables specified in Kubernetes, and I expect more people will. I also suspect that this was really your question as well, or at least that it can be a way to solve your problem. So:
You can use the Kubernetes Terraform provider to connect to your cluster and then use the kubernetes_config_map resource to create ConfigMaps.
provider "kubernetes" {}
resource "kubernetes_config_map" "efs_configmap" {
"metadata" {
name = "efs_config" // this will be the name of your configmap
}
data {
efs_id = "${aws_efs_mount_target.efs_mt.0.id}"
efs_dns = "${aws_efs_mount_target.efs_mt.0.dns_name}"
}
}
If you have secret parameters, use the kubernetes_secret resource:
resource "kubernetes_secret" "some_secrets" {
"metadata" {
name = "some_secrets"
}
data {
s3_iam_access_secret = "${aws_iam_access_key.someresourcename.secret}"
rds_password = "${aws_db_instance.someresourcename.password}"
}
}
You can then consume these in your k8s yaml when setting your environment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-app-deployment
spec:
  selector:
    matchLabels:
      app: some
  template:
    metadata:
      labels:
        app: some
    spec:
      containers:
        - name: some-app-container
          image: some-app-image
          env:
            - name: EFS_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-config
                  key: efs_id
            - name: RDS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: some-secrets
                  key: rds_password
I am quite new to AWS and want to know how to achieve the following task with CloudFormation.
I want to spin up an EC2 instance with Tomcat and deploy a Java application on it. This Java application will perform some operation. Once the operation is done, I want to delete all the resources created by this CloudFormation stack.
All these activities should be automatic. For example -- I will create the CloudFormation stack JSON file. At a particular time of day, a job should be kicked off (I don't know where in AWS to configure such a job, or how). But I know that through Jenkins we can create a CloudFormation stack that will create all the resources.
Then, after some time (let's say 2 hrs), another job should kick off and delete all the resources created by CloudFormation.
Is this possible in AWS? If yes, any hints on how to do this?
Just to confirm, what you intend to do is have an EC2 instance get created on a schedule, and then have it shut down after 2 hours. The common way of accomplishing that is to use an Auto-Scaling Group (ASG) with a ScheduledAction to scale up and a ScheduledAction to scale down.
ASGs have a "desired capacity" (the number of instances in the ASG). You would want this to be "0" by default, change it to "1" at your desired time, and change it back to "0" two hours after that. What that will do is automatically start and subsequently terminate your EC2 instance on your schedule.
They also use a LaunchConfiguration, which is a template for your EC2 instances that will start on the schedule.
MyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    LaunchConfigurationName: !Ref MyLaunchConfiguration
    MaxSize: 1
    MinSize: 0
    DesiredCapacity: 0

ScheduledActionUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 1
    Recurrence: "0 7 * * *"

ScheduledActionDown:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 0
    Recurrence: "0 9 * * *"

MyLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-xxxxxxxxx # <-- Specify the AMI ID that you want
    InstanceType: t2.micro # <-- Change the instance size if you want
    KeyName: my-key # <-- Change to the name of an EC2 SSH key that you've added
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y aws-cfn-bootstrap
        # ...
        # ... run some commands to set up the instance, if you need to
        # ...
  Metadata:
    AWS::CloudFormation::Init:
      config:
        files:
          "/etc/something/something.conf":
            mode: "000600"
            owner: root
            group: root
            content: !Sub |
              #
              # Add the content of a config file, if you need to
              #
Depending on what you want your instances to interact with, you might also need to add a Security Group and/or an IAM Instance Profile along with an IAM Role.
If you're using Jenkins to deploy the program that will run, you would add a step to bake an AMI, build and push a docker image, or take whatever other action you need to deploy your application to the place that it will be used by your instance.
I note that in your question you say that you want to delete all of the resources created by CloudFormation. Usually, when you deploy a stack like this, the stack remains deployed. The ASG will remain there until you decide to remove the stack, but it won't cost anything when you're not running EC2 instances. I think I understand your intent here, so the advice that I'm giving aligns with that.
You can use Lambda to execute events on a regular schedule.
Write a Lambda function that calls CloudFormation to create your stack of resources. You might even consider including a termination Lambda function in your CloudFormation stack and configuring it to run on a schedule (2 hours after the stack was created) to delete the stack that the termination Lambda function itself is part of (I have not tried this, but believe it will work). Or you could trigger stack deletion from cron on the EC2 instance running your Java app, of course.
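A rough sketch of such a pair of Lambda handlers with the AWS SDK for JavaScript v3 (the stack name, template URL and the scheduled-rule wiring are placeholders):
import {
  CloudFormationClient,
  CreateStackCommand,
  DeleteStackCommand
} from '@aws-sdk/client-cloudformation';

const cfn = new CloudFormationClient({});
const STACK_NAME = 'scheduled-tomcat-stack'; // placeholder name

// Triggered by a scheduled EventBridge / CloudWatch Events rule, e.g. daily at 07:00.
export async function createHandler(): Promise<void> {
  await cfn.send(new CreateStackCommand({
    StackName: STACK_NAME,
    TemplateURL: 'https://s3.amazonaws.com/my-bucket/stack.json', // placeholder template location
    Capabilities: ['CAPABILITY_IAM']
  }));
}

// Triggered by a second scheduled rule two hours later.
export async function deleteHandler(): Promise<void> {
  await cfn.send(new DeleteStackCommand({ StackName: STACK_NAME }));
}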
If all you want is an EC2 instance, it's probably easier to simply create the EC2 instance rather than a CloudFormation stack.
Something (e.g. an AWS Lambda function triggered by Amazon CloudWatch Events) calls the EC2 API to create the instance.
User Data is passed to the EC2 instance to install the desired software, OR use a custom AMI with all software pre-installed.
Have the instance terminate itself when it has finished processing -- this could be as simple as calling the operating system to shut down the machine, with the EC2 Shutdown Behavior set to Terminate (see the sketch below).
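Here is a rough sketch with the AWS SDK for JavaScript v3; the AMI ID, instance type and user-data script are placeholders:
import { EC2Client, RunInstancesCommand } from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({});

// The user data installs the software, runs the job, then shuts the machine down.
// Because InstanceInitiatedShutdownBehavior is 'terminate', the shutdown terminates the instance.
const userData = `#!/bin/bash
# ... install tomcat and the application, run the job ...
shutdown -h now
`;

export async function launch(): Promise<void> {
  await ec2.send(new RunInstancesCommand({
    ImageId: 'ami-xxxxxxxxx',   // placeholder AMI
    InstanceType: 't3.micro',
    MinCount: 1,
    MaxCount: 1,
    InstanceInitiatedShutdownBehavior: 'terminate',
    UserData: Buffer.from(userData).toString('base64')
  }));
}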