AWS CDK, Fargate Service and ECR Lifecycle policy of created image - amazon-web-services

I have multiple different applications and would like to use an ECR lifecycle policy to clear out old images. However, since all images end up in the same place, I can't simply wipe out images based on count or date.
I'm aware that CDK now pushes all images into one ECR repository (this answer). I don't want to overcomplicate my CDK deployment with an additional step that builds and pushes the Docker image separately.
Is there any way to (either):
create a custom ECR repository and push the image to it (without a separate docker push step)
tag images in a way that an ECR lifecycle policy can filter on
... while still simply using ApplicationLoadBalancedFargateService?
This is code for setting up one of my services:
const fargateService =
  new ecsPatterns.ApplicationLoadBalancedFargateService(
    this,
    "FargateService",
    {
      serviceName: `LeApp-${envId}`,
      cluster: cluster,
      // ...
      taskImageOptions: {
        image: ecs.ContainerImage.fromAsset("../"),
        containerName: "leapp-container",
        family: "leapp",
        // ...
      },
      propagateTags: ecs.PropagatedTagSource.SERVICE,
    }
  );
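For context, option (2) matters because ECR lifecycle rules can filter on tag prefixes. A minimal sketch of such a rule on a custom repository (the repository and prefix names here are made up, and this only helps once you control how the images are tagged):

// assumes: import * as ecr from 'aws-cdk-lib/aws-ecr';
const repo = new ecr.Repository(this, "LeAppRepo", {
  repositoryName: "leapp", // hypothetical name
});
// Expire all but the 10 newest images whose tags start with "leapp-"
repo.addLifecycleRule({
  tagPrefixList: ["leapp-"],
  maxImageCount: 10,
});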

Related

How to provide Docker Credentials for AWS CodeBuild automatic image pull

I have a CodeBuild project that pulls an image from a public Docker repository. I'm running into the well-known Docker Hub rate limit on pulls, so I want to log in to Docker Hub and pull the image that way, because I have a valid Docker license.
However, I can't find any documentation on how to set my credentials in CodeBuild. The only examples I see log in via the buildspec.yml and then pull the Docker image. This does not work for me because I'm setting the Docker image in the CodeBuild configuration.
I'm using CDK and this is my current CodeBuild configuration:
const myCodeBuild = new codeBuild.Project(this, 'myCodeBuild', {
  source: githubsrc,
  secondarySources: [ githubsrc2 ],
  role: new BuildRole(this, 'myCodeBuildRole').role,
  buildSpec: codeBuild.BuildSpec.fromObject(buildSpec),
  environment: {
    buildImage: codeBuild.LinuxBuildImage.fromDockerRegistry('salesforce/salesforcedx:latest-rc-full'),
  },
});
This creates a CodeBuild project that will automatically use the provided Docker image. There is never a chance to log in before it is pulled.
fromDockerRegistry supports authentication. To use it, create a Secrets Manager secret that contains the username and password fields with your Docker Hub credentials and pass it to the function. (Documentation reference for the secret format)
Using the example from the docs:
environment: {
  buildImage: codebuild.LinuxBuildImage.fromDockerRegistry('my-registry/my-repo', {
    secretsManagerCredentials: secrets,
  }),
},
Here, secrets is the Secrets Manager secret holding your credentials.
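Wiring this into the original project might look like the following sketch (assuming CDK v2 imports; the secret name dockerhub-credentials is hypothetical, and the secret must already contain username and password fields):

import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

// Reference an existing secret by name (created out of band)
const secrets = secretsmanager.Secret.fromSecretNameV2(
  this,
  'DockerHubSecret',
  'dockerhub-credentials' // hypothetical secret name
);

const myCodeBuild = new codebuild.Project(this, 'myCodeBuild', {
  // ... source, role, buildSpec as before ...
  environment: {
    buildImage: codebuild.LinuxBuildImage.fromDockerRegistry(
      'salesforce/salesforcedx:latest-rc-full',
      { secretsManagerCredentials: secrets },
    ),
  },
});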

Can I select a container image from private ECR repository in my CloudFormation template?

Hi, I was wondering if it would be possible to select a Docker image from my private repository in ECR using a CloudFormation YAML template, for later use when configuring my task definition on an ECS service, something like this:
ContainerImage:
  Description: "Container image"
  Type: AWS::ECR::PrivateRepository
The only way to do this is to develop a custom resource. The resource would be a Lambda function that uses an AWS SDK, such as boto3, to query your ECR repository and return a list of available images to your stack for further use.
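As a rough sketch of what such a handler could look like (in TypeScript with AWS SDK v3 rather than the boto3 suggested above; the ImageTags attribute name and the property RepositoryName are illustrative only):

import { ECRClient, DescribeImagesCommand } from '@aws-sdk/client-ecr';
import * as https from 'https';

const ecr = new ECRClient({});

export const handler = async (event: any) => {
  let status = 'SUCCESS';
  const data: Record<string, string> = {};
  try {
    if (event.RequestType !== 'Delete') {
      // Query ECR for the images in the repository named in the resource properties
      const res = await ecr.send(new DescribeImagesCommand({
        repositoryName: event.ResourceProperties.RepositoryName,
      }));
      // Expose the tags as a comma-separated attribute usable via Fn::GetAtt
      data.ImageTags = (res.imageDetails ?? [])
        .flatMap((d) => d.imageTags ?? [])
        .join(',');
    }
  } catch (err) {
    status = 'FAILED';
  }
  // Custom resources must report success/failure to the pre-signed ResponseURL
  const body = JSON.stringify({
    Status: status,
    PhysicalResourceId: event.PhysicalResourceId ?? event.LogicalResourceId,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
    Data: data,
  });
  const u = new URL(event.ResponseURL);
  await new Promise<void>((resolve, reject) => {
    const req = https.request(
      {
        hostname: u.hostname,
        path: u.pathname + u.search,
        method: 'PUT',
        headers: { 'content-type': '', 'content-length': Buffer.byteLength(body) },
      },
      () => resolve(),
    );
    req.on('error', reject);
    req.end(body);
  });
};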

aws cdk push image to ecr

I am trying to do something that seems fairly logical and straightforward.
I am using the AWS CDK to provision an ecr repo:
repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY
)
I then have a Dockerfile which lives at the root of my project that I am trying to push to the same ECR repo in the deployment.
I do this in the same service code with:
assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
    repository_name=ecr_repo_name
)
The deployment goes ahead fine and the ECR repo is created, but the image is pushed to the default location, aws-cdk/assets.
How do I make the deployment send the image built from my Dockerfile to the exact ECR repo I want it to live in?
AWS CDK deprecated the repositoryName property on DockerImageAsset. There are a few issues on GitHub referencing the problem. See this comment from one of the developers:
At the moment the CDK comes with 2 asset systems:
The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.
The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. #eladb, can you help me remember why we chose to do it this way?
There is a request for a new construct that will allow you to deploy to a custom ECR repository at (aws-ecr-assets) ecr-deployment #12597.
Use Case
I would like to use this feature to completely deploy my local image source code to ECR for me using an ECR repo that I have previously created in my CDK app or more importantly outside the app using an arn. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.
There is also a third-party solution at https://github.com/wchaws/cdk-ecr-deployment if you do not want to wait for the CDK team to implement the new construct.
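Going by that project's README, usage looks roughly like this (TypeScript; the asset directory and repository variable are placeholders):

import * as path from 'path';
import { DockerImageAsset } from 'aws-cdk-lib/aws-ecr-assets';
import * as ecrdeploy from 'cdk-ecr-deployment';

// Build the image as a normal CDK asset (it lands in the bootstrap repo first)
const image = new DockerImageAsset(this, 'MyDockerImage', {
  directory: path.join(__dirname, '..'), // placeholder path
});

// Then copy it into the repository you actually want it to live in
new ecrdeploy.ECRDeployment(this, 'DeployDockerImage', {
  src: new ecrdeploy.DockerImageName(image.imageUri),
  dest: new ecrdeploy.DockerImageName(`${repository.repositoryUri}:latest`),
});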

Is there a way to use multiple aws profiles to deploy(update) serverless stack?

We have a team of 3 to 4 members, and we want to run serverless deploy or update functions or resources using our own personal AWS credentials, without creating a new stack but just updating the existing resources. Is there a way to do that? I am aware that we can set up --aws-profile and different profiles for different stages. I am also aware that we could just divide the resources into microservices and each deploy or update our own parts. Any help is appreciated.
This can be done as below:
Add the profile configuration as shown; I have named it devProfile.
service: new-service

provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  profile: devProfile
Each individual would set their credentials on their own machine as below:
aws configure --profile devProfile
If you have different credentials for different stages, the snippet above can be parameterized as follows:
serverless.yml
custom:
  stages:
    - local
    - dev
    - prod
  # default stage/environment
  defaultStage: local
  # default AWS region
  defaultRegion: us-east-1
  # config file / region / stage
  configFile: ${file(./config/${opt:region,self:provider.region}/${self:provider.stage}.yml)}

provider:
  ...
  stage: ${opt:stage, self:custom.defaultStage}
  ...
  profile: ${self:custom.configFile.aws.profile}
  ...
Create config/us-east-1/dev.yml
aws:
  profile: devProfile
and config/us-east-1/prod.yml
aws:
  profile: prodProfile
It sounds like you already know what to do but need a sanity check. So I'll tell you how I, and everyone else I know, handle this.
We prefix commands with the AWS_PROFILE env var and we use --stage names.
E.g. AWS_PROFILE=mycompany sls deploy --stage shailendra.
Google aws configure for examples of how to set up the awscli to use the AWS_PROFILE var.
We also name the --stage with a unique ID, e.g. your name. This way, you and your colleagues each have individual CloudFormation stacks that work independently of each other, and there will be no conflicts.

How can I deploy nginx to AWS fargate in code?

Say I have a docker-compose file like the following:
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
I want to be able to deploy it to AWS Fargate ideally (although I'm frustrated enough that I'd take ECS or anything else that works) - right now I don't care about volumes, scaling or anything else that might have complexity, I'm just after the minimum so I can begin to understand what's going on. Only caveat is that it needs to be in code - an automated deployment I can spin up from a CI server.
Is CloudFormation the right tool? I can only seem to find examples that are literally a thousand lines of yaml or more, none of them work and they're impossible to debug.
You could use the AWS CDK to write your infrastructure as code. It's basically a meta-framework for creating CloudFormation templates. Below is a minimal example that deploys nginx as a load-balanced ECS Fargate service with autoscaling; you could just remove the last two expressions if you don't need scaling. The code gets more complicated quickly when you need more control over what you start.
import cdk = require('@aws-cdk/cdk');
import ec2 = require('@aws-cdk/aws-ec2');
import ecs = require('@aws-cdk/aws-ecs');
import ecr = require('@aws-cdk/aws-ecr');

export class NginxStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.VpcNetwork(this, 'MyApiVpc', {
      maxAZs: 1
    });

    const cluster = new ecs.Cluster(this, 'MyApiEcsCluster', {
      vpc: vpc
    });

    const lbfs = new ecs.LoadBalancedFargateService(this, 'MyApiLoadBalancedFargateService', {
      cluster: cluster,
      cpu: '256',
      desiredCount: 1,
      // The tag for the docker image is set dynamically by our CI / CD pipeline
      image: ecs.ContainerImage.fromDockerHub("nginx"),
      memoryMiB: '512',
      publicLoadBalancer: true,
      containerPort: 80
    });

    const scaling = lbfs.service.autoScaleTaskCount({
      maxCapacity: 5,
      minCapacity: 1
    });

    scaling.scaleOnCpuUtilization('MyApiCpuScaling', {
      targetUtilizationPercent: 10
    });
  }
}
I added the link to a specific CDK version because the most recent build of the docs is a little bit broken.
ECS uses "Task Definitions" instead of docker-compose. In a task definition, you define which image and ports to use. You can use docker-compose as well if you use the AWS CLI, but I haven't tried that yet.
So you can create an ECS Fargate cluster first and then create a task or service from the task definition. This will bring up the containers on Fargate.
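To illustrate that cluster / task definition / service split, here is a rough sketch in current CDK (v2-style API; all construct IDs are made up):

import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

// Cluster first...
const vpc = new ec2.Vpc(this, 'NginxVpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(this, 'NginxCluster', { vpc });

// ...then a task definition describing the image and ports...
const taskDef = new ecs.FargateTaskDefinition(this, 'NginxTaskDef', {
  cpu: 256,
  memoryLimitMiB: 512,
});
taskDef.addContainer('nginx', {
  image: ecs.ContainerImage.fromRegistry('nginx:latest'),
  portMappings: [{ containerPort: 80 }],
});

// ...then a service that keeps the task running on Fargate.
new ecs.FargateService(this, 'NginxService', {
  cluster,
  taskDefinition: taskDef,
  desiredCount: 1,
});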