I'm using ECS through ecs-cli to deploy my API.
I start by launching a cluster of spot instances with this command:
sudo ecs-cli up --region MY REGION --keypair MY KEY PAIR --instance-type t2.micro --capability-iam --size 1 --cluster MY CLUSTER NAME --spot-price 0.01
Then, using the following docker-compose.yml and ecs-params.yml files:
version: '3'
services:
  selenium:
    image: selenium/standalone-chrome
    ...etc
  api:
    image: myapithatusesselenium/myapithatusesselenium
    ports:
      - 3000:3000
    links:
      - selenium
    ...etc
version: 1
task_definition:
  task_execution_role: ROLE ID
  services:
    selenium:
      cpu_shares: 600
      mem_limit: 700000000
    api:
      repository_credentials:
        credentials_parameter: REPO CREDENTIALS
      cpu_shares: 400
      mem_limit: 300000000
I'm deploying a service with a load balancer using this command:
sudo ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml --project-name MY PROJECT NAME service up --cluster MY CLUSTER NAME --target-group-arn LOAD BALANCER RESOURCE ID --container-name api --container-port 3000
When my API is under a lot of load (when I start getting notified that the API is going down), I add additional instances by scaling with these commands:
# 1 - scale the number of ec2 instances in the cluster
sudo ecs-cli scale --size 3 --capability-iam
# 2 - scale the number of tasks
sudo ecs-cli compose --file docker-compose.yml --project-name MY PROJECT NAME service scale 3
As you can see, the number of tasks and EC2 instances is the same, because each EC2 instance can only handle a single task.
When the load drops, I reduce the size again.
What I need right now is a way to make this automatic (auto scaling in and out), but I can't figure out how to do it.
Thank you !
ECS doesn't autoscale services natively. You have to use the Application Auto Scaling service for that: with the regular AWS CLI, call register-scalable-target and then create a scaling policy with put-scaling-policy.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html
https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html
https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html
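As a rough sketch (the cluster and service names below are placeholders matching the ones in your commands), you register the service's desired count as a scalable target and attach a CPU target-tracking policy. Note this scales the tasks; the EC2 instances themselves still need an Auto Scaling group or capacity provider to grow with them:

# Register the ECS service's DesiredCount as a scalable target (1 to 3 tasks)
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/MY_CLUSTER_NAME/MY_PROJECT_NAME \
  --min-capacity 1 \
  --max-capacity 3

# Attach a target-tracking policy that scales out/in to keep average service CPU near 75%
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/MY_CLUSTER_NAME/MY_PROJECT_NAME \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 75.0,
    "PredefinedMetricSpecification": { "PredefinedMetricType": "ECSServiceAverageCPUUtilization" },
    "ScaleOutCooldown": 60,
    "ScaleInCooldown": 300
  }'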
I am trying to create a basic GitLab CI/CD pipeline that deploys my Node.js backend to an AWS kops-based Kubernetes cluster. For that I have created a gitlab-ci.yml file that defines the whole pipeline, but I am confused about how to get the Kubernetes cluster IP address so I can use it in gitlab-ci.yml as:
- kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
where I want CLUSTER_ADDRESS to be configured as a GitLab variable used in gitlab-ci.yml.
Any help would be appreciated.
variables:
  DOCKER_DRIVER: overlay2
  REGISTRY: $CI_REGISTRY
  IMAGE_TAG: $CI_REGISTRY_IMAGE
  K8S_DEPLOYMENT_NAME: deployment/$CI_PROJECT_NAME
  CONTAINER_NAME: $CI_PROJECT_NAME

stages:
  - build
  - build-docker
  - deploy

build-docker:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  tags:
    - privileged
  only:
    - Test
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $REGISTRY
    - docker build --network host -t $IMAGE_NAME:$IMAGE_TAG -t $IMAGE_NAME:latest .
    - docker push $IMAGE_NAME:$IMAGE_TAG
    - docker push $IMAGE_NAME:latest

deploy-k8s-(stage):
  image:
    name: kubectl:latest
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: Manual gate
  when: manual
  dependencies:
    - build-docker
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=default
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
If your current kubeconfig context is set to the cluster in question, you can run the following to get the cluster address you want:
kubectl config view --minify --raw \
--output 'jsonpath={.clusters[0].cluster.server}'
If it isn't, you can add --context <cluster name>.
In most cases the address will be https://api.<cluster name>.
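As a sketch of how that ties into your pipeline (the variable name CLUSTER_ADDRESS matches your job; storing it as a CI/CD variable is an assumption about your setup):

# Run this locally against the kops cluster's kubeconfig context:
CLUSTER_ADDRESS=$(kubectl config view --minify --raw \
  --output 'jsonpath={.clusters[0].cluster.server}')
echo "$CLUSTER_ADDRESS"   # e.g. https://api.<cluster name>

# Save that value as a CI/CD variable named CLUSTER_ADDRESS in the GitLab project settings;
# the deploy job can then run:
kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"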
I'm running an EC2 cluster on AWS ECS.
I launch my service like so:
ecs-cli compose -f docker-compose-base.yml -f docker-compose-prod.yml --ecs-profile root service up --create-log-groups
In my ecs-params.yml file I specified desiredCount: 2:
version: 1
task_definition:
  services:
    api:
      desiredCount: 2
However, it always gets the default desired count of 1:
INFO[0000] Using ECS task definition TaskDefinition="api:5"
WARN[0000] No log groups to create; no containers use 'awslogs'
INFO[0016] (service api) has started 1 tasks: (task decf9405-63b1-4ddf-ba12-69018299e157). timestamp="2020-05-16 12:03:46 +0000 UTC"
INFO[0077] Service status desiredCount=1 runningCount=1 serviceName=api
How do I change the default desired count without having to run the service scale N command?
That's right, the default desired count is 1. You'll need to use ecs-cli compose service scale; this command sets the desired count of the service to the specified count. Here's the full syntax, where n is the desired count:
ecs-cli compose service scale [--deployment-max-percent n] [--deployment-min-healthy-percent n] [--timeout value] n [--help]
And here's an example that sets desired count to 2:
ecs-cli compose --project-name hello-world --file hello-world.yml service scale 2
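Applied to the command from your question, that would be something like the following (assuming the service was created by the service up call you showed, so the same compose files and profile apply):

ecs-cli compose -f docker-compose-base.yml -f docker-compose-prod.yml --ecs-profile root service scale 2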
I have the following data:
minio:
  image: minio/minio:latest
  #ports:
  #  - '9000:9000'
  volumes:
    - ./data/storage:/data
  environment:
    MINIO_ACCESS_KEY: minio
    MINIO_SECRET_KEY: minio123
  command: server /data
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3
  restart: always
I want to manually create a task definition in ECS Fargate and then add containers to it (no coding).
Where can I specify the volumes shown above inside the containers?
To answer your query about volumes specifically: you have to specify the volumes in the task definition that is used to run the task on AWS Fargate. You can have a look at this documentation, which also lists the storage limitations of AWS Fargate. Fargate does not support any persistent storage except EFS, which was launched recently.
If your use case allows EFS, check out this blog post, which demonstrates that Amazon Elastic Container Service and AWS Fargate now support Amazon Elastic File System.
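To make the mapping concrete, here is a rough sketch of the relevant parts of a Fargate task definition, shown as the JSON the console's "Configure via JSON" view uses (the family, names and file system ID are placeholders, and the EFS file system with its mount targets is assumed to already exist). The volume is declared at the task level and then mounted into the container via a mount point:

{
  "family": "minio",
  "requiresCompatibilities": ["FARGATE"],
  "volumes": [
    {
      "name": "storage",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "minio",
      "image": "minio/minio:latest",
      "command": ["server", "/data"],
      "mountPoints": [
        { "sourceVolume": "storage", "containerPath": "/data" }
      ]
    }
  ]
}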
I created a Kubernetes cluster and linked it with EKS.
I also created a Helm chart and a .gitlab-ci.yml.
I want to add a new step that deploys my app to the cluster using Helm, but I can't find a recent tutorial; all tutorials use GitLab Auto DevOps.
The image is hosted on GitLab.
How can I achieve this?
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: test
  USER_GITLAB: kosted
  APP_NAME: mebooks
  REPO: gara-mebooks
  MAVEN_CLI_OPTS: "-s .m2/settings.xml --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

stages:
  - deploy

k8s-deploy:
  stage: deploy
  image: dtzar/helm-kubectl:3.1.2
  only:
    - develop
  script:
    # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
    - echo $KUBE_URL
    - kubectl config set-cluster gara-eks-cluster --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM"
    - kubectl get pods
In the GitLab console I got:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
1 - Create an IAM role or user from your AWS console.
2 - Connect to your bastion and add the role/user ARN to the aws-auth ConfigMap (a sketch of the entry is shown after the job examples below). You can follow this to understand how it works (see the "you are not the creator of the cluster" paragraph): https://aws.amazon.com/fr/premiumsupport/knowledge-center/eks-api-server-unauthorized-error/
3 - In your GitLab CI, if it is a user you created, you just have to add this:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - aws --profile default configure set aws_access_key_id "your access id"
    - aws --profile default configure set aws_secret_access_key "your secret"
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3
    - helm upgrade --install my-chart ./my-chart-folder
If you created a role rather than a user, you just have to do:
k8s-deploy:
  stage: deploy
  image: you need an image with aws + kubectl + helm
  only:
    - develop
  script:
    - aws --version
    - helm version
    - aws eks update-kubeconfig --name NAME-OF-YOUR-CLUSTER --region eu-west-3 --role-arn YOUR-ROLE-ARN
    - helm upgrade --install my-chart ./my-chart-folder
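For step 2, a sketch of what the aws-auth entry might look like for a dedicated CI user (the account ID, user name and group here are placeholder assumptions; for a role you would add a mapRoles entry with rolearn instead):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    # Maps the IAM user used by GitLab CI to a Kubernetes user/group
    - userarn: arn:aws:iam::111122223333:user/gitlab-ci
      username: gitlab-ci
      groups:
        - system:masters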
Here I am adding my method, which is generic and can be used in any Kubernetes environment without the AWS CLI.
First, you need to convert your Kube Config to a base64 string:
cat ~/.kube/config | base64
Add the resulting string as a variable in the CI/CD settings of your project/group. In my example I used kube_config. Read more on how to add variables here.
Here is my CI YAML file:
stages:
  # - build
  # - test
  - deploy

variables:
  KUBEFOLDER: /root/.kube
  KUBECONFIG: $KUBEFOLDER/config

k8s-deploy-job:
  stage: deploy
  image: dtzar/helm-kubectl:3.5.0
  before_script:
    - mkdir ${KUBEFOLDER}
    - echo ${kube_config} | base64 -d > ${KUBECONFIG}
    - helm version
    - helm repo update
  script:
    - echo "Deploying application..."
    - kubectl get pods
    #- helm upgrade --install my-chart ./my-chart-folder
    - echo "Application successfully deployed."
Inspired by:
https://about.gitlab.com/blog/2017/09/21/how-to-create-ci-cd-pipeline-with-autodeploy-to-kubernetes-using-gitlab-and-helm/
We have an application whose docker-compose file contains links.
I'm trying to deploy this to Amazon Fargate with ecs-cli using this command:
ecs-cli compose --project-name myApp --file docker-compose-aws.yml --ecs-params fargate-ecs-params.yml --cluster myCluster --region us-east-1 up --launch-type FARGATE
When my fargate-ecs-params.yml has ecs_network_mode: awsvpc I get the error:
Links are not supported when networkMode=awsvpc
So I've tried changing ecs_network_mode to a different mode (such as bridge), however I then get the error:
Fargate only supports network mode ‘awsvpc’
My question is: how do I create a task definition for Fargate from a compose file that contains links? Or is this not possible (and if so, what are my alternatives)?
You can place both containers in the same task definition and they will automatically be linked with each other.
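As a rough sketch (service names, image and port below are placeholders): in the awsvpc network mode that Fargate requires, containers in the same task share a network namespace, so you can drop the links and reach the other container on localhost:

services:
  app:
    image: myorg/app            # placeholder image
    environment:
      DB_HOST: localhost        # containers in the same Fargate task share localhost (awsvpc)
      DB_PORT: "27017"
  db:
    image: mongo:3.6            # listens on 27017 inside the shared task network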
After reading your final comment on the boot sequence and answering that question instead, I solved this (even outside AWS) using docker-compose's depends_on.
A simple example:
services:
  web:
    depends_on:
      - "web_db"
  web_db:
    image: mongo:3.6
    container_name: my_mongodb
You should be able to remove the deprecated links and just use the hostnames that Docker creates from the service/container names, e.g. in the example above the website would connect to the hostname "my_mongodb".