GitOps - how is an artifact promoted between clusters in K8s?

Suppose we have 3 clusters per region: dev, staging, and prod.
Suppose we have 3 zones per region.
In GitOps practice:
Q1) How does a good build of a commit ID get promoted between environments within a region?
Q2) How does a good build of a commit ID get promoted between zones?
Thank you
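One common answer (a sketch, not the only pattern): keep a config repo where each environment and each cluster has its own directory, and "promotion" is just a Git commit that carries the already-built image tag (the good commit ID) forward into the next directory. For example, with Kustomize overlays (Kustomize, the repo layout, and all names/URLs below are assumptions, not from the question):

# config repo layout (hypothetical)
# base/              - shared manifests
# overlays/dev/      - dev cluster overlay
# overlays/staging/  - staging cluster overlay
# overlays/prod/     - prod cluster overlay

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
images:
  - name: registry.example.com/myapp   # hypothetical image
    newTag: "3f1c2ab"                  # the good build's commit ID; bumping
                                       # this line in Git is the promotion

For Q2, promotion between zones can work the same way: each zonal cluster runs its own GitOps agent (e.g., Argo CD or Flux) pointed at its own overlay, so rolling a build out across zones is again a sequence of commits updating each overlay's tag.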

Related

How to get the availability zone of Codebuild build during execution time?

I have an AWS CodeBuild project that is connected to a VPC. Now I'm trying to understand how I can get the availability zone of a CodeBuild build during execution time. Is this possible?
It turns out there is an environment variable, CODEBUILD_VPC_AZ, that holds this value.
Based on the AWS documentation, there are two environment variables that hold the AWS Region where the build is running (AWS_DEFAULT_REGION and AWS_REGION). I don't think there is any way to get the actual availability zone.
But is there something specific that you want to achieve by knowing the availability zone? Maybe we can provide a solution for that.
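For reference, reading these variables from a buildspec is straightforward; a minimal sketch (the echo commands are just illustrative, and CODEBUILD_VPC_AZ is presumably only populated for builds attached to a VPC):

version: 0.2
phases:
  build:
    commands:
      - echo "Region: $AWS_REGION"
      - echo "Availability zone: $CODEBUILD_VPC_AZ"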

Is "Zone" different among projects?

According to the documentation, a "zone" could be mapped to a different cluster for different projects, but is that actually true across projects?
I've never seen a zone-mapping difference across projects. Also, since each zone provides different machine types, I'm not even sure how a zone could be mapped to different clusters among projects.
If it can, is there a way to find out which cluster my zone is mapped to, like there is in AWS?
Thanks!
A cluster, as defined here, is simply a set of physical servers, networking, disks, and cooling - in short, a datacenter. It's impossible to know which one you are in; that's Google's internal management.
A zone sits on top of one or several clusters. If the initial cluster (i.e., datacenter) is too small, Google may choose to expand it or, if that's not possible, to add another one. But from the user's point of view, this is invisible.
Google tries to locate all the projects of the same organization in the same cluster, especially for security and performance reasons in the case of VPC peering or Shared VPC. However, this isn't guaranteed, and because you can't see the placement, you can't check it.
For example, if 2 projects are on 2 different clusters in the same region, there isn't an issue; but if you create a VPC peering between them, it's not optimized. To solve this, Google can migrate Compute Engine VMs from one cluster to another, even without stopping them (this is called "live migration"), and you won't see anything of this VM placement.
Generally the cluster is consistent for a project. In cases of huge resource usage it could differ (HPC, for example, or requirements of 10k+ CPUs), but Googlers should have more detail on this case if you are a big CPU consumer.
I tried to create a GKE regional cluster in europe-west3 with the N2 CPU type, which is only available in 2 of the 3 zones, and I got this error:

Best Practice GCP - GKE | Multiple services

We have different GCP projects for DEV/STAGE/PROD.
In the DEV project we have two services running in one cluster as part of Phase 1, in a custom VPC network and subnet.
As the project expands into Phase 2, we would be adding more services to the DEV GCP project, going from 2 services to 6.
The discussion we are currently having is whether, for Phase 2, to put the services in:
- the same cluster, or
- a different cluster
Considering the ingress rules and page-routing policies, it would be great if veterans could give some leads on which of the above approaches would be good for the project.
You can use the same cluster. If you have insufficient resources to deploy all the pods you need for the various services, consider scaling up the cluster instead of creating a new one. You may also want to consider node pool autoscaling or node auto-provisioning.
There are really only 2 limitations on the number of services in a cluster: the total number of k8s objects (this is somewhere near 300k~400k and is a limitation of etcd), and the number of service IPs provided at cluster creation (the secondary range you assigned for services).
Aside from the above two limitations, I don't really see much of a reason to create new clusters for the new services. If you have in-house design requirements, that is different, but from a purely k8s or GKE point of view, you can definitely continue to use the same cluster.
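Since the question mentions ingress rules and page routing: all six services can sit behind a single Ingress in the same cluster, fanned out by path. A minimal sketch with hypothetical host, service names, and ports (not from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress            # hypothetical
spec:
  rules:
    - host: app.example.com         # hypothetical host
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc       # hypothetical Service
                port:
                  number: 80
          - path: /web
            pathType: Prefix
            backend:
              service:
                name: web-svc       # hypothetical Service
                port:
                  number: 80

Adding the Phase 2 services is then a matter of adding paths (or hosts) to this one resource rather than standing up new clusters.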

ECS with persistent data on EFS or EBS with CloudFormation

I am looking for an AWS expert to help me with this. I've spent almost one week trying to deploy my backend Docker image to AWS without getting 100% of the desired behaviour.
Firstly I was pointed to the new Fargate service AWS recently introduced. I managed to deploy everything that I needed, but it quickly turned out that I need some kind of data persistence, which is unavailable with Fargate for the moment, from what I've read.
I found these templates, which are very helpful because AWS is so big and overwhelming that I would get nowhere without them, and I have currently tried the deployment using EC2 instances: https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/ECS/EC2LaunchType
I have some questions about this kind of deployment:
1: Why does this deployment create 2 EBS volumes for the cluster? One 8 GB with a snapshot and a second 22 GB in size without a snapshot.
2: Is it possible to reduce the size of those EBS volumes? If so, how?
3: Is it possible to have just one of those EBS volumes?
4: Is it possible to mount a volume from my Docker backend image onto those EBS volumes to persist data? If so, how? I need to mount two volumes for my backend, that is
/root/.local/share/Bisq and /root/.local/share/bisq-api (or ~/.local/share/Bisq and ~/.local/share/bisq-api). I'm not quite sure how this works with AWS - what are the paths compared to the local environment?
5: Would it be better to use EBS or EFS for data persistence? The problem with EFS is that I can't find any related documents on how to connect EFS to this kind of ECS deployment. Everything MUST be done with CloudFormation templates.
Overall, the requirements that would match 100% of the desired behaviour are:
1: CloudFormation template(s) that deploy as few services as necessary, in order to not build a huge infrastructure, to preserve costs, and to provide the ability to just click a button and get an external link to the backend service. (There can't be any manual configuration; everything must be automated.)
2: Ability to stop/start the EC2 instance for the backend container. (The EC2 instance will be running from a few minutes to a few hours per day, a few days a month, depending on how often each user uses the backend.)
3: Ability to preserve the data when the user stops the instance and then starts it again at a future point in time.
I would appreciate any help/suggestions, because I'm starting to lose my head over everything connected to AWS services. It is really hard to understand the use cases for AWS, so I would appreciate help.
Thank you!
1: Why does this deployment create 2 EBS volumes for the cluster? One 8 GB with a snapshot and a second 22 GB in size without a snapshot.
As per the docs:
Amazon ECS-optimized AMIs from version 2015.09.d and later launch with an 8-GiB volume for the operating system that is attached at /dev/xvda and mounted as the root of the file system. There is an additional 22-GiB volume that is attached at /dev/xvdcz that Docker uses for image and metadata storage.
Here is the reference: ecs-ami-storage-config
2: Is it possible to reduce the size of those EBS volumes? If so, how?
Also from the same docs:
You can increase these default volume sizes by changing the block device mapping settings for your instances when you launch them; however, you cannot specify a smaller volume size than the default.
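In the linked CloudFormation templates, this corresponds to the BlockDeviceMappings of the launch configuration for the container instances. A hedged sketch (the logical ID and sizes are assumptions) of growing, never shrinking, the Docker storage volume:

  ContainerInstances:                # hypothetical logical ID
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      BlockDeviceMappings:
        - DeviceName: /dev/xvdcz     # the Docker image/metadata volume
          Ebs:
            VolumeSize: 50           # must be >= the 22-GiB default
            VolumeType: gp2
      # ... ImageId, InstanceType, and so on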
3: Is it possible to have just one of those EBS volumes?
For this, you would be better off using a custom AMI and configuring it as needed.
4: Is it possible to mount a volume from my Docker backend image onto those EBS volumes to persist data? If so, how? I need to mount two volumes for my backend, that is /root/.local/share/Bisq and /root/.local/share/bisq-api (or ~/.local/share/Bisq and ~/.local/share/bisq-api). I'm not quite sure how this works with AWS - what are the paths compared to the local environment?
This is configured in the task definition:
TaskDefinition:                      # hypothetical logical resource name
  Type: AWS::ECS::TaskDefinition
  Properties:
    Volumes:
      - Name: bisq-data              # referenced below by MountPoints
        Host:
          SourcePath: /mnt/data/Bisq # hypothetical path on the EC2 container instance
    ...
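The container side of the same task definition then references that volume by name through MountPoints; a minimal sketch (container name and image are hypothetical):

    ContainerDefinitions:
      - Name: backend                # hypothetical container name
        Image: my-backend-image      # hypothetical image
        Memory: 512
        MountPoints:
          - SourceVolume: bisq-data  # matches the Volumes entry above
            ContainerPath: /root/.local/share/Bisq

A second Volumes entry plus MountPoint would cover /root/.local/share/bisq-api in the same way.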
5: Would it be better to use EBS or EFS for data persistence? The problem with EFS is that I can't find any related documents on how to connect EFS to this kind of ECS deployment. Everything MUST be done with CloudFormation templates.
It depends on your use case. Each one has advantages and disadvantages depending on your needs, so read the docs for each and choose the best one accordingly. In the templates you found, you can customize the LaunchConfiguration UserData to run the attach and mount commands. You can do all of this in CloudFormation.
Additionally, I will leave you this documentation for mounting an EFS file system automatically: Mounting Your Amazon EFS File System Automatically
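As a rough sketch of that UserData customization (assuming an AWS::EFS::FileSystem resource with logical ID FileSystem elsewhere in the template; the mount path is arbitrary):

      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          yum install -y amazon-efs-utils
          mkdir -p /mnt/efs
          mount -t efs ${FileSystem}:/ /mnt/efs
          # persist the mount across stop/start cycles
          echo "${FileSystem}:/ /mnt/efs efs defaults,_netdev 0 0" >> /etc/fstab

Because EFS lives independently of the instance, data written there survives stopping and starting the EC2 instance, which matches requirements 2 and 3 above.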

How does ELB behave during Auto Scaling rolling updates?

During rolling updates in an ASG, it's possible that a certain number of instances have the latest code while others still have the old code. In this case, how does the ELB behave? Will it send traffic only to the newly created instances, or will it share the load equally?
It depends on the deployment strategy you choose to use.
In-place deployment:
If your application/APIs can accept partial changes during deployment, you may choose to deploy upgrades to each instance, or to a certain number of instances at a time, until all instances are updated. During that window the ELB keeps routing to every healthy registered instance, so traffic is shared across both old and new versions.
Blue/green deployment:
You deploy updates to a completely different set of instances which are not live, roll out the updates, and then switch these new instances into the ELB, so live traffic only ever sees one version at a time.
These are fairly generic strategies, but both are available out of the box using AWS CodeDeploy.
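For the in-place case, CloudFormation expresses this as an UpdatePolicy on the Auto Scaling group; a hedged sketch (logical IDs are hypothetical), where MinInstancesInService is what keeps the ELB serving a mix of old and new instances during the rollout:

  AppAutoScalingGroup:               # hypothetical logical ID
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "4"
      MaxSize: "8"
      TargetGroupARNs:
        - !Ref AppTargetGroup        # hypothetical ALB target group
      # ... launch configuration, subnets, and so on
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 3     # old instances keep taking traffic
        MaxBatchSize: 1              # replace one instance at a time
        PauseTime: PT5M              # wait between batches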