How does EKS manage the control plane?

From my understanding, EKS is a Kubernetes version modified by AWS. However, I read this:
The control plane consists of at least two API server instances and three etcd instances that run across three Availability Zones within an AWS Region.
I tried to summarize my understanding in this diagram.
I also question myself: how do they work with 3 instances of etcd? How could the cloud-controller-manager, controller-manager, and scheduler work on EKS?

Related

Is there a distributed multi-cloud service mesh solution that's available? Something that cuts across GCP, AWS, Azure and even on-premise setup?

Nathan Aw (Singapore)
Yes, it is possible with Istio's multi-cluster, single-mesh model.
According to the Istio documentation:
Multiple clusters
You can configure a single mesh to include multiple clusters. Using a multicluster deployment within a single mesh affords the following capabilities beyond that of a single cluster deployment:
Fault isolation and fail over: cluster-1 goes down, fail over to cluster-2.
Location-aware routing and fail over: Send requests to the nearest service.
Various control plane models: Support different levels of availability.
Team or project isolation: Each team runs its own set of clusters.
(Figure: a service mesh with multiple clusters)
Multicluster deployments give you a greater degree of isolation and availability but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.
You can configure inter-cluster communication based on the network and the options supported by your cloud provider. For example, if two clusters reside on the same underlying network, you can enable cross-cluster communication by simply configuring firewall rules.
Single mesh
The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name mysvc in the foo namespace. Additionally, workload instances share a common identity since service account names are unique within a namespace, just like service names.
A single mesh can span one or more clusters and one or more networks. Within a mesh, namespaces are used for tenancy.
Hope it helps.
An alternative could be using this tool with a Kubernetes cluster that spans all of your selected cloud providers at the same time, without the hassle of managing each of them separately.

Best Practice GCP - GKE | Multiple services

We have different GCP projects for DEV/STAGE/PROD.
In the DEV project we have two services running in one cluster as part of Phase 1, in a custom VPC network and subnet.
As the project expands into Phase 2, we will be adding more services to the DEV GCP project, going from 2 services to 6.
The discussion we are currently having is whether, for Phase 2, to have the services in:
- the same cluster, or
- a different cluster
Considering the ingress rules and page-routing policies, it would be great if veterans could give some leads on which of the above approaches would be good for the project.
You can use the same cluster. If you have insufficient resources to deploy all the pods you need for the various services, consider scaling up the cluster instead of creating a new one. You may also want to consider node pool autoscaling or node auto-provisioning.
There are really only two limitations on the number of services in a cluster: the total number of Kubernetes objects (somewhere near 300k to 400k, a limitation of etcd), and the number of service IPs provided at cluster creation (the secondary range you assigned for services).
Aside from those two limitations, I don't really see much of a reason to create new clusters for the new services. If you have in-house design requirements, that is different, but from a purely Kubernetes or GKE point of view, you can definitely continue to use the same cluster.
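As a rough illustration of the second limit, here is a quick sketch of how many Service IPs a given secondary range yields; the /20 range used here is just an assumed example, not your actual cluster configuration:

    import ipaddress

    # Hypothetical secondary range assigned for Services at cluster creation.
    services_range = ipaddress.ip_network("10.4.0.0/20")

    # Roughly one ClusterIP per address in the range.
    print(f"{services_range} provides about {services_range.num_addresses} Service IPs")
    # -> 10.4.0.0/20 provides about 4096 Service IPs

In practice a handful of addresses in the range are reserved, but the order of magnitude is what matters when deciding whether six services fit comfortably.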

Setup cluster between multiple AWS accounts

I would like to set up a Ray cluster to use Ray Tune over 4 GPUs on AWS. But each GPU belongs to a different member of our team. I have scoured the available resources for an answer and found nothing. Help?
In order to start a Ray cluster using instances that span multiple AWS accounts, you'll need to make sure that the AWS instances can communicate with each other over the relevant ports. To enable that, you will need to modify the AWS security groups for the instances (though be sure not to open up the ports to the whole world).
You can choose which ports are needed via the arguments --redis-port, --redis-shard-ports, --object-manager-port, and --node-manager-port to ray start on the head node, and just --object-manager-port and --node-manager-port on the non-head nodes. See the relevant documentation.
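If you script the security-group change, a minimal boto3 sketch could look like the following; the group ID, CIDR, port, and region are placeholders, and the port should match whatever you pass to ray start:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed region

    # Open the head node's Redis port only to the peer account's VPC CIDR,
    # not to 0.0.0.0/0. All identifiers below are placeholders.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 6379,   # e.g. the value you pass to --redis-port
            "ToPort": 6379,
            "IpRanges": [{"CidrIp": "10.1.0.0/16", "Description": "teammate's VPC"}],
        }],
    )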
However, what you're trying to do sounds somewhat complex. It'd be much easier to use a single account if possible, in which case you could use the Ray autoscaler.
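Once the nodes can reach each other, a minimal Ray Tune sketch that spreads one trial per GPU could look like this; it uses the older tune.run API, and the address, search space, and training body are placeholders:

    import ray
    from ray import tune

    # Connect to the already-running cluster from a node that has joined it.
    ray.init(address="auto")

    def trainable(config):
        # ... your GPU training code goes here; report a metric back to Tune.
        tune.report(score=config["lr"])

    tune.run(
        trainable,
        config={"lr": tune.grid_search([0.001, 0.01, 0.1, 1.0])},
        resources_per_trial={"gpu": 1},  # one trial per GPU across the four machines
    )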

Autoscaling DC/OS agents on AWS

We have DC/OS running on AWS with a fixed number of master nodes and agent nodes as part of a POC. However, we'd like to have the cluster (agent nodes) autoscale according to load. So far, we've been unable to find any information about scaling in the DC/OS docs. I've also had no luck so far in my web searches.
If someone's got this working already, please let us know how you did it.
Thanks for your help!
Autoscaling the number of service instances by cpu, memory, or network load is possible: https://docs.mesosphere.com/1.8/usage/tutorials/autoscaling/
Autoscaling the number of DC/OS nodes (adding or removing nodes), however, is outside the scope of DC/OS and specific to the IaaS it is deployed on. You can imagine that this wouldn't work on bare metal, for obvious reasons. It's hypothetically possible, of course, but I haven't seen any existing automation for it.
The DC/OS AWS templates use easily scaled node groups, but it's not automatic. You might try looking for IaaS specific autoscalers that aren't DC/OS specific.
If you have an autoscaling group for your "private agent" nodes and you want to scale the number of nodes in times of heavy load, pick a CloudWatch metric that suits your needs (e.g. traffic on the ELB) and scale via an Auto Scaling scaling policy:
http://docs.aws.amazon.com/autoscaling/latest/userguide/policy_creating.html
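As a rough boto3 sketch of that step, a target-tracking policy on the private-agent ASG might look like the following; the group name, policy name, metric, target value, and region are all assumptions, and the linked docs cover other metrics such as ELB traffic:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

    # Target-tracking policy on the private-agent ASG; all names are placeholders.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="dcos-private-agents",
        PolicyName="scale-on-cpu",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 60.0,
        },
    )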
Then you can use one of the two ways described in https://docs.mesosphere.com/1.8/usage/tutorials/autoscaling/ to scale your apps within DC/OS (on scheduler level).

Kubernetes - adding more nodes

I have a basic cluster, which has a master and 2 nodes. The 2 nodes are part of an aws autoscaling group - asg1. These 2 nodes are running application1.
I need to be able to add further nodes, running application2, to the cluster.
Ideally, I'm looking to maybe have a multi-region setup, whereby application2 can run in multiple regions but be part of the same cluster (not sure if that is possible).
So my question is, how do I add nodes to a cluster, more specifically in AWS?
I've seen a couple of articles whereby people have spun up the instances and then manually logged in to install the kubelet and various other things, but I was wondering if it could be done in a more automatic way?
Thanks
If you followed these instructions, you should have an autoscaling group for your minions.
Go to AWS panel, and scale up the autoscaling group. That should do it.
If you did it manually somehow, you can clone a machine by selecting an existing minion/worker node and choosing "Launch more like this".
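If you'd rather script the scale-up than use the console, a minimal boto3 sketch could be the following; the group name and region are assumptions, so use whatever name your tooling gave the minion ASG:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region

    # Grow the minion ASG from 2 to 3 nodes; the group name is a placeholder.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="kubernetes-minion-group",
        DesiredCapacity=3,
        HonorCooldown=False,
    )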
As Pablo said, you should be able to add new nodes (in the same availability zone) by scaling up your existing ASG. This will provision new nodes that will be available for you to run application2. Unless your applications can't share the same nodes, you may also be able to run application2 on your existing nodes without provisioning new nodes if your nodes are big enough. In some cases this can be more cost effective than adding additional small nodes to your cluster.
To your other question, Kubernetes isn't designed to be run across regions. You can run a multi-zone configuration (in the same region) for higher availability applications (which is called Ubernetes Lite). Support for cross-region application deployments (Ubernetes) is currently being designed.