The setup is like the following:
EKS with multiple AZs (3) and many node groups; RabbitMQ is installed, and for the RabbitMQ broker we are using a specific node group.
Is there any way to force all RabbitMQ pods to be in the same AZ?
I am confused by your question because the title says Fargate but then you talk about node groups (an EC2 construct). If you are using Fargate, you can control which AZ your Fargate pods are deployed into by creating a Fargate profile that only includes the subnets belonging to the specific AZ you want to use. If you use multiple subnets across multiple AZs in your EKS Fargate profile, we spread the pods across them (with a best-effort algorithm). If you use a single subnet in the Fargate profile, we deploy all of them into that subnet.
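A minimal boto3 sketch of that single-AZ profile, assuming a cluster called my-cluster, a pod execution role, one subnet in the AZ you want to pin to, and a rabbitmq namespace (all of these names are placeholders):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Placeholder names/ARNs -- substitute your own cluster, role, subnet and namespace.
eks.create_fargate_profile(
    clusterName="my-cluster",
    fargateProfileName="rabbitmq-single-az",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/my-pod-execution-role",
    # Only one subnet (one AZ) is listed, so every pod matched by the
    # selector below lands in that subnet/AZ.
    subnets=["subnet-0abc1234"],
    selectors=[{"namespace": "rabbitmq"}],
)
```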
I have a few questions on EKS node groups.
I don't understand the concept of a node group and why it is required. Can't we create an EC2 instance and run kubeadm join to join the EC2 node to the EKS cluster? What advantage does a node group hold?
Do node groups (be they managed or self-managed) have to exist in the same VPC as the EKS cluster? Is it not possible to create a node group in another VPC? If so, how?
Managed node groups are a way to let AWS manage part of the lifecycle of the Kubernetes nodes. You are of course still allowed to configure self-managed nodes if you need or want to. To be fair, you could also spin up a few EC2 instances and configure your own K8s control plane. It boils down to how much you want managed vs. how much you want to do yourself. The other extreme on this spectrum would be to use Fargate, which is a fully managed experience (there are no nodes to scale or configure, no AMIs, etc.).
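For context, a managed node group is just an EKS API object, and AWS provisions the instances behind it. A hedged boto3 sketch, where the cluster name, node role ARN, subnets and instance type are placeholders:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Placeholder cluster, role and subnets -- EKS provisions and manages the
# underlying EC2 instances (ASG, AMI, bootstrap) for you.
eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="general-purpose",
    nodeRole="arn:aws:iam::123456789012:role/my-node-instance-role",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    scalingConfig={"minSize": 1, "maxSize": 4, "desiredSize": 2},
    instanceTypes=["m5.large"],
)
```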
The EKS cluster (control plane) lives in a separate, AWS-managed account/VPC. See here. When you deploy a cluster, EKS will ask you which subnets (and which VPC) you want the cluster to manifest itself into (through ENIs that get plugged into your VPC/subnets). That VPC is where your self-managed workers, your managed node groups and your Fargate profiles need to be plugged in. You can't use another VPC to add capacity to the cluster.
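That subnet choice is made once, at cluster creation time. A rough boto3 sketch (the role ARN, subnet and security group IDs are placeholders):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# The subnets you pass here are where EKS plugs its cross-account ENIs for
# the control plane; they must all belong to one VPC, and that same VPC is
# where your node groups and Fargate profiles attach later.
eks.create_cluster(
    name="my-cluster",
    roleArn="arn:aws:iam::123456789012:role/my-eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0abc1234", "subnet-0def5678"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```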
I did not quite understand the configuration of the VPC "CIDR block" when creating a Fargate cluster. Based on the link https://www.datadoghq.com/blog/aws-fargate-metrics/, there is a fleet that runs outside my VPC and holds the infrastructure that runs my Fargate tasks.
What I don't understand is: if I configure a dedicated VPC for my Fargate cluster, how does it connect to the dedicated AWS-managed infrastructure for Fargate?
I did not find any documentation with an explanation.
After googling for some time, I found this: https://thenewstack.io/aws-fargate-through-the-lens-of-kubernetes/
The author states that the VPC configured during Fargate cluster creation acts as a proxy, and requests are forwarded to EC2 instances running in a VPC owned/managed by AWS. Configuring the VPC serves the purpose of controlling the IP range of the ENIs attached to the containers. This is based on my observation; I need something more to back it up.
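That matches what you see through the API: when you run a Fargate task you hand it subnets and security groups from your own VPC, and the task's ENI (and therefore its IP) is created there, even though the host runs on AWS-managed capacity. A hedged boto3 sketch, with the cluster, task definition, subnet and security group as placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# The task runs on AWS-managed infrastructure, but its ENI is created in the
# subnet given below, so its IP comes from your VPC's CIDR range.
ecs.run_task(
    cluster="my-fargate-cluster",
    launchType="FARGATE",
    taskDefinition="my-task:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```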
From what I've read so far:
An EC2 Auto Scaling group is a simple solution for scaling your server with more copies of it, with a load balancer in front of the EC2 instance pool.
ECS is more like Kubernetes, which is used when you need to deploy multiple services in Docker containers that work with each other internally to form a service, and auto scaling is a feature of ECS itself.
Are there any differences I'm missing here? Because if they work the way I understand, ECS is almost always the superior choice.
You are right. In a very simple sense, an EC2 Auto Scaling group is a way to add/remove (register/deregister) EC2 instances behind a Classic Load Balancer or target groups (ALB/NLB).
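As a rough boto3 illustration of that registration side (the launch template, subnets and target group ARN are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Instances launched by this group are automatically registered with the
# given target group, and deregistered again when they are scaled in.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0abc1234,subnet-0def5678",
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef"
    ],
)
```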
ECS has two types of scaling, as does any container orchestration platform:
Cluster auto scaling: add/remove EC2 instances in a cluster when tasks are pending to run.
Service auto scaling: add/remove tasks in a service based on demand; this uses the Application Auto Scaling service behind the scenes (see the sketch below).
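A hedged boto3 sketch of what service auto scaling looks like through Application Auto Scaling (the cluster and service names are placeholders):

```python
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the ECS service's desired count as a scalable target...
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# ...then attach a target-tracking policy, e.g. keep average CPU around 70%.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```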
I'm a bit confused about how AWS EKS works; excuse me for my ignorance.
I have 4 VPCs: one for shared services (GitLab etc.), one for dev, one for staging, and one for prod.
There are multiple subnets in each VPC for different clients: a, b, c.
Currently I just have pipelines that build images and deploy them to an EC2 instance in a specific VPC/subnet. The pipeline SSHes to the server based on the gitlab-ci file.
I would like to change that and have a K8s cluster where, when the image updates, K8s deploys my image to the specified VPC and subnets. I know I can hook my registry up to K8s and have it deploy on update; that's not my question. My question is how EKS works across VPCs and subnets.
Is this possible? It seems like an EKS cluster can only be in one VPC and can only deploy to that VPC's subnets.
Am I not understanding correctly?
You are correct.
The EKS control plane can only run in a single VPC and can only be associated with subnets in that VPC.
I raised feature requests with AWS a while back to support multi-VPC and multi-region EKS, but there's no news about them so far.
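You can see the single-VPC constraint for yourself: the cluster's VPC configuration lists exactly one VPC. A small boto3 sketch (the cluster name is a placeholder):

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# resourcesVpcConfig contains a single vpcId plus the subnets (all in that
# VPC) where the control-plane ENIs and your worker capacity attach.
vpc_config = eks.describe_cluster(name="my-cluster")["cluster"]["resourcesVpcConfig"]
print(vpc_config["vpcId"], vpc_config["subnetIds"])
```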
I have a Fargate Service running in AWS. I use it to run multiple tasks. Some of the tasks connect to an RDS database to query the database.
How can I add the Fargate service to the inbound rules of the security group for the RDS database? Is there a way to associate an Elastic IP with the Fargate cluster?
I might have misunderstood something here... but ECS allows you to specify a security group at the service level.
Go to https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html
And search for the --network-configuration parameter
So surely you just need to set the source of the inbound rule on the RDS security group to that security group ID?
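A hedged boto3 sketch of both halves, with placeholder IDs: give the service's tasks their own security group, then allow that group in the RDS security group's inbound rules:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

SERVICE_SG = "sg-0service0000000000"  # attached to the Fargate tasks
RDS_SG = "sg-0database000000000"      # attached to the RDS instance

# Attach the security group to the Fargate service's tasks.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": [SERVICE_SG],
            "assignPublicIp": "DISABLED",
        }
    },
)

# Allow that security group into the database (PostgreSQL port shown here).
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": SERVICE_SG}],
    }],
)
```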
Fargate doesn't support associating Elastic IPs with clusters. Clusters that run in Fargate mode operate on instances that are not yours, the opposite of classic EC2-backed ECS stacks, which means you can't manage the networking of the host instances.
There is a way to associate an IP with the stack: put a Network Load Balancer in front of the cluster. Then you can add a rule that allows connecting to your cluster through the NLB.
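For the Elastic IP part, a rough boto3 sketch of fronting the service with an internet-facing NLB that pins an Elastic IP allocation to its subnet (the subnet and allocation IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# A network load balancer can bind a specific Elastic IP allocation to each
# subnet it lives in; the Fargate tasks are then registered behind it as targets.
elbv2.create_load_balancer(
    Name="fargate-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0abc1234", "AllocationId": "eipalloc-0123456789abcdef0"},
    ],
)
```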