I'm looking at a guide to install Kubernetes on AWS EC2 instances using kops (link). I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control and etcd nodes. Is it possible to set an IP in some configuration file so that my cluster is created with a specific IP on my control node and my etcd node? If a control node restarts and doesn't have an Elastic IP, its address changes and a large number of issues start. I want to prevent this problem, or at least change my control node's IP after the deploy.
I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control and etcd nodes
The correct way, and the way almost every provisioning tool I know of does this, is to use either an Elastic Load Balancer (ELB) or the newer Network Load Balancer (NLB) to put an abstraction layer in front of the master nodes, for exactly that reason. That goes one step better than just an EIP: you get one EIP per Availability Zone (AZ), along with a stable DNS name. It's my recollection that the masters can also keep themselves in sync with the ELB (unknown about the NLB, but certainly conceptually possible), so if new ones come online they register with the ELB automatically.
Then, a similar answer applies to the etcd nodes, and for the same reason, although as far as I know etcd has no such ability to keep its nodes in sync with the fronting ELB/NLB, so that would need to be done by the script that provisions any new etcd nodes.
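In kops this is usually expressed as an API load balancer in the cluster spec. A minimal sketch, assuming a placeholder cluster name and state store (the class: Network option only exists in newer kops releases):

```bash
# Placeholder state store and cluster name; adjust to your environment.
export KOPS_STATE_STORE=s3://my-kops-state-store
kops edit cluster mycluster.example.com
# In the editor, add (or confirm) an API load balancer in the cluster spec:
#   spec:
#     api:
#       loadBalancer:
#         type: Public     # ELB/NLB in front of the masters, with a stable DNS name
#         class: Network   # newer kops releases only; omit to use a classic ELB
kops update cluster mycluster.example.com --yes
```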
At the time of writing, there isn't any out-of-the-box solution from kops.
But you can try k8s-eip for this if your use case isn't critical. I wrote this tool for my personal cluster to save costs.
I "inherited" an unmanaged EKS cluster with two nodegroups created through eksctl with Kubernetes version 1.15. I updated the cluster to 1.17 and managed to create a new nodegroup with eksctl and nodes successfully join the cluster (i had to update aws-cni to 1.6.x from 1.5.x to do so). However the the Classic Load Balancer of the cluster marks my two new nodes as OutOfService.
I noticed the Load Balancer Security Group was missing from my node Security Groups thus i added it to my two new nodes but nothing changed, still the nodes were unreachable from outside the EKS cluster. I could get my nodes change their state to InService by applying the Security Group of my two former nodes but manually inserting the very same inbound/outbound rules seems to sort no effect on traffic. Only the former nodegroup security group seems to work in this case. I reached a dead end and asking here because i can't find any additional information on AWS documentation. Anyone knows what's wrong?
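For reference, this is roughly the kind of rule comparison and copy I mean, sketched with placeholder security group IDs (not my exact setup):

```bash
# Placeholder IDs: sg-oldnodes / sg-newnodes are the former and new nodegroup
# security groups, sg-clb is the Classic Load Balancer's security group.
aws ec2 describe-security-groups --group-ids sg-oldnodes sg-newnodes \
  --query 'SecurityGroups[].IpPermissions'      # diff the inbound rules
aws ec2 authorize-security-group-ingress \
  --group-id sg-newnodes \
  --protocol tcp --port 30000-32767 \
  --source-group sg-clb                         # let the CLB reach the NodePort range
```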
My co-workers are launching GKE clusters and managing them from a pair of centralized VMs. The VMs are in us-east4.
When they launch GKE clusters in the same region (us-east4), all is well. They can access both the worker nodes and also the GKE Master addresses via the peering connection. However, they could not access the master nodes of a GKE cluster built in europe-west3. I built a VM in that region, and was successfully able to connect to port 443 of the master node IPs. Global routing is enabled for the VPC network and inter-region access of VMs and other services is no problem.
It seems very clear that GKE master nodes can only be accessed from the same region. But is this documented somewhere? I did open a support case on Monday, but I'm having little luck getting any reasonable information back.
It seems like this is expected behavior. I have reviewed the documentation here and understood the following about it, but you are right, it doesn't state this explicitly:
The private IP address of the master in a regional cluster is only reachable from subnetworks in the same region, or from on-premises devices connected to the same region.
Now, based on this, I would recommend that you set up a proxy in the same region as your GKE master, so that requests coming from a different region look like they come from the reachable region.
Please review this; it is an example of how to reach your master from a cluster in another region.
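For illustration, a rough sketch of that idea using an SSH tunnel through a small VM in the master's region instead of a full HTTP proxy (instance name, zone, and master IP are placeholders):

```bash
# Placeholder instance name and zone; create a small VM in the master's region.
gcloud compute instances create gke-master-proxy \
  --zone=europe-west3-a --machine-type=e2-small
# Forward a local port through that VM to the private master endpoint, then
# send master-bound requests via localhost:8443 (mind TLS hostname verification).
gcloud compute ssh gke-master-proxy --zone=europe-west3-a \
  -- -L 8443:MASTER_PRIVATE_IP:443 -N
```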
I want to update the Docker image in an existing, running AWS ECS Fargate task.
I could go with a new task definition revision, but when I run that task inside the cluster, it creates a new public IP address.
I can't change my existing public IP. I only want to update the Docker image of the running task.
What could be a possible solution?
Unfortunately, if you're running the container as a publicly routable container, its public IP address will change whenever you update the task definition.
There is currently no support for Elastic IP addresses in Fargate, which would be the solution you're looking for.
If keeping the IP address is a requirement, I would suggest re-architecting your solution as follows:
Public facing Network Load Balancer with a static IP address
Fargate containers register to a target group of the Network Load Balancer.
Remember that, as things stand, any kind of failure would also cause your container to lose its IP address.
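A rough sketch of that setup from the CLI, with placeholder names and IDs (the NLB holds the static Elastic IP while the Fargate task IPs change behind it):

```bash
# Placeholder subnet/EIP/VPC IDs.
aws elbv2 create-load-balancer --name my-nlb --type network \
  --subnet-mappings SubnetId=subnet-0123abc,AllocationId=eipalloc-0123abc
aws elbv2 create-target-group --name my-fargate-tg \
  --protocol TCP --port 80 --target-type ip --vpc-id vpc-0123abc
# Then reference the target group in the ECS service's load balancer settings
# (targetGroupArn / containerName / containerPort) so every new task
# registers into it automatically.
```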
The EC2 machines are running behind the ELB, all built from the same AMI image.
My requirement: there are currently 5 EC2 instances running behind the ELB (this is the minimum count in my Auto Scaling group), and I have associated Elastic IPs with them so it's easy to deploy code to them via Ansible. But when traffic goes up, Auto Scaling adds more machines behind the same ELB, and it's a real headache to manually add each newly added machine's public IP to the Ansible hosts file.
How can I get all the machines' IPs into my Ansible inventory?
That's the classic use case for dynamic inventory. The Ansible docs even call out this specific use case :)
They also provide a working example. Check this link.
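A minimal sketch of an aws_ec2 dynamic inventory file, with placeholder region and Auto Scaling group name (the filename needs to end in aws_ec2.yml for the plugin to pick it up):

```bash
# Placeholder region and ASG name; Ansible then discovers every instance the
# Auto Scaling group launches, with no manual host edits.
cat > inventory_aws_ec2.yml <<'EOF'
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  tag:aws:autoscaling:groupName: my-web-asg
EOF
ansible-inventory -i inventory_aws_ec2.yml --graph   # verify what it finds
```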
I have a working Kubernetes gossip-based cluster deployed on AWS using Kops.
I also have the dashboard running on localhost:8001 on my local machine.
If I understand correctly, this URL https://kubernetes.io/docs/admin/authentication/ gives the different ways to expose the dashboard properly, among other things.
What are the easiest and simplest steps to expose the cluster's dashboard across the internet?
What are the disadvantages of using the gossip-based cluster?
Is it all right to stop my EC2 instances when I am not using the cluster? Are there any reconfiguration steps needed when the EC2 instances are restarted? Is there any sequence in which the EC2 instances must be restarted?
[I realised that question 3 is a bad question: the Auto Scaling group will start another EC2 instance for each stopped EC2 instance (the Kubernetes and kops people are too good, and the joke's on me). That said, how can I stop/start the Kubernetes cluster when I am not using it / when I need it?]
Just making sure someone else finds this useful.
If you are trying to save costs when not using the kops Kubernetes cluster on AWS, there are these options:
1. Tear it down completely (you can rebuild it later), OR
2. In the two Auto Scaling groups, edit the min, desired, and max to 0. This will bring down the cluster (you can later revert these values and the cluster will come back on its own). See the sketch below.
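A sketch of option 2 from the AWS CLI, with placeholder Auto Scaling group names (kops typically names the groups after the instance groups and the cluster):

```bash
# Placeholder group names; list yours first with:
#   aws autoscaling describe-auto-scaling-groups --query 'AutoScalingGroups[].AutoScalingGroupName'
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name nodes.mycluster.k8s.local \
  --min-size 0 --max-size 0 --desired-capacity 0
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name master-us-east-1a.masters.mycluster.k8s.local \
  --min-size 0 --max-size 0 --desired-capacity 0
# Revert the sizes later and the cluster comes back on its own.
```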
Thanks.
R