I have a working Kubernetes gossip-based cluster deployed on AWS using Kops.
I also have the dashboard running on localhost:8001 on my local machine.
If I understand correctly, this URL https://kubernetes.io/docs/admin/authentication/ describes, among other things, the different ways to expose the dashboard properly.
1. What are the easiest and simplest steps to expose the cluster's dashboard over the internet?
2. What are the disadvantages of using the gossip-based cluster?
3. Is it all right to stop my EC2 instances when I am not using the cluster? Are there any reconfiguration steps needed when the EC2 instances are restarted? Is there any sequence in which the EC2 instances must be restarted?
[I realised that 3 is a bad question: the autoscaling group will start another EC2 instance for each stopped EC2 instance (the Kubernetes people and kops people are too good, and the joke's on me). That said, how can I stop the Kubernetes cluster when I am not using it and start it when I need it?]
Just making sure someone else finds this useful.
If you are trying to save costs when not using the kops Kubernetes cluster on AWS, there are these options:
1. Tear it down completely (you can rebuild it later), OR
2. In the two Auto Scaling groups, edit min, desired, and max down to 0. This will bring down the cluster (you can later revert these values and the cluster will come back on its own); see the sketch below.
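For example, option 2 could look roughly like this with the AWS CLI. This is only a sketch: the Auto Scaling group names are placeholders (kops derives them from your instance groups and cluster name), and the "restore" sizes are whatever you had before.

```bash
# Sketch: scale the kops-managed Auto Scaling groups to zero to "pause" the cluster.
# The group names below are placeholders; find yours with
#   aws autoscaling describe-auto-scaling-groups
for asg in "nodes.mycluster.k8s.local" "master-us-east-1a.masters.mycluster.k8s.local"; do
  aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name "$asg" \
    --min-size 0 --desired-capacity 0 --max-size 0
done

# Later, revert to the previous sizes to bring the cluster back (example values).
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name "nodes.mycluster.k8s.local" \
  --min-size 2 --desired-capacity 2 --max-size 2
```

Option 1 is simply `kops delete cluster --name mycluster.k8s.local --yes` followed by a `kops create cluster` later, at the cost of losing anything stored only inside the cluster.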
Thanks.
R
I have a cluster with a mixture of services running on EC2 and Fargate, all being used internally. I am looking to deploy a new Fargate service which is going to be publicly available over the Internet and will get around 5000 requests per minute.
What factors do I need to consider to decide whether a new cluster should be created or whether I can reuse the existing one? Would sharing a cluster also lead to security issues?
If your deployment is purely using Fargate, not EC2, then there's really no technical reason to split it into a separate ECS cluster, but there's also no reason to keep it in the same cluster. There's no added cost to create a new Fargate cluster, and logically separating your services into separate ECS clusters can help you monitor them separately in CloudWatch.
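If you do go with a separate cluster, creating one for Fargate is a one-liner. The sketch below uses made-up names ("public-api", the task definition, subnet and security group IDs are all placeholders):

```bash
# Sketch: a Fargate-only cluster is just a logical grouping, so creating one adds no cost.
aws ecs create-cluster --cluster-name public-api

# Launch the new public-facing service into that cluster with the Fargate launch type.
# Task definition, subnets, and security group are placeholders for your own values.
aws ecs create-service \
  --cluster public-api \
  --service-name public-api-service \
  --task-definition public-api-task \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}"
```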
I'm new to Kubernetes and GCP. I have one cluster, called A, where my service is running. In order to run, it needs an Elasticsearch cluster. I would like to run Elasticsearch in a different GKE cluster so it won't have any impact on my current services. I created another cluster, called B, and now I have an issue: I don't know how to make the service from cluster A connect to the Elasticsearch pod in cluster B.
Currently I've found two solutions:
a) make Elasticsearch accessible on the public internet - I want to avoid such a solution
b) add another container to the service that would authenticate to cluster B and run kubectl proxy
Neither of them seems to be the perfect solution. How can I solve such a case? Do you have any tutorials, articles, or tips on how to solve it properly?
edit:
important notes:
both clusters belong to the default VPC
the clusters already exist; I cannot tear them down and recreate them reconfigured
You can expose your service with type LoadBalancer and the internal load-balancer annotation. This will create an internal load balancer which the other GKE cluster can reach over the VPC.
This article details the steps.
This article explains the available service options on GKE in detail.
With these options you don't have to expose your service to the public internet or destroy your GKE clusters.
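A minimal sketch of such a Service for the Elasticsearch pods in cluster B. The selector and ports are assumptions about your Elasticsearch deployment, and the annotation key depends on your GKE version (newer clusters use networking.gke.io/load-balancer-type, older ones cloud.google.com/load-balancer-type):

```bash
# Sketch: expose Elasticsearch in cluster B through an internal (VPC-only) load balancer.
# Run this against cluster B's context; labels and ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-internal
  annotations:
    # Older GKE versions use: cloud.google.com/load-balancer-type: "Internal"
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
EOF

# The EXTERNAL-IP column shows the internal VPC address that cluster A can use,
# since both clusters are in the default VPC.
kubectl get service elasticsearch-internal
```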
Hope this helps.
I am looking at a guide to install Kubernetes on AWS EC2 instances using kops (Link). I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control and etcd nodes. Is it possible to set an IP in some configuration file so that my cluster is created with a specific IP on my control node and my etcd node? If a control node restarts and does not have an Elastic IP, its IP changes and a big number of issues starts. I want to prevent this problem, or at least change my control node's IP after deployment.
I want to install a Kubernetes cluster, but I want to assign an Elastic IP at least to my control and etcd nodes
The correct way, and the way almost every provisioning tool that I know of does this, is to use either an Elastic Load Balancer (ELB) or the newer Network Load Balancer (NLB) to put an abstraction layer in front of the master nodes, for exactly that reason. It goes one step better than just an EIP: it assigns one EIP per Availability Zone (AZ), along with a stable DNS name. It's my recollection that the masters can also keep themselves in sync with the ELB (unknown about the NLB, but certainly conceptually possible), so if new ones come online they register with the ELB automatically.
Then, a similar answer applies for the etcd nodes, and for the same reason, although as far as I know etcd has no such ability to keep the nodes in sync with the fronting ELB/NLB, so that would need to be done by the script that provisions any new etcd nodes.
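With kops specifically, the usual way to get that load balancer in front of the masters is the api.loadBalancer section of the cluster spec. A sketch follows; the cluster name is a placeholder, and as far as I know this covers only the Kubernetes API endpoint, not etcd:

```bash
# Sketch: front the kops masters with a load balancer instead of relying on their instance IPs.
# This opens the cluster spec in an editor; the cluster name is a placeholder.
kops edit cluster mycluster.example.com

# Add (or confirm) something like this under spec:
#
#   spec:
#     api:
#       loadBalancer:
#         type: Public   # or Internal for a VPC-only endpoint
#
# Then apply the change:
kops update cluster mycluster.example.com --yes
```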
At the time of writing this, there isn't any out-of-the-box solution from kops.
But you can try k8s-eip for this if your use case isn't critical. I wrote this tool for my personal cluster to save cost.
We have started using ECS and we are not quite sure if the behaviour we are experiencing is the correct one, and if it is, how to work around it.
We have set up a Beanstalk Docker multicontainer environment which uses ECS in the background to manage everything, and that has been working just fine. Yesterday, we created a standalone cluster in ECS, "ecs-int", a task definition "ecs-int-task", and a service "ecs-int-service" associated with a load balancer "ecs-int-lb", and we added one instance to the cluster.
When the service first ran, it worked fine and we were able to reach the Docker service through the load balancer. While we were playing with the instance security group associated with the cluster "ecs-int", we mistakenly removed the rule for the port the container was running on, and the health check started failing on the LB, resulting in the instance being drained out of it. When that happened, to our surprise, the service "ecs-int-service" and the task "ecs-int-task" automatically moved to the Beanstalk cluster and started running there, creating an issue for our Beanstalk app.
While setting up the service, we set the placement rule to "AZ Balanced Spread".
Should the service move between clusters? Shouldn't the service be attached only to the cluster it was originally created in? If this is the normal behaviour, how can we set a rule so the service sticks to the same cluster even if the instances fail the health check for some reason?
Thanks
I have re-created all the infrastructure and the problem went away. As I suspected, services created for one cluster should not move to a different cluster when instance(s) fail.
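For what it's worth, an ECS service is always created against exactly one cluster, and the placement strategy is pinned at creation time. A sketch of what that looks like with the CLI (service and task names match the question, sizes are placeholders, and the strategy shown corresponds roughly to the console's "AZ Balanced Spread" template):

```bash
# Sketch: the service is scoped to the cluster named in --cluster and cannot hop to another one.
aws ecs create-service \
  --cluster ecs-int \
  --service-name ecs-int-service \
  --task-definition ecs-int-task \
  --desired-count 2 \
  --placement-strategy type=spread,field=attribute:ecs.availability-zone
```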
We have set up two Kubernetes clusters on AWS, one using the kube-up script and the other with the kops create cluster command. We find that the inter-pod communication in the kops-created cluster is very slow, making us think the routing is the problem. If anyone has experienced this issue and has a solution to suggest to reduce the latency, it would be much appreciated.
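Not an answer, but a quick way to quantify the pod-to-pod latency in both clusters before blaming routing is to ping a pod on another node from a throwaway pod. A sketch (the target IP is a placeholder you read from the pod listing):

```bash
# Sketch: measure pod-to-pod round-trip time from a temporary busybox pod.
# First find the IP of a target pod running on a different node:
kubectl get pods -o wide

# Then ping it from a throwaway pod (10.x.x.x is a placeholder pod IP):
kubectl run latency-test --rm -it --image=busybox --restart=Never -- ping -c 10 10.x.x.x
```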